names | readmes | topics | labels
---|---|---|---|
EmbeddedProject | EmbeddedProject: BLE and RTC on the Cypress Pioneer Kit, for Embedded Systems Design with Hussein (EECE321-02). | | os |
18CSMP68 | ![logo](https://github.com/iamishandubey/18csmp68/blob/main/docs/logo.png) Mobile Application Development. Share this repo using bit.ly/18CSMP68 (https://bit.ly/18CSMP68). Effective from the academic year 2018-2019 (semester VI), course code 18CSMP68. Laboratory objectives: this laboratory (18CSMP68) will enable students to learn and acquire the art of Android programming; configure Android Studio to run the applications; understand and implement Android's user interface functions; create, modify, and query an SQLite database; and inspect different methods of sharing data using services. Learning resources: official Kotlin reference (https://kotlinlang.org/docs, PDF/HTML), official Android documentation (https://developer.android.com/docs), Android Developers YouTube channel (https://www.youtube.com/user/androiddevelopers), Material.io basics (https://material.io/design/introduction), components for Android (https://material.io/components?platform=android). List of applications — Part A: (1) Visiting Card (https://github.com/iamishandubey/18csmp68/tree/main/visitingcard) — create an application to design a visiting card. The visiting card should have a company logo at the top right corner. The company name should be displayed in capital letters, aligned to the center. Information like the name of the employee, job title, phone number, address, email, fax, and the website address is to be displayed. Insert a horizontal line between the job title and the phone number. (2) Calculator (https://github.com/iamishandubey/18csmp68/tree/main/calculator) — develop an Android application using controls like Button, TextView, and EditText for designing a calculator having basic functionality like addition, subtraction, multiplication, and division. (3) LoginSignup (https://github.com/iamishandubey/18csmp68/tree/main/loginsignup) — create a sign-up activity with username and password. Validation of the password should happen based on the following rules: the password should contain uppercase and lowercase letters; the password should contain letters and numbers; the password should contain special characters; minimum length of the password (the default value is 8). On successful sign-up, proceed to the next login activity. Here the user should sign in using the username and password created during the signup activity. If the username and password are matched, then navigate to the next activity, which displays a message saying "successful login"; or else display a toast message saying "login failed". The user is given only two attempts, and after that display a toast message saying "failed login attempts" and disable the sign-in button. Use Bundle to transfer information from one activity to another. (4) Wallpaper (https://github.com/iamishandubey/18csmp68/tree/main/wallpaper) — develop an application to set an image as wallpaper. On click of a button, the wallpaper image should start to change randomly every 30 seconds. (5) Counter (https://github.com/iamishandubey/18csmp68/tree/main/counter) — write a program to create an activity with two buttons, start and stop. On pressing the start button, the activity must start the counter by displaying the numbers from one, and the counter must keep on counting until the stop button is pressed. Display the counter value in a TextView control. (6) Parsing XML and JSON (https://github.com/iamishandubey/18csmp68/tree/main/parsingxmlandjson) — create two files of XML and JSON type with values for city name, latitude, longitude, temperature, and humidity. Develop an application to create an activity with two buttons to parse the XML and JSON files, which when clicked should display the data in their respective layouts side by side. (7) Text to Speech (https://github.com/iamishandubey/18csmp68/tree/main/texttospeech) — develop a simple application with one EditText so that the user can write some text in it. Create a button called "convert text to speech" that converts the user input text into voice. (8) Phone Dialer (https://github.com/iamishandubey/18csmp68/tree/main/phonedialer) — create an activity like a phone dialer with Call and Save buttons. On pressing the Call button, it must call the phone number, and on pressing the Save button, it must save the number to the phone contacts. Part B (mini projects): (1) File Application (https://github.com/iamishandubey/18csmp68/tree/main/fileapplication) — write a program to create an activity having a text box, and also Save, Open, and Create buttons. The user has to write some text in the text box. On pressing the Create button, the text should be saved as a text file in mksdcard. On subsequent changes to the text, the Save button should be pressed to store the latest content to the same file. On pressing the Open button, it should display the contents from the previously stored files in the text box. If the user tries to save the contents in the text box to a file without creating it, then a toast message has to be displayed saying "first create a file". (2) Media Player (https://github.com/iamishandubey/18csmp68/tree/main/mediaplayer) — create an application to demonstrate a basic media player that allows the user to forward, backward, play, and pause an audio track. Also make use of the indicator in the seek bar to move the audio forward or backward as required. (3) Asynchronous Task (https://github.com/iamishandubey/18csmp68/tree/main/asynchronoustask) — develop an application to demonstrate the use of asynchronous tasks in Android. The asynchronous task should implement the functionality of a simple moving banner. On pressing the Start Task button, the banner message should scroll from right to left. On pressing the Stop Task button, the banner message should stop. Let the banner message be "demonstration of asynchronous task". (4) Clipboard (https://github.com/iamishandubey/18csmp68/tree/main/clipboard) — develop an application that makes use of the clipboard framework for copying and pasting of text. The activity consists of two EditText controls and two buttons to trigger the copy and paste functionality. (5) Car EMI Calculator (https://github.com/iamishandubey/18csmp68/tree/main/caremicalculator) — create an AIDL service that calculates car loan EMI. The formula to calculate EMI is E = [P × r × (1 + r)^n] / [(1 + r)^n − 1], where E = the EMI payable on the car loan amount, P = the car loan principal amount, r = the interest rate, computed on a monthly basis, and n = the loan tenure in months. The down payment amount has to be deducted from the principal amount paid towards buying the car. Develop an application that makes use of this AIDL service to calculate the EMI. This application should have four EditText fields (to read the principal amount, down payment, interest rate, and loan term in months) and a button named "calculate monthly EMI". On click of this button, the result should be shown in a TextView. Also, calculate the EMI by varying the loan term and interest rate values. | android-development kotlin kotlin-android vtulab vtulabprogrammes 18csmp68 | front_end |
front-end-curriculum | Front-End Curriculum: this is a small, static Jekyll site that contains the front-end program's lessons and projects. License: this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (http://creativecommons.org/licenses/by-nc-sa/4.0/). Getting started — development branch: a development server is running for the development branch of this project; you can find that site here: https://sage-cupcake-e1a0c3.netlify.app. Prerequisites: in order to get this repo up and running, you will need to have Ruby 2.7.4 installed and active. Install Homebrew if you don't already have it: `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"` — follow prompts if necessary; you may need to enter your computer password, as well as run a few scripts prompted in the terminal output. Install Ruby Version Manager: `curl -sSL https://get.rvm.io \| bash -s stable`. Install Ruby version 2.7.4 using rvm: `rvm install 2.7.4`. You will also need the bundler and jekyll gems installed: `gem install bundler` and `gem install jekyll`. Installing: once you have Ruby, bundler, and jekyll installed, you can install dependencies by running `bundle`, and after all dependencies are installed, you can run the following to start your local server on port 4000: `bundle exec jekyll serve`. Styling your lesson plans: you can add styled boxes to your lesson plans for different areas of content. Standard box: `<section class="call-to-action"> In your notebook: what would you expect to be logged when we get to line 10? Why? </section>` will result in the following styled box: ![styled box](https://user-images.githubusercontent.com/17582916/60548262-e75fd180-9cde-11e9-8964-03c4ee6152d9.png). Answer/solution box — the heading in the answer box must be an h3; you can include any text within the section after that: `<section class="answer"> <h3>The answer</h3> Here is an answer to the "on your own" section. </section>` will result in the following styled box: ![collapsed answer](https://user-images.githubusercontent.com/17582916/72355972-a725d680-36a5-11ea-8755-077ebf0d34dc.png) ![expanded answer](https://user-images.githubusercontent.com/17582916/72356019-be64c400-36a5-11ea-87e6-a5a7310db2bc.png). Note box: `<section class="note"> Note: this hoisting behavior adds some complexity to the JavaScript language, and is important to understand thoroughly in order to anticipate the values of your variables at any given time. </section>` ![note box](https://user-images.githubusercontent.com/17582916/60548280-f2b2fd00-9cde-11e9-848c-6d58f4b6ebde.png). CFU/exit ticket box: `<section class="checks-for-understanding"> Exit ticket: what are 3 easy and actionable accessibility steps you can take in all of your projects from here on out? </section>` ![cfu box](https://user-images.githubusercontent.com/17582916/60548305-ff375580-9cde-11e9-9e06-739244d68973.png). Do NOT indent your markdown within the section tag, or else it will not work. Algolia search: the site uses Algolia (https://www.algolia.com/dashboard) for search indexing. To re-index the search when new lessons are added or lessons are removed, run this command in your terminal at the root of the curriculum directory: `ALGOLIA_API_KEY=admin_api_key bundle exec jekyll algolia push`, where `admin_api_key` is replaced with the actual Admin API Key found in the Algolia account dashboard. | | front_end |
blockchain | blockchain: 1. … 2. … 3. … 4. PoW hash 5. HTTP server 6. … | golang blockchain demo | blockchain |
SimpleGithubStats | SimpleGithubStats: create a simple GitHub stats card containing information about your profile. <img align="center" src="https://simple-github-stats.vercel.app?user=erickcestari&date=02-01-2020" /> How to use: change the query `user` to your username, and change the `date` to when you started coding: `<img align="center" src="https://simple-github-stats.vercel.app?user=erickcestari&date=02-01-2020" />` | | server |
iota | [![Clojars](http://clojars.org/iota/latest-version.svg)](http://clojars.org) [![Build Status](https://api.travis-ci.org/thebusby/iota.png?branch=master)](https://travis-ci.org/thebusby/iota) Iota: iota is a Clojure library for handling large text files in memory, and offers the following benefits: tuned for Clojure's reducers, letting you reduce over large files quickly; uses Java NIO's mmap() for rapid IO and handling files larger than available memory; efficiently stores data as it is represented in the file, and only converts to java.lang.String when necessary; offers efficient indexing and caching that emulates Clojure's native vector and seq data structures; adjustable buffer sizes for IO and caching, enabling tuning for specific data sets. Why write this library? I wanted to be able to use Clojure reducers against large text files to speed up data processing, without needing more than 10% memory overhead due to Java's inefficient storage of strings (http://www.javamex.com/tutorials/memory/string_memory_usage.shtml). I found that a 1GB TSV file consumed 10GB of RAM when loaded line by line into a Clojure vector. Details: iota offers iota/seq and iota/vec for two different use cases. Both treat a line, as delimited by a byte separator (default is newline), as an element. Differences (iota/seq vs iota/vec) — on creation: quick (mmap()s the file and stops) vs slow (mmap()s the file and iterates through the entire file to generate an index); sequential access: scans the buffer for the next byte separator vs quick (N records are read at once and cached); random access: O(N) — just don't — vs quick, O(1) via index; via reducers: the buffer is divided in half repeatedly until it is smaller than the specified size, and then the entire buffer is converted to String for processing, vs treated exactly like a Clojure vector, but each thread gets its own cache. Advice: if you'll only be reading the entire file at a time, then use iota/seq; if you need random access across the file, then use iota/vec; if you need random access and line numbers, then iota/numbered-vec. Warning: if you're not using reducers with iota/seq, use Clojure's line-seq instead. Note: for iota/vec and iota/seq, empty lines will return nil; for iota/numbered-vec, empty lines will return the line number as a string. Usage: `(def file-vec (iota/vec filename)) ;; map the file into memory and generate index of lines, slow` — `(def file-seq (iota/seq filename)) ;; map the file into memory, quick` — `(first file-vec)` and `(first file-seq)` return the first line of the file; `(last file-vec)` returns the last line of the file; `(nth file-vec 2)` returns the 3rd line of the file. Count the number of non-empty fields in a TSV file: `(->> (iota/seq filename) (clojure.core.reducers/filter identity) ;; filter out empty lines (clojure.core.reducers/map #(->> (clojure.string/split % #"[\t]" -1) (filter (fn [^String s] (not (.isEmpty s)))) ;; remove empty fields count)) (clojure.core.reducers/fold +))`. Skip the first line of the file (good for ignoring a header): `(iota/subvec file-vec 1)` or `(rest file-seq)`. Known issues: records must be delimited by a single byte value, hence 2-byte encodings like UTF-16 and UCS-2 can't be parsed correctly. Artifacts: iota artifacts are available on Clojars (https://clojars.org/iota), with instructions for Leiningen, Gradle, and Maven. License: MIT (http://opensource.org/licenses/MIT). I'd also like to thank my employer, Gracenote, for allowing me to create this open source port. Copyright (c) 2012-2013 Alan Busby. | | server |
IoT | IoT: my project using UWP and a Raspberry Pi with Windows 10 IoT. | | server |
chai | [![Qodana](https://github.com/droidconke/chai/actions/workflows/qodana.yml/badge.svg)](https://github.com/droidconke/chai/actions/workflows/qodana.yml) [![Chai CI](https://github.com/droidconke/chai/actions/workflows/main.yml/badge.svg)](https://github.com/droidconke/chai/actions/workflows/main.yml) <p align="center"><a href="https://github.com/droidconke/droidconke2022android"><img src="https://raw.githubusercontent.com/droidconke/iconpack/master/images/chaicover.png" alt="chai design logo" width="330" height="150"></a></p> Chai design system. What is a design system? To learn more about this, look at "Why design systems" (https://github.com/droidconke/chai/blob/master/docs/whydesignsystems.md), which explains the need for a design system in the context of Compose. About chai: chai is a Swahili word for tea. This is the droidconKE design system; link to the design doc: https://xd.adobe.com/view/eb1ed4ed-fd4d-4ba2-b2f7-a91c7379a022-be4d/screen/cfea72b5-9007-4335-ae86-9162594c094f. This project shows how you can use this design system in a multi-module app (monorepo) setup. Structure of chai's design system project — the chai design system project architecture is captured in detail: 1. Why design systems (https://github.com/droidconke/chai/blob/master/docs/whydesignsystems.md) — explains the need for a design system in the context of Compose; 2. Architecture (https://github.com/droidconke/chai/blob/master/docs/architecture.md) — the architecture of the project; 3. Chai design system architecture (https://github.com/droidconke/chai/blob/master/docs/chaiarchitecture.md) — the architecture of the design system; 4. BuildLogic (https://github.com/droidconke/chai/blob/master/docs/buildlogic.md) — handles how we build the app with Gradle; ditches buildSrc in favour of convention plugins; 5. ChaiLinter (https://github.com/droidconke/chai/blob/master/docs/chailinter.md) — explains the design system linter; 6. Atoms (https://github.com/droidconke/chai/blob/master/docs/atoms.md) — explains the atoms in the design system; 7. Components (https://github.com/droidconke/chai/blob/master/docs/components.md) — design system components; 8. ChaiDemo (https://github.com/droidconke/chai/blob/master/docs/components.md) — design system components. Implementing chai: to implement chai, see the example implementation by running ChaiDemo, which contains the various implementations of the elements of the design system. Running the project — known issue with Gradle: if you run into an error when building the project, it's probably a false negative. Run `gradlew sync` (or just press the green play icon in Android Studio, from the left), and the output ("complete html report") should not display errors. Then run `gradlew tasks` to see a list of tasks you can run from the root of the project, or just press the Gradle icon with the downward arrow at the top right of Android Studio to sync the project with the Gradle files, and you should be OK. Tasks: documentation (architecture, reason for design system, constituents of chai, build system, project infrastructure setup), workflows (git workflows, release pipelines, publishing — sample app release to Play Store (ChaiDemo), publish to Maven), build system (build-logic setup, convention plugins setup), sample application. Contributing: hop on here for a chat and ask questions: https://github.com/droidconke/chai/discussions — no DMs please. | architecture design-system jetpack-compose | os |
guacamole | Guacamole: guacamole is a collection of tools we use at Khan Academy to train our models from new data on a regular basis. These tools are meant to be compatible with a variety of data formats from anyone who has learning data — especially, but not only, data from online instruction. The tools (the pipeline) currently included here train multidimensional item response theory (MIRT) models, including both item correctness and response time (if you have that data, coming soon). The MIRT model is well suited to testing data; at Khan Academy, we use it for our assessments. Guacamole walkthrough: ![guacamole](imgs/guacamole.jpg) — guacamole is a useful tool for teachers and researchers to analyze and improve test items and students. Getting started (getting guacamole to run): get numpy, scipy, and matplotlib working. There are several strategies for this, depending on platform. The normal (and generally correct) way to install Python libraries is using pip, but that often chokes on each of these. If installing with pip doesn't work, I recommend using the Scipy Superpack (http://fonnesbeck.github.io/ScipySuperpack) for Mac, or following the scipy stack installation instructions (http://www.scipy.org/install.html) for Linux or Windows. For a heavier-weight but easy alternative, you can try Anaconda (https://store.continuum.io/cshop/anaconda). I recommend installing git as well. Next, download guacamole: `git clone git@github.com:Khan/guacamole.git`. Go to the guacamole directory and run `start_mirt_pipeline.py --generate --train -n 2 --visualize`. It should take less than a minute, and if some graphs pop up, you're good to go. That `-n 2` is just to make things faster — this will NOT be a good model (it'll only learn for two epochs, and you probably want it to learn for about 15). Walkthrough: guacamole has a ton of features and abilities, and this walkthrough shows a few of them. If you want a quick overview of what's available (and you hate reading when it's not on the terminal), run `start_mirt_pipeline.py --help` for an overview of the arguments. Generate data: data is generated with `start_mirt_pipeline.py --generate`. This constructs a bunch of students with fake abilities and a bunch of exercises with fake difficulties, and simulates those students doing those exercises. You can examine the generated data in `path/to/guac/sample_data/all.responses`; it should look something like: merrie addition_1 1 true; merrie identifying_points_1 1 false; yoshiko addition_1 1 true; yoshiko slope_intercept_form 1 true; yoshiko graphing_proportional_relationships 1 true; yoshiko constructions_1 1 true; caitlyn addition_1 1 true; caitlyn identifying_points_1 1 true; hortense vertical_angles_2 1 false; hortense visualizing_and_interpreting_relationships_between_patterns 1 true; hortense slope_intercept_form 1 false; mendy graphing_proportional_relationships 1 true; mendy constructions_1 1 false. These columns are name, exercise, time taken, and correct. You can read more about data formats in `train_util/model_training_util.py`. Train a model: you can train a model on data with `start_mirt_pipeline.py --train`. By default, this looks at the place that `--generate` writes to (`sample_data/all.responses`). If you're interested in using your own data, you can use `start_mirt_pipeline.py --train --data_file path/to/your/data`. Optional parallelization: this will run for a while. If you want it to go faster, you can parallelize with the `-w` flag (I use `-w 6` on my eight-core computer; on a cluster, the number of workers can be really big, and training can be really fast). On some systems (like Ubuntu), this only works when you have affinity installed; if multiple workers does not result in any speedup, try `pip install affinity`. Now that your model is trained, it's in `sample_data/models/model.json`. You can actually use this model to run an adaptive test now, or you can examine it in a more readable format. If you want to save your model somewhere else, send in `-m desired_model_file.json`. Examining models: there are a few ways to examine a model and evaluate how good it is. Report: the simplest is to run `start_mirt_pipeline.py --report`. This prints out a formatted view of your exercises (exercise, bias, dim 1): area_and_circumference_of_circles 0.1527 0.1627; equation_of_an_ellipse 0.2390 0.0599; two_sided_limits_from_graphs 0.3392 0.0654; subtraction_1 0.3454 0.1430; scaling_vectors 0.3814 0.1886; dividing_polynomials_by_binomials_1 0.3991 0.3312; area_of_triangles_1 0.4223 0.0746; volume_1 0.5618 0.0479; understanding_decimals_place_value 0.8695 0.0884; understanding_multiplying_fractions_and_whole_numbers 0.9794 0.1425. For more information on what these terms mean, check out IRT on Wikipedia (https://en.wikipedia.org/wiki/Item_response_theory). ROC curve: an ROC curve (http://en.wikipedia.org/wiki/Receiver_operating_characteristic) is a simple way of comparing models for predictive accuracy when there are binary classifications. When we train a model, we hold out a test set, and the ROC goes through that set of assessments and makes predictions about the accuracy of users on random questions. To see a ROC curve, run `start_mirt_pipeline.py --roc_viz`. This will take a bit of time, as it simulates generating a ROC curve from your test data. Problems: the model used for each problem can be thought of as a model meant to predict the probability that a student will answer that problem correctly given their ability. Running `start_mirt_pipeline.py --sigmoid_viz` gives you a visualization of each problem in that context. Scoring: scoring here is relative — we don't give anything on a score of 0-100. Instead, we give the mean of the student's estimated ability, which should have a mean around 0 and be approximately normally distributed. This prints a student's id and their score, for instance: natasha 0.281607745587; ema 0.423702530148; hung 0.135014957553; ian 1.00330296356; jannet 0.141838668862; jan 0.205517676995; louetta 0.145722766169; racheal 0.102205839596; louvenia 0.0097095052554; soledad 0.400148176133; kaylene 0.409522253404; kathryn 0.245113015341. (I got these names from the census, in case you're wondering. I really like the internet.) You can score with `start_mirt_pipeline.py --score` any time after training. Adaptive test: we can create an adaptive test that gets the most information possible per question. Start taking the interactive test with `start_mirt_pipeline.py --test`. You can specify the number of questions you'd like in your interactive test with `-i`. This isn't super cool by itself, because it just simulates answering questions correctly (1) or incorrectly, but it should be easy to hook this up as the backend to an interactive adaptive testing engine — that's what we do at Khan Academy. The algorithms: guacamole aspires to be a general-purpose library with a spectrum of commonly used algorithms for analyzing educational data, especially at scale. For now, we support a few common algorithms. Multidimensional item response theory: item response theory is a classic technique in psychometrics to calibrate tests and test items with student abilities, resulting in difficulty ratings for test items and ability ratings for students. Visualizations: a few visualizations are available for the data. First, you can see an ROC curve given your parameters (`--roc_viz`): ![roc curve](imgs/roc.png). You can also see graphs of each exercise by difficulty and discrimination (`start_mirt_pipeline.py --sigmoid_viz`): ![sigmoids](imgs/sigmoids.png). To see how well each student did, call `start_mirt_pipeline.py --score`. The names: the names are from the US Census Bureau. Khan Academy data: this library is designed to be used on Khan Academy data; we have sample non-student data in that format now. If you are interested in using our real data at scale in your research, you should visit http://khanacademy.org/r/research and then email us at research@khanacademy.org. If these tools are useful to you, let us know! If you'd like to contribute, you can submit a pull request, or apply to work at Khan Academy (https://www.khanacademy.org/careers) — we're hiring data scientists and software engineers for both full-time positions and internships. Authors: Eliana Feasley, Jace Kohlmeier, Matt Faus, Jascha Sohl-Dickstein (2014). | | ai |
icse-react-assets | [![OpenSSF Best Practices](https://bestpractices.coreinfrastructure.org/projects/7561/badge)](https://bestpractices.coreinfrastructure.org/projects/7561) icse-react-assets is a collection of forms and inputs using Carbon React (https://github.com/carbon-design-system/carbon/tree/main/packages/react). These assets are designed to streamline the creation of some front-end assets for automation tools. Documentation: documentation for all components is available on Storybook (http://ibm.github.io/icse-react-assets). Installation — prerequisites: Carbon React (https://github.com/carbon-design-system/carbon/tree/main/packages/react); ensure all Carbon styles are imported into your project. Installing icse-react-assets: 1. from your React client directory, use the command `npm i icse-react-assets`; 2. import components using ES6 import statements, for example: `import { IcseHeading } from "icse-react-assets"`. Running the local development environment: 1. `git clone https://github.ibm.com/ICSE/icse-react-assets.git && cd icse-react-assets`; 2. `npm run i-all`; 3. test a build with `npm update && npm run build`; 4. run `npm run dev` to run the playground environment to test any components. Running unit tests: to run local unit tests with Mocha, use the command `npm run test`. Getting test coverage: use the command `npm run coverage` to determine unit test coverage for this package; currently, only tests in `src/lib` are covered when using this command. Building the library for local development: to use the library within an application to test your changes, navigate to the base directory where your application is cloned: `cd icse-react-assets`. Build the repo (make sure you have bumped the package version before doing this): `npm run build`. Package the library and save it to a location of your choosing; for this example, the location is `~`: `npm pack --pack-destination ~`. Finally, go to your application and change the entry for icse-react-assets in your package.json to this: `"icse-react-assets": "file:<your file destination>/icse-react-assets-<package version>.tgz"`. Contribution guidelines: 1. create a new branch for your component; 2. add the component within `src/components`; 3. export the component in `index.js` as `export { default as UnderConstruction } from "./components/UnderConstruction";`; 4. run `npm update && npm run build` to build the library; 5. test your component in the playground — navigate to `playground/src/App.js` and render your component; 6. run `npm run i-all` to npm install in both apps and make sure the library is updated within the playground; 7. run `npm run dev` to ensure your component renders and functions properly in the playground environment; 8. create a PR. Documenting components in Storybook: Storybook is a documentation tool for component libraries that allows us to more thoroughly document and test components. A live version of our Storybook is available here: http://ibm.github.io/icse-react-assets. After migrating a component, please document your component in `storybook/src/stories/<ComponentName>.stories.js`; you can follow `IcseTextInput.stories.js` as an example. To get started with Storybook, you will first need to install it and its dependencies: `cd storybook && npm i`. You may encounter an error with mdx-js; if you do, run this command as well: `npm i @mdx-js/react@1.6.22 -D --legacy-peer-deps`. Test that Storybook runs correctly: `cd .. && npm run storybook`. This stories file can be broken down into a few distinct parts. 1. Component information/documentation: you will first want to export a default object with a few nested objects within it. This object describes what the component you are writing documentation for is, where it should be located on the sidebar, any initial values, and a main description for the doc: `export default { component: IcseTextInput, /* component name */ title: "Components/Inputs/IcseTextInput", /* in tabs under components/inputs/IcseTextInput; "Default" (bound story) is default */ args: { /* an object of props and their default values in the component */ }, argTypes: { disabled: { /* name of the prop */ description: "A boolean value for if the text input is disabled", /* description of the prop from the readme */ type: { required: true }, /* required prop or not */ table: { defaultValue: { summary: false } }, /* default value */ control: "boolean" /* what type of value we can set */ } }, parameters: { docs: { description: { component: "IcseTextInput is a text input component that allows the user to input text into a field, and the developer to easily validate it" /* put the component description from the readme here */ } } } }`. 2. Stories: next, create your stories. This should be pretty similar to any examples for the component previously created, the only difference being passing in the args object instead of nothing. You can create any functions you need to pass within this component, and then pass them to the component as props, just as done in the previous example files: `const IcseTextInputStory = (args) => { const [value, setValue] = useState(""); const invalidCallback = function() { return value; }; return <IcseTextInput {...args} value={value} onChange={(event) => setValue(event.target.value)} invalidCallback={invalidCallback} />; }`. 3. Binding your story: lastly, bind your story and export it. In this example, we are using "Default", as there is only one type of IcseTextInput; however, for components that have multiple types, you may create multiple stories and bind each story: `export const Default = IcseTextInputStory.bind({});`. After this file is created, you can run Storybook and ensure everything works properly: `npm run storybook`. Only after your PR is approved, and before it is merged, please run the following commands to deploy to GitHub Pages: `npm run predeploy && npm run deploy-storybook`. Contributing — found a bug, or need an additional feature? File an issue in this repository with the following information, and it will be responded to in a timely manner. Bugs: a detailed title describing the issue, beginning with "BUG" and the current release (for sprint one, filing a bug would have the title "BUG 0.1.0 <issue title>"); steps to recreate said bug (including non-sensitive variables); (optional) corresponding output logs as text or as part of a code block; tag bug issues with the bug label. If you come across a vulnerability that needs to be addressed immediately, use the vulnerability label. Features: a detailed title describing the desired feature that includes the current release (for sprint one, a feature would have the title "0.1.0 <feature name>"); a detailed description, including the user story; a checkbox list of needed features; tag the issue with the enhancement label. Want to work on an issue? Be sure to assign it to yourself, and branch from main. When you're done making the required changes, create a pull request. Pull requests: do not merge directly to main. Pull requests should reference the corresponding issue filed in this repository. Please be sure to maintain code coverage before merging. At least two reviews are required to merge a pull request. When creating a pull request, please ensure that details about unexpected changes to the codebase are provided in the description. | | cloud |
# System Design

Hey, welcome to the course. I hope this course provides a great learning experience.

This course is also available on my [website](https://karanpratapsingh.com/courses/system-design) and as an ebook on [leanpub](https://leanpub.com/systemdesign). Please leave a ⭐ as motivation if this was helpful!

## Table of contents

- **Getting Started**: What is system design?
- **Chapter I**: IP, OSI Model, TCP and UDP, Domain Name System (DNS), Load Balancing, Clustering, Caching, Content Delivery Network (CDN), Proxy, Availability, Scalability, Storage
- **Chapter II**: Databases and DBMS, SQL databases, NoSQL databases, SQL vs NoSQL databases, Database Replication, Indexes, Normalization and Denormalization, ACID and BASE consistency models, CAP Theorem, PACELC Theorem, Transactions, Distributed Transactions, Sharding, Consistent Hashing, Database Federation
- **Chapter III**: N-tier architecture, Message Brokers, Message Queues, Publish-Subscribe, Enterprise Service Bus (ESB), Monoliths and Microservices, Event-Driven Architecture (EDA), Event Sourcing, Command and Query Responsibility Segregation (CQRS), API Gateway, REST, GraphQL, gRPC, Long polling, WebSockets, Server-Sent Events (SSE)
- **Chapter IV**: Geohashing and Quadtrees, Circuit Breaker, Rate Limiting, Service Discovery, SLA, SLO, SLI, Disaster Recovery, Virtual Machines (VMs) and Containers, OAuth 2.0 and OpenID Connect (OIDC), Single Sign-On (SSO), SSL, TLS, mTLS
- **Chapter V**: System Design Interviews, URL Shortener, WhatsApp, Twitter, Netflix, Uber
- **Appendix**: Next Steps, References

## What is system design?

Before we start this course, let's talk about what even is system design.

System design is the process of defining the architecture, interfaces, and data for a system that satisfies specific requirements. System design meets the needs of your business or organization through coherent and efficient systems. It requires a systematic approach to building and engineering systems. A good system design requires us to think about everything, from infrastructure all the way down to the data and how it's stored.

### Why is system design so important?

System design helps us define a solution that meets the business requirements. It is one of the earliest decisions we can make when building a system. Often it is essential to think from a high level, as these decisions are very difficult to correct later. It also makes it easier to reason about and manage architectural changes as the system evolves.

## IP

An IP address is a unique address that identifies a device on the internet or a local network. IP stands for Internet Protocol, which is the set of rules governing the format of data sent via the internet or local network.

In essence, IP addresses are the identifier that allows information to be sent between devices on a network. They contain location information and make devices accessible for communication. The internet needs a way to differentiate between different computers, routers, and websites. IP addresses provide a way of doing so and form an essential part of how the internet works.

### Versions

Now, let's learn about the different versions of IP addresses:

#### IPv4

The original Internet Protocol is IPv4, which uses a 32-bit numeric dot-decimal notation that only allows for around 4 billion IP addresses. Initially, it was more than enough, but as internet adoption grew, we needed something better.

_Example: 102.22.192.181_

#### IPv6

IPv6 is a new protocol that was introduced in 1998. Deployment commenced in the mid-2000s, and since internet users have grown exponentially, it is still ongoing. This new protocol uses 128-bit alphanumeric hexadecimal notation. This means that IPv6 can provide about ~340e+36 IP addresses. That's more than enough to meet the growing demand for years to come.

_Example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334_

### Types

Let's discuss the types of IP addresses:

#### Public

A public IP address is an address where one primary address is associated with your whole network. In this type of IP address, each of the connected devices has the same IP address.

_Example: IP address provided to your router by the ISP._

#### Private

A private IP address is a unique IP number assigned to every device that connects to your internet network, which includes devices like computers, tablets, and smartphones, which are used in your household.

_Example: IP addresses generated by your home router for your devices._

#### Static

A static IP address does not change and is one that was manually created, as opposed to having been assigned. These addresses are usually more expensive but are more reliable.

_Example: They are usually used for important things like reliable geo-location services, remote access, server hosting, etc._

#### Dynamic

A dynamic IP address changes from time to time and is not always the same. It has been assigned by a [Dynamic Host Configuration Protocol (DHCP)](https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol) server. Dynamic IP addresses are the most common type of internet protocol address. They are cheaper to deploy and allow us to reuse IP addresses within a network as needed.

_Example: They are more commonly used for consumer equipment and personal use._
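As a small supplementary illustration (not part of the original course text), Python's standard-library `ipaddress` module can parse both versions and classify addresses, e.g. the public/private distinction described above:

```python
import ipaddress

# Parse one IPv4 and one IPv6 address (the examples used above)
v4 = ipaddress.ip_address("102.22.192.181")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(v4.version, v6.version)  # 4 6
print(v4.is_private)           # False -> public address space
print(ipaddress.ip_address("192.168.1.10").is_private)  # True -> typical home-router range

# Networks can be inspected too, e.g. how many addresses a /24 holds
net = ipaddress.ip_network("192.168.1.0/24")
print(net.num_addresses)       # 256
```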
## OSI Model

The OSI Model is a logical and conceptual model that defines network communication used by systems open to interconnection and communication with other systems. The Open System Interconnection (OSI) Model also defines a logical network and effectively describes computer packet transfer by using various layers of protocols.

The OSI Model can be seen as a universal language for computer networking. It's based on the concept of splitting up a communication system into seven abstract layers, each one stacked upon the last.

### Why does the OSI model matter?

The Open System Interconnection (OSI) model has defined the common terminology used in networking discussions and documentation. This allows us to take a very complex communications process apart and evaluate its components.

While this model is not directly implemented in the TCP/IP networks that are most common today, it can still help us do so much more, such as:

- Make troubleshooting easier and help identify threats across the entire stack.
- Encourage hardware manufacturers to create networking products that can communicate with each other over the network.
- Essential for developing a security-first mindset.
- Separate a complex function into simpler components.

### Layers

The seven abstraction layers of the OSI model can be defined as follows, from top to bottom:

![osi-model](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-I/osi-model/osi-model.png)

#### Application

This is the only layer that directly interacts with data from the user. Software applications like web browsers and email clients rely on the application layer to initiate communication. But it should be made clear that client software applications are not part of the application layer; rather, the application layer is responsible for the protocols and data manipulation that the software relies on to present meaningful data to the user. Application layer protocols include HTTP as well as SMTP.

#### Presentation

The presentation layer is also called the translation layer. The data from the application layer is extracted here and manipulated as per the required format to transmit over the network. The functions of the presentation layer are translation, encryption/decryption, and compression.

#### Session

This is the layer responsible for opening and closing communication between the two devices. The time between when the communication is opened and closed is known as the session. The session layer ensures that the session stays open long enough to transfer all the data being exchanged, and then promptly closes the session in order to avoid wasting resources. The session layer also synchronizes data transfer with checkpoints.

#### Transport

The transport layer (also known as layer 4) is responsible for end-to-end communication between the two devices. This includes taking data from the session layer and breaking it up into chunks called segments before sending it to the network layer (layer 3). It is also responsible for reassembling the segments on the receiving device into data the session layer can consume.

#### Network

The network layer is responsible for facilitating data transfer between two different networks. The network layer breaks up segments from the transport layer into smaller units, called packets, on the sender's device, and reassembles these packets on the receiving device. The network layer also finds the best physical path for the data to reach its destination; this is known as routing. If the two devices communicating are on the same network, then the network layer is unnecessary.

#### Data Link

The data link layer is very similar to the network layer, except the data link layer facilitates data transfer between two devices on the same network. The data link layer takes packets from the network layer and breaks them into smaller pieces called frames.

#### Physical

This layer includes the physical equipment involved in the data transfer, such as the cables and switches. This is also the layer where the data gets converted into a bit stream, which is a string of 1s and 0s. The physical layer of both devices must also agree on a signal convention so that the 1s can be distinguished from the 0s on both devices.
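To make the layering concrete, here is a supplementary sketch (not from the original course): an HTTP request — application-layer data — written by hand over a TCP socket (transport layer), while the operating system handles the network, data link, and physical layers underneath:

```python
import socket

# Application-layer payload: a raw HTTP/1.1 request
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

# Transport layer: open a TCP connection; the kernel takes care of
# the network (IP), data link, and physical layers below this call.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0])  # e.g. b'HTTP/1.1 200 OK'
```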
## TCP and UDP

### TCP

Transmission Control Protocol (TCP) is connection-oriented, meaning once a connection has been established, data can be transmitted in both directions. TCP has built-in systems to check for errors and to guarantee data will be delivered in the order it was sent, making it the perfect protocol for transferring information like still images, data files, and web pages.

![tcp](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-I/tcp-and-udp/tcp.png)

But while TCP is instinctively reliable, its feedback mechanisms also result in a larger overhead, translating to greater use of the available bandwidth on the network.

### UDP

User Datagram Protocol (UDP) is a simpler, connectionless internet protocol in which error-checking and recovery services are not required. With UDP, there is no overhead for opening a connection, maintaining a connection, or terminating a connection. Data is continuously sent to the recipient, whether or not they receive it.

![udp](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-I/tcp-and-udp/udp.png)

It is largely preferred for real-time communications like broadcast or multicast network transmission. We should use UDP over TCP when we need the lowest latency, and late data is worse than the loss of data.

### TCP vs UDP

TCP is a connection-oriented protocol, whereas UDP is a connectionless protocol. A key difference between TCP and UDP is speed, as TCP is comparatively slower than UDP. Overall, UDP is a much faster, simpler, and more efficient protocol. However, the retransmission of lost data packets is only possible with TCP.

TCP provides ordered delivery of data from user to server (and vice versa), whereas UDP is not dedicated to end-to-end communications, nor does it check the readiness of the receiver.

| Feature             | TCP                                         | UDP                                |
| ------------------- | ------------------------------------------- | ---------------------------------- |
| Connection          | Requires an established connection          | Connectionless protocol            |
| Guaranteed delivery | Can guarantee delivery of data              | Cannot guarantee delivery of data  |
| Re-transmission     | Re-transmission of lost packets is possible | No re-transmission of lost packets |
| Speed               | Slower than UDP                             | Faster than TCP                    |
| Broadcasting        | Does not support broadcasting               | Supports broadcasting              |
| Use cases           | HTTPS, HTTP, SMTP, POP, FTP, etc.           | Video streaming, DNS, VoIP, etc.   |
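The connection-oriented vs connectionless distinction shows up directly in socket code. This supplementary sketch (not from the original course; the port is arbitrary) contrasts the two:

```python
import socket

# --- UDP: connectionless, fire-and-forget ---
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No handshake: the datagram is sent whether or not anyone is listening.
udp.sendto(b"ping", ("127.0.0.1", 9999))
udp.close()

# --- TCP: connection-oriented ---
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # connect() performs the three-way handshake, and raises if no
    # server is listening -- unlike the UDP send above.
    tcp.connect(("127.0.0.1", 9999))
    tcp.sendall(b"ping")
except ConnectionRefusedError:
    print("TCP noticed there is no server; UDP could not")
finally:
    tcp.close()
```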
## Domain Name System (DNS)

Earlier we learned about IP addresses that enable every machine to connect with other machines. But as we know, humans are more comfortable with names than numbers. It's easier to remember a name like google.com than something like 122.250.192.232.

This brings us to the Domain Name System (DNS), which is a hierarchical and decentralized naming system used for translating human-readable domain names to IP addresses.

### How DNS works

![how-dns-works](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-I/domain-name-system/how-dns-works.png)

DNS lookup involves the following eight steps:

1. A client types example.com into a web browser; the query travels to the internet and is received by a DNS resolver.
2. The resolver then recursively queries a DNS root nameserver.
3. The root server responds to the resolver with the address of a Top-Level Domain (TLD).
4. The resolver then makes a request to the .com TLD.
5. The TLD server then responds with the IP address of the domain's nameserver, example.com.
6. Lastly, the recursive resolver sends a query to the domain's nameserver.
7. The IP address for example.com is then returned to the resolver from the nameserver.
8. The DNS resolver then responds to the web browser with the IP address of the domain requested initially.

Once the IP address has been resolved, the client should be able to request content from the resolved IP address. For example, the resolved IP may return a webpage to be rendered in the browser.

### Server types

Now, let's look at the four key groups of servers that make up the DNS infrastructure.

#### DNS Resolver

A DNS resolver (also known as a DNS recursive resolver) is the first stop in a DNS query. The recursive resolver acts as a middleman between a client and a DNS nameserver. After receiving a DNS query from a web client, a recursive resolver will either respond with cached data or send a request to a root nameserver, followed by another request to a TLD nameserver, and then one last request to an authoritative nameserver. After receiving a response from the authoritative nameserver containing the requested IP address, the recursive resolver then sends a response to the client.

#### DNS root server

A root server accepts a recursive resolver's query, which includes a domain name, and the root nameserver responds by directing the recursive resolver to a TLD nameserver, based on the extension of that domain (.com, .net, .org, etc.). The root nameservers are overseen by a nonprofit called the [Internet Corporation for Assigned Names and Numbers (ICANN)](https://www.icann.org).

There are 13 DNS root nameservers known to every recursive resolver. Note that while there are 13 root nameservers, that doesn't mean that there are only 13 machines in the root nameserver system. There are 13 types of root nameservers, but there are multiple copies of each one all over the world, which use [Anycast routing](https://en.wikipedia.org/wiki/Anycast) to provide speedy responses.

#### TLD nameserver

A TLD nameserver maintains information for all the domain names that share a common domain extension, such as .com, .net, or whatever comes after the last dot in a URL.

Management of TLD nameservers is handled by the [Internet Assigned Numbers Authority (IANA)](https://www.iana.org), which is a branch of [ICANN](https://www.icann.org). The IANA breaks up the TLD servers into two main groups:

- **Generic top-level domains**: These are domains like .com, .org, .net, .edu, and .gov.
- **Country code top-level domains**: These include any domains that are specific to a country or state. Examples include .uk, .us, .ru, and .jp.

#### Authoritative DNS server

The authoritative nameserver is usually the resolver's last step in the journey for an IP address. The authoritative nameserver contains information specific to the domain name it serves (e.g. google.com), and it can provide a recursive resolver with the IP address of that server found in the DNS A record. Or, if the domain has a CNAME record (alias), it will provide the recursive resolver with an alias domain, at which point the recursive resolver will have to perform a whole new DNS lookup to procure a record from an authoritative nameserver (often an A record containing an IP address). If it cannot find the domain, it returns the NXDOMAIN message.
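As a supplementary example (not from the original text), the entire chain above is hidden behind a single call when you use the operating system's resolver:

```python
import socket

# Forward lookup: the OS resolver walks the root -> TLD -> authoritative
# chain (or answers from its cache) and hands back the resolved address.
hostname = "example.com"
ipv4 = socket.gethostbyname(hostname)
print(f"{hostname} -> {ipv4}")

# getaddrinfo also surfaces IPv6 (AAAA) results when they exist
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
    print(family.name, sockaddr[0])
```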
### Query types

There are three types of queries in a DNS system:

#### Recursive

In a recursive query, a DNS client requires that a DNS server (typically a DNS recursive resolver) will respond to the client with either the requested resource record or an error message if the resolver can't find the record.

#### Iterative

In an iterative query, a DNS client provides a hostname, and the DNS resolver returns the best answer it can. If the DNS resolver has the relevant DNS records in its cache, it returns them. If not, it refers the DNS client to the root server or another authoritative nameserver that is nearest to the required DNS zone. The DNS client must then repeat the query directly against the DNS server it was referred to.

#### Non-recursive

A non-recursive query is a query in which the DNS resolver already knows the answer. It either immediately returns a DNS record because it already stores it in a local cache, or queries a DNS nameserver which is authoritative for the record, meaning it definitely holds the correct IP for that hostname. In both cases, there is no need for additional rounds of queries (like in recursive or iterative queries). Rather, a response is immediately returned to the client.

### Record types

DNS records (aka zone files) are instructions that live in authoritative DNS servers and provide information about a domain, including what IP address is associated with that domain and how to handle requests for that domain.

These records consist of a series of text files written in what is known as DNS syntax. DNS syntax is just a string of characters used as commands that tell the DNS server what to do. All DNS records also have a "TTL", which stands for time-to-live and indicates how often a DNS server will refresh that record.

There are more record types, but for now, let's look at some of the most commonly used ones:

- **A (Address record)**: This is the record that holds the IP address of a domain.
- **AAAA (IP version 6 address record)**: The record that contains the IPv6 address for a domain (as opposed to A records, which store the IPv4 address).
- **CNAME (Canonical name record)**: Forwards one domain or subdomain to another domain; does not provide an IP address.
- **MX (Mail exchanger record)**: Directs mail to an email server.
- **TXT (Text record)**: This record lets an admin store text notes in the record. These records are often used for email security.
- **NS (Name server records)**: Stores the name server for a DNS entry.
- **SOA (Start of authority)**: Stores admin information about a domain.
- **SRV (Service location record)**: Specifies a port for specific services.
- **PTR (Reverse-lookup pointer record)**: Provides a domain name in reverse lookups.
- **CERT (Certificate record)**: Stores public key certificates.
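For instance, a few of the record types above can be queried directly. This supplementary sketch assumes the third-party `dnspython` package (`pip install dnspython`), which is not mentioned in the original text:

```python
import dns.resolver  # third-party: pip install dnspython

# Query a few of the record types listed above for one domain
for record_type in ("A", "AAAA", "MX", "TXT", "NS"):
    try:
        answers = dns.resolver.resolve("example.com", record_type)
        for rdata in answers:
            print(f"{record_type}: {rdata.to_text()}")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{record_type}: no record found")
```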
### Subdomains

A subdomain is an additional part of our main domain name. It is commonly used to logically separate a website into sections. We can create multiple subdomains or child domains on the main domain.

For example, blog.example.com, where blog is the subdomain, example is the primary domain, and .com is the top-level domain (TLD). Similar examples can be support.example.com or careers.example.com.

### DNS Zones

A DNS zone is a distinct part of the domain namespace which is delegated to a legal entity like a person, organization, or company, who is responsible for maintaining the DNS zone. A DNS zone is also an administrative function, allowing for granular control of DNS components, such as authoritative nameservers.

### DNS Caching

A DNS cache (sometimes called a DNS resolver cache) is a temporary database, maintained by a computer's operating system, that contains records of all the recent visits and attempted visits to websites and other internet domains. In other words, a DNS cache is just a memory of recent DNS lookups that our computer can quickly refer to when it's trying to figure out how to load a website.

The Domain Name System implements a time-to-live (TTL) on every DNS record. TTL specifies the number of seconds the record can be cached by a DNS client or server. When the record is stored in a cache, whatever TTL value came with it gets stored as well. The server continues to update the TTL of the record stored in the cache, counting down every second. When it hits zero, the record is deleted or purged from the cache. At that point, if a query for that record is received, the DNS server has to start the resolution process.

### Reverse DNS

A reverse DNS lookup is a DNS query for the domain name associated with a given IP address. This accomplishes the opposite of the more commonly used forward DNS lookup, in which the DNS system is queried to return an IP address. The process of reverse-resolving an IP address uses PTR records. If the server does not have a PTR record, it cannot resolve a reverse lookup.

Reverse lookups are commonly used by email servers. Email servers check and see if an email message came from a valid server before bringing it onto their network. Many email servers will reject messages from any server that does not support reverse lookups or from a server that is highly unlikely to be legitimate.

_Note: Reverse DNS lookups are not universally adopted, as they are not critical to the normal function of the internet._

### Examples

These are some widely used managed DNS solutions:

- [Route53](https://aws.amazon.com/route53)
- [Cloudflare DNS](https://www.cloudflare.com/dns)
- [Google Cloud DNS](https://cloud.google.com/dns)
- [Azure DNS](https://azure.microsoft.com/en-in/services/dns)
- [NS1](https://ns1.com/products/managed-dns)
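Returning to reverse DNS for a moment: as a supplementary sketch (not from the original course), the PTR-based reverse lookup described above is one call in Python's standard library:

```python
import socket

# Reverse lookup: ask DNS for the PTR record of an IP address
ip = "8.8.8.8"
try:
    hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
    print(f"{ip} -> {hostname}")  # e.g. dns.google
except socket.herror:
    # No PTR record: the reverse lookup cannot be resolved
    print(f"{ip} has no PTR record")
```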
## Load Balancing

Load balancing lets us distribute incoming network traffic across multiple resources, ensuring high availability and reliability by sending requests only to resources that are online. This provides the flexibility to add or subtract resources as demand dictates.

![load-balancer](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-I/load-balancing/load-balancer.png)

For additional scalability and redundancy, we can try to load balance at each layer of our system:

![load-balancer-layers](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-I/load-balancing/load-balancer-layers.png)

### But why?

Modern high-traffic websites must serve hundreds of thousands, if not millions, of concurrent requests from users or clients. To cost-effectively scale to meet these high volumes, modern computing best practice generally requires adding more servers.

A load balancer can sit in front of the servers and route client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization. This ensures that no single server is overworked, which could degrade performance. If a single server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server group, the load balancer automatically starts sending requests to it.

### Workload distribution

This is the core functionality provided by a load balancer and has several common variations:

- **Host-based**: Distributes requests based on the requested hostname.
- **Path-based**: Using the entire URL to distribute requests, as opposed to just the hostname.
- **Content-based**: Inspects the message content of a request. This allows distribution based on content, such as the value of a parameter.

### Layers

Generally speaking, load balancers operate at one of two levels:

#### Network layer

This is the load balancer that works at the network's transport layer, also known as layer 4. This performs routing based on networking information such as IP addresses, and is not able to perform content-based routing. These are often dedicated hardware devices that can operate at high speed.

#### Application layer

This is the load balancer that operates at the application layer, also known as layer 7. Load balancers can read requests in their entirety and perform content-based routing. This allows the management of load based on a full understanding of traffic.

### Types

Let's look at different types of load balancers:

#### Software

Software load balancers usually are easier to deploy than hardware versions. They also tend to be more cost-effective and flexible, and they are used in conjunction with software development environments. The software approach gives us the flexibility of configuring the load balancer to our environment's specific needs. The boost in flexibility may come at the cost of having to do more work to set up the load balancer. Compared to hardware versions, which offer more of a closed-box approach, software balancers give us more freedom to make changes and upgrades.

Software load balancers are widely used and are available either as installable solutions that require configuration and management, or as a managed cloud service.

#### Hardware

As the name implies, a hardware load balancer relies on physical, on-premises hardware to distribute application and network traffic. These devices can handle a large volume of traffic, but often carry a hefty price tag and are fairly limited in terms of flexibility.

Hardware load balancers include proprietary firmware that requires maintenance and updates as new versions and security patches are released.

#### DNS

DNS load balancing is the practice of configuring a domain in the Domain Name System (DNS) such that client requests to the domain are distributed across a group of server machines.

Unfortunately, DNS load balancing has inherent problems limiting its reliability and efficiency. Most significantly, DNS does not check for server and network outages, or errors. It always returns the same set of IP addresses for a domain, even if servers are down or inaccessible.
the software approach gives us the flexibility of configuring the load balancer to our environment s specific needs the boost in flexibility may come at the cost of having to do more work to set up the load balancer compared to hardware versions which offer more of a closed box approach software balancers give us more freedom to make changes and upgrades software load balancers are widely used and are available either as installable solutions that require configuration and management or as a managed cloud service hardware as the name implies a hardware load balancer relies on physical on premises hardware to distribute application and network traffic these devices can handle a large volume of traffic but often carry a hefty price tag and are fairly limited in terms of flexibility hardware load balancers include proprietary firmware that requires maintenance and updates as new versions and security patches are released dns dns load balancing is the practice of configuring a domain in the domain name system dns such that client requests to the domain are distributed across a group of server machines unfortunately dns load balancing has inherent problems limiting its reliability and efficiency most significantly dns does not check for server and network outages or errors it always returns the same set of ip addresses for a domain even if servers are down or inaccessible routing algorithms now let s discuss commonly used routing algorithms round robin requests are distributed to application servers in rotation weighted round robin builds on the simple round robin technique to account for differing server characteristics such as compute and traffic handling capacity using weights that can be assigned via dns records by the administrator least connections a new request is sent to the server with the fewest current connections to clients the relative computing capacity of each server is factored into determining which one has the least connections least response time sends requests to the server selected by a formula that combines the fastest response time and fewest active connections least bandwidth this method measures traffic in megabits per second mbps sending client requests to the server with the least mbps of traffic hashing distributes requests based on a key we define such as the client ip address or the request url
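a minimal python sketch of three of these strategies over a hypothetical pool of backend addresses production load balancers use far more sophisticated variants such as smooth weighted round robin

```python
import itertools

# hypothetical pool of backend servers with assigned weights
weights = {"10.0.0.1": 5, "10.0.0.2": 3, "10.0.0.3": 1}
servers = list(weights)

# round robin: hand out servers in rotation
round_robin = itertools.cycle(servers)

# weighted round robin: repeat each server in proportion to its weight
weighted_round_robin = itertools.cycle(
    [s for s, w in weights.items() for _ in range(w)]
)

# least connections: track open connections and pick the least loaded server
connections = {s: 0 for s in servers}

def least_connections():
    server = min(connections, key=connections.get)
    connections[server] += 1  # the caller must decrement this when the connection closes
    return server

print(next(round_robin), next(weighted_round_robin), least_connections())
```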
advantages load balancing also plays a key role in preventing downtime other advantages of load balancing include the following scalability redundancy flexibility efficiency redundant load balancers as you must ve already guessed the load balancer itself can be a single point of failure to overcome this a second or n number of load balancers can be used in a cluster mode and if a failure is detected on the active load balancer another passive load balancer can take over which will make our system more fault tolerant redundant load balancing https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i load balancing redundant load balancer png features here are some commonly desired features of load balancers autoscaling starting up and shutting down resources in response to demand conditions sticky sessions the ability to assign the same user or device to the same resource in order to maintain the session state on the resource healthchecks the ability to determine if a resource is down or performing poorly in order to remove the resource from the load balancing pool persistence connections allowing a server to open a persistent connection with a client such as a websocket encryption handling encrypted connections such as tls and ssl certificates presenting certificates to a client and authentication of client certificates compression compression of responses caching an application layer load balancer may offer the ability to cache responses logging logging of request and response metadata can serve as an important audit trail or source for analytics data request tracing assigning each request a unique id for the purposes of logging monitoring and troubleshooting redirects the ability to redirect an incoming request based on factors such as the requested path fixed response returning a static response for a request such as an error message examples following are some of the load balancing solutions commonly used in the industry amazon elastic load balancing https aws amazon com elasticloadbalancing azure load balancing https azure microsoft com en in services load balancer gcp load balancing https cloud google com load balancing digitalocean load balancer https www digitalocean com products load balancer nginx https www nginx com haproxy http www haproxy org clustering at a high level a computer cluster is a group of two or more computers or nodes that run in parallel to achieve a common goal this allows workloads consisting of a high number of individual parallelizable tasks to be distributed among the nodes in the cluster as a result these tasks can leverage the combined memory and processing power of each computer to increase overall performance to build a computer cluster the individual nodes should be connected to a network to enable internode communication the software can then be used to join the nodes together and form a cluster it may have a shared storage device and or local storage on each node cluster https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i clustering cluster png typically at least one node is designated as the leader node and acts as the entry point to the cluster the leader node may be responsible for delegating incoming work to the other nodes and if necessary aggregating the results and returning a response to the user ideally a cluster functions as if it were a single system a user accessing the cluster should not need to know whether the system is a cluster or an individual machine furthermore a cluster should be designed to minimize latency and prevent bottlenecks in node to node communication types computer clusters can generally be categorized into three types highly available or fail over load balancing high performance computing configurations the two most commonly used high availability ha clustering configurations are active active and active passive active active active active https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i clustering active active png an active active cluster is typically made up of at least two nodes both actively running the same kind of service simultaneously the main purpose of an active active cluster is to achieve load balancing a load balancer distributes workloads across all nodes to prevent any single node from getting overloaded because there are more nodes available to serve there will also be an improvement in throughput and response times active passive active passive https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i clustering active passive png
like the active active cluster configuration an active passive cluster also consists of at least two nodes however as the name active passive implies not all nodes are going to be active for example in the case of two nodes if the first node is already active then the second node must be passive or on standby advantages four key advantages of cluster computing are as follows high availability scalability performance cost effective load balancing vs clustering load balancing shares some common traits with clustering but they are different processes clustering provides redundancy and boosts capacity and availability servers in a cluster are aware of each other and work together toward a common purpose but with load balancing servers are not aware of each other instead they react to the requests they receive from the load balancer we can employ load balancing in conjunction with clustering but it also is applicable in cases involving independent servers that share a common purpose such as to run a website business application web service or some other it resource challenges the most obvious challenge clustering presents is the increased complexity of installation and maintenance an operating system the application and its dependencies must each be installed and updated on every node this becomes even more complicated if the nodes in the cluster are not homogeneous resource utilization for each node must also be closely monitored and logs should be aggregated to ensure that the software is behaving correctly additionally storage becomes more difficult to manage a shared storage device must prevent nodes from overwriting one another and distributed data stores have to be kept in sync examples clustering is commonly used in the industry and often many technologies offer some sort of clustering mode for example containers e g kubernetes https kubernetes io amazon ecs https aws amazon com ecs databases e g cassandra https cassandra apache org index html mongodb https www mongodb com cache e g redis https redis io docs manual scaling caching there are only two hard things in computer science cache invalidation and naming things phil karlton caching https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i caching caching png a cache s primary purpose is to increase data retrieval performance by reducing the need to access the underlying slower storage layer trading off capacity for speed a cache typically stores a subset of data transiently in contrast to databases whose data is usually complete and durable caches take advantage of the locality of reference principle recently requested data is likely to be requested again caching and memory like a computer s memory a cache is a compact fast performing memory that stores data in a hierarchy of levels starting at level one and progressing from there sequentially they are labeled as l1 l2 l3 and so on a cache also gets written if requested such as when there has been an update and new content needs to be saved to the cache replacing the older content that was saved no matter whether the cache is read or written it s done one block at a time each block also has a tag that includes the location where the data was stored in the cache when data is requested from the cache a search occurs through the tags to find the specific content that s needed in level one l1 of the memory if the correct data isn t found more searches are conducted in l2 if the data isn t found there searches are continued in l3 
then l4 and so on until it has been found then it s read and loaded if the data isn t found in the cache at all then it s written into it for quick retrieval the next time cache hit and cache miss cache hit a cache hit describes the situation where content is successfully served from the cache the tags are searched in the memory rapidly and when the data is found and read it s considered a cache hit cold warm and hot caches a cache hit can also be described as cold warm or hot in each of these the speed at which the data is read is described a hot cache is an instance where data was read from the memory at the fastest possible rate this happens when the data is retrieved from l1 a cold cache is the slowest possible rate for data to be read though it s still successful so it s still considered a cache hit the data is just found lower in the memory hierarchy such as in l3 or lower a warm cache is used to describe data that s found in l2 or l3 it s not as fast as a hot cache but it s still faster than a cold cache generally calling a cache warm is used to express that it s slower and closer to a cold cache than a hot one cache miss a cache miss refers to the instance when the memory is searched and the data isn t found when this happens the content is transferred and written into the cache cache invalidation cache invalidation is a process where the computer system declares the cache entries as invalid and removes or replaces them if the data is modified it should be invalidated in the cache if not this can cause inconsistent application behavior there are three kinds of caching systems write through cache write through cache https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i caching write through cache png data is written into the cache and the corresponding database simultaneously pro fast retrieval complete data consistency between cache and storage con higher latency for write operations write around cache write around cache https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i caching write around cache png where the write goes directly to the database or permanent storage bypassing the cache pro this may reduce latency con it increases cache misses because the cache system has to read the information from the database in case of a cache miss as a result this can lead to higher read latency in the case of applications that write and re read the information quickly reads happen from slower back end storage and experience higher latency write back cache write back cache https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i caching write back cache png where the write is only done to the caching layer and the write is confirmed as soon as the write to the cache completes the cache then asynchronously syncs this write to the database pro this would lead to reduced latency and high throughput for write intensive applications con there is a risk of data loss in case the caching layer crashes we can improve this by having more than one replica acknowledging the write in the cache eviction policies following are some of the most common cache eviction policies first in first out fifo the cache evicts the first block accessed first without any regard to how often or how many times it was accessed before last in first out lifo the cache evicts the block accessed most recently first without any regard to how often or how many times it was accessed before least recently used lru discards the least recently used items first most recently used mru discards in contrast to lru the most recently used items first least frequently used lfu counts how often an item is needed those that are used least often are discarded first random replacement rr randomly selects a candidate item and discards it to make space when necessary
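to make the ideas above concrete here is a toy python sketch that combines a write through policy with lru eviction a plain dict stands in for the slower database

```python
from collections import OrderedDict

class WriteThroughLRUCache:
    """toy write through cache with lru eviction over a slower backing store."""

    def __init__(self, backing_store, capacity):
        self.store = backing_store          # stand in for the database
        self.capacity = capacity
        self.cache = OrderedDict()          # ordered from least to most recently used

    def get(self, key):
        if key in self.cache:               # cache hit
            self.cache.move_to_end(key)     # mark as most recently used
            return self.cache[key]
        value = self.store[key]             # cache miss: read from slower storage
        self._admit(key, value)
        return value

    def set(self, key, value):
        self.store[key] = value             # write through: write store and cache together
        self._admit(key, value)

    def _admit(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry

db = {"a": 1, "b": 2}
cache = WriteThroughLRUCache(db, capacity=2)
cache.get("a"); cache.get("b"); cache.set("c", 3)  # "a" is now the lru entry and gets evicted
```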
distributed cache distributed cache https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i caching distributed cache png a distributed cache is a system that pools together the random access memory ram of multiple networked computers into a single in memory data store used as a data cache to provide fast access to data while most caches are traditionally in one physical server or hardware component a distributed cache can grow beyond the memory limits of a single computer by linking together multiple computers global cache global cache https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i caching global cache png as the name suggests we will have a single shared cache that all the application nodes will use when the requested data is not found in the global cache it s the responsibility of the cache to find out the missing piece of data from the underlying data store use cases caching can have many real world use cases such as database caching content delivery network cdn domain name system dns caching api caching when not to use caching let s also look at some scenarios where we should not use a cache caching isn t helpful when it takes just as long to access the cache as it does to access the primary data store caching doesn t work as well when requests have low repetition higher randomness because caching performance comes from repeated memory access patterns caching isn t helpful when the data changes frequently as the cached version gets out of sync and the primary data store must be accessed every time it s important to note that a cache should not be used as permanent data storage they are almost always implemented in volatile memory because it is faster and thus should be considered transient advantages below are some advantages of caching improves performance reduce latency reduce load on the database reduce network cost increase read throughput examples here are some commonly used technologies for caching redis https redis io memcached https memcached org amazon elasticache https aws amazon com elasticache aerospike https aerospike com content delivery network cdn a content delivery network cdn is a geographically distributed group of servers that work together to provide fast delivery of internet content generally static files such as html css js photos and videos are served from cdn cdn map https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i content delivery network cdn map png why use a cdn content delivery network cdn increases content availability and redundancy while reducing bandwidth costs and improving security serving content from cdns can significantly improve performance as users receive content from data centers close to them and our servers do not have to serve requests that the cdn fulfills how does a cdn work cdn https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i content delivery network cdn png in a cdn the origin server contains the original
versions of the content while the edge servers are numerous and distributed across various locations around the world to minimize the distance between the visitors and the website s server a cdn stores a cached version of its content in multiple geographical locations known as edge locations each edge location contains several caching servers responsible for content delivery to visitors within its proximity once the static assets are cached on all the cdn servers for a particular location all subsequent website visitor requests for static assets will be delivered from these edge servers instead of the origin thus reducing the origin load and improving scalability for example when someone in the uk requests our website which might be hosted in the usa they will be served from the closest edge location such as the london edge location this is much quicker than having the visitor make a complete request to the origin server which will increase the latency types cdns are generally divided into two types push cdns push cdns receive new content whenever changes occur on the server we take full responsibility for providing content uploading directly to the cdn and rewriting urls to point to the cdn we can configure when content expires and when it is updated content is uploaded only when it is new or changed minimizing traffic but maximizing storage sites with a small amount of traffic or sites with content that isn t often updated work well with push cdns content is placed on the cdns once instead of being re pulled at regular intervals pull cdns in a pull cdn situation the cache is updated based on request when the client sends a request that requires static assets to be fetched from the cdn if the cdn doesn t have it then it will fetch the newly updated assets from the origin server and populate its cache with this new asset and then send this new cached asset to the user contrary to the push cdn this requires less maintenance because cache updates on cdn nodes are performed based on requests from the client to the origin server sites with heavy traffic work well with pull cdns as traffic is spread out more evenly with only recently requested content remaining on the cdn
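conceptually a pull cdn edge behaves like a read through cache a minimal python sketch where fetch_from_origin is just a stand in for a real http request to the origin server

```python
edge_cache = {}  # assets cached at a hypothetical edge location

def fetch_from_origin(path):
    # stand in for an http request to the origin server
    return f"asset at {path}"

def serve(path):
    if path not in edge_cache:              # first request: pull the asset from the origin
        edge_cache[path] = fetch_from_origin(path)
    return edge_cache[path]                 # later requests are served from the edge

serve("/static/logo.png")  # pulled from the origin on the first request
serve("/static/logo.png")  # served directly from the edge cache afterwards
```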
disadvantages as we all know good things come with extra costs so let s discuss some disadvantages of cdns extra charges it can be expensive to use a cdn especially for high traffic services restrictions some organizations and countries have blocked the domains or ip addresses of popular cdns location if most of our audience is located in a country where the cdn has no servers the data on our website may have to travel further than without using any cdn examples here are some widely used cdns amazon cloudfront https aws amazon com cloudfront google cloud cdn https cloud google com cdn cloudflare cdn https www cloudflare com cdn fastly https www fastly com products cdn proxy a proxy server is an intermediary piece of hardware software sitting between the client and the backend server it receives requests from clients and relays them to the origin servers typically proxies are used to filter requests log requests or sometimes transform requests by adding removing headers encrypting decrypting or compression types there are two types of proxies forward proxy a forward proxy often called a proxy proxy server or web proxy is a server that sits in front of a group of client machines when those computers make requests to sites and services on the internet the proxy server intercepts those requests and then communicates with web servers on behalf of those clients like a middleman forward proxy https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i proxy forward proxy png advantages here are some advantages of a forward proxy block access to certain content allows access to geo restricted https en wikipedia org wiki geo blocking content provides anonymity avoid other browsing restrictions although proxies provide the benefits of anonymity they can still track our personal information setup and maintenance of a proxy server can be costly and requires configurations reverse proxy a reverse proxy is a server that sits in front of one or more web servers intercepting requests from clients when clients send requests to the origin server of a website those requests are intercepted by the reverse proxy server the difference between a forward and reverse proxy is subtle but important a simplified way to sum it up would be to say that a forward proxy sits in front of a client and ensures that no origin server ever communicates directly with that specific client on the other hand a reverse proxy sits in front of an origin server and ensures that no client ever communicates directly with that origin server reverse proxy https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i proxy reverse proxy png introducing a reverse proxy results in increased complexity a single reverse proxy is a single point of failure configuring multiple reverse proxies i e a failover further increases complexity advantages here are some advantages of using a reverse proxy improved security caching ssl encryption load balancing scalability and flexibility load balancer vs reverse proxy wait isn t a reverse proxy similar to a load balancer well no as a load balancer is useful when we have multiple servers often load balancers route traffic to a set of servers serving the same function while reverse proxies can be useful even with just one web server or application server a reverse proxy can also act as a load balancer but not the other way around examples below are some commonly used proxy technologies nginx https www nginx com haproxy http www haproxy org traefik https doc traefik io traefik envoy https www envoyproxy io availability availability is the time a system remains operational to perform its required function in a specific period it is a simple measure of the percentage of time that a system service or machine remains operational under normal conditions the nine s of availability availability is often quantified by uptime or downtime as a percentage of time the service is available it is generally measured in the number of 9s

$$Availability = \frac{Uptime}{Uptime + Downtime}$$

if availability is 99.00% it is said to have 2 nines of availability and if it is 99.9% it is called 3 nines and so on

| availability (percent) | downtime (year) | downtime (month) | downtime (week) |
| --- | --- | --- | --- |
| 90% (one nine) | 36.53 days | 72 hours | 16.8 hours |
| 99% (two nines) | 3.65 days | 7.20 hours | 1.68 hours |
| 99.9% (three nines) | 8.77 hours | 43.8 minutes | 10.1 minutes |
| 99.99% (four nines) | 52.6 minutes | 4.32 minutes | 1.01 minutes |
| 99.999% (five nines) | 5.25 minutes | 25.9 seconds | 6.05 seconds |
| 99.9999% (six nines) | 31.56 seconds | 2.59 seconds | 604.8 milliseconds |
| 99.99999% (seven nines) | 3.15 seconds | 263 milliseconds | 60.5 milliseconds |
| 99.999999% (eight nines) | 315.6 milliseconds | 26.3 milliseconds | 6 milliseconds |
| 99.9999999% (nine nines) | 31.6 milliseconds | 2.6 milliseconds | 0.6 milliseconds |
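the numbers in the table above are easy to verify a small python sketch assuming a 365.25 day year

```python
HOURS_PER_YEAR = 365.25 * 24

def downtime_per_year_hours(availability_percent):
    # the fraction of the year the service is allowed to be down
    return (1 - availability_percent / 100) * HOURS_PER_YEAR

print(downtime_per_year_hours(99.9))        # ~8.77 hours (three nines)
print(downtime_per_year_hours(99.99) * 60)  # ~52.6 minutes (four nines)
```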
availability in sequence vs parallel if a service consists of multiple components prone to failure the service s overall availability depends on whether the components are in sequence or in parallel sequence overall availability decreases when two components are in sequence

$$Availability \space (Total) = Availability \space (Foo) \times Availability \space (Bar)$$

for example if both foo and bar each had 99.9% availability their total availability in sequence would be 99.8% parallel overall availability increases when two components are in parallel

$$Availability \space (Total) = 1 - (1 - Availability \space (Foo)) \times (1 - Availability \space (Bar))$$

for example if both foo and bar each had 99.9% availability their total availability in parallel would be 99.9999% availability vs reliability if a system is reliable it is available however if it is available it is not necessarily reliable in other words high reliability contributes to high availability but it is possible to achieve high availability even with an unreliable system high availability vs fault tolerance both high availability and fault tolerance apply to methods for providing high uptime levels however they accomplish the objective differently a fault tolerant system has no service interruption but a significantly higher cost while a highly available system has minimal service interruption fault tolerance requires full hardware redundancy as if the main system fails with no loss in uptime another system should take over scalability scalability is the measure of how well a system responds to changes by adding or removing resources to meet demands scalability https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter i scalability scalability png let s discuss different types of scaling vertical scaling vertical scaling also known as scaling up expands a system s scalability by adding more power to an existing machine in other words vertical scaling refers to improving an application s capability via increasing hardware capacity advantages simple to implement easier to manage data consistent disadvantages risk of high downtime harder to upgrade can be a single point of failure horizontal scaling horizontal scaling also known as scaling out expands a system s scale by adding more machines it improves the performance of the server by adding more instances to the existing pool of servers allowing the load to be distributed more evenly advantages increased redundancy better fault tolerance flexible and efficient easier to upgrade disadvantages increased complexity data inconsistency increased load on downstream services storage storage is a mechanism that enables a system to retain data either temporarily or permanently this topic is mostly skipped over in the context of system design however it is important to have a basic understanding of some common types of storage techniques that can help us fine tune our storage components let s discuss some important storage concepts raid raid redundant array of independent disks is a way of storing the same data on multiple hard disks or solid state drives ssds to protect data in the case of a drive failure there are different raid levels however and not all have the goal of providing redundancy let s discuss some commonly used raid levels raid 0 also known as striping data is split evenly across all the drives in the array raid 1 also known as mirroring at least two drives contain an exact copy of a set of data if a drive fails the others will still work raid 5 striping with parity requires the use of at least 3 drives striping the data across
multiple drives like raid 0 but also has a parity distributed across the drives raid 6 striping with double parity raid 6 is like raid 5 but the parity data are written to two drives raid 10 combines striping plus mirroring from raid 0 and raid 1 it provides security by mirroring all data on secondary drives while using striping across each set of drives to speed up data transfers comparison let s compare all the features of different raid levels

| features | raid 0 | raid 1 | raid 5 | raid 6 | raid 10 |
| --- | --- | --- | --- | --- | --- |
| description | striping | mirroring | striping with parity | striping with double parity | striping and mirroring |
| minimum disks | 2 | 2 | 3 | 4 | 4 |
| read performance | high | high | high | high | high |
| write performance | high | medium | high | high | medium |
| cost | low | high | low | low | high |
| fault tolerance | none | single drive failure | single drive failure | two drive failure | up to one disk failure in each sub array |
| capacity utilization | 100% | 50% | 67%-94% | 50%-80% | 50% |

volumes volume is a fixed amount of storage on a disk or tape the term volume is often used as a synonym for the storage itself but it is possible for a single disk to contain more than one volume or a volume to span more than one disk file storage file storage is a solution to store data as files and present it to its final users as a hierarchical directory structure the main advantage is to provide a user friendly solution to store and retrieve files to locate a file in file storage the complete path of the file is required it is economical and easily structured and is usually found on hard drives which means that they appear exactly the same for the user and on the hard drive example amazon efs https aws amazon com efs azure files https azure microsoft com en in services storage files google cloud filestore https cloud google com filestore etc block storage block storage divides data into blocks chunks and stores them as separate pieces each block of data is given a unique identifier which allows a storage system to place the smaller pieces of data wherever it is most convenient block storage also decouples data from user environments allowing that data to be spread across multiple environments this creates multiple paths to the data and allows the user to retrieve it quickly when a user or application requests data from a block storage system the underlying storage system reassembles the data blocks and presents the data to the user or application example amazon ebs https aws amazon com ebs object storage object storage which is also known as object based storage breaks data files up into pieces called objects it then stores those objects in a single repository which can be spread out across multiple networked systems example amazon s3 https aws amazon com s3 azure blob storage https azure microsoft com en in services storage blobs google cloud storage https cloud google com storage etc nas a nas network attached storage is a storage device connected to a network that allows storage and retrieval of data from a central location for authorized network users nas devices are flexible meaning that as we need additional storage we can add to what we have it s faster less expensive and provides all the benefits of a public cloud on site giving us complete control hdfs the hadoop distributed file system hdfs is a distributed file system designed to run on commodity hardware hdfs is highly fault tolerant and is designed to be deployed on low cost hardware hdfs provides high throughput access to application data and is suitable for applications that have large data sets it has many similarities with
existing distributed file systems hdfs is designed to reliably store very large files across machines in a large cluster it stores each file as a sequence of blocks all blocks in a file except the last block are the same size the blocks of a file are replicated for fault tolerance databases and dbms what is a database a database is an organized collection of structured information or data typically stored electronically in a computer system a database is usually controlled by a database management system dbms together the data and the dbms along with the applications that are associated with them are referred to as a database system often shortened to just database what is dbms a database typically requires a comprehensive database software program known as a database management system dbms a dbms serves as an interface between the database and its end users or programs allowing users to retrieve update and manage how the information is organized and optimized a dbms also facilitates oversight and control of databases enabling a variety of administrative operations such as performance monitoring tuning and backup and recovery components here are some common components found across different databases schema the role of a schema is to define the shape of a data structure and specify what kinds of data can go where schemas can be strictly enforced across the entire database loosely enforced on part of the database or they might not exist at all table each table contains various columns just like in a spreadsheet a table can have as few as two columns and upwards of a hundred columns depending upon the kind of information being put in the table column a column contains a set of data values of a particular type one value for each row of the database a column may contain text values numbers enums timestamps etc row data in a table is recorded in rows there can be thousands or millions of rows in a table having any particular information types database types https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter ii databases and dbms database types png below are different types of databases sql https karanpratapsingh com courses system design sql databases nosql https karanpratapsingh com courses system design nosql databases document key value graph timeseries wide column multi model sql and nosql databases are broad topics and will be discussed separately in sql databases https karanpratapsingh com courses system design sql databases and nosql databases https karanpratapsingh com courses system design nosql databases learn how they compare to each other in sql vs nosql databases https karanpratapsingh com courses system design sql vs nosql databases challenges some common challenges faced while running databases at scale absorbing significant increases in data volume the explosion of data coming in from sensors connected machines and dozens of other sources ensuring data security data breaches are happening everywhere these days it s more important than ever to ensure that data is secure but also easily accessible to users keeping up with demand companies need real time access to their data to support timely decision making and to take advantage of new opportunities managing and maintaining the database and infrastructure as databases become more complex and data volumes grow companies are faced with the expense of hiring additional talent to manage their databases removing limits on scalability a business needs to grow if it s
going to survive and its data management must grow along with it but it s very difficult to predict how much capacity the company will need particularly with on premises databases ensuring data residency data sovereignty or latency requirements some organizations have use cases that are better suited to run on premises in those cases engineered systems that are pre configured and pre optimized for running the database are ideal sql databases a sql or relational database is a collection of data items with pre defined relationships between them these items are organized as a set of tables with columns and rows tables are used to hold information about the objects to be represented in the database each column in a table holds a certain kind of data and a field stores the actual value of an attribute the rows in the table represent a collection of related values of one object or entity each row in a table could be marked with a unique identifier called a primary key and rows among multiple tables can be made related using foreign keys this data can be accessed in many different ways without re organizing the database tables themselves sql databases usually follow the acid consistency model https karanpratapsingh com courses system design acid and base consistency models acid materialized views a materialized view is a pre computed data set derived from a query specification and stored for later use because the data is pre computed querying a materialized view is faster than executing a query against the base table of the view this performance difference can be significant when a query is run frequently or is sufficiently complex it also enables data subsetting and improves the performance of complex queries that run on large data sets which reduces network loads there are other uses of materialized views but they are mostly used for performance and replication n 1 query problem the n 1 query problem happens when the data access layer executes n additional sql statements to fetch the same data that could have been retrieved when executing the primary sql query the larger the value of n the more queries will be executed the larger the performance impact this is commonly seen in graphql and orm object relational mapping tools and can be addressed by optimizing the sql query or using a dataloader that batches consecutive requests and makes a single data request under the hood
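a self contained python demonstration of the problem and the fix using the built in sqlite3 module with throwaway tables

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table teams (id integer primary key, name text);
    create table users (id integer primary key, name text, team_id integer);
    insert into teams values (1, 'a'), (2, 'b');
    insert into users values (1, 'peter', 1), (2, 'brian', 2), (3, 'hailey', 2);
""")

# n+1: one query for the users, then one additional query per user for their team
users = conn.execute("select name, team_id from users").fetchall()
for name, team_id in users:
    team = conn.execute("select name from teams where id = ?", (team_id,)).fetchone()
    print(name, team[0])

# fix: retrieve everything with a single join
for name, team in conn.execute(
    "select users.name, teams.name from users join teams on users.team_id = teams.id"
):
    print(name, team)
```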
advantages let s look at some advantages of using relational databases simple and accurate accessibility data consistency flexibility disadvantages below are the disadvantages of relational databases expensive to maintain difficult schema evolution performance hits join denormalization etc difficult to scale due to poor horizontal scalability examples here are some commonly used relational databases postgresql https www postgresql org mysql https www mysql com mariadb https mariadb org amazon aurora https aws amazon com rds aurora nosql databases nosql is a broad category that includes any database that doesn t use sql as its primary data access language these types of databases are also sometimes referred to as non relational databases unlike in relational databases data in a nosql database doesn t have to conform to a pre defined schema nosql databases follow the base consistency model https karanpratapsingh com courses system design acid and base consistency models base below are different types of nosql databases document a document database also known as a document oriented database or a document store is a database that stores information in documents they are general purpose databases that serve a variety of use cases for both transactional and analytical applications advantages intuitive and flexible easy horizontal scaling schemaless disadvantages schemaless non relational examples mongodb https www mongodb com amazon documentdb https aws amazon com documentdb couchdb https couchdb apache org key value one of the simplest types of nosql databases key value databases save data as a group of key value pairs made up of two data items each they re also sometimes referred to as a key value store advantages simple and performant highly scalable for high volumes of traffic session management optimized lookups disadvantages basic crud values can t be filtered lacks indexing and scanning capabilities not optimized for complex queries examples redis https redis io memcached https memcached org amazon dynamodb https aws amazon com dynamodb aerospike https aerospike com graph a graph database is a nosql database that uses graph structures for semantic queries with nodes edges and properties to represent and store data instead of tables or documents the graph relates the data items in the store to a collection of nodes and edges the edges representing the relationships between the nodes the relationships allow data in the store to be linked together directly and in many cases retrieved with one operation advantages query speed agile and flexible explicit data representation disadvantages complex no standardized query language use cases fraud detection recommendation engines social networks network mapping examples neo4j https neo4j com arangodb https www arangodb com amazon neptune https aws amazon com neptune janusgraph https janusgraph org time series a time series database is a database optimized for time stamped or time series data advantages fast insertion and retrieval efficient data storage use cases iot data metrics analysis application monitoring understand financial trends examples influxdb https www influxdata com apache druid https druid apache org wide column wide column databases also known as wide column stores are schema agnostic data is stored in column families rather than in rows and columns advantages highly scalable can handle petabytes of data ideal for real time big data applications disadvantages expensive increased write time use cases business analytics attribute based data storage examples bigtable https cloud google com bigtable apache cassandra https cassandra apache org scylladb https www scylladb com multi model multi model databases combine different database models i e relational graph key value document etc into a single integrated backend this means they can accommodate various data types indexes queries and store data in more than one model advantages flexibility suitable for complex projects data consistent disadvantages complex less mature examples arangodb https www arangodb com azure cosmos db https azure microsoft com en in services cosmos db couchbase https www couchbase com sql vs nosql databases in the world of databases there are two main types of solutions sql relational and nosql non relational databases both of them differ in the way they were built the kind of information they store and how they store it relational databases are structured and have predefined schemas while non relational databases are unstructured distributed and have a dynamic schema high level differences here are some high level differences between sql and nosql storage sql stores data in tables
where each row represents an entity and each column represents a data point about that entity nosql databases have different data storage models such as key value graph document etc schema in sql each record conforms to a fixed schema meaning the columns must be decided and chosen before data entry and each row must have data for each column the schema can be altered later but it involves modifying the database using migrations whereas in nosql schemas are dynamic fields can be added on the fly and each record or equivalent doesn t have to contain data for each field querying sql databases use sql structured query language for defining and manipulating the data which is very powerful in a nosql database queries are focused on a collection of documents different databases have different syntax for querying scalability in most common situations sql databases are vertically scalable which can get very expensive it is possible to scale a relational database across multiple servers but this is a challenging and time consuming process on the other hand nosql databases are horizontally scalable meaning we can add more servers easily to our nosql database infrastructure to handle large traffic any cheap commodity hardware or cloud instances can host nosql databases thus making it a lot more cost effective than vertical scaling a lot of nosql technologies also distribute data across servers automatically reliability the vast majority of relational databases are acid compliant so when it comes to data reliability and a safe guarantee of performing transactions sql databases are still the better bet most of the nosql solutions sacrifice acid compliance for performance and scalability reasons as always we should pick the technology that fits the requirements better so let s look at some reasons for picking sql or nosql based database for sql structured data with strict schema relational data need for complex joins transactions lookups by index are very fast for nosql dynamic or flexible schema non relational data no need for complex joins very data intensive workload very high throughput for iops database replication replication is a process that involves sharing information to ensure consistency between redundant resources such as multiple databases to improve reliability fault tolerance or accessibility master slave replication the master serves reads and writes replicating writes to one or more slaves which serve only reads slaves can also replicate additional slaves in a tree like fashion if the master goes offline the system can continue to operate in read only mode until a slave is promoted to a master or a new master is provisioned master slave replication https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter ii database replication master slave replication png advantages backups of the entire database with relatively no impact on the master applications can read from the slave s without impacting the master slaves can be taken offline and synced back to the master without any downtime disadvantages replication adds more hardware and additional complexity downtime and possibly loss of data when a master fails all writes also have to be made to the master in a master slave architecture the more read slaves the more we have to replicate which will increase replication lag
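as a sketch of how an application typically consumes this topology writes are routed to the master and reads to a randomly chosen slave the master and slaves here are assumed to be connection like objects exposing an execute method and reads may be slightly stale due to replication lag

```python
import random

class ReplicatedDatabase:
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves

    def write(self, query, params=()):
        # all writes must go to the master, which replicates them to the slaves
        return self.master.execute(query, params)

    def read(self, query, params=()):
        # reads are spread across the slaves and may lag slightly behind the master
        return random.choice(self.slaves).execute(query, params)
```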
master master replication both masters serve reads writes and coordinate with each other if either master goes down the system can continue to operate with both reads and writes master master replication https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter ii database replication master master replication png advantages applications can read from both masters distributes write load across both master nodes simple automatic and quick failover disadvantages not as simple as master slave to configure and deploy either loosely consistent or have increased write latency due to synchronization conflict resolution comes into play as more write nodes are added and as latency increases synchronous vs asynchronous replication the primary difference between synchronous and asynchronous replication is how the data is written to the replica in synchronous replication data is written to primary storage and the replica simultaneously as such the primary copy and the replica should always remain synchronized in contrast asynchronous replication copies the data to the replica after the data is already written to the primary storage although the replication process may occur in near real time it is more common for replication to occur on a scheduled basis and it is more cost effective indexes indexes are well known when it comes to databases they are used to improve the speed of data retrieval operations on the data store an index makes the trade offs of increased storage overhead and slower writes since we not only have to write the data but also have to update the index for the benefit of faster reads indexes are used to quickly locate data without having to examine every row in a database table indexes can be created using one or more columns of a database table providing the basis for both rapid random lookups and efficient access to ordered records indexes https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter ii indexes indexes png an index is a data structure that can be perceived as a table of contents that points us to the location where actual data lives so when we create an index on a column of a table we store that column and a pointer to the whole row in the index indexes are also used to create different views of the same data for large data sets this is an excellent way to specify different filters or sorting schemes without resorting to creating multiple additional copies of the data
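the effect of an index is easy to observe with sqlite s query planner before the index is created the lookup scans the whole table afterwards it searches the index the table and column names below are made up for the example

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table users (id integer primary key, email text, name text)")

query = "explain query plan select * from users where email = ?"
print(conn.execute(query, ("a@example.com",)).fetchall())  # plan shows a full table scan

# create an index on the email column to enable fast lookups by email
conn.execute("create index idx_users_email on users (email)")
print(conn.execute(query, ("a@example.com",)).fetchall())  # plan now searches idx_users_email
```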
one quality that database indexes can have is that they can be dense or sparse each of these index qualities comes with its own trade offs let s look at how each index type would work dense index in a dense index an index record is created for every row of the table records can be located directly as each record of the index holds the search key value and the pointer to the actual record dense index https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter ii indexes dense index png dense indexes require more maintenance than sparse indexes at write time since every row must have an entry the database must maintain the index on inserts updates and deletes having an entry for every row also means that dense indexes will require more memory the benefit of a dense index is that values can be quickly found with just a binary search dense indexes also do not impose any ordering requirements on the data sparse index in a sparse index records are created only for some of the records sparse index https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter ii indexes sparse index png sparse indexes require less maintenance than dense indexes at write time since they only contain a subset of the values this lighter maintenance burden means that inserts updates and deletes will be faster having fewer entries also means that the index will use less memory finding data is slower since a scan across the page typically follows the binary search sparse indexes are also optional when working with ordered data normalization and denormalization terms before we go any further let s look at some commonly used terms in normalization and denormalization keys primary key column or group of columns that can be used to uniquely identify every row of the table composite key a primary key made up of multiple columns super key set of all keys that can uniquely identify all the rows present in a table candidate key attributes that identify rows uniquely in a table foreign key it is a reference to a primary key of another table alternate key keys that are not primary keys are known as alternate keys surrogate key a system generated value that uniquely identifies each entry in a table when no other column was able to hold properties of a primary key dependencies partial dependency occurs when the primary key determines some other attributes functional dependency it is a relationship that exists between two attributes typically between the primary key and non key attribute within a table transitive functional dependency occurs when some non key attribute determines some other attribute anomalies database anomaly happens when there is a flaw in the database due to incorrect planning or storing everything in a flat database this is generally addressed by the process of normalization there are three types of database anomalies insertion anomaly occurs when we are not able to insert certain attributes in the database without the presence of other attributes update anomaly occurs in case of data redundancy and partial update in other words a correct update of the database needs other actions such as addition deletion or both deletion anomaly occurs when deletion of some data requires deletion of other data example let s consider the following table which is not normalized

| id | name | role | team |
| --- | --- | --- | --- |
| 1 | peter | software engineer | a |
| 2 | brian | devops engineer | b |
| 3 | hailey | product manager | c |
| 4 | hailey | product manager | c |
| 5 | steve | frontend engineer | d |

let s imagine we hired a new person john but they might not be assigned a team immediately this will cause an insertion anomaly as the team attribute is not yet present next let s say hailey from team c got promoted to reflect that change in the database we will need to update 2 rows to maintain consistency which can cause an update anomaly finally we would like to remove team b but to do that we will also need to remove additional information such as name and role this is an example of a deletion anomaly normalization normalization is the process of organizing data in a database this includes creating tables and establishing relationships between those tables according to rules designed both to protect the data and to make the database more flexible by eliminating redundancy and inconsistent dependency why do we need normalization the goal of normalization is to eliminate redundant data and ensure data is consistent a fully normalized database allows its structure to be extended to accommodate new types of data without changing the existing structure too much as a result applications interacting with the database are minimally affected
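applied to the flawed table from the example above a normalized design moves team data into its own table and references it by key a sketch using sqlite

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- team data now lives in exactly one place
    create table teams (id integer primary key, name text not null);
    create table employees (
        id integer primary key,
        name text not null,
        role text not null,
        team_id integer references teams (id)  -- may be null until a team is assigned
    );
""")
```

with this structure john can be inserted with a null team_id (no insertion anomaly), renaming a team touches a single row (no update anomaly), and removing a team no longer requires deleting the employees on it (no deletion anomaly)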
normal forms normal forms are a series of guidelines to ensure that the database is normalized let s discuss some essential normal forms 1nf for a table to be in the first normal form 1nf it should follow the following rules repeating groups are not permitted identify each set of related data with a primary key set of related data should have a separate table mixing data types in the same column is not permitted 2nf for a table to be in the second normal form 2nf it should follow the following rules satisfies the first normal form 1nf should not have any partial dependency 3nf for a table to be in the third normal form 3nf it should follow the following rules satisfies the second normal form 2nf transitive functional dependencies are not permitted bcnf boyce codd normal form or bcnf is a slightly stronger version of the third normal form 3nf used to address certain types of anomalies not dealt with by 3nf as originally defined sometimes it is also known as the 3 5 normal form 3 5nf for a table to be in the boyce codd normal form bcnf it should follow the following rules satisfies the third normal form 3nf for every functional dependency x → y x should be a super key there are more normal forms such as 4nf 5nf and 6nf but we won t discuss them here check out this amazing video https www youtube com watch v gfqaeyec8 8 that goes into detail in a relational database a relation is often described as normalized if it meets the third normal form most 3nf relations are free of insertion update and deletion anomalies as with many formal rules and specifications real world scenarios do not always allow for perfect compliance if you decide to violate one of the first three rules of normalization make sure that your application anticipates any problems that could occur such as redundant data and inconsistent dependencies advantages here are some advantages of normalization reduces data redundancy better data design increases data consistency enforces referential integrity disadvantages let s look at some disadvantages of normalization data design is complex slower performance maintenance overhead require more joins denormalization denormalization is a database optimization technique in which we add redundant data to one or more tables this can help us avoid costly joins in a relational database it attempts to improve read performance at the expense of some write performance redundant copies of the data are written in multiple tables to avoid expensive joins once data becomes distributed with techniques such as federation and sharding managing joins across the network further increases complexity denormalization might circumvent the need for such complex joins note denormalization does not mean reversing normalization advantages let s look at some advantages of denormalization retrieving data is faster writing queries is easier reduction in number of tables convenient to manage disadvantages below are some disadvantages of denormalization expensive inserts and updates increases complexity of database design increases data redundancy more chances of data inconsistency acid and base consistency models let s discuss the acid and base consistency models acid the term acid stands for atomicity consistency isolation and durability acid properties are used for maintaining data integrity during transaction processing in order to maintain consistency before and after a transaction relational databases follow acid properties let us understand these terms atomic all operations in a transaction succeed or every operation is rolled back consistent on the completion of a transaction the database is structurally sound isolated transactions do not contend with one another contentious access to data is moderated by the database so that transactions appear to run sequentially durable once the transaction has been completed and the writes and updates have been written to the disk it will remain in the system even if a system failure occurs
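atomicity is easy to demonstrate with the built in sqlite3 module whose connection object can act as a transaction context manager committing on success and rolling back on an exception

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table accounts (id integer primary key, balance integer)")
conn.execute("insert into accounts values (1, 100), (2, 0)")

try:
    with conn:  # commits both updates together, or rolls both back on an exception
        conn.execute("update accounts set balance = balance - 50 where id = 1")
        conn.execute("update accounts set balance = balance + 50 where id = 2")
except sqlite3.Error:
    pass  # the transfer never happens partially

print(conn.execute("select balance from accounts order by id").fetchall())  # [(50,), (50,)]
```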
base with the increasing amount of data and high availability requirements the approach to database design has also changed dramatically to increase the ability to scale and at the same time be highly available we move the logic from the database to separate servers in this way the database becomes more independent and focused on the actual process of storing data in the nosql database world acid transactions are less common as some databases have loosened the requirements for immediate consistency data freshness and accuracy in order to gain other benefits like scale and resilience base properties are much looser than acid guarantees but there isn t a direct one for one mapping between the two consistency models let us understand these terms basic availability the database appears to work most of the time soft state stores don t have to be write consistent nor do different replicas have to be mutually consistent all the time eventual consistency the data might not be consistent immediately but eventually it becomes consistent reads in the system are still possible even though they may not give the correct response due to inconsistency acid vs base trade offs there s no right answer to whether our application needs an acid or a base consistency model both the models have been designed to satisfy different requirements while choosing a database we need to keep the properties of both the models and the requirements of our application in mind given base s loose consistency developers need to be more knowledgeable and rigorous about consistent data if they choose a base store for their application it s essential to be familiar with the base behavior of the chosen database and work within those constraints on the other hand planning around base limitations can sometimes be a major disadvantage when compared to the simplicity of acid transactions a fully acid database is the perfect fit for use cases where data reliability and consistency are essential cap theorem cap theorem states that a distributed system can deliver only two of the three desired characteristics consistency availability and partition tolerance cap cap theorem https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter ii cap theorem cap theorem png let s take a detailed look at the three distributed system characteristics to which the cap theorem refers consistency consistency means that all clients see the same data at the same time no matter which node they connect to for this to happen whenever data is written to one node it must be instantly forwarded or replicated across all the nodes in the system before the write is deemed successful availability availability means that any client making a request for data gets a response even if one or more nodes are down partition tolerance partition tolerance means the system continues to work despite message loss or partial failure a system that is partition tolerant can sustain any amount of network failure that doesn t result in a failure of the entire network data is sufficiently replicated across combinations of nodes and networks
## CAP Theorem

CAP theorem states that a distributed system can deliver only two of the three desired characteristics: Consistency, Availability, and Partition tolerance (CAP).

![cap-theorem](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/cap-theorem/cap-theorem.png)

Let's take a detailed look at the three distributed system characteristics to which the CAP theorem refers.

### Consistency

Consistency means that all clients see the same data at the same time, no matter which node they connect to. For this to happen, whenever data is written to one node, it must be instantly forwarded or replicated across all the nodes in the system before the write is deemed successful.

### Availability

Availability means that any client making a request for data gets a response, even if one or more nodes are down.

### Partition tolerance

Partition tolerance means the system continues to work despite message loss or partial failure. A system that is partition-tolerant can sustain any amount of network failure that doesn't result in a failure of the entire network. Data is sufficiently replicated across combinations of nodes and networks to keep the system up through intermittent outages.

### Consistency-Availability tradeoff

We live in a physical world and can't guarantee the stability of a network, so distributed databases must choose partition tolerance (P). This implies a tradeoff between consistency (C) and availability (A).

#### CA database

A CA database delivers consistency and availability across all nodes. It can't do this if there is a partition between any two nodes in the system, and therefore can't deliver fault tolerance.

**Example**: [PostgreSQL](https://www.postgresql.org), [MariaDB](https://mariadb.org).

#### CP database

A CP database delivers consistency and partition tolerance at the expense of availability. When a partition occurs between any two nodes, the system has to shut down the non-consistent node until the partition is resolved.

**Example**: [MongoDB](https://www.mongodb.com), [Apache HBase](https://hbase.apache.org).

#### AP database

An AP database delivers availability and partition tolerance at the expense of consistency. When a partition occurs, all nodes remain available, but those at the wrong end of a partition might return an older version of data than others. When the partition is resolved, the AP databases typically re-sync the nodes to repair all inconsistencies in the system.

**Example**: [Apache Cassandra](https://cassandra.apache.org), [CouchDB](https://couchdb.apache.org).

## PACELC Theorem

The PACELC theorem is an extension of the CAP theorem. The CAP theorem states that in the case of network partitioning (P) in a distributed system, one has to choose between availability (A) and consistency (C).

PACELC extends the CAP theorem by introducing latency (L) as an additional attribute of a distributed system. The theorem states that else (E), even when the system is running normally in the absence of partitions, one has to choose between latency (L) and consistency (C).

The PACELC theorem was first described by [Daniel J. Abadi](https://scholar.google.com/citations?user=zxeef2gaaaaj).

![pacelc-theorem](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/pacelc-theorem/pacelc-theorem.png)

PACELC theorem was developed to address a key limitation of the CAP theorem: it makes no provision for performance or latency. For example, according to the CAP theorem, a database can be considered available if a query returns a response after 30 days. Obviously, such latency would be unacceptable for any real-world application.

## Transactions

A transaction is a series of database operations that are considered to be a _"single unit of work"_. The operations in a transaction either all succeed, or they all fail. In this way, the notion of a transaction supports data integrity when part of a system fails. Not all databases choose to support ACID transactions, usually because they are prioritizing other optimizations that are hard or theoretically impossible to implement together.

_Usually, relational databases support ACID transactions, and non-relational databases don't (there are exceptions)._

### States

A transaction in a database can be in one of the following states:

![transaction-states](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/transactions/transaction-states.png)

#### Active

In this state, the transaction is being executed. This is the initial state of every transaction.

#### Partially Committed

When a transaction executes its final operation, it is said to be in a partially committed state.

#### Committed

If a transaction executes all its operations successfully, it is said to be committed. All its effects are now permanently established on the database system.

#### Failed

The transaction is said to be in a failed state if any of the checks made by the database recovery system fails. A failed transaction can no longer proceed further.

#### Aborted

If any of the checks fail and the transaction has reached a failed state, then the recovery manager rolls back all its write operations on the database to bring the database back to its original state, where it was prior to the execution of the transaction. Transactions in this state are aborted.

The database recovery module can select one of the two operations after a transaction aborts:

- Restart the transaction
- Kill the transaction

#### Terminated

If there isn't any rollback or the transaction comes from the _committed state_, then the system is consistent and ready for a new transaction, and the old transaction is terminated.

## Distributed Transactions

A distributed transaction is a set of operations on data that is performed across two or more databases. It is typically coordinated across separate nodes connected by a network, but may also span multiple databases on a single server.

### Why do we need distributed transactions?

Unlike an ACID transaction on a single database, a distributed transaction involves altering data on multiple databases. Consequently, distributed transaction processing is more complicated, because the database must coordinate the committing or rollback of the changes in a transaction as a self-contained unit.

In other words, all the nodes must commit, or all must abort and the entire transaction rolls back. This is why we need distributed transactions.

Now, let's look at some popular solutions for distributed transactions:

### Two-Phase Commit

![two-phase-commit](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/distributed-transactions/two-phase-commit.png)

The two-phase commit (2PC) protocol is a distributed algorithm that coordinates all the processes that participate in a distributed transaction on whether to commit or abort (roll back) the transaction.

This protocol achieves its goal even in many cases of temporary system failure and is thus widely used. However, it is not resilient to all possible failure configurations, and in rare cases, manual intervention is needed to remedy an outcome. This protocol requires a coordinator node, which basically coordinates and oversees the transaction across different nodes. The coordinator tries to establish the consensus among a set of processes in two phases, hence the name.

#### Phases

Two-phase commit consists of the following phases:

**Prepare phase**

The prepare phase involves the coordinator node collecting consensus from each of the participant nodes. The transaction will be aborted unless each of the nodes responds that they're _prepared_.

**Commit phase**

If all participants respond to the coordinator that they are _prepared_, then the coordinator asks all the nodes to commit the transaction. If a failure occurs, the transaction will be rolled back.

#### Problems

Following problems may arise in the two-phase commit protocol:

- What if one of the nodes crashes?
- What if the coordinator itself crashes?
- It is a blocking protocol.
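A toy, in-process illustration of the two phases described above. This is a sketch, not a real implementation: production protocols also need timeouts, durable coordinator logs, and crash recovery, and the participant names here are hypothetical.

```python
# Toy two-phase commit: the coordinator asks every participant to vote
# in the prepare phase and only issues a global commit if all vote yes;
# otherwise everyone rolls back.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):  # phase 1: vote
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):   # phase 2: apply the transaction
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

def two_phase_commit(participants):
    if all(p.prepare() for p in participants):   # prepare phase
        for p in participants:                   # commit phase
            p.commit()
        return "committed"
    for p in participants:                       # any "no" vote aborts all
        p.rollback()
    return "aborted"

print(two_phase_commit([Participant("orders"), Participant("payments")]))         # committed
print(two_phase_commit([Participant("orders"), Participant("payments", False)]))  # aborted
```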
### Three-Phase Commit

![three-phase-commit](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/distributed-transactions/three-phase-commit.png)

Three-phase commit (3PC) is an extension of the two-phase commit where the commit phase is split into two phases. This helps with the blocking problem that occurs in the two-phase commit protocol.

#### Phases

Three-phase commit consists of the following phases:

**Prepare phase**

This phase is the same as the two-phase commit.

**Pre-commit phase**

The coordinator issues the pre-commit message, and all the participating nodes must acknowledge it. If a participant fails to receive this message in time, then the transaction is aborted.

**Commit phase**

This step is also similar to the two-phase commit protocol.

#### Why is the pre-commit phase helpful?

The pre-commit phase accomplishes the following:

- If the participant nodes are found in this phase, that means that _every_ participant has completed the first phase. The completion of the prepare phase is guaranteed.
- Every phase can now time out and avoid indefinite waits.

### Sagas

![sagas](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/distributed-transactions/sagas.png)

A saga is a sequence of local transactions. Each local transaction updates the database and publishes a message or event to trigger the next local transaction in the saga. If a local transaction fails because it violates a business rule, then the saga executes a series of compensating transactions that undo the changes that were made by the preceding local transactions.

#### Coordination

There are two common implementation approaches:

- **Choreography**: Each local transaction publishes domain events that trigger local transactions in other services.
- **Orchestration**: An orchestrator tells the participants what local transactions to execute.

#### Problems

- The saga pattern is particularly hard to debug.
- There's a risk of cyclic dependency between saga participants.
- Lack of participant data isolation imposes durability challenges.
- Testing is difficult because all services must be running to simulate a transaction.
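Here is a minimal orchestration-style saga sketch in Python. The order/stock/payment steps and their compensations are invented for illustration; the point is that a failure triggers the compensating transactions for the completed steps, in reverse order.

```python
# Toy orchestrated saga: run local transactions in order; if one fails,
# run the compensating actions for the steps that already succeeded.

def create_order(ctx):   ctx["order"] = "created"
def cancel_order(ctx):   ctx["order"] = "cancelled"
def reserve_stock(ctx):  ctx["stock"] = "reserved"
def release_stock(ctx):  ctx["stock"] = "released"
def charge_payment(ctx): raise RuntimeError("card declined")  # simulated failure
def refund_payment(ctx): ctx["payment"] = "refunded"

SAGA = [
    (create_order, cancel_order),
    (reserve_stock, release_stock),
    (charge_payment, refund_payment),
]

def run_saga(steps):
    ctx, done = {}, []
    try:
        for action, compensate in steps:
            action(ctx)
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):  # undo completed steps in reverse
            compensate(ctx)
    return ctx

print(run_saga(SAGA))  # {'order': 'cancelled', 'stock': 'released'}
```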
## Sharding

Before we discuss sharding, let's talk about data partitioning.

### Data Partitioning

Data partitioning is a technique to break up a database into many smaller parts. It is the process of splitting up a database or a table across multiple machines to improve the manageability, performance, and availability of a database.

#### Methods

There are many different ways one could use to decide how to break up an application database into multiple smaller DBs. Below are two of the most popular methods used by various large-scale applications:

**Horizontal Partitioning (or Sharding)**

In this strategy, we split the table data horizontally based on the range of values defined by the _partition key_. It is also referred to as **database sharding**.

**Vertical Partitioning**

In vertical partitioning, we partition the data vertically based on columns. We divide tables into relatively smaller tables with few elements, and each part is present in a separate partition.

In this tutorial, we will specifically focus on sharding.

### What is sharding?

Sharding is a database architecture pattern related to _horizontal partitioning_, which is the practice of separating one table's rows into multiple different tables, known as _partitions_ or _shards_. Each partition has the same schema and columns, but also a subset of the shared data. Likewise, the data held in each is unique and independent of the data held in other partitions.

![sharding](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/sharding/sharding.png)

The justification for data sharding is that, after a certain point, it is cheaper and more feasible to scale horizontally by adding more machines than to scale vertically by adding powerful servers. Sharding can be implemented at both the application and the database level.

### Partitioning criteria

There are a large number of criteria available for data partitioning. Some of the most commonly used criteria are:

#### Hash-Based

This strategy divides the rows into different partitions based on a hashing algorithm, rather than grouping database rows based on continuous indexes. The disadvantage of this method is that dynamically adding or removing database servers becomes expensive.

#### List-Based

In list-based partitioning, each partition is defined and selected based on the list of values on a column, rather than a set of contiguous ranges of values.

#### Range-Based

Range partitioning maps data to various partitions based on ranges of values of the partitioning key. In other words, we partition the table in such a way that each partition contains rows within a given range defined by the partition key.

Ranges should be contiguous but not overlapping, where each range specifies a non-inclusive lower and upper bound for a partition. Any partitioning key values equal to or higher than the upper bound of the range are added to the next partition.

#### Composite

As the name suggests, composite partitioning partitions the data based on two or more partitioning techniques. Here we first partition the data using one technique, and then each partition is further subdivided into sub-partitions using the same or some other method.

### Advantages

But why do we need sharding? Here are some advantages:

- **Availability**: Provides logical independence to the partitioned database, ensuring the high availability of our application. Here, individual partitions can be managed independently.
- **Scalability**: Proves to increase scalability by distributing the data across multiple partitions.
- **Security**: Helps improve the system's security by storing sensitive and non-sensitive data in different partitions. This could provide better manageability and security to sensitive data.
- **Query Performance**: Improves the performance of the system. Instead of querying the whole database, now the system has to query only a smaller partition.
- **Data Manageability**: Divides tables and indexes into smaller and more manageable units.

### Disadvantages

- **Complexity**: Sharding increases the complexity of the system in general.
- **Joins across shards**: Once a database is partitioned and spread across multiple machines, it is often not feasible to perform joins that span multiple database shards. Such joins will not be performance-efficient since data has to be retrieved from multiple servers.
- **Rebalancing**: If the data distribution is not uniform or there is a lot of load on a single shard, we have to rebalance our shards so that the requests are as equally distributed among the shards as possible.

### When to use sharding?

Here are some reasons why sharding might be the right choice:

- Leveraging existing hardware instead of high-end machines.
- Maintaining data in distinct geographic regions.
- Quickly scaling by adding more shards.
- Better performance as each machine is under less load.
- When more concurrent connections are required.

## Consistent Hashing

Let's first understand the problem we're trying to solve.

### Why do we need this?

In traditional hashing-based distribution methods, we use a hash function to hash our partition keys (i.e. request ID or IP). Then, if we use the modulo against the total number of nodes (servers or databases), this will give us the node where we want to route our request.

![simple-hashing](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/consistent-hashing/simple-hashing.png)

$$
\begin{align*}
& hash(key_1) \rightarrow H_1 \bmod N = Node_0 \\
& hash(key_2) \rightarrow H_2 \bmod N = Node_1 \\
& hash(key_3) \rightarrow H_3 \bmod N = Node_2 \\
& ... \\
& hash(key_n) \rightarrow H_n \bmod N = Node_{n-1}
\end{align*}
$$

Where,

`key`: Request ID or IP.

`H`: Hash function result.

`N`: Total number of nodes.

`Node`: The node where the request will be routed.

The problem with this is if we add or remove a node, it will cause `N` to change, meaning our mapping strategy will break, as the same requests will now map to a different server. As a consequence, the majority of requests will need to be redistributed, which is very inefficient.

We want to uniformly distribute requests among different nodes such that we should be able to add or remove nodes with minimal effort. Hence, we need a distribution scheme that does not depend directly on the number of nodes (or servers), so that, when adding or removing nodes, the number of keys that need to be relocated is minimized.

Consistent hashing solves this horizontal scalability problem by ensuring that every time we scale up or down, we do not have to re-arrange all the keys or touch all the servers.

Now that we understand the problem, let's discuss consistent hashing in detail.

### How does it work

Consistent hashing is a distributed hashing scheme that operates independently of the number of nodes in a distributed hash table by assigning them a position on an abstract circle, or _hash ring_. This allows servers and objects to scale without affecting the overall system.

![consistent-hashing](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/consistent-hashing/consistent-hashing.png)

Using consistent hashing, only `k/N` data would require re-distributing:

$$
R = k/N
$$

Where,

`R`: Data that would require re-distribution.

`k`: Number of partition keys.

`N`: Number of nodes.

The output of the hash function is a range, let's say `0...m-1`, which we can represent on our hash ring. We hash the requests and distribute them on the ring depending on what the output was. Similarly, we also hash the nodes and distribute them on the same ring as well.

$$
\begin{align*}
& hash(key_1) = P_1 \\
& hash(key_2) = P_2 \\
& hash(key_3) = P_3 \\
& ... \\
& hash(key_n) = P_{m-1}
\end{align*}
$$

Where,

`key`: Request/Node ID or IP.

`P`: Position on the hash ring.

`m`: Total range of the hash ring.

Now, when the request comes in, we can simply route it to the closest node in a clockwise (can be counterclockwise as well) manner. This means that if a new node is added or removed, we can use the nearest node, and only a _fraction_ of the requests need to be re-routed.

In theory, consistent hashing should distribute the load evenly; however, it doesn't happen in practice. Usually, the load distribution is uneven, and one server may end up handling the majority of the requests, becoming a _hotspot_, essentially a bottleneck for the system. We can fix this by adding extra nodes, but that can be expensive.

Let's see how we can address these issues.

### Virtual Nodes

In order to ensure a more evenly distributed load, we can introduce the idea of a virtual node, sometimes also referred to as a VNode.

Instead of assigning a single position to a node, the hash range is divided into multiple smaller ranges, and each physical node is assigned several of these smaller ranges. Each of these subranges is considered a VNode. Hence, virtual nodes are basically existing physical nodes mapped multiple times across the hash ring to minimize changes to a node's assigned range.

![virtual-nodes](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/consistent-hashing/virtual-nodes.png)

For this, we can use `k` number of hash functions.

$$
\begin{align*}
& hash_1(key_1) = P_1 \\
& hash_2(key_2) = P_2 \\
& hash_3(key_3) = P_3 \\
& ... \\
& hash_k(key_n) = P_{m-1}
\end{align*}
$$

Where,

`key`: Request/Node ID or IP.

`k`: Number of hash functions.

`P`: Position on the hash ring.

`m`: Total range of the hash ring.

As VNodes help spread the load more evenly across the physical nodes on the cluster by dividing the hash ranges into smaller subranges, this speeds up the re-balancing process after adding or removing nodes. This also helps us reduce the probability of hotspots.

### Data Replication

To ensure high availability and durability, consistent hashing replicates each data item on multiple `N` nodes in the system, where the value `N` is equivalent to the _replication factor_.

The replication factor is the number of nodes that will receive the copy of the same data. In eventually consistent systems, this is done asynchronously.

### Advantages

Let's look at some advantages of consistent hashing:

- Makes rapid scaling up and down more predictable.
- Facilitates partitioning and replication across nodes.
- Enables scalability and availability.
- Reduces hotspots.

### Disadvantages

Below are some disadvantages of consistent hashing:

- Increases complexity.
- Cascading failures.
- Load distribution can still be uneven.
- Key management can be expensive when nodes transiently fail.

### Examples

Let's look at some examples where consistent hashing is used:

- Data partitioning in [Apache Cassandra](https://cassandra.apache.org).
- Load distribution across multiple storage hosts in [Amazon DynamoDB](https://aws.amazon.com/dynamodb).
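A small hash-ring sketch in Python, including virtual nodes, under the assumption that MD5 is an acceptable ring hash for a demo. Real implementations (as in Cassandra or DynamoDB) are far more sophisticated; this only shows the core routing idea.

```python
import bisect
import hashlib

# Minimal consistent hash ring with virtual nodes (VNodes). Each physical
# node is hashed `vnodes` times onto the ring; a key is routed to the
# first node position clockwise from the key's hash.

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = {}       # ring position -> physical node
        self.positions = []  # sorted positions on the ring
        for node in nodes:
            for i in range(vnodes):
                pos = self._hash(f"{node}#{i}")
                self.ring[pos] = node
                bisect.insort(self.positions, pos)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_node(self, key):
        pos = self._hash(key)
        idx = bisect.bisect(self.positions, pos) % len(self.positions)  # wrap around
        return self.ring[self.positions[idx]]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.get_node("user:42"))  # deterministic: the same key always routes to the same node
# Adding or removing a node only remaps the keys falling in its subranges.
```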
## Database Federation

Federation (or functional partitioning) splits up databases by function. The federation architecture makes several distinct physical databases appear as one logical database to end-users.

All of the components in a federation are tied together by one or more federal schemas that express the commonality of data throughout the federation. These federated schemas are used to specify the information that can be shared by the federation components and to provide a common basis for communication among them.

![database-federation](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-II/database-federation/database-federation.png)

Federation also provides a cohesive, unified view of data derived from multiple sources. The data sources for federated systems can include databases and various other forms of structured and unstructured data.

### Characteristics

Let's look at some key characteristics of a federated database:

- **Transparency**: A federated database masks user differences and implementations of underlying data sources. Therefore, the users do not need to be aware of where the data is stored.
- **Heterogeneity**: Data sources can differ in many ways. A federated database system can handle different hardware, network protocols, data models, etc.
- **Extensibility**: New sources may be needed to meet the changing needs of the business. A good federated database system needs to make it easy to add new sources.
- **Autonomy**: A federated database does not change existing data sources; interfaces should remain the same.
- **Data integration**: A federated database can integrate data from different protocols, database management systems, etc.

### Advantages

Here are some advantages of federated databases:

- Flexible data sharing.
- Autonomy among the database components.
- Access heterogeneous data in a unified way.
- No tight coupling of applications with legacy databases.

### Disadvantages

Below are some disadvantages of federated databases:

- Adds more hardware and additional complexity.
- Joining data from two databases is complex.
- Dependence on autonomous data sources.
- Query performance and scalability.
## N-tier architecture

N-tier architecture divides an application into logical layers and physical tiers. Layers are a way to separate responsibilities and manage dependencies. Each layer has a specific responsibility. A higher layer can use services in a lower layer, but not the other way around.

![n-tier-architecture](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/n-tier-architecture/n-tier-architecture.png)

Tiers are physically separated, running on separate machines. A tier can call another tier directly, or use asynchronous messaging. Although each layer might be hosted in its own tier, that's not required. Several layers might be hosted on the same tier. Physically separating the tiers improves scalability and resiliency, but adds latency from the additional network communication.

An N-tier architecture can be of two types:

- In a **closed layer architecture**, a layer can only call the next layer immediately down.
- In an **open layer architecture**, a layer can call any of the layers below it.

A closed layer architecture limits the dependencies between layers. However, it might create unnecessary network traffic if one layer simply passes requests along to the next layer.

### Types of N-tier architectures

Let's look at some examples of N-tier architecture:

#### 3-Tier architecture

3-Tier is widely used and consists of the following layers:

- **Presentation layer**: Handles user interactions with the application.
- **Business Logic layer**: Accepts the data from the application layer, validates it as per business logic, and passes it to the data layer.
- **Data Access layer**: Receives the data from the business layer and performs the necessary operation on the database.

#### 2-Tier architecture

In this architecture, the presentation layer runs on the client and communicates with a data store. There is no business logic layer or immediate layer between client and server.

#### Single Tier or 1-Tier architecture

It is the simplest one, as it is equivalent to running the application on a personal computer. All of the required components for an application to run are on a single application or server.

### Advantages

Here are some advantages of using N-tier architecture:

- Can improve availability.
- Better security, as layers can behave like a firewall.
- Separate tiers allow us to scale them as needed.
- Improves maintenance, as different people can manage different tiers.

### Disadvantages

Below are some disadvantages of N-tier architecture:

- Increased complexity of the system as a whole.
- Increased network latency as the number of tiers increases.
- Expensive, as every tier will have its own hardware cost.
- Difficult to manage network security.

## Message Brokers

A message broker is software that enables applications, systems, and services to communicate with each other and exchange information. The message broker does this by translating messages between formal messaging protocols. This allows interdependent services to "talk" with one another directly, even if they were written in different languages or implemented on different platforms.

![message-broker](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/message-brokers/message-broker.png)

Message brokers can validate, store, route, and deliver messages to the appropriate destinations. They serve as intermediaries between other applications, allowing senders to issue messages without knowing where the receivers are, whether or not they are active, or how many of them there are. This facilitates the decoupling of processes and services within systems.
### Models

Message brokers offer two basic message distribution patterns or messaging styles:

- **[Point-to-Point messaging](https://karanpratapsingh.com/courses/system-design/message-queues)**: This is the distribution pattern utilized in message queues, with a one-to-one relationship between the message's sender and receiver.
- **[Publish-Subscribe messaging](https://karanpratapsingh.com/courses/system-design/publish-subscribe)**: In this message distribution pattern, often referred to as "pub/sub", the producer of each message publishes it to a topic, and multiple message consumers subscribe to topics from which they want to receive messages.

_We will discuss these messaging patterns in detail in the later tutorials._

### Message brokers vs Event streaming

Message brokers can support two or more messaging patterns, including message queues and pub/sub, while event streaming platforms only offer pub/sub-style distribution patterns. Designed for use with high volumes of messages, event streaming platforms are readily scalable. They're capable of ordering streams of records into categories called _topics_ and storing them for a predetermined amount of time. Unlike message brokers, however, event streaming platforms cannot guarantee message delivery or track which consumers have received the messages.

Event streaming platforms offer more scalability than message brokers, but fewer features that ensure fault tolerance, like message resending, as well as more limited message routing and queueing capabilities.

### Message brokers vs Enterprise Service Bus (ESB)

[Enterprise Service Bus (ESB)](https://karanpratapsingh.com/courses/system-design/enterprise-service-bus) infrastructure is complex and can be challenging to integrate and expensive to maintain. It's difficult to troubleshoot when problems occur in production environments, it's not easy to scale, and updating is tedious.

Whereas message brokers are a "lightweight" alternative to ESBs that provide similar functionality, a mechanism for inter-service communication, at a lower cost. They're well-suited for use in the [microservices architectures](https://karanpratapsingh.com/courses/system-design/monoliths-microservices) that have become more prevalent as ESBs have fallen out of favor.

### Examples

Here are some commonly used message brokers:

- [NATS](https://nats.io)
- [Apache Kafka](https://kafka.apache.org)
- [RabbitMQ](https://www.rabbitmq.com)
- [ActiveMQ](https://activemq.apache.org)

## Message Queues

A message queue is a form of service-to-service communication that facilitates asynchronous communication. It asynchronously receives messages from producers and sends them to consumers.

Queues are used to effectively manage requests in large-scale distributed systems. In small systems with minimal processing loads and small databases, writes can be predictably fast. However, in more complex and large systems, writes can take an almost non-deterministic amount of time.

![message-queue](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/message-queues/message-queue.png)

### Working

Messages are stored in the queue until they are processed and deleted. Each message is processed only once by a single consumer. Here's how it works:

- A producer publishes a job to the queue, then notifies the user of the job status.
- A consumer picks up the job from the queue, processes it, then signals that the job is complete.
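The working steps above map directly onto a producer/consumer sketch using Python's standard `queue` and `threading` modules. This is in-process only; real brokers such as SQS or RabbitMQ add durability, acknowledgements, and networking, but the shape is the same.

```python
import queue
import threading

jobs = queue.Queue()

def consumer():
    while True:
        job = jobs.get()      # blocks until a message is available
        if job is None:       # sentinel value: no more work
            break
        print(f"processed {job}")
        jobs.task_done()      # signal that the job is complete

worker = threading.Thread(target=consumer)
worker.start()

for i in range(3):            # the producer publishes jobs
    jobs.put(f"job-{i}")

jobs.join()                   # wait until every job has been processed
jobs.put(None)                # shut the consumer down
worker.join()
```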
### Advantages

Let's discuss some advantages of using a message queue:

- **Scalability**: Message queues make it possible to scale precisely where we need to. When workloads peak, multiple instances of our application can add all requests to the queue without the risk of collision.
- **Decoupling**: Message queues remove dependencies between components and significantly simplify the implementation of decoupled applications.
- **Performance**: Message queues enable asynchronous communication, which means that the endpoints that are producing and consuming messages interact with the queue, not each other. Producers can add requests to the queue without waiting for them to be processed.
- **Reliability**: Queues make our data persistent and reduce the errors that happen when different parts of our system go offline.

### Features

Now, let's discuss some desired features of message queues:

#### Push or Pull Delivery

Most message queues provide both push and pull options for retrieving messages. Pull means continuously querying the queue for new messages. Push means that a consumer is notified when a message is available. We can also use long-polling to allow pulls to wait a specified amount of time for new messages to arrive.

#### FIFO (First-In-First-Out) Queues

In these queues, the oldest (or first) entry, sometimes called the _"head"_ of the queue, is processed first.

#### Schedule or Delay Delivery

Many message queues support setting a specific delivery time for a message. If we need to have a common delay for all messages, we can set up a delay queue.

#### At-Least-Once Delivery

Message queues may store multiple copies of messages for redundancy and high availability, and resend messages in the event of communication failures or errors, to ensure they are delivered at least once.

#### Exactly-Once Delivery

When duplicates can't be tolerated, FIFO (first-in-first-out) message queues will make sure that each message is delivered exactly once (and only once) by filtering out duplicates automatically.

#### Dead-letter Queues

A dead-letter queue is a queue to which other queues can send messages that can't be processed successfully. This makes it easy to set them aside for further inspection without blocking the queue processing or spending CPU cycles on a message that might never be consumed successfully.

#### Ordering

Most message queues provide best-effort ordering, which ensures that messages are generally delivered in the same order as they're sent, and that a message is delivered at least once.

#### Poison-pill Messages

Poison pills are special messages that can be received, but not processed. They are a mechanism used in order to signal a consumer to end its work so it is no longer waiting for new inputs, and are similar to closing a socket in a client-server model.

#### Security

Message queues will authenticate applications that try to access the queue. This allows us to encrypt messages over the network as well as in the queue itself.

#### Task Queues

Task queues receive tasks and their related data, run them, then deliver their results. They can support scheduling and can be used to run computationally-intensive jobs in the background.

### Backpressure

If queues start to grow significantly, the queue size can become larger than memory, resulting in cache misses, disk reads, and even slower performance. Backpressure can help by limiting the queue size, thereby maintaining a high throughput rate and good response times for jobs already in the queue. Once the queue fills up, clients get a server busy or HTTP 503 status code to try again later. Clients can retry the request at a later time, perhaps with an [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) strategy (a minimal sketch follows the examples list below).

### Examples

Following are some widely used message queues:

- [Amazon SQS](https://aws.amazon.com/sqs)
- [RabbitMQ](https://www.rabbitmq.com)
- [ActiveMQ](https://activemq.apache.org)
- [ZeroMQ](https://zeromq.org)
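To illustrate the backpressure idea from above: a bounded queue that rejects new work when full, so clients get an immediate busy signal instead of unbounded queue growth. The status strings are made up for the demo.

```python
import queue

# Backpressure sketch: a bounded queue rejects work when full, giving the
# client an immediate "busy" signal (e.g., HTTP 503) to retry with backoff.
requests = queue.Queue(maxsize=2)

def submit(job):
    try:
        requests.put_nowait(job)  # raises queue.Full instead of blocking
        return "202 accepted"
    except queue.Full:
        return "503 server busy, retry with backoff"

print(submit("a"))  # 202 accepted
print(submit("b"))  # 202 accepted
print(submit("c"))  # 503 server busy, retry with backoff
```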
## Publish-Subscribe

Similar to a message queue, publish-subscribe is also a form of service-to-service communication that facilitates asynchronous communication. In a pub/sub model, any message published to a topic is pushed immediately to all the subscribers of the topic.

![publish-subscribe](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/publish-subscribe/publish-subscribe.png)

The subscribers to the message topic often perform different functions, and can each do something different with the message in parallel. The publisher doesn't need to know who is using the information that it is broadcasting, and the subscribers don't need to know where the message comes from. This style of messaging is a bit different than message queues, where the component that sends the message often knows the destination it is sending to.

### Working

Unlike message queues, which batch messages until they are retrieved, message topics transfer messages with little or no queuing and push them out immediately to all subscribers. Here's how it works:

- A message topic provides a lightweight mechanism to broadcast asynchronous event notifications, and endpoints that allow software components to connect to the topic in order to send and receive those messages.
- To broadcast a message, a component called a _publisher_ simply pushes a message to the topic.
- All components that subscribe to the topic (known as _subscribers_) will receive every message that was broadcasted.

### Advantages

Let's discuss some advantages of using publish-subscribe:

- **Eliminate Polling**: Message topics allow instantaneous, push-based delivery, eliminating the need for message consumers to periodically check or _"poll"_ for new information and updates. This promotes faster response time and reduces the delivery latency, which can be particularly problematic in systems where delays cannot be tolerated.
- **Dynamic Targeting**: Pub/Sub makes the discovery of services easier, more natural, and less error-prone. Instead of maintaining a roster of peers where an application can send messages, a publisher will simply post messages to a topic. Then, any interested party will subscribe its endpoint to the topic and start receiving these messages. Subscribers can change, upgrade, multiply, or disappear, and the system dynamically adjusts.
- **Decoupled and Independent Scaling**: Publishers and subscribers are decoupled and work independently from each other, which allows us to develop and scale them independently.
- **Simplify Communication**: The publish-subscribe model reduces complexity by removing all the point-to-point connections with a single connection to a message topic, which will manage subscriptions and decide what messages should be delivered to which endpoints.

### Features

Now, let's discuss some desired features of publish-subscribe:

- **Push Delivery**: Pub/Sub messaging instantly pushes asynchronous event notifications when messages are published to the message topic. Subscribers are notified when a message is available.
- **Multiple Delivery Protocols**: In the publish-subscribe model, topics can typically connect to multiple types of endpoints, such as message queues, serverless functions, HTTP servers, etc.
- **Fanout**: This scenario happens when a message is sent to a topic and then replicated and pushed to multiple endpoints. Fanout provides asynchronous event notifications, which in turn allows for parallel processing.
- **Filtering**: This feature empowers the subscriber to create a message filtering policy so that it will only get the notifications it is interested in, as opposed to receiving every single message posted to the topic.
- **Durability**: Pub/Sub messaging services often provide very high durability, and at-least-once delivery, by storing copies of the same message on multiple servers.
- **Security**: Message topics authenticate applications that try to publish content. This allows us to use encrypted endpoints and encrypt messages in transit over the network.

### Examples

Here are some commonly used publish-subscribe technologies:

- [Amazon SNS](https://aws.amazon.com/sns)
- [Google Pub/Sub](https://cloud.google.com/pubsub)
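A minimal in-process pub/sub sketch: publishing to a topic fans the message out to every subscriber callback. Topic and handler names are hypothetical; real systems such as SNS or Google Pub/Sub do this durably over the network.

```python
from collections import defaultdict

subscribers = defaultdict(list)  # topic -> list of subscriber callbacks

def subscribe(topic, callback):
    subscribers[topic].append(callback)

def publish(topic, message):
    for callback in subscribers[topic]:  # fan out to all subscribers
        callback(message)

subscribe("orders", lambda m: print(f"billing saw: {m}"))
subscribe("orders", lambda m: print(f"shipping saw: {m}"))
publish("orders", "order-123 created")
# billing saw: order-123 created
# shipping saw: order-123 created
```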
## Enterprise Service Bus (ESB)

An Enterprise Service Bus (ESB) is an architectural pattern whereby a centralized software component performs integrations between applications. It performs transformations of data models, handles connectivity, performs message routing, converts communication protocols, and potentially manages the composition of multiple requests. The ESB can make these integrations and transformations available as a service interface for reuse by new applications.

![enterprise-service-bus](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/enterprise-service-bus/enterprise-service-bus.png)

### Advantages

In theory, a centralized ESB offers the potential to standardize and dramatically simplify communication, messaging, and integration between services across the enterprise. Here are some advantages of using an ESB:

- **Improved developer productivity**: Enables developers to incorporate new technologies into one part of an application without touching the rest of the application.
- **Simpler, more cost-effective scalability**: Components can be scaled independently of others.
- **Greater resilience**: Failure of one component does not impact the others, and each microservice can adhere to its own availability requirements without risking the availability of other components in the system.

### Disadvantages

While ESBs were deployed successfully in many organizations, in many other organizations the ESB came to be seen as a bottleneck. Here are some disadvantages of using an ESB:

- Making changes or enhancements to one integration could destabilize others who use that same integration.
- A single point of failure can bring down all communications.
- Updates to the ESB often impact existing integrations, so there is significant testing required to perform any update.
- ESB is centrally managed, which makes cross-team collaboration challenging.
- High configuration and maintenance complexity.

### Examples

Below are some widely used Enterprise Service Bus (ESB) technologies:

- [Azure Service Bus](https://azure.microsoft.com/en-in/services/service-bus)
- [IBM App Connect](https://www.ibm.com/in-en/cloud/app-connect)
- [Apache Camel](https://camel.apache.org)
- [Fuse ESB](https://www.redhat.com/en/technologies/jboss-middleware/fuse)

## Monoliths and Microservices

### Monoliths

A monolith is a self-contained and independent application. It is built as a single unit and is responsible for not just a particular task, but can perform every step needed to satisfy a business need.

![monolith](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/monoliths-microservices/monolith.png)

#### Advantages

Following are some advantages of monoliths:

- Simple to develop or debug.
- Fast and reliable communication.
- Easy monitoring and testing.
- Supports ACID transactions.

#### Disadvantages

Some common disadvantages of monoliths are:

- Maintenance becomes hard as the codebase grows.
- Tightly coupled application, hard to extend.
- Requires commitment to a particular technology stack.
- On each update, the entire application is redeployed.
- Reduced reliability, as a single bug can bring down the entire system.
- Difficult to scale or adopt new technologies.

### Modular Monoliths

A modular monolith is an approach where we build and deploy a single application (that's the _monolith_ part), but we build it in a way that breaks up the code into independent modules for each of the features needed in our application.

This approach reduces the dependencies of a module in such a way that we can enhance or change a module without affecting other modules. When done right, this can be really beneficial in the long term, as it reduces the complexity that comes with maintaining a monolith as the system grows.

### Microservices

A microservices architecture consists of a collection of small, autonomous services, where each service is self-contained and should implement a single business capability within a bounded context. A bounded context is a natural division of business logic that provides an explicit boundary within which a domain model exists.

![microservices](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/monoliths-microservices/microservices.png)

Each service has a separate codebase, which can be managed by a small development team. Services can be deployed independently, and a team can update an existing service without rebuilding and redeploying the entire application.

Services are responsible for persisting their own data or external state (database per service). This differs from the traditional model, where a separate data layer handles data persistence.

#### Characteristics

The microservices architecture style has the following characteristics:

- **Loosely coupled**: Services should be loosely coupled so that they can be independently deployed and scaled. This will lead to the decentralization of development teams, thus enabling them to develop and deploy faster with minimal constraints and operational dependencies.
- **Small but focused**: It's about scope and responsibilities, not size; a service should be focused on a specific problem. Basically, _"It does one thing and does it well"_. Ideally, they can be independent of the underlying architecture.
- **Built for businesses**: The microservices architecture is usually organized around business capabilities and priorities.
- **Resilience & Fault tolerance**: Services should be designed in such a way that they still function in case of failure or errors. In environments with independently deployable services, failure tolerance is of the highest importance.
- **Highly maintainable**: Services should be easy to maintain and test, because services that cannot be maintained will be rewritten.

#### Advantages

Here are some advantages of microservices architecture:

- Loosely coupled services.
- Services can be deployed independently.
- Highly agile for multiple development teams.
- Improves fault tolerance and data isolation.
- Better scalability, as each service can be scaled independently.
- Eliminates any long-term commitment to a particular technology stack.

#### Disadvantages

Microservices architecture brings its own set of challenges:

- Complexity of a distributed system.
- Testing is more difficult.
- Expensive to maintain (individual servers, databases, etc.).
- Inter-service communication has its own challenges.
- Data integrity and consistency.
- Network congestion and latency.

#### Best practices

Let's discuss some microservices best practices:

- Model services around the business domain.
- Services should have loose coupling and high functional cohesion.
- Isolate failures and use resiliency strategies to prevent failures within a service from cascading.
- Services should only communicate through well-designed APIs. Avoid leaking implementation details.
- Data storage should be private to the service that owns the data. Avoid coupling between services; causes of coupling include shared database schemas and rigid communication protocols.
- Decentralize everything. Individual teams are responsible for designing and building services. Avoid sharing code or data schemas.
- Fail fast by using a [circuit breaker](https://karanpratapsingh.com/courses/system-design/circuit-breaker) to achieve fault tolerance (see the sketch after this list).
- Ensure that API changes are backward compatible.
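Since the best practices above mention failing fast with a circuit breaker, here is a minimal sketch of one. The threshold and half-open behavior are simplified assumptions; production libraries add per-endpoint state, metrics, and jitter.

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures the circuit opens and calls
    fail fast; after `reset_after` seconds one trial call is let through."""

    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                  # a success closes the circuit
        return result

breaker = CircuitBreaker()
# usage sketch: breaker.call(fetch_downstream, "inventory")  # hypothetical call
```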
#### Pitfalls

Below are some common pitfalls of microservices architecture:

- Service boundaries are not based on the business domain.
- Underestimating how hard it is to build a distributed system.
- Shared database or common dependencies between services.
- Lack of business alignment.
- Lack of clear ownership.
- Lack of idempotency.
- Trying to do everything ACID instead of [BASE](https://karanpratapsingh.com/courses/system-design/acid-and-base-consistency-models).
- Lack of design for fault tolerance may result in cascading failures.

### Beware of the distributed monolith

A distributed monolith is a system that resembles the microservices architecture but is tightly coupled within itself, like a monolithic application. Adopting microservices architecture comes with a lot of advantages, but while making one, there are good chances that we might end up with a distributed monolith.

Our microservices are just a distributed monolith if any of these apply to it:

- Requires low-latency communication.
- Services don't scale easily.
- Dependency between services.
- Sharing the same resources, such as databases.
- Tightly coupled systems.

One of the primary reasons to build an application using microservices architecture is to have scalability. Therefore, microservices should have loosely coupled services, which enable every service to be independent. The distributed monolith architecture takes this away and causes most components to depend on one another, increasing design complexity.

### Microservices vs Service-oriented architecture (SOA)

You might have seen service-oriented architecture (SOA) mentioned around the internet, sometimes even interchangeably with microservices, but they are different from each other, and the main distinction between the two approaches comes down to _scope_.

Service-oriented architecture (SOA) defines a way to make software components reusable via service interfaces. These interfaces utilize common communication standards and focus on maximizing application service reusability. Whereas microservices are built as a collection of various smallest independent service units, focused on team autonomy and decoupling.

### Why you don't need microservices

![architecture-range](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/monoliths-microservices/architecture-range.png)

So, you might be wondering: monoliths seem like a bad idea to begin with, why would anyone use that?

Well, it depends. While each approach has its own advantages and disadvantages, it is advised to start with a monolith when building a new system. It is important to understand that microservices are not a silver bullet; instead, they solve an organizational problem. Microservices architecture is about your organizational priorities and team as much as it's about technology.

Before making the decision to move to microservices architecture, you need to ask yourself questions like:

- _"Is the team too large to work effectively on a shared codebase?"_
- _"Are teams blocked on other teams?"_
- _"Do microservices deliver clear business value for us?"_
- _"Is my business mature enough to use microservices?"_
- _"Is our current architecture limiting us with communication overhead?"_

If your application does not need to be broken down into microservices, you don't need this. There is no absolute necessity that all applications should be broken down into microservices.

We frequently draw inspiration from companies such as Netflix and their use of microservices, but we overlook the fact that we are not Netflix. They went through a lot of iterations and models before they had a market-ready solution, and this architecture became acceptable for them when they identified and solved the problem they were trying to tackle.

That's why it's essential to understand in depth whether your business actually needs microservices. What I'm trying to say is: microservices are solutions to complex concerns, and if your business doesn't have complex issues, you don't need them.

## Event-Driven Architecture (EDA)

Event-Driven Architecture (EDA) is about using events as a way to communicate within a system, generally leveraging a message broker to publish and consume events asynchronously. The publisher is unaware of who is consuming an event, and the consumers are unaware of each other. Event-driven architecture is simply a way of achieving loose coupling between services within a system.

### What is an event?

An event is a data point that represents state changes in a system. It doesn't specify what should happen and how the change should modify the system; it only notifies the system of a particular state change. When a user makes an action, they trigger an event.

### Components

Event-driven architectures have three key components:

- **Event producers**: Publishes an event to the router.
- **Event routers**: Filters and pushes the events to consumers.
- **Event consumers**: Uses events to reflect changes in the system.

![event-driven-architecture](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/event-driven-architecture/event-driven-architecture.png)

_Note: Dots in the diagram represent different events in the system._

### Patterns

There are several ways to implement the event-driven architecture, and which method we use depends on the use case, but here are some common examples:

- [Sagas](https://karanpratapsingh.com/courses/system-design/distributed-transactions#sagas)
- [Publish-Subscribe](https://karanpratapsingh.com/courses/system-design/publish-subscribe)
- [Event Sourcing](https://karanpratapsingh.com/courses/system-design/event-sourcing)
- [Command and Query Responsibility Segregation (CQRS)](https://karanpratapsingh.com/courses/system-design/command-and-query-responsibility-segregation)

_Note: Each of these methods is discussed separately._

### Advantages

Let's discuss some advantages:

- Decoupled producers and consumers.
- Highly scalable and distributed.
- Easy to add new consumers.
- Improves agility.

### Challenges

Here are some challenges of event-driven architecture:

- Guaranteed delivery.
- Error handling is difficult.
- Event-driven systems are complex in general.
- Exactly-once, in-order processing of events.

### Use cases

Below are some common use cases where event-driven architectures are beneficial:

- Metadata and metrics.
- Server and security logs.
- Integrating heterogeneous systems.
- Fanout and parallel processing.

### Examples

Here are some widely used technologies for implementing event-driven architectures:

- [NATS](https://nats.io)
- [Apache Kafka](https://kafka.apache.org)
- [Amazon EventBridge](https://aws.amazon.com/eventbridge)
- [Amazon SNS](https://aws.amazon.com/sns)
- [Google PubSub](https://cloud.google.com/pubsub)
## Event Sourcing

Instead of storing just the current state of the data in a domain, use an append-only store to record the full series of actions taken on that data. The store acts as the system of record and can be used to materialize the domain objects.

![event-sourcing](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/event-sourcing/event-sourcing.png)

This can simplify tasks in complex domains by avoiding the need to synchronize the data model and the business domain, while improving performance, scalability, and responsiveness. It can also provide consistency for transactional data, and maintain full audit trails and history that can enable compensating actions.

### Event Sourcing vs Event-Driven Architecture (EDA)

Event sourcing is seemingly constantly being confused with [Event-Driven Architecture (EDA)](https://karanpratapsingh.com/courses/system-design/event-driven-architecture). Event-driven architecture is about using events to communicate between service boundaries, generally leveraging a message broker to publish and consume events asynchronously within other boundaries.

Whereas event sourcing is about using events as state, which is a different approach to storing data. Rather than storing the current state, we're instead going to be storing events. Also, event sourcing is one of the several patterns to implement an event-driven architecture.

### Advantages

Let's discuss some advantages of using event sourcing:

- Excellent for real-time data reporting.
- Great for fail-safety; data can be reconstituted from the event store.
- Extremely flexible; any type of message can be stored.
- Preferred way of achieving audit-log functionality for high-compliance systems.

### Disadvantages

Following are the disadvantages of event sourcing:

- Requires an extremely efficient network infrastructure.
- Requires a reliable way to control message formats, such as a schema registry.
- Different events will contain different payloads.
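A minimal event sourcing sketch: state is never stored directly, only derived by replaying the append-only log. The account/deposit domain is invented for illustration.

```python
# The append-only event log is the system of record; the current state
# is materialized by replaying the events.
events = []  # a real system would persist this durably

def append(event_type, amount):
    events.append({"type": event_type, "amount": amount})

def materialize(log):
    balance = 0
    for e in log:                     # replay the full history
        if e["type"] == "deposited":
            balance += e["amount"]
        elif e["type"] == "withdrawn":
            balance -= e["amount"]
    return balance

append("deposited", 100)
append("withdrawn", 30)
append("deposited", 5)
print(materialize(events))  # 75 -- plus a full audit trail in `events`
```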
## Command and Query Responsibility Segregation (CQRS)

Command Query Responsibility Segregation (CQRS) is an architectural pattern that divides a system's actions into commands and queries. It was first described by [Greg Young](https://twitter.com/gregyoung).

In CQRS, a _command_ is an instruction, a directive to perform a specific task. It is an intention to change something and doesn't return a value, only an indication of success or failure. And a _query_ is a request for information that doesn't change the system's state or cause any side effects.

![command-and-query-responsibility-segregation](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/command-and-query-responsibility-segregation/command-and-query-responsibility-segregation.png)

The core principle of CQRS is the separation of commands and queries. They perform fundamentally different roles within a system, and separating them means that each can be optimized as needed, which distributed systems can really benefit from.

### CQRS with Event Sourcing

The CQRS pattern is often used along with the Event Sourcing pattern. CQRS-based systems use separate read and write data models, each tailored to relevant tasks and often located in physically separate stores.

When used with the Event Sourcing pattern, the store of events is the write model and is the official source of information. The read model of a CQRS-based system provides materialized views of the data, typically as highly denormalized views.

### Advantages

Let's discuss some advantages of CQRS:

- Allows independent scaling of read and write workloads.
- Easier scaling, optimizations, and architectural changes.
- Closer to business logic with loose coupling.
- The application can avoid complex joins when querying.
- Clear boundaries between the system behavior.

### Disadvantages

Below are some disadvantages of CQRS:

- More complex application design.
- Message failures or duplicate messages can occur.
- Dealing with eventual consistency is a challenge.
- Increased system maintenance efforts.

### Use cases

Here are some scenarios where CQRS will be helpful:

- The performance of data reads must be fine-tuned separately from the performance of data writes.
- The system is expected to evolve over time and might contain multiple versions of the model, or where business rules change regularly.
- Integration with other systems, especially in combination with event sourcing, where the temporal failure of one subsystem shouldn't affect the availability of the others.
- Better security, to ensure that only the right domain entities are performing writes on the data.
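A toy sketch of the command/query split. In a real CQRS system the write and read models usually live in separate stores (often synchronized via events); here both are in-memory and the projection is updated inline, purely to show the separation of roles.

```python
write_store = {}   # normalized source of truth (the write model)
read_view = []     # denormalized view optimized for reads (the read model)

def handle_create_user(user_id, name):
    """Command: changes state, returns no data."""
    write_store[user_id] = {"name": name}
    read_view.append(f"{user_id}: {name}")  # project into the read model

def list_users():
    """Query: returns data, causes no side effects."""
    return list(read_view)

handle_create_user("u1", "Ada")
handle_create_user("u2", "Grace")
print(list_users())  # ['u1: Ada', 'u2: Grace']
```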
## API Gateway

The API Gateway is an API management tool that sits between a client and a collection of backend services. It is a single entry point into a system that encapsulates the internal system architecture and provides an API that is tailored to each client. It also has other responsibilities such as authentication, monitoring, load balancing, caching, throttling, logging, etc.

![api-gateway](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/api-gateway/api-gateway.png)

### Why do we need an API Gateway?

The granularity of APIs provided by microservices is often different than what a client needs. Microservices typically provide fine-grained APIs, which means that clients need to interact with multiple services. Hence, an API gateway can provide a single entry point for all clients with some additional features and better management.

### Features

Below are some desired features of an API Gateway:

- Authentication and Authorization
- [Service discovery](https://karanpratapsingh.com/courses/system-design/service-discovery)
- [Reverse Proxy](https://karanpratapsingh.com/courses/system-design/proxy#reverse-proxy)
- [Caching](https://karanpratapsingh.com/courses/system-design/caching)
- Security
- Retry and [Circuit breaking](https://karanpratapsingh.com/courses/system-design/circuit-breaker)
- [Load balancing](https://karanpratapsingh.com/courses/system-design/load-balancing)
- Logging, Tracing
- API composition
- [Rate limiting](https://karanpratapsingh.com/courses/system-design/rate-limiting) and throttling
- Versioning
- Routing
- IP whitelisting or blacklisting

### Advantages

Let's look at some advantages of using an API Gateway:

- Encapsulates the internal structure of an API.
- Provides a centralized view of the API.
- Simplifies the client code.
- Monitoring, analytics, tracing, and other such features.

### Disadvantages

Here are some possible disadvantages of an API Gateway:

- Possible single point of failure.
- Might impact performance.
- Can become a bottleneck if not scaled properly.
- Configuration can be challenging.

### Backend For Frontend (BFF) pattern

In the Backend For Frontend (BFF) pattern, we create separate backend services to be consumed by specific frontend applications or interfaces. This pattern is useful when we want to avoid customizing a single backend for multiple interfaces. This pattern was first described by [Sam Newman](https://samnewman.io).

Also, sometimes the output of data returned by the microservices to the frontend is not in the exact format, or filtered as needed by the frontend. To solve this issue, the frontend would otherwise have to contain some logic to reformat the data; therefore, we can use BFF to shift some of this logic to the intermediate layer.

![backend-for-frontend](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/api-gateway/backend-for-frontend.png)

The primary function of the backend for frontend pattern is to get the required data from the appropriate service, format the data, and send it to the frontend.

_[GraphQL](https://karanpratapsingh.com/courses/system-design/rest-graphql-grpc#graphql) performs really well as a backend for frontend (BFF)._

#### When to use this pattern?

We should consider using a Backend For Frontend (BFF) pattern when:

- A shared or general-purpose backend service must be maintained with significant development overhead.
- We want to optimize the backend for the requirements of a specific client.
- Customizations are made to a general-purpose backend to accommodate multiple interfaces.

### Examples

Following are some widely used gateway technologies:

- [Amazon API Gateway](https://aws.amazon.com/api-gateway)
- [Apigee API Gateway](https://cloud.google.com/apigee)
- [Azure API Gateway](https://azure.microsoft.com/en-in/services/api-management)
- [Kong API Gateway](https://konghq.com/kong)
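To tie the gateway responsibilities together, here is a toy request handler that authenticates, rate-limits, and routes. Every route, token, and limit here is made up; real gateways like Kong or Amazon API Gateway express these as configuration rather than code.

```python
from collections import Counter

VALID_TOKENS = {"secret-token"}                    # hypothetical auth tokens
ROUTES = {"/users": lambda: ["ada", "grace"],      # stand-ins for backend calls
          "/orders": lambda: ["order-1"]}
calls = Counter()
RATE_LIMIT = 5  # requests per client (per window, in a real gateway)

def gateway(path, token, client_id):
    if token not in VALID_TOKENS:                  # authentication
        return 401, "unauthorized"
    calls[client_id] += 1
    if calls[client_id] > RATE_LIMIT:              # rate limiting
        return 429, "rate limit exceeded"
    if path not in ROUTES:                         # routing
        return 404, "no such route"
    return 200, ROUTES[path]()                     # proxy to the backend

print(gateway("/users", "secret-token", "mobile-app"))  # (200, ['ada', 'grace'])
```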
- **GET**: Request a representation of the specified resource.
- **HEAD**: Response is identical to a GET request, but without the response body.
- **POST**: Submits an entity to the specified resource, often causing a change in state or side effects on the server.
- **PUT**: Replaces all current representations of the target resource with the request payload.
- **DELETE**: Deletes the specified resource.
- **PATCH**: Applies partial modifications to a resource.

**HTTP response codes**

[HTTP response status codes](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes) indicate whether a specific HTTP request has been successfully completed.

There are five classes defined by the standard:

- **1xx**: Informational responses.
- **2xx**: Successful responses.
- **3xx**: Redirection responses.
- **4xx**: Client error responses.
- **5xx**: Server error responses.

For example, HTTP 200 means that the request was successful.

#### Advantages

Let's discuss some advantages of REST API:

- Simple and easy to understand.
- Flexible and portable.
- Good caching support.
- Client and server are decoupled.

#### Disadvantages

Let's discuss some disadvantages of REST API:

- Over-fetching of data.
- Sometimes multiple round trips to the server are required.

#### Use cases

REST APIs are pretty much used universally and are the default standard for designing APIs. Overall REST APIs are quite flexible and can fit almost all scenarios.

#### Example

Here's an example usage of a REST API that operates on a **users** resource.

| URI | HTTP verb | Description |
| --- | --------- | ----------- |
| /users | GET | Get all users |
| /users/{id} | GET | Get a user by id |
| /users | POST | Add a new user |
| /users/{id} | PATCH | Update a user by id |
| /users/{id} | DELETE | Delete a user by id |

There is so much more to learn when it comes to REST APIs, I will highly recommend looking into [HATEOAS](https://en.wikipedia.org/wiki/HATEOAS) (Hypermedia as the Engine of Application State).
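As a minimal sketch of what calling the users resource above might look like from a client, here is a small TypeScript example using `fetch`. The base URL and the `User` shape are assumptions for illustration only.

```tsx
// Minimal sketch of calling the users resource above with fetch.
// The base URL and User shape are illustrative assumptions.
const BASE_URL = "https://api.example.com";

interface User {
  id: number;
  name: string;
}

// GET /users - fetch all users
async function getUsers(): Promise<User[]> {
  const res = await fetch(`${BASE_URL}/users`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

// POST /users - add a new user
async function createUser(name: string): Promise<User> {
  const res = await fetch(`${BASE_URL}/users`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });
  return res.json();
}
```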
### GraphQL

[GraphQL](https://graphql.org) is a query language and server-side runtime for APIs that prioritizes giving clients exactly the data they request and no more. It was developed by [Facebook](https://engineering.fb.com) and later open-sourced in 2015.

GraphQL is designed to make APIs fast, flexible, and developer-friendly. Additionally, GraphQL gives API maintainers the flexibility to add or deprecate fields without impacting existing queries. Developers can build APIs with whatever methods they prefer, and the GraphQL specification will ensure they function in predictable ways to clients.

In GraphQL, the fundamental unit is a query.

#### Concepts

Let's briefly discuss some key concepts in GraphQL:

**Schema**: A GraphQL schema describes the functionality clients can utilize once they connect to the GraphQL server.

**Queries**: A query is a request made by the client. It can consist of fields and arguments for the query. The operation type of a query can also be a [mutation](https://graphql.org/learn/queries/#mutations) which provides a way to modify server-side data.

**Resolvers**: Resolver is a collection of functions that generate responses for a GraphQL query. In simple terms, a resolver acts as a GraphQL query handler.

#### Advantages

Let's discuss some advantages of GraphQL:

- Eliminates over-fetching of data.
- Strongly defined schema.
- Code generation support.
- Payload optimization.

#### Disadvantages

Let's discuss some disadvantages of GraphQL:

- Shifts complexity to server-side.
- Caching becomes hard.
- Versioning is ambiguous.
- N+1 problem.

#### Use cases

GraphQL proves to be essential in the following scenarios:

- Reducing app bandwidth usage as we can query multiple resources in a single query.
- Rapid prototyping for complex systems.
- When we are working with a graph-like data model.

#### Example

Here's a GraphQL schema that defines a `User` type and a `Query` type.

```graphql
type Query {
  getUser: User
}

type User {
  id: ID
  name: String
  city: String
  state: String
}
```

Using the above schema, the client can request the required fields easily without having to fetch the entire resource or guess what the API might return.

```graphql
{
  getUser {
    id
    name
    city
  }
}
```

This will give the following response to the client.

```json
{
  "getUser": {
    "id": 123,
    "name": "Karan",
    "city": "San Francisco"
  }
}
```

Learn more about GraphQL at [graphql.org](https://graphql.org).

### gRPC

[gRPC](https://grpc.io) is a modern open-source high-performance [Remote Procedure Call (RPC)](https://en.wikipedia.org/wiki/Remote_procedure_call) framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking, authentication, and much more.

#### Concepts

Let's discuss some key concepts of gRPC.

**Protocol buffers**

Protocol buffers provide a language- and platform-neutral extensible mechanism for serializing structured data in a forward and backward-compatible way. It's like JSON, except it's smaller and faster, and it generates native language bindings.

**Service definition**

Like many RPC systems, gRPC is based on the idea of defining a service and specifying the methods that can be called remotely with their parameters and return types. gRPC uses protocol buffers as the [Interface Definition Language (IDL)](https://en.wikipedia.org/wiki/Interface_description_language) for describing both the service interface and the structure of the payload messages.

#### Advantages

Let's discuss some advantages of gRPC:

- Lightweight and efficient.
- High performance.
- Built-in code generation support.
- Bi-directional streaming.

#### Disadvantages

Let's discuss some disadvantages of gRPC:

- Relatively new compared to REST and GraphQL.
- Limited browser support.
- Steeper learning curve.
- Not human readable.

#### Use cases

Below are some good use cases for gRPC:

- Real-time communication via bi-directional streaming.
- Efficient inter-service communication in microservices.
- Low latency and high throughput communication.
- Polyglot environments.

#### Example

Here's a basic example of a gRPC service defined in a proto file. Using this definition, we can easily code generate the `HelloService` service in the programming language of our choice.

```protobuf
service HelloService {
  rpc SayHello (HelloRequest) returns (HelloResponse);
}

message HelloRequest {
  string greeting = 1;
}

message HelloResponse {
  string reply = 1;
}
```

### REST vs GraphQL vs gRPC

Now that we know how these API designing techniques work, let's compare them based on the following parameters:

- Will it cause tight coupling?
- How chatty (distinct API calls to get needed information) are the APIs?
- What's the performance like?
- How complex is it to integrate?
- How well does the caching work?
- Built-in tooling and code generation?
- What's API discoverability like?
- How easy is it to version APIs?

| Type | Coupling | Chattiness | Performance | Complexity | Caching | Codegen | Discoverability | Versioning |
| ---- | -------- | ---------- | ----------- | ---------- | ------- | ------- | --------------- | ---------- |
| REST | Low | High | Good | Medium | Great | Bad | Good | Easy |
| GraphQL | Medium | Low | Good | High | Custom | Good | Good | Custom |
| gRPC | High | Medium | Great | Low | Custom | Great | Bad | Hard |

### Which API technology is better?

Well, the answer is none of them. There is no silver bullet as each of these technologies has its own advantages and disadvantages. Users only care about using our APIs in a consistent way, so make sure to focus on your domain and requirements when designing your API.

## Long polling, WebSockets, Server-Sent Events (SSE)

Web applications were initially developed around a client-server model, where the web client is always the initiator of transactions like requesting data from the server. Thus, there was no mechanism for the server to independently send, or push, data to the client without the client first making a request.
Let's discuss some approaches to overcome this problem.

### Long polling

HTTP Long polling is a technique used to push information to a client as soon as possible from the server. As a result, the server does not have to wait for the client to send a request.

In Long polling, the server does not close the connection once it receives a request from the client. Instead, the server responds only if any new message is available or a timeout threshold is reached.

![long-polling](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/long-polling-websockets-server-sent-events/long-polling.png)

Once the client receives a response, it immediately sends a new request to the server to have a new pending connection to send data to the client, and the operation is repeated. With this approach, the server emulates a real-time server push feature.

**Working**

Let's understand how long polling works:

1. The client makes an initial request and waits for a response.
2. The server receives the request and delays sending anything until an update is available.
3. Once an update is available, the response is sent to the client.
4. The client receives the response and makes a new request immediately or after some defined interval to establish a connection again.

**Advantages**

Here are some advantages of long polling:

- Easy to implement, good for small-scale projects.
- Nearly universally supported.

**Disadvantages**

A major downside of long polling is that it is usually not scalable. Below are some of the other reasons:

- Creates a new connection each time, which can be intensive on the server.
- Reliable message ordering can be an issue for multiple requests.
- Increased latency as the server needs to wait for a new request.

### WebSockets

WebSocket provides full-duplex communication channels over a single TCP connection. It is a persistent connection between a client and a server that both parties can use to start sending data at any time.

The client establishes a WebSocket connection through a process known as the WebSocket handshake. If the process succeeds, then the server and client can exchange data in both directions at any time. The WebSocket protocol enables the communication between a client and a server with lower overheads, facilitating real-time data transfer from and to the server.

![websockets](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/long-polling-websockets-server-sent-events/websockets.png)

This is made possible by providing a standardized way for the server to send content to the client without being asked and allowing for messages to be passed back and forth while keeping the connection open.

**Working**

Let's understand how WebSockets work:

1. The client initiates a WebSocket handshake process by sending a request.
2. The request also contains an [HTTP Upgrade](https://en.wikipedia.org/wiki/HTTP/1.1_Upgrade_header) header that allows the request to switch to the WebSocket protocol (`ws://`).
3. The server sends a response to the client, acknowledging the WebSocket handshake request.
4. A WebSocket connection will be opened once the client receives a successful handshake response.
5. Now the client and server can start sending data in both directions allowing real-time communication.
6. The connection is closed once the server or the client decides to close the connection.
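To make the handshake-then-exchange flow above concrete, here is a minimal sketch of a browser-side WebSocket client using the standard `WebSocket` API. The `wss://chat.example.com` endpoint and the message format are assumptions for illustration.

```tsx
// Minimal sketch of a browser-side WebSocket client.
// The endpoint and message shape are illustrative assumptions.
const socket = new WebSocket("wss://chat.example.com");

// Fired once the handshake described above completes.
socket.addEventListener("open", () => {
  socket.send(JSON.stringify({ type: "greeting", body: "hello" }));
});

// Messages can arrive at any time since the channel is full-duplex.
socket.addEventListener("message", (event: MessageEvent) => {
  console.log("received:", event.data);
});

// Either side may close the connection.
socket.addEventListener("close", () => console.log("connection closed"));
```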
**Advantages**

Below are some advantages of WebSockets:

- Full-duplex asynchronous messaging.
- Better origin-based security model.
- Lightweight for both client and server.

**Disadvantages**

Let's discuss some disadvantages of WebSockets:

- Terminated connections aren't automatically recovered.
- Older browsers don't support WebSockets (becoming less relevant).

### Server-Sent Events (SSE)

Server-Sent Events (SSE) is a way of establishing long-term communication between client and server that enables the server to proactively push data to the client.

![server-sent-events](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-III/long-polling-websockets-server-sent-events/server-sent-events.png)

It is unidirectional, meaning once the client sends the request it can only receive the responses without the ability to send new requests over the same connection.

**Working**

Let's understand how server-sent events work:

1. The client makes a request to the server.
2. The connection between client and server is established and it remains open.
3. The server sends responses or events to the client when new data is available.

**Advantages**

- Simple to implement and use for both client and server.
- Supported by most browsers.
- No trouble with firewalls.

**Disadvantages**

- Unidirectional nature can be limiting.
- Limitation for the maximum number of open connections.
- Does not support binary data.

## Geohashing and Quadtrees

### Geohashing

Geohashing is a [geocoding](https://en.wikipedia.org/wiki/Address_geocoding) method used to encode geographic coordinates such as latitude and longitude into short alphanumeric strings. It was created by [Gustavo Niemeyer](https://twitter.com/gniemeyer) in 2008.

For example, San Francisco with coordinates `37.7564, -122.4016` can be represented in geohash as `9q8yy9mf`.

#### How does Geohashing work?

Geohash is a hierarchical spatial index that uses Base-32 alphabet encoding. The first character in a geohash identifies the initial location as one of the 32 cells. This cell will also contain 32 cells. This means that to represent a point, the world is recursively divided into smaller and smaller cells with each additional bit until the desired precision is attained. The precision factor also determines the size of the cell.

![geohashing](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/geohashing-and-quadtrees/geohashing.png)

Geohashing guarantees that points are spatially closer if their geohashes share a longer prefix, which means the more characters in the string, the more precise the location. For example, geohashes `9q8yy9mf` and `9q8yy9vx` are spatially closer as they share the prefix `9q8yy9`.

Geohashing can also be used to provide a degree of anonymity as we don't need to expose the exact location of the user, because depending on the length of the geohash we just know they are somewhere within an area.

The cell sizes of the geohashes of different lengths are as follows:

| Geohash length | Cell width | Cell height |
| -------------- | ---------- | ----------- |
| 1 | 5,000 km | 5,000 km |
| 2 | 1,250 km | 1,250 km |
| 3 | 156 km | 156 km |
| 4 | 39.1 km | 19.5 km |
| 5 | 4.89 km | 4.89 km |
| 6 | 1.22 km | 0.61 km |
| 7 | 153 m | 153 m |
| 8 | 38.2 m | 19.1 m |
| 9 | 4.77 m | 4.77 m |
| 10 | 1.19 m | 0.596 m |
| 11 | 149 mm | 149 mm |
| 12 | 37.2 mm | 18.6 mm |

#### Use cases

Here are some common use cases for geohashing:

- It is a simple way to represent and store a location in a database.
- It can also be shared on social media as URLs since it is easier to share, and remember than latitudes and longitudes.
- We can efficiently find the nearest neighbors of a point through very simple string comparisons and efficient searching of indexes.

#### Examples

Geohashing is widely used and it is supported by popular databases:

- [MySQL](https://www.mysql.com)
- [Redis](http://redis.io)
- [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
- [Google Cloud Firestore](https://cloud.google.com/firestore)
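Here is a small sketch of the prefix property discussed above: by comparing shared geohash prefixes against the cell-size table, we can get a rough distance bound using plain string comparison. The function names are illustrative, and note that points near cell boundaries can be close while sharing a shorter prefix.

```tsx
// Sketch of the geohash prefix property: a longer shared prefix implies
// the two points fall within a smaller cell. Pure string comparison.
function sharedPrefixLength(a: string, b: string): number {
  let i = 0;
  while (i < Math.min(a.length, b.length) && a[i] === b[i]) i++;
  return i;
}

// Approximate cell widths in meters for geohash lengths 1..10,
// taken from the table above.
const CELL_WIDTH_METERS = [5_000_000, 1_250_000, 156_000, 39_100, 4_890, 1_220, 153, 38.2, 4.77, 1.19];

function roughDistanceBound(hashA: string, hashB: string): number {
  const shared = sharedPrefixLength(hashA, hashB);
  if (shared === 0) return Infinity; // no common cell
  // Caveat: nearby points straddling a cell boundary may share a shorter prefix.
  return CELL_WIDTH_METERS[Math.min(shared, CELL_WIDTH_METERS.length) - 1];
}

// Example hashes from the text share the 6-character prefix "9q8yy9",
// so both points lie within one ~1.22 km wide cell.
console.log(roughDistanceBound("9q8yy9mf", "9q8yy9vx")); // 1220
```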
### Quadtrees

A quadtree is a tree data structure in which each internal node has exactly four children. They are often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. Each child or leaf node stores spatial information. Quadtrees are the two-dimensional analog of [Octrees](https://en.wikipedia.org/wiki/Octree) which are used to partition three-dimensional space.

![quadtree](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/geohashing-and-quadtrees/quadtree.png)

#### Types of Quadtrees

Quadtrees may be classified according to the type of data they represent, including areas, points, lines, and curves. The following are common types of quadtrees:

- Point quadtrees
- Point-region (PR) quadtrees
- Polygonal map (PM) quadtrees
- Compressed quadtrees
- Edge quadtrees

#### Why do we need Quadtrees?

Aren't latitudes and longitudes enough? Why do we need quadtrees? While in theory using latitude and longitude we can determine things such as how close points are to each other using [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance), for practical use cases it is simply not scalable because of its CPU-intensive nature with large datasets.

![quadtree-subdivision](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/geohashing-and-quadtrees/quadtree-subdivision.png)

Quadtrees enable us to search points within a two-dimensional range efficiently, where those points are defined as latitude/longitude coordinates or as cartesian (x, y) coordinates. Additionally, we can save further computation by only subdividing a node after a certain threshold. And with the application of mapping algorithms such as the [Hilbert curve](https://en.wikipedia.org/wiki/Hilbert_curve), we can easily improve range query performance.

#### Use cases

Below are some common uses of quadtrees:

- Image representation, processing, and compression.
- Spatial indexing and range queries.
- Location-based services like Google Maps, Uber, etc.
- Mesh generation and computer graphics.
- Sparse data storage.

## Circuit breaker

The circuit breaker is a design pattern used to detect failures and encapsulates the logic of preventing a failure from constantly recurring during maintenance, temporary external system failure, or unexpected system difficulties.

![circuit-breaker](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/circuit-breaker/circuit-breaker.png)

The basic idea behind the circuit breaker is very simple. We wrap a protected function call in a circuit breaker object, which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error, without the protected call being made at all. Usually, we'll also want some kind of monitor alert if the circuit breaker trips.

### Why do we need circuit breaking?

It's common for software systems to make remote calls to software running in different processes, probably on different machines across a network. One of the big differences between in-memory calls and remote calls is that remote calls can fail, or hang without a response until some timeout limit is reached. What's worse, if we have many callers on an unresponsive supplier, then we can run out of critical resources leading to cascading failures across multiple systems.
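A minimal sketch of the "wrap a protected call" idea described above is shown below; the threshold, timeout, and class shape are illustrative assumptions, not a specific library's API. The states it moves through are discussed next.

```tsx
// Minimal circuit breaker sketch: wraps a protected async call, trips to
// "open" after a failure threshold, and half-opens after a timeout.
// Threshold and timeout values are illustrative assumptions.
type State = "closed" | "open" | "half-open";

class CircuitBreaker<T> {
  private state: State = "closed";
  private failures = 0;
  private openedAt = 0;

  constructor(
    private call: () => Promise<T>,
    private threshold = 5,
    private resetTimeoutMs = 30_000,
  ) {}

  async exec(): Promise<T> {
    if (this.state === "open") {
      if (Date.now() - this.openedAt < this.resetTimeoutMs) {
        throw new Error("circuit open: failing fast");
      }
      this.state = "half-open"; // allow a trial request through
    }
    try {
      const result = await this.call();
      this.state = "closed"; // success closes the circuit
      this.failures = 0;
      return result;
    } catch (err) {
      if (++this.failures >= this.threshold || this.state === "half-open") {
        this.state = "open";
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}

// Example: protect a flaky remote call (endpoint is hypothetical).
// const breaker = new CircuitBreaker(() => fetch("https://api.example.com").then(r => r.json()));
```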
### States

Let's discuss circuit breaker states:

**Closed**

When everything is normal, the circuit breaker remains in the closed state and all requests pass through to the services as normal. If the number of failures increases beyond the threshold, the circuit breaker trips and goes into an open state.

**Open**

In this state the circuit breaker returns an error immediately without even invoking the services. The circuit breaker moves into the half-open state after a certain timeout period elapses. Usually, it will have a monitoring system where the timeout will be specified.

**Half-open**

In this state, the circuit breaker allows a limited number of requests from the service to pass through and invoke the operation. If the requests are successful, then the circuit breaker will go to the closed state. However, if the requests continue to fail, then it goes back to the open state.

## Rate Limiting

Rate limiting refers to preventing the frequency of an operation from exceeding a defined limit. In large-scale systems, rate limiting is commonly used to protect underlying services and resources. Rate limiting is generally used as a defensive mechanism in distributed systems, so that shared resources can maintain availability. It also protects our APIs from unintended or malicious overuse by limiting the number of requests that can reach our API in a given period of time.

![rate-limiting](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/rate-limiting/rate-limiting.png)

### Why do we need Rate Limiting?

Rate limiting is a very important part of any large-scale system and it can be used to accomplish the following:

- Avoid resource starvation as a result of Denial of Service (DoS) attacks.
- Rate limiting helps in controlling operational costs by putting a virtual cap on the auto-scaling of resources which, if not monitored, might lead to exponential bills.
- Rate limiting can be used as defense or mitigation against some common attacks.
- For APIs that process massive amounts of data, rate limiting can be used to control the flow of that data.

### Algorithms

There are various algorithms for API rate limiting, each with its advantages and disadvantages. Let's briefly discuss some of these algorithms:

**Leaky Bucket**

Leaky Bucket is an algorithm that provides a simple, intuitive approach to rate limiting via a queue. When registering a request, the system appends it to the end of the queue. Processing for the first item on the queue occurs at a regular interval or first-in, first-out (FIFO). If the queue is full, then additional requests are discarded (or leaked).

**Token Bucket**

Here we use a concept of a bucket. When a request comes in, a token from the bucket must be taken and processed. The request will be refused if no token is available in the bucket, and the requester will have to try again later. As a result, the token bucket gets refreshed after a certain time period. (See the sketch after this list.)

**Fixed Window**

The system uses a window size of n seconds to track the fixed window algorithm rate. Each incoming request increments the counter for the window. It discards the request if the counter exceeds a threshold.

**Sliding Log**

Sliding Log rate limiting involves tracking a time-stamped log for each request. The system stores these logs in a time-sorted hash set or table. It also discards logs with timestamps beyond a threshold. When a new request comes in, we calculate the sum of logs to determine the request rate. If the request would exceed the threshold rate, then it is held.

**Sliding Window**

Sliding Window is a hybrid approach that combines the fixed window algorithm's low processing cost and the sliding log's improved boundary conditions. Like the fixed window algorithm, we track a counter for each fixed window. Next, we account for a weighted value of the previous window's request rate based on the current timestamp to smooth out bursts of traffic.
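As referenced in the Token Bucket item above, here is a minimal sketch of that algorithm. The capacity and refill rate are illustrative assumptions; a production limiter would typically also need per-client buckets and shared state.

```tsx
// Minimal token bucket sketch: capacity and refill rate are illustrative.
// Tokens refill continuously; a request is allowed only if a token is available.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private capacity = 10,        // max burst size
    private refillPerSecond = 5,  // steady-state rate
  ) {
    this.tokens = capacity;
  }

  allow(): boolean {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false; // rate limited, try again later
  }
}

const limiter = new TokenBucket();
console.log(limiter.allow()); // true until the bucket empties
```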
### Rate Limiting in Distributed Systems

Rate limiting becomes complicated when distributed systems are involved. The two broad problems that come with rate limiting in distributed systems are:

**Inconsistencies**

When using a cluster of multiple nodes, we might need to enforce a global rate limit policy. Because if each node were to track its rate limit, a consumer could exceed a global rate limit when sending requests to different nodes. The greater the number of nodes, the more likely the user will exceed the global limit.

The simplest way to solve this problem is to use sticky sessions in our load balancers so that each consumer gets sent to exactly one node, but this causes a lack of fault tolerance and scaling problems. Another approach might be to use a centralized data store like [Redis](https://redis.io), but this will increase latency and cause race conditions.

**Race Conditions**

This issue happens when we use a naive "get-then-set" approach, in which we retrieve the current rate limit counter, increment it, and then push it back to the datastore. This model's problem is that additional requests can come through in the time it takes to perform a full cycle of read-increment-store, each attempting to store the increment counter with an invalid (lower) counter value. This allows a consumer to send a very large number of requests to bypass the rate limiting controls.

One way to avoid this problem is to use some sort of distributed locking mechanism around the key, preventing any other processes from accessing or writing to the counter. Though the lock will become a significant bottleneck, and will not scale well. A better approach might be to use a "set-then-get" approach, allowing us to quickly increment and check counter values without letting the atomic operations get in the way.

## Service Discovery

Service discovery is the detection of services within a computer network. Service Discovery Protocol (SDP) is a networking standard that accomplishes the detection of networks by identifying resources.

### Why do we need Service Discovery?

In a monolithic application, services invoke one another through language-level methods or procedure calls. However, modern microservices-based applications typically run in virtualized or containerized environments where the number of instances of a service and their locations change dynamically. Consequently, we need a mechanism that enables the clients of service to make requests to a dynamically changing set of ephemeral service instances.

### Implementations

There are two main service discovery patterns:

**Client-side discovery**

![client-side-service-discovery](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/service-discovery/client-side-service-discovery.png)

In this approach, the client obtains the location of another service by querying a service registry which is responsible for managing and storing the network locations of all the services.

**Server-side discovery**

![server-side-service-discovery](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/service-discovery/server-side-service-discovery.png)

In this approach, we use an intermediate component such as a load balancer. The client makes a request to the service via a load balancer, which then forwards the request to an available service instance.
### Service Registry

A service registry is basically a database containing the network locations of service instances to which the clients can reach out. A Service Registry must be highly available and up-to-date.

### Service Registration

We also need a way to obtain service information, often known as service registration. Let's look at two possible service registration approaches:

**Self-registration**

When using the self-registration model, a service instance is responsible for registering and de-registering itself in the Service Registry. In addition, if necessary, a service instance sends heartbeat requests to keep its registration alive.

**Third-party registration**

The registry keeps track of changes to running instances by polling the deployment environment or subscribing to events. When it detects a newly available service instance, it records it in its database. The Service Registry also de-registers terminated service instances.

### Service mesh

Service-to-service communication is essential in a distributed application, but routing this communication, both within and across application clusters, becomes increasingly complex as the number of services grows. Service mesh enables managed, observable, and secure communication between individual services. It works with a service discovery protocol to detect services. [Istio](https://istio.io/latest/about/service-mesh) and [envoy](https://www.envoyproxy.io) are some of the most commonly used service mesh technologies.

### Examples

Here are some commonly used service discovery infrastructure tools:

- [etcd](https://etcd.io)
- [Consul](https://www.consul.io)
- [Apache Thrift](https://thrift.apache.org)
- [Apache Zookeeper](https://zookeeper.apache.org)

## SLA, SLO, SLI

Let's briefly discuss SLA, SLO, and SLI. These are mostly related to the business and site reliability side of things but good to know nonetheless.

### Why are they important?

SLAs, SLOs, and SLIs allow companies to define, track, and monitor the promises made for a service to its users. Together, SLAs, SLOs, and SLIs should help teams generate more user trust in their services with an added emphasis on continuous improvement to incident management and response processes.

### SLA

An SLA, or Service Level Agreement, is an agreement made between a company and its users of a given service. The SLA defines the different promises that the company makes to users regarding specific metrics, such as service availability. SLAs are often written by a company's business or legal team.

### SLO

An SLO, or Service Level Objective, is the promise that a company makes to users regarding a specific metric such as incident response or uptime. SLOs exist within an SLA as individual promises contained within the full user agreement. The SLO is the specific goal that the service must meet in order to comply with the SLA. SLOs should always be simple, clearly defined, and easily measured to determine whether or not the objective is being fulfilled.

### SLI

An SLI, or Service Level Indicator, is a key metric used to determine whether or not the SLO is being met. It is the measured value of the metric described within the SLO. In order to remain in compliance with the SLA, the SLI's value must always meet or exceed the value determined by the SLO.

## Disaster recovery

Disaster recovery (DR) is a process of regaining access and functionality of the infrastructure after events like a natural disaster, cyber attack, or even business disruptions.

Disaster recovery relies upon the replication of data and computer processing in an off-premises location not affected by the disaster. When servers go down because of a disaster, a business needs to recover lost data from a second location where the data is backed up. Ideally, an organization can transfer its computer processing to that remote location as well in order to continue operations.
Disaster recovery is often not actively discussed during system design interviews, but it's important to have some basic understanding of this topic. You can learn more about disaster recovery from [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html).

### Why is disaster recovery important?

Disaster recovery can have the following benefits:

- Minimize interruption and downtime
- Limit damages
- Fast restoration
- Better customer retention

### Terms

Let's discuss some important terms relevant to disaster recovery:

![disaster-recovery](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/disaster-recovery/disaster-recovery.png)

**RTO**

Recovery Time Objective (RTO) is the maximum acceptable delay between the interruption of service and restoration of service. This determines what is considered an acceptable time window when service is unavailable.

**RPO**

Recovery Point Objective (RPO) is the maximum acceptable amount of time since the last data recovery point. This determines what is considered an acceptable loss of data between the last recovery point and the interruption of service.

### Strategies

A variety of disaster recovery (DR) strategies can be part of a disaster recovery plan.

**Back-up**

This is the simplest type of disaster recovery and involves storing data off-site or on a removable drive.

**Cold Site**

In this type of disaster recovery, an organization sets up basic infrastructure in a second site.

**Hot site**

A hot site maintains up-to-date copies of data at all times. Hot sites are time-consuming to set up and more expensive than cold sites, but they dramatically reduce downtime.

## Virtual Machines (VMs) and Containers

Before we discuss virtualization vs containerization, let's learn what are virtual machines (VMs) and containers.

### Virtual Machines (VM)

A Virtual Machine (VM) is a virtual environment that functions as a virtual computer system with its own CPU, memory, network interface, and storage, created on a physical hardware system. A software called a hypervisor separates the machine's resources from the hardware and provisions them appropriately so they can be used by the VM.

VMs are isolated from the rest of the system, and multiple VMs can exist on a single piece of hardware, like a server. They can be moved between host servers depending on the demand or to use resources more efficiently.

#### What is a Hypervisor?

A hypervisor, sometimes called a Virtual Machine Monitor (VMM), isolates the operating system and resources from the virtual machines and enables the creation and management of those VMs. The hypervisor treats resources like CPU, memory, and storage as a pool of resources that can be easily reallocated between existing guests or new virtual machines.

#### Why use a Virtual Machine?

Server consolidation is a top reason to use VMs. Most operating system and application deployments only use a small amount of the physical resources available. By virtualizing our servers, we can place many virtual servers onto each physical server to improve hardware utilization. This keeps us from needing to purchase additional physical resources.

A VM provides an environment that is isolated from the rest of a system, so whatever is running inside a VM won't interfere with anything else running on the host hardware.
Because VMs are isolated, they are a good option for testing new applications or setting up a production environment. We can also run a single-purpose VM to support a specific use case.

### Containers

A container is a standard unit of software that packages up code and all its dependencies, such as specific versions of runtimes and libraries, so that the application runs quickly and reliably from one computing environment to another. Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run. This decoupling allows container-based applications to be deployed easily and consistently, regardless of the target environment.

#### Why do we need containers?

Let's discuss some advantages of using containers:

- **Separation of responsibility**: Containerization provides a clear separation of responsibility, as developers focus on application logic and dependencies, while operations teams can focus on deployment and management.
- **Workload portability**: Containers can run virtually anywhere, greatly easing development and deployment.
- **Application isolation**: Containers virtualize CPU, memory, storage, and network resources at the operating system level, providing developers with a view of the OS logically isolated from other applications.
- **Agile development**: Containers allow developers to move much more quickly by avoiding concerns about dependencies and environments.
- **Efficient operations**: Containers are lightweight and allow us to use just the computing resources we need.

### Virtualization vs Containerization

![virtualization-vs-containerization](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/virtual-machines-and-containers/virtualization-vs-containerization.png)

In traditional virtualization, a hypervisor virtualizes physical hardware. The result is that each virtual machine contains a guest OS, a virtual copy of the hardware that the OS requires to run, and an application and its associated libraries and dependencies.

Instead of virtualizing the underlying hardware, containers virtualize the operating system, so each container contains only the application and its dependencies, making them much more lightweight than VMs. Containers also share the OS kernel and use a fraction of the memory VMs require.

## OAuth 2.0 and OpenID Connect (OIDC)

### OAuth 2.0

OAuth 2.0, which stands for Open Authorization, is a standard designed to provide consented access to resources on behalf of the user, without ever sharing the user's credentials. OAuth 2.0 is an authorization protocol and not an authentication protocol. It is designed primarily as a means of granting access to a set of resources, for example, remote APIs or user's data.

#### Concepts

The OAuth 2.0 protocol defines the following entities:

- **Resource Owner**: The user or system that owns the protected resources and can grant access to them.
- **Client**: The client is the system that requires access to the protected resources.
- **Authorization Server**: This server receives requests from the Client for Access Tokens and issues them upon successful authentication and consent by the Resource Owner.
- **Resource Server**: A server that protects the user's resources and receives access requests from the Client. It accepts and validates an Access Token from the Client and returns the appropriate resources.
- **Scopes**: They are used to specify exactly the reason for which access to resources may be granted. Acceptable scope values, and which resources they relate to, are dependent on the Resource Server.
- **Access Token**: A piece of data that represents the authorization to access resources on behalf of the end-user.
#### How does OAuth 2.0 work?

Let's learn how OAuth 2.0 works:

![oauth2](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/oauth2-and-openid-connect/oauth2.png)

1. The client requests authorization from the Authorization Server, supplying the client id and secret as identification. It also provides the scopes and an endpoint URI to send the Access Token or the Authorization Code.
2. The Authorization Server authenticates the client and verifies that the requested scopes are permitted.
3. The Resource Owner interacts with the Authorization Server to grant access.
4. The Authorization Server redirects back to the client with either an Authorization Code or Access Token, depending on the grant type. A Refresh Token may also be returned.
5. With the Access Token, the client can request access to the resource from the Resource Server.

#### Disadvantages

Here are the most common disadvantages of OAuth 2.0:

- Lacks built-in security features.
- No standard implementation.
- No common set of scopes.

### OpenID Connect

OAuth 2.0 is designed only for authorization, for granting access to data and features from one application to another. OpenID Connect (OIDC) is a thin layer that sits on top of OAuth 2.0 that adds login and profile information about the person who is logged in.

When an Authorization Server supports OIDC, it is sometimes called an Identity Provider (IdP), since it provides information about the Resource Owner back to the Client. OpenID Connect is relatively new, resulting in lower adoption and industry implementation of best practices compared to OAuth.

#### Concepts

The OpenID Connect (OIDC) protocol defines the following entities:

- **Relying Party**: The current application.
- **OpenID Provider**: This is essentially an intermediate service that provides a one-time code to the Relying Party.
- **Token Endpoint**: A web server that accepts the One-Time Code (OTC) and provides an access code that's valid for an hour. The main difference between OIDC and OAuth 2.0 is that the token is provided using JSON Web Token (JWT).
- **UserInfo Endpoint**: The Relying Party communicates with this endpoint, providing a secure token and receiving information about the end-user.

Both OAuth 2.0 and OIDC are easy to implement and are JSON based, which is supported by most web and mobile applications. However, the OpenID Connect (OIDC) specification is more strict than that of basic OAuth.

## Single Sign-On (SSO)

Single Sign-On (SSO) is an authentication process in which a user is provided access to multiple applications or websites by using only a single set of login credentials. This prevents the need for the user to log separately into the different applications.

The user credentials and other identifying information are stored and managed by a centralized system called Identity Provider (IdP). The Identity Provider is a trusted system that provides access to other websites and applications.

Single Sign-On (SSO) based authentication systems are commonly used in enterprise environments where employees require access to multiple applications of their organizations.

### Components

Let's discuss some key components of Single Sign-On (SSO).

**Identity Provider (IdP)**

User identity information is stored and managed by a centralized system called Identity Provider (IdP). The Identity Provider authenticates the user and provides access to the service provider.

The identity provider can directly authenticate the user by validating a username and password or by validating an assertion about the user's identity as presented by a separate identity provider.
The identity provider handles the management of user identities in order to free the service provider from this responsibility.

**Service Provider**

A service provider provides services to the end-user. They rely on identity providers to assert the identity of a user, and typically certain attributes about the user are managed by the identity provider. Service providers may also maintain a local account for the user along with attributes that are unique to their service.

**Identity Broker**

An identity broker acts as an intermediary that connects multiple service providers with various different identity providers. Using an identity broker, we can perform single sign-on over any application without the hassle of the protocol it follows.

### SAML

Security Assertion Markup Language is an open standard that allows clients to share security information about identity, authentication, and permission across different systems. SAML is implemented with the Extensible Markup Language (XML) standard for sharing data.

SAML specifically enables identity federation, making it possible for identity providers (IdPs) to seamlessly and securely pass authenticated identities and their attributes to service providers.

### How does SSO work?

Now, let's discuss how Single Sign-On works:

![sso](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-IV/single-sign-on/sso.png)

1. The user requests a resource from their desired application.
2. The application redirects the user to the Identity Provider (IdP) for authentication.
3. The user signs in with their credentials (usually, username and password).
4. The Identity Provider (IdP) sends a Single Sign-On response back to the client application.
5. The application grants access to the user.

### SAML vs OAuth 2.0 and OpenID Connect (OIDC)

There are many differences between SAML, OAuth, and OIDC. SAML uses XML to pass messages, while OAuth and OIDC use JSON. OAuth provides a simpler experience, while SAML is geared towards enterprise security.

OAuth and OIDC use RESTful communication extensively, which is why mobile and modern web applications find OAuth and OIDC a better experience for the user. SAML, on the other hand, drops a session cookie in a browser that allows a user to access certain web pages. This is great for short-lived workloads.

OIDC is developer-friendly and simpler to implement, which broadens the use cases for which it might be implemented. It can be implemented from scratch pretty fast, via freely available libraries in all common programming languages. SAML can be complex to install and maintain, which only enterprise-size companies can handle well.

OpenID Connect is essentially a layer on top of the OAuth framework. Therefore, it can offer a built-in layer of permission that asks a user to agree to what the service provider might access. Although SAML is also capable of allowing consent flow, it achieves this by hard-coding carried out by a developer and not as part of its protocol.

Both of these authentication protocols are good at what they do. As always, a lot depends on our specific use cases and target audience.

### Advantages

Following are the benefits of using Single Sign-On:

- Ease of use as users only need to remember one set of credentials.
- Ease of access without having to go through a lengthy authorization process.
- Enforced security and compliance to protect sensitive data.
- Simplifying the management with reduced IT support cost and admin time.

### Disadvantages

Here are some disadvantages of Single Sign-On:

- Single password vulnerability: if the main SSO password gets compromised, all the supported applications get compromised.
- The authentication process using Single Sign-On is slower than traditional authentication, as every application has to request the SSO provider for verification.

### Examples

These are some commonly used Identity Providers (IdP):

- [Okta](https://www.okta.com)
- [Google](https://cloud.google.com/architecture/identity/single-sign-on)
- [Auth0](https://auth0.com)
- [OneLogin](https://www.onelogin.com)

## SSL, TLS, mTLS

Let's briefly discuss some important communication security protocols such as SSL, TLS, and mTLS. I would say that from a "big picture" system design perspective, this topic is not very important, but still good to know about.

### SSL

SSL stands for Secure Sockets Layer, and it refers to a protocol for encrypting and securing communications that take place on the internet. It was first developed in 1995 but has since been deprecated in favor of TLS (Transport Layer Security).

#### Why is it called an SSL certificate if it is deprecated?

Most major certificate providers still refer to certificates as SSL certificates, which is why the naming convention persists.

#### Why was SSL so important?

Originally, data on the web was transmitted in plaintext that anyone could read if they intercepted the message. SSL was created to correct this problem and protect user privacy by encrypting any data that goes between the user and a web server. SSL also stops certain kinds of cyber attacks by preventing attackers from tampering with data in transit.

### TLS

Transport Layer Security, or TLS, is a widely adopted security protocol designed to facilitate privacy and data security for communications over the internet. TLS evolved from a previous encryption protocol called Secure Sockets Layer (SSL). A primary use case of TLS is encrypting the communication between web applications and servers.

There are three main components to what the TLS protocol accomplishes:

- **Encryption**: hides the data being transferred from third parties.
- **Authentication**: ensures that the parties exchanging information are who they claim to be.
- **Integrity**: verifies that the data has not been forged or tampered with.

### mTLS

Mutual TLS, or mTLS, is a method for mutual authentication. mTLS ensures that the parties at each end of a network connection are who they claim to be by verifying that they both have the correct private key. The information within their respective TLS certificates provides additional verification.

#### Why use mTLS?

mTLS helps ensure that the traffic is secure and trusted in both directions between a client and server. This provides an additional layer of security for users who log in to an organization's network or applications. It also verifies connections with client devices that do not follow a login process, such as Internet of Things (IoT) devices.

Nowadays, mTLS is commonly used by microservices or distributed systems in a [zero trust security model](https://en.wikipedia.org/wiki/Zero_trust_security_model) to verify each other.

## System Design Interviews

System design is a very extensive topic and system design interviews are designed to evaluate your capability to produce technical solutions to abstract problems; as such, they're not designed for a specific answer. The unique aspect of system design interviews is the two-way nature between the candidate and the interviewer.

Expectations are quite different at different engineering levels as well. This is because someone with a lot of practical experience will approach it quite differently from someone who's new in the industry. As a result, it's hard to come up with a single strategy that will help us stay organized during the interview.
Let's look at some common strategies for system design interviews:

### Requirements clarifications

System design interview questions, by nature, are vague or abstract. Asking questions about the exact scope of the problem, and clarifying functional requirements early in the interview is essential. Usually, requirements are divided into three parts:

**Functional requirements**

These are the requirements that the end user specifically demands as basic functionalities that the system should offer. All these functionalities need to be necessarily incorporated into the system as part of the contract. For example:

- "What are the features that we need to design for this system?"
- "What are the edge cases we need to consider, if any, in our design?"

**Non-functional requirements**

These are the quality constraints that the system must satisfy according to the project contract. The priority or extent to which these factors are implemented varies from one project to another. They are also called non-behavioral requirements. For example, portability, maintainability, reliability, scalability, security, etc. For example:

- "Each request should be processed with the minimum latency."
- "System should be highly available."

**Extended requirements**

These are basically "nice to have" requirements that might be out of the scope of the system. For example:

- "Our system should record metrics and analytics."
- "Service health and performance monitoring?"

### Estimation and Constraints

Estimate the scale of the system we're going to design. It is important to ask questions such as:

- "What is the desired scale that this system will need to handle?"
- "What is the read/write ratio of our system?"
- "How many requests per second?"
- "How much storage will be needed?"

These questions will help us scale our design later.

### Data model design

Once we have the estimations, we can start with defining the database schema. Doing so in the early stages of the interview would help us to understand the data flow, which is the core of every system. In this step, we basically define all the entities and relationships between them.

- "What are the different entities in the system?"
- "What are the relationships between these entities?"
- "How many tables do we need?"
- "Is NoSQL a better choice here?"

### API design

Next, we can start designing APIs for the system. These APIs will help us define the expectations from the system explicitly. We don't have to write any code, just a simple interface defining the API requirements such as parameters, functions, classes, types, entities, etc. For example:

```tsx
createUser(name: string, email: string): User
```

It is advised to keep the interface as simple as possible and come back to it later when covering extended requirements.

### High-level component design

Now we have established our data model and API design, it's time to identify system components (such as Load Balancers, API Gateway, etc.) that are needed to solve our problem and draft the first design of our system.

- "Is it best to design a monolithic or a microservices architecture?"
- "What type of database should we use?"

Once we have a basic diagram, we can start discussing with the interviewer how the system will work from the client's perspective.

### Detailed design

Now it's time to go into detail about the major components of the system we designed. As always, discuss with the interviewer which component may need further improvements.

Here is a good opportunity to demonstrate your experience in the areas of your expertise. Present different approaches, advantages, and disadvantages. Explain your design decisions, and back them up with examples.
This is also a good time to discuss any additional features the system might be able to support, though this is optional.

- "How should we partition our data?"
- "What about load distribution?"
- "Should we use cache?"
- "How will we handle a sudden spike in traffic?"

Also, try not to be too opinionated about certain technologies; statements like _"I believe that NoSQL databases are just better, SQL databases are not scalable"_ reflect poorly. As someone who has interviewed a lot of people over the years, my two cents here would be to be humble about what you know and what you do not. Use your existing knowledge with examples to navigate this part of the interview.

### Identify and resolve bottlenecks

Finally, it's time to discuss bottlenecks and approaches to mitigate them. Here are some important questions to ask:

- "Do we have enough database replicas?"
- "Is there any single point of failure?"
- "Is database sharding required?"
- "How can we make our system more robust?"
- "How to improve the availability of our cache?"

Make sure to read the engineering blog of the company you're interviewing with. This will help you get a sense of what technology stack they're using and which problems are important to them.

## URL Shortener

Let's design a URL shortener, similar to services like [Bitly](https://bitly.com) and [TinyURL](https://tinyurl.com/app).

### What is a URL Shortener?

A URL shortener service creates an alias or a short URL for a long URL. Users are redirected to the original URL when they visit these short links.

For example, the following long URL can be changed to a shorter URL:

**Long URL**: [https://karanpratapsingh.com/courses/system-design/url-shortener](https://karanpratapsingh.com/courses/system-design/url-shortener)

**Short URL**: [https://bit.ly/3i71d3o](https://bit.ly/3i71d3o)

### Why do we need a URL shortener?

URL shortener saves space in general when we are sharing URLs. Users are also less likely to mistype shorter URLs. Moreover, we can also optimize links across devices, and this allows us to track individual links.

### Requirements

Our URL shortening system should meet the following requirements:

**Functional requirements**

- Given a URL, our service should generate a shorter and unique alias for it.
- Users should be redirected to the original URL when they visit the short link.
- Links should expire after a default timespan.

**Non-functional requirements**

- High availability with minimal latency.
- The system should be scalable and efficient.

**Extended requirements**

- Prevent abuse of services.
- Record analytics and metrics for redirections.

### Estimation and Constraints

Let's start with the estimation and constraints.

_Note: Make sure to check any scale or traffic related assumptions with your interviewer._

**Traffic**

This will be a read-heavy system, so let's assume a `100:1` read/write ratio with 100 million links generated per month.

**Reads/Writes per month**

For reads per month:

$$
100 \times 100 \space million = 10 \space billion/month
$$

Similarly for writes:

$$
1 \times 100 \space million = 100 \space million/month
$$

**What would be Requests Per Second (RPS) for our system?**

100 million requests per month translate into 40 requests per second.

$$
\frac{100 \space million}{(30 \space days \times 24 \space hrs \times 3600 \space seconds)} = \sim 40 \space URLs/second
$$

And with a `100:1` read/write ratio, the number of redirections will be:

$$
100 \times 40 \space URLs/second = 4000 \space requests/second
$$
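As a quick sanity check of the arithmetic above, here is a tiny sketch that derives the same per-second numbers from the stated monthly assumptions:

```tsx
// Quick sanity check of the back-of-envelope numbers above.
// All inputs are the assumptions stated in the text.
const WRITES_PER_MONTH = 100e6;   // 100 million new links/month
const READ_WRITE_RATIO = 100;     // 100:1 reads to writes
const SECONDS_PER_MONTH = 30 * 24 * 3600;

const writesPerSecond = WRITES_PER_MONTH / SECONDS_PER_MONTH;
const readsPerSecond = writesPerSecond * READ_WRITE_RATIO;

console.log(writesPerSecond.toFixed(1)); // ~38.6, i.e. ~40 URLs/second
console.log(Math.round(readsPerSecond)); // ~3858, i.e. ~4000 requests/second
```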
**Bandwidth**

Since we expect about 40 URLs every second, and if we assume each request is of size 500 bytes, then the total incoming data for write requests would be:

$$
40 \times 500 \space bytes = 20 \space KB/second
$$

Similarly, for the read requests, since we expect about 4K redirections, the total outgoing data would be:

$$
4000 \space URLs/second \times 500 \space bytes = \sim 2 \space MB/second
$$

**Storage**

For storage, we will assume we store each link or record in our database for 10 years. Since we expect around 100M new requests every month, the total number of records we will need to store would be:

$$
100 \space million \times 10 \space years \times 12 \space months = 12 \space billion
$$

Like earlier, if we assume each stored record will be approximately 500 bytes, we will need around 6 TB of storage:

$$
12 \space billion \times 500 \space bytes = 6 \space TB
$$

**Cache**

For caching, we will follow the classic [Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle), also known as the 80/20 rule. This means that 80% of the requests are for 20% of the data, so we can cache around 20% of our requests.

Since we get around 4K read or redirection requests each second, this translates into 350M requests per day.

$$
4000 \space URLs/second \times 24 \space hours \times 3600 \space seconds = \sim 350 \space million \space requests/day
$$

Hence, we will need around 35 GB of memory per day.

$$
20 \space percent \times 350 \space million \times 500 \space bytes = 35 \space GB/day
$$

**High-level estimate**

Here is our high-level estimate:

| Type | Estimate |
| ---- | -------- |
| Writes (New URLs) | 40/s |
| Reads (Redirection) | 4K/s |
| Bandwidth (Incoming) | 20 KB/s |
| Bandwidth (Outgoing) | 2 MB/s |
| Storage (10 years) | 6 TB |
| Memory (Caching) | 35 GB/day |

### Data model design

Next, we will focus on the data model design. Here is our database schema:

![url-shortener-datamodel](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/url-shortener/url-shortener-datamodel.png)

Initially, we can get started with just two tables:

**users**: Stores user's details such as `name`, `email`, `createdAt`, etc.

**urls**: Contains the new short URL's properties such as `expiration`, `hash`, `originalURL`, and `userID` of the user who created the short URL. We can also use the `hash` column as an [index](https://karanpratapsingh.com/courses/system-design/indexes) to improve the query performance.

### What kind of database should we use?

Since the data is not strongly relational, NoSQL databases such as [Amazon DynamoDB](https://aws.amazon.com/dynamodb), [Apache Cassandra](https://cassandra.apache.org/_/index.html), or [MongoDB](https://www.mongodb.com) will be a better choice here. If we do decide to use an SQL database, then we can use something like [Azure SQL Database](https://azure.microsoft.com/en-in/products/azure-sql/database) or [Amazon RDS](https://aws.amazon.com/rds).

_For more details, refer to [SQL vs NoSQL](https://karanpratapsingh.com/courses/system-design/sql-vs-nosql-databases)._

### API design

Let us do a basic API design for our services:

**Create URL**

This API should create a new short URL in our system given an original URL.

```tsx
createURL(apiKey: string, originalURL: string, expiration?: Date): string
```

**Parameters**

- API Key (`string`): API key provided by the user.
- Original URL (`string`): Original URL to be shortened.
- Expiration (`Date`): Expiration date of the new URL _(optional)_.

**Returns**

- Short URL (`string`): New shortened URL.

**Get URL**

This API should retrieve the original URL from a given short URL.

```tsx
getURL(apiKey: string, shortURL: string): string
```

**Parameters**

- API Key (`string`): API key provided by the user.
- Short URL (`string`): Short URL mapped to the original URL.

**Returns**

- Original URL (`string`): Original URL to be retrieved.

**Delete URL**

This API should delete a given short URL from our system.

```tsx
deleteURL(apiKey: string, shortURL: string): boolean
```

**Parameters**

- API Key (`string`): API key provided by the user.
- Short URL (`string`): Short URL to be deleted.

**Returns**

- Result (`boolean`): Represents whether the operation was successful or not.
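To show how a client might exercise this interface, here is a hypothetical usage sketch; the declarations simply mirror the API design above, and the key and URLs are illustrative.

```tsx
// Hypothetical usage of the interface above; declarations mirror the API design.
declare function createURL(apiKey: string, originalURL: string, expiration?: Date): string;
declare function getURL(apiKey: string, shortURL: string): string;
declare function deleteURL(apiKey: string, shortURL: string): boolean;

const apiKey = "user-api-key"; // illustrative key
const shortURL = createURL(apiKey, "https://example.com/some/very/long/path");
const originalURL = getURL(apiKey, shortURL); // resolves back to the long URL
const deleted = deleteURL(apiKey, shortURL);  // true if removal succeeded
```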
### Why do we need an API key?

As you must've noticed, we're using an API key to prevent abuse of our services. Using this API key we can limit the users to a certain number of requests per second or minute. This is quite a standard practice for developer APIs and should cover our extended requirement.

### High-level design

Now let us do a high-level design of our system.

**URL Encoding**

Our system's primary goal is to shorten a given URL. Let's look at different approaches:

**Base62 Approach**

In this approach, we can encode the original URL using [Base62](https://en.wikipedia.org/wiki/Base62), which consists of the capital letters A-Z, the lower case letters a-z, and the numbers 0-9.

$$
Number \space of \space URLs = 62^N
$$

Where `N` is the number of characters in the generated URL.

So, if we want to generate a URL that is 7 characters long, we will generate ~3.5 trillion different URLs.

$$
\begin{gather*}
62^5 = \sim 916 \space million \space URLs \\
62^6 = \sim 56.8 \space billion \space URLs \\
62^7 = \sim 3.5 \space trillion \space URLs
\end{gather*}
$$

This is the simplest solution here, but it does not guarantee non-duplicate or collision-resistant keys.

**MD5 Approach**

The [MD5 message-digest algorithm](https://en.wikipedia.org/wiki/MD5) is a widely used hash function producing a 128-bit hash value (or 32 hexadecimal digits). We can use these 32 hexadecimal digits for generating 7 characters long URL.

$$
MD5(original\_url) \rightarrow base62encode \rightarrow hash
$$

However, this creates a new issue for us, which is duplication and collision. We can try to re-compute the hash until we find a unique one, but that will increase the overhead of our systems. It's better to look for more scalable approaches.

**Counter Approach**

In this approach, we will start with a single server which will maintain the count of the keys generated. Once our service receives a request, it can reach out to the counter, which returns a unique number and increments the counter. When the next request comes, the counter again returns the unique number, and this goes on.

$$
Counter(0-3.5 \space trillion) \rightarrow base62encode \rightarrow hash
$$

The problem with this approach is that it can quickly become a single point of failure. And if we run multiple instances of the counter, we can have collisions as it's essentially a distributed system.

To solve this issue, we can use a distributed system manager such as [Zookeeper](https://zookeeper.apache.org), which can provide distributed synchronization. Zookeeper can maintain multiple ranges for our servers.

$$
\begin{align*}
& Range \space 1: \space 1 \rightarrow 1,000,000 \\
& Range \space 2: \space 1,000,001 \rightarrow 2,000,000 \\
& Range \space 3: \space 2,000,001 \rightarrow 3,000,000
\end{align*}
$$

Once a server reaches its maximum range, Zookeeper will assign an unused counter range to the new server. This approach can guarantee non-duplicate and collision-resistant URLs. Also, we can run multiple instances of Zookeeper to remove the single point of failure.

**Key Generation Service (KGS)**

As we discussed, generating a unique key at scale without duplication and collisions can be a bit of a challenge. To solve this problem, we can create a standalone Key Generation Service (KGS) that generates a unique key ahead of time and stores it in a separate database for later use. This approach can make things simple for us.

**How to handle concurrent access?**

Once the key is used, we can mark it in the database to make sure we don't reuse it. However, if there are multiple server instances reading data concurrently, two or more servers might try to use the same key.

The easiest way to solve this would be to store keys in two tables. As soon as a key is used, we move it to a separate table with appropriate locking in place. Also, to improve reads, we can keep some of the keys in memory.
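Tying together the counter and Base62 ideas from above, here is a minimal sketch of encoding a counter value into a short key. The alphabet order (A-Z, a-z, 0-9) is an assumption for illustration; any fixed 62-character alphabet works the same way.

```tsx
// Sketch of the counter -> Base62 encoding discussed above.
// The alphabet order is an illustrative assumption.
const ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

function base62Encode(counter: number): string {
  if (counter === 0) return ALPHABET[0];
  let hash = "";
  while (counter > 0) {
    hash = ALPHABET[counter % 62] + hash;
    counter = Math.floor(counter / 62);
  }
  return hash;
}

// Each unique counter value (e.g. handed out from a Zookeeper-managed range)
// maps to a unique short key, so no collision is possible.
console.log(base62Encode(1_000_001)); // "EMJD" with this alphabet
```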
### Key Generation Service (KGS)

As we discussed, generating a unique key at scale without duplication and collisions can be a bit of a challenge. To solve this problem, we can create a standalone Key Generation Service (KGS) that generates a unique key ahead of time and stores it in a separate database for later use. This approach can make things simple for us.

**How to handle concurrent access?**

Once the key is used, we can mark it in the database to make sure we don't reuse it. However, if there are multiple server instances reading data concurrently, two or more servers might try to use the same key.

The easiest way to solve this would be to store keys in two tables. As soon as a key is used, we move it to a separate table with appropriate locking in place. Also, to improve reads, we can keep some of the keys in memory.

**KGS database estimations**

As per our discussion, we can generate up to ~56.8 billion unique 6-character-long keys, which will result in us having to store around 341 GB of keys.

$$ 6 \space characters \times 56.8 \space billion = \sim 341 \space GB $$

While ~341 GB seems like a lot for this simple use case, it is important to remember this is for the entirety of our service lifetime, and the size of the keys database would not increase like our main database.

### Caching

Now, let's talk about [caching](https://karanpratapsingh.com/courses/system-design/caching). As per our estimations, we will require around ~35 GB of memory per day to cache 20% of the incoming requests to our services. For this use case, we can use [Redis](https://redis.io) or [Memcached](https://memcached.org) servers alongside our API server.

For more details, refer to [caching](https://karanpratapsingh.com/courses/system-design/caching).

## Design

Now that we have identified some core components, let's do the first draft of our system design.

![url-shortener-basic-design](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/url-shortener/url-shortener-basic-design.png)

Here's how it works:

**Creating a new URL**

1. When a user creates a new URL, our API server requests a new unique key from the Key Generation Service (KGS).
2. The Key Generation Service provides a unique key to the API server and marks the key as used.
3. The API server writes the new URL entry to the database and cache.
4. Our service returns an HTTP 201 (Created) response to the user.

**Accessing a URL**

1. When a client navigates to a certain short URL, the request is sent to the API servers.
2. The request first hits the cache, and if the entry is not found there, it is retrieved from the database and an HTTP 301 (Redirect) is issued to the original URL.
3. If the key is still not found in the database, an HTTP 404 (Not found) error is sent to the user.

## Detailed design

It's time to discuss the finer details of our design.

### Data partitioning

To scale out our databases, we will need to partition our data. Horizontal partitioning (aka [sharding](https://karanpratapsingh.com/courses/system-design/sharding)) can be a good first step. We can use partition schemes such as:

- Hash-Based Partitioning
- List-Based Partitioning
- Range-Based Partitioning
- Composite Partitioning

The above approaches can still cause uneven data and load distribution; we can solve this using [consistent hashing](https://karanpratapsingh.com/courses/system-design/consistent-hashing).

For more details, refer to [sharding](https://karanpratapsingh.com/courses/system-design/sharding) and [consistent hashing](https://karanpratapsingh.com/courses/system-design/consistent-hashing).

### Database cleanup

This is more of a maintenance step for our services and depends on whether we keep the expired entries or remove them. If we do decide to remove expired entries, we can approach this in two different ways:

**Active cleanup**

In active cleanup, we will run a separate cleanup service which will periodically remove expired links from our storage and cache. This will be a very lightweight service, like a [cron job](https://en.wikipedia.org/wiki/Cron).

**Passive cleanup**

For passive cleanup, we can remove the entry when a user tries to access an expired link. This can ensure a lazy cleanup of our database and cache.
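The redirection path plus passive cleanup can be sketched as follows. The two `Map`s are stand-ins for Redis/Memcached and our `urls` table; the returned status codes mirror the access flow above.

```tsx
interface URLEntry {
  originalURL: string;
  expiration?: Date;
}

const cache = new Map<string, URLEntry>();
const db = new Map<string, URLEntry>();

// Returns a { status, location? } pair that the API server would
// translate into an HTTP 301 redirect or a 404 error.
async function resolve(
  hash: string
): Promise<{ status: number; location?: string }> {
  const entry = cache.get(hash) ?? db.get(hash);
  if (!entry) return { status: 404 };

  // Passive cleanup: remove the entry on access if it has expired.
  if (entry.expiration && entry.expiration < new Date()) {
    cache.delete(hash);
    db.delete(hash);
    return { status: 404 };
  }

  cache.set(hash, entry); // refresh the cache after a database hit
  return { status: 301, location: entry.originalURL };
}
```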
### Cache

Now let us talk about [caching](https://karanpratapsingh.com/courses/system-design/caching).

**Which cache eviction policy to use?**

As we discussed before, we can use solutions like [Redis](https://redis.io) or [Memcached](https://memcached.org) and cache 20% of the daily traffic, but what kind of cache eviction policy would best fit our needs?

[Least Recently Used (LRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_%28LRU%29) can be a good policy for our system. In this policy, we discard the least recently used key first.

**How to handle cache miss?**

Whenever there is a cache miss, our servers can hit the database directly and update the cache with the new entries.

### Metrics and analytics

Recording analytics and metrics is one of our extended requirements. We can store and update metadata like visitor's country, platform, the number of views, etc. alongside the URL entry in our database.

### Security

For security, we can introduce private URLs and authorization. A separate table can be used to store user IDs that have permission to access a specific URL. If a user does not have proper permissions, we can return an HTTP 401 (Unauthorized) error.

We can also use an [API Gateway](https://karanpratapsingh.com/courses/system-design/api-gateway), as they can support capabilities like authorization, rate limiting, and load balancing out of the box.

## Identify and resolve bottlenecks

![url-shortener-advanced-design](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/url-shortener/url-shortener-advanced-design.png)

Let us identify and resolve bottlenecks such as single points of failure in our design:

- What if the API service or Key Generation Service crashes?
- How will we distribute our traffic between our components?
- How can we reduce the load on our database?
- What if the key database used by KGS fails?
- How to improve the availability of our cache?

To make our system more resilient, we can do the following:

- Running multiple instances of our servers and Key Generation Service.
- Introducing [load balancers](https://karanpratapsingh.com/courses/system-design/load-balancing) between clients, servers, databases, and cache servers.
- Using multiple read replicas for our database, as it's a read-heavy system.
- Standby replica for our key database in case it fails.
- Multiple instances and replicas for our distributed cache.

# WhatsApp

Let's design a [WhatsApp](https://whatsapp.com) like instant messaging service, similar to services like [Facebook Messenger](https://www.messenger.com) and [WeChat](https://www.wechat.com).

## What is WhatsApp?

WhatsApp is a chat application that provides instant messaging services to its users. It is one of the most used mobile applications on the planet, connecting over 2 billion users in 180 countries. WhatsApp is also available on the web.

## Requirements

Our system should meet the following requirements:

### Functional requirements

- Should support one-on-one chat.
- Group chats (max 100 people).
- Should support file sharing (image, video, etc.).

### Non-functional requirements

- High availability with minimal latency.
- The system should be scalable and efficient.

### Extended requirements

- Sent, delivered, and read receipts of the messages.
- Show the last seen time of users.
- Push notifications.

## Estimation and constraints

Let's start with the estimation and constraints.

**Note:** Make sure to check any scale or traffic related assumptions with your interviewer.

### Traffic

Let us assume we have 50 million daily active users (DAU), and on average each user sends at least 10 messages to 4 different people every day. This gives us 2 billion messages per day.

$$ 50 \space million \times 40 \space messages = 2 \space billion/day $$

Messages can also contain media such as images, videos, or other files.
We can assume that 5 percent of messages are media files shared by the users, which gives us an additional 100 million files we would need to store.

$$ 5 \space percent \times 2 \space billion = 100 \space million/day $$

**What would be Requests Per Second (RPS) for our system?**

2 billion requests per day translate into 24K requests per second.

$$ \frac{2 \space billion}{(24 \space hrs \times 3600 \space seconds)} = \sim 24K \space requests/second $$

### Storage

If we assume each message on average is 100 bytes, we will require about 200 GB of database storage every day.

$$ 2 \space billion \times 100 \space bytes = \sim 200 \space GB/day $$

As per our requirements, we also know that around 5 percent of our daily messages (100 million) are media files. If we assume each file is 100 KB on average, we will require 10 TB of storage every day.

$$ 100 \space million \times 100 \space KB = 10 \space TB/day $$

And for 10 years, we will require about 38 PB of storage.

$$ (10 \space TB + 0.2 \space TB) \times 10 \space years \times 365 \space days = \sim 38 \space PB $$

### Bandwidth

As our system is handling 10.2 TB of ingress every day, we will require a minimum bandwidth of around 120 MB per second.

$$ \frac{10.2 \space TB}{(24 \space hrs \times 3600 \space seconds)} = \sim 120 \space MB/second $$

### High-level estimate

Here is our high-level estimate:

| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 50 million |
| Requests per second (RPS) | 24K/s |
| Storage (per day) | ~10.2 TB |
| Storage (10 years) | ~38 PB |
| Bandwidth | ~120 MB/s |

## Data model design

This is the general data model which reflects our requirements.

![whatsapp-datamodel](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/whatsapp/whatsapp-datamodel.png)

We have the following tables:

**users**: This table will contain a user's information such as `name`, `phoneNumber`, and other details.

**messages**: As the name suggests, this table will store messages with properties such as `type` (text, image, video, etc.), `content`, and timestamps for message delivery. The message will also have a corresponding `chatID` or `groupID`.

**chats**: This table basically represents a private chat between two users and can contain multiple messages.

**users_chats**: This table maps users and chats, as multiple users can have multiple chats (N:M relationship) and vice versa.

**groups**: This table represents a group made up of multiple users.

**users_groups**: This table maps users and groups, as multiple users can be a part of multiple groups (N:M relationship) and vice versa.

### What kind of database should we use?

While our data model seems quite relational, we don't necessarily need to store everything in a single database, as this can limit our scalability and quickly become a bottleneck.

We will split the data between different services, each having ownership over a particular table. Then we can use a relational database such as [PostgreSQL](https://www.postgresql.org) or a distributed NoSQL database such as [Apache Cassandra](https://cassandra.apache.org/_/index.html) for our use case.

## API design

Let us do a basic API design for our services:

### Get all chats or groups

This API will get all chats or groups for a given `userID`.

```tsx
getAll(userID: UUID): Chat[] | Group[]
```

**Parameters**

User ID (`UUID`): ID of the current user.

**Returns**

Result (`Chat[] | Group[]`): All the chats and groups the user is a part of.

### Get messages

Get all messages for a user given the `channelID` (chat or group ID).

```tsx
getMessages(userID: UUID, channelID: UUID): Message[]
```

**Parameters**

User ID (`UUID`): ID of the current user.

Channel ID (`UUID`): ID of the channel (chat or group) from which messages need to be retrieved.

**Returns**

Messages (`Message[]`): All the messages in a given chat or group.
### Send message

Send a message from a user to a channel (chat or group).

```tsx
sendMessage(userID: UUID, channelID: UUID, message: Message): boolean
```

**Parameters**

User ID (`UUID`): ID of the current user.

Channel ID (`UUID`): ID of the channel (chat or group) the user wants to send a message to.

Message (`Message`): The message (text, image, video, etc.) that the user wants to send.

**Returns**

Result (`boolean`): Represents whether the operation was successful or not.

### Join or leave a channel

Allows the user to join or leave a channel (chat or group).

```tsx
joinGroup(userID: UUID, channelID: UUID): boolean
leaveGroup(userID: UUID, channelID: UUID): boolean
```

**Parameters**

User ID (`UUID`): ID of the current user.

Channel ID (`UUID`): ID of the channel (chat or group) the user wants to join or leave.

**Returns**

Result (`boolean`): Represents whether the operation was successful or not.

## High-level design

Now let us do a high-level design of our system.

### Architecture

We will be using [microservices architecture](https://karanpratapsingh.com/courses/system-design/monoliths-microservices#microservices), since it will make it easier to horizontally scale and decouple our services. Each service will have ownership of its own data model. Let's try to divide our system into some core services.

**User service**

This is an HTTP-based service that handles user-related concerns such as authentication and user information.

**Chat service**

The chat service will use WebSockets to establish connections with the client to handle chat and group message related functionality. We can also use cache to keep track of all the active connections, sort of like sessions, which will help us determine if the user is online or not.

**Notification service**

This service will simply send push notifications to the users. It will be discussed in detail separately.

**Presence service**

The presence service will keep track of the last seen status of all users. It will be discussed in detail separately.

**Media service**

This service will handle the media (images, videos, files, etc.) uploads. It will be discussed in detail separately.

**What about inter-service communication and service discovery?**

Since our architecture is microservices-based, services will be communicating with each other as well. Generally, REST or HTTP performs well, but we can further improve the performance using [gRPC](https://karanpratapsingh.com/courses/system-design/rest-graphql-grpc#grpc), which is more lightweight and efficient.

[Service discovery](https://karanpratapsingh.com/courses/system-design/service-discovery) is another thing we will have to take into account. We can also use a service mesh that enables managed, observable, and secure communication between individual services.

**Note:** Learn more about [REST, GraphQL, gRPC](https://karanpratapsingh.com/courses/system-design/rest-graphql-grpc) and how they compare with each other.

### Real-time messaging

How do we efficiently send and receive messages? We have two different options:

**Pull model**

The client can periodically send an HTTP request to servers to check if there are any new messages. This can be achieved via something like [Long polling](https://karanpratapsingh.com/courses/system-design/long-polling-websockets-server-sent-events#long-polling).

**Push model**

The client opens a long-lived connection with the server, and once new data is available, it will be pushed to the client. We can use [WebSockets](https://karanpratapsingh.com/courses/system-design/long-polling-websockets-server-sent-events#websockets) or [Server-Sent Events (SSE)](https://karanpratapsingh.com/courses/system-design/long-polling-websockets-server-sent-events#server-sent-events-sse) for this.

The pull model approach is not scalable, as it will create unnecessary request overhead on our servers and most of the time the response will be empty, thus wasting our resources. To minimize latency, using the push model with [WebSockets](https://karanpratapsingh.com/courses/system-design/long-polling-websockets-server-sent-events#websockets) is a better choice, because then we can push data to the client once it's available without any delay, given the connection is open with the client. Also, WebSockets provide full-duplex communication, unlike [Server-Sent Events (SSE)](https://karanpratapsingh.com/courses/system-design/long-polling-websockets-server-sent-events#server-sent-events-sse), which are only unidirectional.

**Note:** Learn more about [Long polling, WebSockets, Server-Sent Events (SSE)](https://karanpratapsingh.com/courses/system-design/long-polling-websockets-server-sent-events).
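A bare-bones sketch of the chat service's WebSocket layer is shown below, using the popular `ws` package (an assumed dependency, not mandated by the design). Connections are tracked in an in-memory map here; in our design this mapping would live in a shared cache so any chat server instance can route messages. The `userID` query parameter is a hypothetical handshake detail.

```tsx
import { WebSocketServer, WebSocket } from "ws";

const connections = new Map<string, WebSocket>(); // userID -> socket

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket, request) => {
  // Hypothetical: the client passes its userID as a query parameter
  // after authenticating with the user service.
  const userID =
    new URL(request.url ?? "", "http://localhost").searchParams.get("userID") ?? "";
  connections.set(userID, socket);

  socket.on("message", (data) => {
    const { to, content } = JSON.parse(data.toString());
    const recipient = connections.get(to);
    if (recipient) {
      // Push model: deliver immediately over the open connection.
      recipient.send(JSON.stringify({ from: userID, content }));
    }
    // else: recipient offline -> hand off to the notification service.
  });

  socket.on("close", () => connections.delete(userID));
});
```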
### Last seen

To implement the last seen functionality, we can use a [heartbeat](https://en.wikipedia.org/wiki/Heartbeat_%28computing%29) mechanism, where the client can periodically ping the servers indicating its liveness. Since this needs to be as low overhead as possible, we can store the last active timestamp in the cache as follows:

| Key | Value |
| --- | --- |
| User A | 2022-07-01T14:32:50 |
| User B | 2022-07-05T05:10:35 |
| User C | 2022-07-10T04:33:25 |

This will give us the last time the user was active. This functionality will be handled by the presence service combined with [Redis](https://redis.io) or [Memcached](https://memcached.org) as our cache.

Another way to implement this is to track the latest action of the user; once the last activity crosses a certain threshold, such as "user hasn't performed any action in the last 30 seconds", we can show the user as offline and last seen with the last recorded timestamp. This will be more of a lazy update approach and might benefit us over heartbeat in certain cases.
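A sketch of the heartbeat-based tracking, assuming an in-memory map as a stand-in for Redis/Memcached and the 30-second threshold from the lazy-update example above:

```tsx
const lastActive = new Map<string, number>(); // userID -> epoch millis

const OFFLINE_THRESHOLD_MS = 30_000; // "no action in the last 30 seconds"

// Called whenever a client heartbeat (or any user action) arrives.
function recordHeartbeat(userID: string): void {
  lastActive.set(userID, Date.now());
}

// The presence service answers "online, or last seen at ...?" like this.
function presence(userID: string): { online: boolean; lastSeen?: Date } {
  const ts = lastActive.get(userID);
  if (ts === undefined) return { online: false };
  const online = Date.now() - ts < OFFLINE_THRESHOLD_MS;
  return online ? { online } : { online, lastSeen: new Date(ts) };
}
```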
### Notifications

Once a message is sent in a chat or a group, we will first check if the recipient is active or not. We can get this information by taking the user's active connection and last seen into consideration.

If the recipient is not active, the chat service will add an event to a [message queue](https://karanpratapsingh.com/courses/system-design/message-queues) with additional metadata such as the client's device platform, which will be used to route the notification to the correct platform later on.

The notification service will then consume the event from the message queue and forward the request to [Firebase Cloud Messaging (FCM)](https://firebase.google.com/docs/cloud-messaging) or [Apple Push Notification Service (APNS)](https://developer.apple.com/documentation/usernotifications) based on the client's device platform (Android, iOS, web, etc.). We can also add support for email and SMS.

**Why are we using a message queue?**

Since most message queues provide best-effort ordering, which ensures that messages are generally delivered in the same order as they're sent, and that a message is delivered at least once, which is an important part of our service functionality.

While this seems like a classic [publish-subscribe](https://karanpratapsingh.com/courses/system-design/publish-subscribe) use case, it is actually not, as mobile devices and browsers each have their own way of handling push notifications. Usually, notifications are handled externally via Firebase Cloud Messaging (FCM) or Apple Push Notification Service (APNS), unlike message fan-out, which we commonly see in backend services. We can use something like [Amazon SQS](https://aws.amazon.com/sqs) or [RabbitMQ](https://www.rabbitmq.com) to support this functionality.

### Read receipts

Handling read receipts can be tricky. For this use case, we can wait for some sort of [Acknowledgment (ACK)](https://en.wikipedia.org/wiki/Acknowledgement_%28data_networks%29) from the client to determine if the message was delivered, and update the corresponding `deliveredAt` field. Similarly, we will mark the message as seen once the user opens the chat, and update the corresponding `seenAt` timestamp field.

## Design

Now that we have identified some core components, let's do the first draft of our system design.

![whatsapp-basic-design](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/whatsapp/whatsapp-basic-design.png)

## Detailed design

It's time to discuss our design decisions in detail.

### Data partitioning

To scale out our databases, we will need to partition our data. Horizontal partitioning (aka [sharding](https://karanpratapsingh.com/courses/system-design/sharding)) can be a good first step. We can use partition schemes such as:

- Hash-Based Partitioning
- List-Based Partitioning
- Range-Based Partitioning
- Composite Partitioning

The above approaches can still cause uneven data and load distribution; we can solve this using [consistent hashing](https://karanpratapsingh.com/courses/system-design/consistent-hashing).

For more details, refer to [sharding](https://karanpratapsingh.com/courses/system-design/sharding) and [consistent hashing](https://karanpratapsingh.com/courses/system-design/consistent-hashing).

### Caching

In a messaging application, we have to be careful about using cache, as our users expect the latest data. But many users will be requesting the same messages, especially in a group chat. So, to prevent usage spikes from our resources, we can cache older messages.

Some group chats can have thousands of messages, and sending that over the network will be really inefficient. To improve efficiency, we can add pagination to our system APIs. This decision will be helpful for users with limited network bandwidth, as they won't have to retrieve old messages unless requested.

**Which cache eviction policy to use?**

We can use solutions like [Redis](https://redis.io) or [Memcached](https://memcached.org) and cache 20% of the daily traffic, but what kind of cache eviction policy would best fit our needs?

[Least Recently Used (LRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_%28LRU%29) can be a good policy for our system. In this policy, we discard the least recently used key first.

**How to handle cache miss?**

Whenever there is a cache miss, our servers can hit the database directly and update the cache with the new entries.

For more details, refer to [caching](https://karanpratapsingh.com/courses/system-design/caching).
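The pagination mentioned above could be cursor-based, as in this sketch. The types, page size, and in-memory message list are illustrative assumptions; the point is that clients fetch older messages only on demand.

```tsx
interface Message {
  id: string;
  chatID: string;
  content: string;
  createdAt: number; // epoch millis
}

const PAGE_SIZE = 50;

// `before` is the cursor: fetch messages older than this timestamp.
function getMessagesPage(all: Message[], chatID: string, before?: number) {
  const page = all
    .filter((m) => m.chatID === chatID && (before === undefined || m.createdAt < before))
    .sort((a, b) => b.createdAt - a.createdAt) // newest first
    .slice(0, PAGE_SIZE);

  // The oldest timestamp in this page becomes the next cursor.
  const nextCursor =
    page.length === PAGE_SIZE ? page[page.length - 1].createdAt : undefined;
  return { messages: page, nextCursor };
}
```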
### Media access and storage

As we know, most of our storage space will be used for storing media files such as images, videos, or other files. Our media service will be handling both access and storage of the user media files.

But where can we store files at scale? Well, [object storage](https://karanpratapsingh.com/courses/system-design/storage#object-storage) is what we're looking for. Object stores break data files up into pieces called objects. It then stores those objects in a single repository, which can be spread out across multiple networked systems. We can also use distributed file storage such as [HDFS](https://karanpratapsingh.com/courses/system-design/storage#hdfs) or [GlusterFS](https://www.gluster.org).

**Fun fact:** WhatsApp deletes media on its servers once it has been downloaded by the user.

We can use object stores like [Amazon S3](https://aws.amazon.com/s3), [Azure Blob Storage](https://azure.microsoft.com/en-in/services/storage/blobs), or [Google Cloud Storage](https://cloud.google.com/storage) for this use case.

### Content delivery network (CDN)

[Content Delivery Network (CDN)](https://karanpratapsingh.com/courses/system-design/content-delivery-network) increases content availability and redundancy while reducing bandwidth costs. Generally, static files such as images and videos are served from CDN. We can use services like [Amazon CloudFront](https://aws.amazon.com/cloudfront) or [Cloudflare CDN](https://www.cloudflare.com/cdn) for this use case.

### API gateway

Since we will be using multiple protocols like HTTP, WebSocket, TCP/IP, deploying multiple L4 (transport layer) or L7 (application layer) type load balancers separately for each protocol will be expensive. Instead, we can use an [API Gateway](https://karanpratapsingh.com/courses/system-design/api-gateway) that supports multiple protocols without any issues.

An API Gateway can also offer other features such as authentication, authorization, rate limiting, throttling, and API versioning, which will improve the quality of our services.

We can use services like [Amazon API Gateway](https://aws.amazon.com/api-gateway) or [Azure API Gateway](https://azure.microsoft.com/en-in/services/api-management) for this use case.

## Identify and resolve bottlenecks

![whatsapp-advanced-design](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/whatsapp/whatsapp-advanced-design.png)

Let us identify and resolve bottlenecks such as single points of failure in our design:

- What if one of our services crashes?
- How will we distribute our traffic between our components?
- How can we reduce the load on our database?
- How to improve the availability of our cache?
- Wouldn't the API Gateway be a single point of failure?
- How can we make our notification system more robust?
- How can we reduce media storage costs?
- Does the chat service have too much responsibility?

To make our system more resilient, we can do the following:

- Running multiple instances of each of our services.
- Introducing [load balancers](https://karanpratapsingh.com/courses/system-design/load-balancing) between clients, servers, databases, and cache servers.
- Using multiple read replicas for our databases, as it's a read-heavy system.
- Multiple instances and replicas for our distributed cache.
- We can have a standby replica of our API Gateway.
- Exactly-once delivery and message ordering is challenging in a distributed system; we can use a dedicated [message broker](https://karanpratapsingh.com/courses/system-design/message-brokers) such as [Apache Kafka](https://kafka.apache.org) or [NATS](https://nats.io) to make our notification system more robust.
- We can add media processing and compression capabilities to the media service to compress large files (similar to WhatsApp), which will save a lot of storage space and reduce cost.
- We can create a group service separate from the chat service to further decouple our services.

# Twitter

Let's design a [Twitter](https://twitter.com) like social media service, similar to services like [Facebook](https://facebook.com), [Instagram](https://instagram.com), etc.

## What is Twitter?

Twitter is a social media service where users can read or post short messages (up to 280 characters) called tweets. It is available on the web and mobile platforms such as Android and iOS.

## Requirements

Our system should meet the following requirements:

### Functional requirements

- Should be able to post new tweets (can be text, image, video, etc.).
- Should be able to follow other users.
- Should have a newsfeed feature consisting of tweets from the people the user is following.
- Should be able to search tweets.

### Non-functional requirements

- High availability with minimal latency.
- The system should be scalable and efficient.

### Extended requirements

- Metrics and analytics.
- Retweet functionality.
- Favorite tweets.
## Estimation and constraints

Let's start with the estimation and constraints.

**Note:** Make sure to check any scale or traffic related assumptions with your interviewer.

### Traffic

This will be a read-heavy system. Let us assume we have 1 billion total users, with 200 million daily active users (DAU), and on average each user tweets 5 times a day. This gives us 1 billion tweets per day.

$$ 200 \space million \times 5 \space tweets = 1 \space billion/day $$

Tweets can also contain media such as images or videos. We can assume that 10 percent of tweets are media files shared by the users, which gives us an additional 100 million files we would need to store.

$$ 10 \space percent \times 1 \space billion = 100 \space million/day $$

**What would be Requests Per Second (RPS) for our system?**

1 billion requests per day translate into 12K requests per second.

$$ \frac{1 \space billion}{(24 \space hrs \times 3600 \space seconds)} = \sim 12K \space requests/second $$

### Storage

If we assume each tweet on average is 100 bytes, we will require about 100 GB of database storage every day.

$$ 1 \space billion \times 100 \space bytes = \sim 100 \space GB/day $$

We also know that around 10 percent of our daily tweets (100 million) are media files, per our requirements. If we assume each file is 50 KB on average, we will require 5 TB of storage every day.

$$ 100 \space million \times 50 \space KB = 5 \space TB/day $$

And for 10 years, we will require about 19 PB of storage.

$$ (5 \space TB + 0.1 \space TB) \times 365 \space days \times 10 \space years = \sim 19 \space PB $$

### Bandwidth

As our system is handling 5.1 TB of ingress every day, we will require a minimum bandwidth of around 60 MB per second.

$$ \frac{5.1 \space TB}{(24 \space hrs \times 3600 \space seconds)} = \sim 60 \space MB/second $$

### High-level estimate

Here is our high-level estimate:

| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 200 million |
| Requests per second (RPS) | 12K/s |
| Storage (per day) | ~5.1 TB |
| Storage (10 years) | ~19 PB |
| Bandwidth | ~60 MB/s |

## Data model design

This is the general data model which reflects our requirements.

![twitter-datamodel](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/twitter/twitter-datamodel.png)

We have the following tables:

**users**: This table will contain a user's information such as `name`, `email`, `dob`, and other details.

**tweets**: As the name suggests, this table will store tweets and their properties such as `type` (text, image, video, etc.), `content`, etc. We will also store the corresponding `userID`.

**favorites**: This table maps tweets with users for the favorite tweets functionality in our application.

**followers**: This table maps the followers and [followees](https://en.wiktionary.org/wiki/followee), as users can follow each other (N:M relationship).

**feeds**: This table stores feed properties with the corresponding `userID`.

**feeds_tweets**: This table maps tweets and feed (N:M relationship).

### What kind of database should we use?

While our data model seems quite relational, we don't necessarily need to store everything in a single database, as this can limit our scalability and quickly become a bottleneck.

We will split the data between different services, each having ownership over a particular table. Then we can use a relational database such as [PostgreSQL](https://www.postgresql.org) or a distributed NoSQL database such as [Apache Cassandra](https://cassandra.apache.org/_/index.html) for our use case.

## API design

Let us do a basic API design for our services:

### Post a tweet

This API will allow the user to post a tweet on the platform.

```tsx
postTweet(userID: UUID, content: string, mediaURL?: string): boolean
```

**Parameters**

User ID (`UUID`): ID of the user.

Content (`string`): Contents of the tweet.

Media URL (`string`): URL of the attached media _(optional)_.

**Returns**

Result (`boolean`): Represents whether the operation was successful or not.
### Follow or unfollow a user

This API will allow the user to follow or unfollow another user.

```tsx
follow(followerID: UUID, followeeID: UUID): boolean
unfollow(followerID: UUID, followeeID: UUID): boolean
```

**Parameters**

Follower ID (`UUID`): ID of the current user.

Followee ID (`UUID`): ID of the user we want to follow or unfollow.

**Returns**

Result (`boolean`): Represents whether the operation was successful or not.

### Get newsfeed

This API will return all the tweets to be shown within a given newsfeed.

```tsx
getNewsfeed(userID: UUID): Tweet[]
```

**Parameters**

User ID (`UUID`): ID of the user.

**Returns**

Tweets (`Tweet[]`): All the tweets to be shown within a given newsfeed.

## High-level design

Now let us do a high-level design of our system.

### Architecture

We will be using [microservices architecture](https://karanpratapsingh.com/courses/system-design/monoliths-microservices#microservices), since it will make it easier to horizontally scale and decouple our services. Each service will have ownership of its own data model. Let's try to divide our system into some core services.

**User service**

This service handles user-related concerns such as authentication and user information.

**Newsfeed service**

This service will handle the generation and publishing of user newsfeeds. It will be discussed in detail separately.

**Tweet service**

The tweet service will handle tweet-related use cases such as posting a tweet, favorites, etc.

**Search service**

The service is responsible for handling search-related functionality. It will be discussed in detail separately.

**Media service**

This service will handle the media (images, videos, files, etc.) uploads. It will be discussed in detail separately.

**Notification service**

This service will simply send push notifications to the users.

**Analytics service**

This service will be used for metrics and analytics use cases.

**What about inter-service communication and service discovery?**

Since our architecture is microservices-based, services will be communicating with each other as well. Generally, REST or HTTP performs well, but we can further improve the performance using [gRPC](https://karanpratapsingh.com/courses/system-design/rest-graphql-grpc#grpc), which is more lightweight and efficient.

[Service discovery](https://karanpratapsingh.com/courses/system-design/service-discovery) is another thing we will have to take into account. We can also use a service mesh that enables managed, observable, and secure communication between individual services.

**Note:** Learn more about [REST, GraphQL, gRPC](https://karanpratapsingh.com/courses/system-design/rest-graphql-grpc) and how they compare with each other.

## Newsfeed

When it comes to the newsfeed, it seems easy enough to implement, but there are a lot of things that can make or break this feature. So, let's divide our problem into two parts:

### Generation

Let's assume we want to generate the feed for user A. We will perform the following steps (see the sketch after this list):

1. Retrieve the IDs of all the users and entities (hashtags, topics, etc.) user A follows.
2. Fetch the relevant tweets for each of the retrieved IDs.
3. Use a ranking algorithm to rank the tweets based on parameters such as relevance, time, engagement, etc.
4. Return the ranked tweets data to the client in a paginated manner.

Feed generation is an intensive process and can take quite a lot of time, especially for users following a lot of people. To improve the performance, the feed can be pre-generated and stored in the cache. Then we can have a mechanism to periodically update the feed and apply our ranking algorithm to the new tweets.
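A sketch of the four generation steps, under stated assumptions: the two repository functions are hypothetical stand-ins for calls to the user and tweet services, and "ranking" here is just a recency sort where a real ranker would score each tweet.

```tsx
interface Tweet {
  id: string;
  userID: string;
  content: string;
  createdAt: number; // epoch millis
}

// Hypothetical stand-ins for the user and tweet services (stubbed).
async function getFollowees(userID: string): Promise<string[]> {
  return ["user-b", "user-c"];
}
async function getTweetsByUser(userID: string): Promise<Tweet[]> {
  return [];
}

async function generateNewsfeed(
  userID: string,
  page = 0,
  pageSize = 20
): Promise<Tweet[]> {
  const followees = await getFollowees(userID); // step 1
  const tweets = (await Promise.all(followees.map(getTweetsByUser))).flat(); // step 2

  tweets.sort((a, b) => b.createdAt - a.createdAt); // step 3: "rank" by recency

  return tweets.slice(page * pageSize, (page + 1) * pageSize); // step 4: paginate
}
```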
### Publishing

Publishing is the step where the feed data is pushed according to each specific user. This can be a quite heavy operation, as a user may have millions of friends or followers. To deal with this, we have three different approaches:

**Pull model (or fan-out on load)**

![newsfeed-pull-model](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/twitter/newsfeed-pull-model.png)

When a user creates a tweet and a follower reloads their newsfeed, the feed is created and stored in memory. The most recent feed is only loaded when the user requests it. This approach reduces the number of write operations on our database.

The downside of this approach is that the users will not be able to view recent feeds unless they "pull" the data from the server, which will increase the number of read operations on the server.

**Push model (or fan-out on write)**

![newsfeed-push-model](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/twitter/newsfeed-push-model.png)

In this model, once a user creates a tweet, it is "pushed" to all the followers' feeds immediately. This prevents the system from having to go through a user's entire followers list to check for updates.

However, the downside of this approach is that it would increase the number of write operations on the database.

**Hybrid model**

A third approach is a hybrid model between the pull and push model. It combines the beneficial features of the above two models and tries to provide a balanced approach between the two.

The hybrid model allows only users with a lesser number of followers to use the push model; for users with a higher number of followers, such as celebrities, the pull model is used.

### Ranking algorithm

As we discussed, we will need a ranking algorithm to rank each tweet according to its relevance to each specific user.

For example, Facebook used to utilize an [EdgeRank](https://en.wikipedia.org/wiki/EdgeRank) algorithm. Here, the rank of each feed item is described by:

$$ Rank = Affinity \times Weight \times Decay $$

Where,

`Affinity`: is the "closeness" of the user to the creator of the edge. If a user frequently likes, comments, or messages the edge creator, then the value of affinity will be higher, resulting in a higher rank for the post.

`Weight`: is the value assigned according to each edge. A comment can have a higher weightage than likes, and thus a post with more comments is more likely to get a higher rank.

`Decay`: is the measure of the creation of the edge. The older the edge, the lesser will be the value of decay, and eventually the rank.

Nowadays, algorithms are much more complex, and ranking is done using machine learning models, which can take thousands of factors into consideration.
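A toy version of the EdgeRank-style formula above. The weights and decay half-life are made-up values purely for illustration; they are not Facebook's actual parameters.

```tsx
interface Edge {
  type: "like" | "comment" | "share";
  affinity: number; // closeness of viewer to creator, 0..1
  createdAt: number; // epoch millis
}

const WEIGHTS = { like: 1, comment: 4, share: 8 }; // comments weigh more than likes
const HALF_LIFE_MS = 24 * 3600 * 1000; // decay: score halves every day

function edgeScore(edge: Edge, now = Date.now()): number {
  const decay = Math.pow(0.5, (now - edge.createdAt) / HALF_LIFE_MS);
  return edge.affinity * WEIGHTS[edge.type] * decay; // Affinity x Weight x Decay
}

// A post's rank is the sum of its edge scores.
function rank(edges: Edge[]): number {
  return edges.reduce((sum, e) => sum + edgeScore(e), 0);
}
```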
### Retweets

Retweets are one of our extended requirements. To implement this feature, we can simply create a new tweet with the user ID of the user retweeting the original tweet, and then modify the `type` enum and `content` property of the new tweet to link it with the original tweet.

For example, the `type` enum property can be of type tweet (similar to text, video, etc.), and `content` can be the ID of the original tweet. Here, the first row indicates the original tweet, while the second row is how we can represent a retweet.

| id | userID | type | content | createdAt |
| --- | --- | --- | --- | --- |
| ad34-291a-45f6-b36c | 7a2c-62c4-4dc8-b1bb | text | Hey, this is my first tweet... | 1658905644054 |
| f064-49ad-9aa2-84a6 | 6aa2-2bc9-4331-879f | tweet | ad34-291a-45f6-b36c | 1658906165427 |

This is a very basic implementation. To improve this, we can create a separate table itself to store retweets.

### Search

Sometimes traditional DBMS are not performant enough; we need something which allows us to store, search, and analyze huge volumes of data quickly and in near real-time and give results within milliseconds. [Elasticsearch](https://www.elastic.co) can help us with this use case.

[Elasticsearch](https://www.elastic.co) is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. It is built on top of [Apache Lucene](https://lucene.apache.org).

**How do we identify trending topics?**

Trending functionality will be based on top of the search functionality. We can cache the most frequently searched queries, hashtags, and topics in the last `N` seconds and update them every `M` seconds using some sort of batch job mechanism. Our ranking algorithm can also be applied to the trending topics to give them more weight and personalize them for the user.
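A sketch of trending topics as a sliding-window counter: count hashtag occurrences from the last `N` seconds and take the top `K`. The window size, in-memory log, and linear scan are illustrative; a production version would run as a batch job over the search logs.

```tsx
const WINDOW_MS = 60 * 60 * 1000; // "last N seconds": one hour here
const searches: { hashtag: string; at: number }[] = [];

function recordSearch(hashtag: string): void {
  searches.push({ hashtag, at: Date.now() });
}

// Run every M seconds by a scheduler to refresh the cached trending list.
function trendingTopics(k = 10): string[] {
  const cutoff = Date.now() - WINDOW_MS;
  const counts = new Map<string, number>();
  for (const s of searches) {
    if (s.at >= cutoff) counts.set(s.hashtag, (counts.get(s.hashtag) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, k)
    .map(([tag]) => tag);
}
```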
### Notifications

Push notifications are an integral part of any social media platform. We can use a message queue or a message broker such as [Apache Kafka](https://kafka.apache.org) with the notification service to dispatch requests to [Firebase Cloud Messaging (FCM)](https://firebase.google.com/docs/cloud-messaging) or [Apple Push Notification Service (APNS)](https://developer.apple.com/documentation/usernotifications), which will handle the delivery of the push notifications to user devices.

For more details, refer to the [WhatsApp](https://karanpratapsingh.com/courses/system-design/whatsapp#notifications) system design, where we discuss push notifications in detail.

## Detailed design

It's time to discuss our design decisions in detail.

### Data partitioning

To scale out our databases, we will need to partition our data. Horizontal partitioning (aka [sharding](https://karanpratapsingh.com/courses/system-design/sharding)) can be a good first step. We can use partition schemes such as:

- Hash-Based Partitioning
- List-Based Partitioning
- Range-Based Partitioning
- Composite Partitioning

The above approaches can still cause uneven data and load distribution; we can solve this using [consistent hashing](https://karanpratapsingh.com/courses/system-design/consistent-hashing).

For more details, refer to [sharding](https://karanpratapsingh.com/courses/system-design/sharding) and [consistent hashing](https://karanpratapsingh.com/courses/system-design/consistent-hashing).

### Mutual friends

For mutual friends, we can build a social graph for every user. Each node in the graph will represent a user, and a directional edge will represent followers and followees. After that, we can traverse the followers of a user to find and suggest a mutual friend. This would require a graph database such as [Neo4j](https://neo4j.com) or [ArangoDB](https://www.arangodb.com).

This is a pretty simple algorithm; to improve our suggestion accuracy, we will need to incorporate a recommendation model which uses machine learning as part of our algorithm.

### Metrics and analytics

Recording analytics and metrics is one of our extended requirements. As we will be using [Apache Kafka](https://kafka.apache.org) to publish all sorts of events, we can process these events and run analytics on the data using [Apache Spark](https://spark.apache.org), which is an open-source unified analytics engine for large-scale data processing.

### Caching

In a social media application, we have to be careful about using cache, as our users expect the latest data. So, to prevent usage spikes from our resources, we can cache the top 20% of the tweets.

To further improve efficiency, we can add pagination to our system APIs. This decision will be helpful for users with limited network bandwidth, as they won't have to retrieve old tweets unless requested.

**Which cache eviction policy to use?**

We can use solutions like [Redis](https://redis.io) or [Memcached](https://memcached.org) and cache 20% of the daily traffic, but what kind of cache eviction policy would best fit our needs?

[Least Recently Used (LRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_%28LRU%29) can be a good policy for our system. In this policy, we discard the least recently used key first.

**How to handle cache miss?**

Whenever there is a cache miss, our servers can hit the database directly and update the cache with the new entries.

For more details, refer to [caching](https://karanpratapsingh.com/courses/system-design/caching).

### Media access and storage

As we know, most of our storage space will be used for storing media files such as images, videos, or other files. Our media service will be handling both access and storage of the user media files.

But where can we store files at scale? Well, [object storage](https://karanpratapsingh.com/courses/system-design/storage#object-storage) is what we're looking for. Object stores break data files up into pieces called objects. It then stores those objects in a single repository, which can be spread out across multiple networked systems. We can also use distributed file storage such as [HDFS](https://karanpratapsingh.com/courses/system-design/storage#hdfs) or [GlusterFS](https://www.gluster.org).

### Content delivery network (CDN)

[Content Delivery Network (CDN)](https://karanpratapsingh.com/courses/system-design/content-delivery-network) increases content availability and redundancy while reducing bandwidth costs. Generally, static files such as images and videos are served from CDN. We can use services like [Amazon CloudFront](https://aws.amazon.com/cloudfront) or [Cloudflare CDN](https://www.cloudflare.com/cdn) for this use case.

## Identify and resolve bottlenecks

![twitter-advanced-design](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/twitter/twitter-advanced-design.png)

Let us identify and resolve bottlenecks such as single points of failure in our design:

- What if one of our services crashes?
- How will we distribute our traffic between our components?
- How can we reduce the load on our database?
- How to improve the availability of our cache?
- How can we make our notification system more robust?
- How can we reduce media storage costs?

To make our system more resilient, we can do the following:

- Running multiple instances of each of our services.
- Introducing [load balancers](https://karanpratapsingh.com/courses/system-design/load-balancing) between clients, servers, databases, and cache servers.
- Using multiple read replicas for our databases.
- Multiple instances and replicas for our distributed cache.
- Exactly-once delivery and message ordering is challenging in a distributed system; we can use a dedicated [message broker](https://karanpratapsingh.com/courses/system-design/message-brokers) such as [Apache Kafka](https://kafka.apache.org) or [NATS](https://nats.io) to make our notification system more robust.
- We can add media processing and compression capabilities to the media service to compress large files, which will save a lot of storage space and reduce cost.

# Netflix

Let's design a [Netflix](https://netflix.com) like video streaming service, similar to services like [Amazon Prime Video](https://www.primevideo.com), [Disney Plus](https://www.disneyplus.com), [Hulu](https://www.hulu.com), [YouTube](https://youtube.com), [Vimeo](https://vimeo.com), etc.
## What is Netflix?

Netflix is a subscription-based streaming service that allows its members to watch TV shows and movies on an internet-connected device. It is available on platforms such as the Web, iOS, Android, TV, etc.

## Requirements

Our system should meet the following requirements:

### Functional requirements

- Users should be able to stream and share videos.
- The content team (or users, in YouTube's case) should be able to upload new videos (movies, TV show episodes, and other content).
- Users should be able to search for videos using titles or tags.
- Users should be able to comment on a video (like YouTube).

### Non-functional requirements

- High availability with minimal latency.
- High reliability; no uploads should be lost.
- The system should be scalable and efficient.

### Extended requirements

- Certain content should be [geo-blocked](https://en.wikipedia.org/wiki/Geo-blocking).
- Resume video playback from the point the user left off.
- Record metrics and analytics of videos.

## Estimation and constraints

Let's start with the estimation and constraints.

**Note:** Make sure to check any scale or traffic related assumptions with your interviewer.

### Traffic

This will be a read-heavy system. Let us assume we have 1 billion total users, with 200 million daily active users (DAU), and on average each user watches 5 videos a day. This gives us 1 billion videos watched per day.

$$ 200 \space million \times 5 \space videos = 1 \space billion/day $$

Assuming a 200:1 read/write ratio, about 5 million videos will be uploaded every day.

$$ \frac{1}{200} \times 1 \space billion = 5 \space million/day $$

**What would be Requests Per Second (RPS) for our system?**

1 billion requests per day translate into 12K requests per second.

$$ \frac{1 \space billion}{(24 \space hrs \times 3600 \space seconds)} = \sim 12K \space requests/second $$

### Storage

If we assume each video is 100 MB on average, we will require about 500 TB of storage every day.

$$ 5 \space million \times 100 \space MB = 500 \space TB/day $$

And for 10 years, we will require an astounding 1,825 PB of storage.

$$ 500 \space TB \times 365 \space days \times 10 \space years = \sim 1,825 \space PB $$

### Bandwidth

As our system is handling 500 TB of ingress every day, we will require a minimum bandwidth of around 5.8 GB per second.

$$ \frac{500 \space TB}{(24 \space hrs \times 3600 \space seconds)} = \sim 5.8 \space GB/second $$

### High-level estimate

Here is our high-level estimate:

| Type | Estimate |
| --- | --- |
| Daily active users (DAU) | 200 million |
| Requests per second (RPS) | 12K/s |
| Storage (per day) | ~500 TB |
| Storage (10 years) | ~1,825 PB |
| Bandwidth | ~5.8 GB/s |

## Data model design

This is the general data model which reflects our requirements.

![netflix-datamodel](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/netflix/netflix-datamodel.png)

We have the following tables:

**users**: This table will contain a user's information such as `name`, `email`, `dob`, and other details.

**videos**: As the name suggests, this table will store videos and their properties such as `title`, `streamURL`, `tags`, etc. We will also store the corresponding `userID`.

**tags**: This table will simply store tags associated with a video.

**views**: This table helps us to store all the views received on a video.

**comments**: This table stores all the comments received on a video (like YouTube).

### What kind of database should we use?

While our data model seems quite relational, we don't necessarily need to store everything in a single database, as this can limit our scalability and quickly become a bottleneck.

We will split the data between different services, each having ownership over a particular table. Then we can use a relational database such as [PostgreSQL](https://www.postgresql.org) or a distributed NoSQL database such as [Apache Cassandra](https://cassandra.apache.org/_/index.html) for our use case.
## API design

Let us do a basic API design for our services:

### Upload a video

Given a byte stream, this API enables video to be uploaded to our service.

```tsx
uploadVideo(title: string, description: string, data: Stream<byte>, tags?: string[]): boolean
```

**Parameters**

Title (`string`): Title of the new video.

Description (`string`): Description of the new video.

Data (`byte[]`): Byte stream of the video data.

Tags (`string[]`): Tags for the video _(optional)_.

**Returns**

Result (`boolean`): Represents whether the operation was successful or not.

### Streaming a video

This API allows our users to stream a video with the preferred codec and resolution.

```tsx
streamVideo(videoID: UUID, codec: Enum<string>, resolution: Tuple<int>, offset?: int): VideoStream
```

**Parameters**

Video ID (`UUID`): ID of the video that needs to be streamed.

Codec (`Enum<string>`): Required [codec](https://en.wikipedia.org/wiki/Video_codec) of the requested video, such as `h.265`, `h.264`, `VP9`, etc.

Resolution (`Tuple<int>`): [Resolution](https://en.wikipedia.org/wiki/Display_resolution) of the requested video.

Offset (`int`): Offset of the video stream in seconds, to stream data from any point in the video _(optional)_.

**Returns**

Stream (`VideoStream`): Data stream of the requested video.

### Search for a video

This API will enable our users to search for a video based on its title or tags.

```tsx
searchVideo(query: string, nextPage?: string): Video[]
```

**Parameters**

Query (`string`): Search query from the user.

Next Page (`string`): Token for the next page; this can be used for pagination _(optional)_.

**Returns**

Videos (`Video[]`): All the videos available for a particular search query.

### Add a comment

This API will allow our users to post a comment on a video (like YouTube).

```tsx
comment(videoID: UUID, comment: string): boolean
```

**Parameters**

Video ID (`UUID`): ID of the video the user wants to comment on.

Comment (`string`): The text content of the comment.

**Returns**

Result (`boolean`): Represents whether the operation was successful or not.

## High-level design

Now let us do a high-level design of our system.

### Architecture

We will be using [microservices architecture](https://karanpratapsingh.com/courses/system-design/monoliths-microservices#microservices), since it will make it easier to horizontally scale and decouple our services. Each service will have ownership of its own data model. Let's try to divide our system into some core services.

**User service**

This service handles user-related concerns such as authentication and user information.

**Stream service**

The stream service will handle video streaming related functionality.

**Search service**

The service is responsible for handling search-related functionality. It will be discussed in detail separately.

**Media service**

This service will handle the video uploads and processing. It will be discussed in detail separately.

**Analytics service**

This service will be used for metrics and analytics use cases.

**What about inter-service communication and service discovery?**

Since our architecture is microservices-based, services will be communicating with each other as well. Generally, REST or HTTP performs well, but we can further improve the performance using [gRPC](https://karanpratapsingh.com/courses/system-design/rest-graphql-grpc#grpc), which is more lightweight and efficient.

[Service discovery](https://karanpratapsingh.com/courses/system-design/service-discovery) is another thing we will have to take into account. We can also use a service mesh that enables managed, observable, and secure communication between individual services.

**Note:** Learn more about [REST, GraphQL, gRPC](https://karanpratapsingh.com/courses/system-design/rest-graphql-grpc) and how they compare with each other.
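To make the `streamVideo` parameters concrete, here is a sketch of how they might map onto stored, pre-processed chunks. The fixed chunk length and the `VideoVariant` shape are illustrative assumptions; real chunking is scene-based, as discussed next.

```tsx
type Codec = "h.264" | "h.265" | "vp9";
type Resolution = [width: number, height: number];

interface VideoVariant {
  codec: Codec;
  resolution: Resolution;
  chunkURLs: string[]; // one URL per ~10s chunk in object storage
}

const CHUNK_SECONDS = 10;

function streamVideo(
  variants: VideoVariant[],
  codec: Codec,
  resolution: Resolution,
  offset = 0
): string[] {
  const variant = variants.find(
    (v) =>
      v.codec === codec &&
      v.resolution[0] === resolution[0] &&
      v.resolution[1] === resolution[1]
  );
  if (!variant) throw new Error("requested codec/resolution not available");

  // Map the offset (seconds) to the first chunk to serve, then return
  // the remaining chunk URLs for the client to fetch in order.
  const firstChunk = Math.floor(offset / CHUNK_SECONDS);
  return variant.chunkURLs.slice(firstChunk);
}
```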
## Video processing

![video-processing-pipeline](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/netflix/video-processing-pipeline.png)

There are so many variables in play when it comes to processing a video. For example, the average data size of two-hour raw 8K footage from a high-end camera can easily be up to 4 TB. Thus, we need to have some kind of processing to reduce both storage and delivery costs.

Here's how we can process videos once they're uploaded by the content team (or users, in YouTube's case) and are queued for processing in our [message queue](https://karanpratapsingh.com/courses/system-design/message-queues). Let's discuss how this works:

### File chunker

![file-chunking](https://raw.githubusercontent.com/karanpratapsingh/portfolio/master/public/static/courses/system-design/chapter-V/netflix/file-chunking.png)

This is the first step of our processing pipeline. File chunking is the process of splitting a file into smaller pieces called chunks. It can help us eliminate duplicate copies of repeating data on storage and reduce the amount of data sent over the network by only selecting changed chunks.

Usually, a video file can be split into equal-size chunks based on timestamps, but Netflix instead splits chunks based on scenes. This slight variation becomes a huge factor for a better user experience, since whenever the client requests a chunk from the server, there is a lower chance of interruption, as a complete scene will be retrieved.

### Content filter

This step checks if the video adheres to the content policy of the platform. This can be pre-approved, as in the case of Netflix, according to the [content rating](https://en.wikipedia.org/wiki/Motion_picture_content_rating_system) of the media, or can be strictly enforced, like by YouTube.

This entire process is done by a machine learning model, which performs copyright, piracy, and NSFW checks. If issues are found, we can push the task to a [dead-letter queue (DLQ)](https://karanpratapsingh.com/courses/system-design/message-queues#dead-letter-queues), and someone from the moderation team can do further inspection.

### Transcoder

[Transcoding](https://en.wikipedia.org/wiki/Transcoding) is a process in which the original data is decoded to an intermediate uncompressed format, which is then encoded into the target format. This process uses different [codecs](https://en.wikipedia.org/wiki/Video_codec) to perform bitrate adjustment, image downsampling, or re-encoding the media.

This results in a smaller-size file and a much more optimized format for the target devices. Standalone solutions such as [FFmpeg](https://ffmpeg.org) or cloud-based solutions like [AWS Elemental MediaConvert](https://aws.amazon.com/mediaconvert) can be used to implement this step of the pipeline.

### Quality conversion

This is the last step of the processing pipeline, and as the name suggests, this step handles the conversion of the transcoded media from the previous step into different resolutions such as 4K, 1440p, 1080p, 720p, etc.

It allows us to fetch the desired quality of the video as per the user's request, and once the media file finishes processing, it gets uploaded to a distributed file storage such as [HDFS](https://karanpratapsingh.com/courses/system-design/storage#hdfs), [GlusterFS](https://www.gluster.org), or an [object storage](https://karanpratapsingh.com/courses/system-design/storage#object-storage) such as [Amazon S3](https://aws.amazon.com/s3) for later retrieval during streaming.

**Note:** We can add additional steps such as subtitles and thumbnails generation as part of our pipeline.
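A sketch of the transcoding step as a queue consumer that shells out to FFmpeg. The job shape is a stand-in for a message from SQS/RabbitMQ, and the flags shown are one plausible H.264 downsampling recipe, not the pipeline's required configuration.

```tsx
import { spawn } from "node:child_process";

interface TranscodeJob {
  inputPath: string;
  outputPath: string;
  height: number; // target resolution, e.g. 720
}

function transcode(job: TranscodeJob): Promise<void> {
  return new Promise((resolve, reject) => {
    const ffmpeg = spawn("ffmpeg", [
      "-i", job.inputPath,
      "-vf", `scale=-2:${job.height}`, // downsample, keep aspect ratio
      "-c:v", "libx264",
      "-c:a", "aac",
      job.outputPath,
    ]);
    ffmpeg.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with ${code}`))
    );
  });
}

// Each queue message becomes one transcode job; failures would be
// pushed to the dead-letter queue for inspection.
transcode({ inputPath: "raw/movie.mkv", outputPath: "out/movie-720p.mp4", height: 720 })
  .catch(console.error);
```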
**Why are we using a message queue?**

Processing videos as a long-running task and using a [message queue](https://karanpratapsingh.com/courses/system-design/message-queues) makes much more sense. It also decouples our video processing pipeline from the upload functionality. We can use something like [Amazon SQS](https://aws.amazon.com/sqs) or [RabbitMQ](https://www.rabbitmq.com) to support this.

## Video streaming

Video streaming is a challenging task from both the client and server perspectives. Moreover, internet connection speeds vary quite a lot between different users. To make sure users don't re-fetch the same content, we can use a [Content Delivery Network (CDN)](https://karanpratapsingh.com/courses/system-design/content-delivery-network).

Netflix takes this a step further with its [Open Connect](https://openconnect.netflix.com) program. In this approach, they partner with thousands of Internet Service Providers (ISPs) to localize their traffic and deliver their content more efficiently.

**What is the difference between Netflix's Open Connect and a traditional Content Delivery Network (CDN)?**

Netflix Open Connect is a purpose-built [Content Delivery Network (CDN)](https://karanpratapsingh.com/courses/system-design/content-delivery-network) responsible for serving Netflix's video traffic. Around 95% of the traffic globally is delivered via direct connections between Open Connect and the ISPs their customers use to access the internet.

Currently, they have Open Connect Appliances (OCAs) in over 1000 separate locations around the world. In case of issues, Open Connect Appliances (OCAs) can failover, and the traffic can be re-routed to Netflix servers.

Additionally, we can use [Adaptive bitrate streaming](https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming) protocols such as [HTTP Live Streaming (HLS)](https://en.wikipedia.org/wiki/HTTP_Live_Streaming), which is designed for reliability and dynamically adapts to network conditions by optimizing playback for the available speed of the connections.

Lastly, for playing the video from where the user left off (part of our extended requirements), we can simply use the `offset` property we stored in the `views` table to retrieve the scene chunk at that particular timestamp and resume the playback for the user.

## Searching

Sometimes traditional DBMS are not performant enough; we need something which allows us to store, search, and analyze huge volumes of data quickly and in near real-time and give results within milliseconds. [Elasticsearch](https://www.elastic.co) can help us with this use case.

[Elasticsearch](https://www.elastic.co) is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. It is built on top of [Apache Lucene](https://lucene.apache.org).

**How do we identify trending content?**

Trending functionality will be based on top of the search functionality. We can cache the most frequently searched queries in the last `N` seconds and update them every `M` seconds using some sort of batch job mechanism.

## Sharing

Sharing content is an important part of any platform. For this, we can have some sort of URL shortener service in place that can generate short URLs for the users to share.

For more details, refer to the [URL Shortener](https://karanpratapsingh.com/courses/system-design/url-shortener) system design.

## Detailed design

It's time to discuss our design decisions in detail.
### Data partitioning

To scale out our databases, we will need to partition our data. Horizontal partitioning (aka [sharding](https://karanpratapsingh.com/courses/system-design/sharding)) can be a good first step. We can use partition schemes such as:

- Hash-Based Partitioning
- List-Based Partitioning
- Range-Based Partitioning
- Composite Partitioning

The above approaches can still cause uneven data and load distribution; we can solve this using [consistent hashing](https://karanpratapsingh.com/courses/system-design/consistent-hashing).

For more details, refer to [sharding](https://karanpratapsingh.com/courses/system-design/sharding) and [consistent hashing](https://karanpratapsingh.com/courses/system-design/consistent-hashing).

### Geo-blocking

Platforms like Netflix and YouTube use [geo-blocking](https://en.wikipedia.org/wiki/Geo-blocking) to restrict content in certain geographical areas or countries. This is primarily done due to legal distribution laws that Netflix has to adhere to when they make a deal with the production and distribution companies. In the case of YouTube, this will be controlled by the user during the publishing of the content.

We can determine the user's location either using their [IP](https://karanpratapsingh.com/courses/system-design/ip) or region settings in their profile, then use services like [Amazon CloudFront](https://aws.amazon.com/cloudfront), which supports a geographic restrictions feature, or a [geolocation routing policy](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-geo.html) with [Amazon Route53](https://aws.amazon.com/route53) to restrict the content and re-route the user to an error page if the content is not available in that particular region or country.
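Geo-blocking boils down to a lookup before serving content, as in this sketch. The IP-to-country resolver is a hypothetical stand-in (e.g. backed by a GeoIP database); in practice, the check can also be delegated to the CDN's geographic restrictions feature.

```tsx
interface Title {
  id: string;
  allowedRegions: Set<string>; // ISO country codes, per distribution deal
}

// Hypothetical resolver; a real one would query a GeoIP database.
function countryFromIP(ip: string): string | undefined {
  return "US"; // stub
}

function canStream(title: Title, clientIP: string, profileRegion?: string): boolean {
  // Prefer the IP-derived location; fall back to the profile setting.
  const region = countryFromIP(clientIP) ?? profileRegion;
  return region !== undefined && title.allowedRegions.has(region);
}

// Callers would re-route to an error page when this returns false.
```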
use distributed file storage such as hdfs https karanpratapsingh com courses system design storage hdfs glusterfs https www gluster org or an object storage https karanpratapsingh com courses system design storage object storage such as amazon s3 https aws amazon com s3 for storage and streaming of the content content delivery network cdn content delivery network cdn https karanpratapsingh com courses system design content delivery network increases content availability and redundancy while reducing bandwidth costs generally static files such as images and videos are served from the cdn we can use services like amazon cloudfront https aws amazon com cloudfront or cloudflare cdn https www cloudflare com cdn for this use case identify and resolve bottlenecks netflix advanced design https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter v netflix netflix advanced design png let us identify and resolve bottlenecks such as single points of failure in our design what if one of our services crashes how will we distribute our traffic between our components how can we reduce the load on our database how to improve the availability of our cache to make our system more resilient we can do the following running multiple instances of each of our services introducing load balancers https karanpratapsingh com courses system design load balancing between clients servers databases and cache servers using multiple read replicas for our databases multiple instances and replicas for our distributed cache uber let s design an uber https uber com like ride hailing service similar to services like lyft https www lyft com ola cabs https www olacabs com etc what is uber uber is a mobility service provider allowing users to book rides and a driver to transport them in a way similar to a taxi it is available on the web and mobile platforms such as android and ios requirements our system should meet the following requirements functional requirements we will design our system for two types of users customers and drivers customers customers should be able to see all the cabs in the vicinity with an eta and pricing information customers should be able to book a cab to a destination customers should be able to see the location of the driver drivers drivers should be able to accept or deny the customer requested ride once a driver accepts the ride they should see the pickup location of the customer drivers should be able to mark the trip as complete on reaching the destination non functional requirements high reliability high availability with minimal latency the system should be scalable and efficient extended requirements customers can rate the trip after it s completed payment processing metrics and analytics estimation and constraints let s start with the estimation and constraints note make sure to check any scale or traffic related assumptions with your interviewer traffic let us assume we have 100 million daily active users dau with 1 million drivers and on average our platform enables 10 million rides daily if on average each user performs 10 actions such as request a ride check available rides fares book rides etc we will have to handle 1 billion requests daily $$100 \text{ million} \times 10 \text{ actions} = 1 \text{ billion/day}$$ what would be requests per second rps for our system 1 billion requests per day translates into 12k requests per second $$\frac{1 \text{ billion}}{24 \text{ hrs} \times 3600 \text{ seconds}} \sim 12\text{k requests/second}$$ storage if we assume each message on
average is 400 bytes we will require about 400 gb of database storage every day $$1 \text{ billion} \times 400 \text{ bytes} \sim 400 \text{ GB/day}$$ and for 10 years we will require about 1.4 pb of storage $$400 \text{ GB/day} \times 10 \text{ years} \times 365 \text{ days} \sim 1.4 \text{ PB}$$ bandwidth as our system is handling 400 gb of ingress every day we will require a minimum bandwidth of around 5 mb per second $$\frac{400 \text{ GB}}{24 \text{ hrs} \times 3600 \text{ seconds}} \sim 5 \text{ MB/second}$$ high level estimate here is our high level estimate

| type | estimate |
| ---- | -------- |
| daily active users (dau) | 100 million |
| requests per second (rps) | 12k/s |
| storage (per day) | 400 gb |
| storage (10 years) | 1.4 pb |
| bandwidth | 5 mb/s |

data model design this is the general data model which reflects our requirements uber datamodel https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter v uber uber datamodel png we have the following tables customers this table will contain a customer s information such as name email and other details drivers this table will contain a driver s information such as name email dob and other details trips this table represents the trip taken by the customer and stores data such as source destination and status of the trip cabs this table stores data such as the registration number and type like uber go uber xl etc of the cab that the driver will be driving ratings as the name suggests this table stores the rating and feedback for the trip payments the payments table contains the payment related data with the corresponding tripid what kind of database should we use while our data model seems quite relational we don t necessarily need to store everything in a single database as this can limit our scalability and quickly become a bottleneck we will split the data between different services each having ownership over a particular table then we can use a relational database such as postgresql https www postgresql org or a distributed nosql database such as apache cassandra https cassandra apache org index html for our use case api design let us do a basic api design for our services request a ride through this api customers will be able to request a ride tsx requestride customerid uuid source tuple float destination tuple float cabtype enum string paymentmethod enum string ride parameters customer id uuid id of the customer source tuple float tuple containing the latitude and longitude of the trip s starting location destination tuple float tuple containing the latitude and longitude of the trip s destination returns result ride associated ride information of the trip cancel the ride this api will allow customers to cancel the ride tsx cancelride customerid uuid reason string boolean parameters customer id uuid id of the customer reason string reason for canceling the ride optional returns result boolean represents whether the operation was successful or not accept or deny the ride this api will allow the driver to accept or deny the trip tsx acceptride driverid uuid rideid uuid boolean denyride driverid uuid rideid uuid boolean parameters driver id uuid id of the driver ride id uuid id of the customer requested ride returns result boolean represents whether the operation was successful or not start or end the trip using this api a driver will be able to start and end the trip tsx starttrip driverid uuid tripid uuid boolean endtrip driverid uuid tripid uuid boolean parameters driver id uuid id of the driver trip id uuid id of the requested trip returns result boolean represents whether the
operation was successful or not rate the trip this api will enable customers to rate the trip tsx ratetrip customerid uuid tripid uuid rating int feedback string boolean parameters customer id uuid id of the customer trip id uuid id of the completed trip rating int rating of the trip feedback string feedback about the trip by the customer optional returns result boolean represents whether the operation was successful or not high level design now let us do a high level design of our system architecture we will be using microservices architecture https karanpratapsingh com courses system design monoliths microservices microservices since it will make it easier to horizontally scale and decouple our services each service will have ownership of its own data model let s try to divide our system into some core services customer service this service handles customer related concerns such as authentication and customer information driver service this service handles driver related concerns such as authentication and driver information ride service this service will be responsible for ride matching and quadtree aggregation it will be discussed in detail separately trip service this service handles trip related functionality in our system payment service this service will be responsible for handling payments in our system notification service this service will simply send push notifications to the users it will be discussed in detail separately analytics service this service will be used for metrics and analytics use cases what about inter service communication and service discovery since our architecture is microservices based services will be communicating with each other as well generally rest or http performs well but we can further improve the performance using grpc https karanpratapsingh com courses system design rest graphql grpc grpc which is more lightweight and efficient service discovery https karanpratapsingh com courses system design service discovery is another thing we will have to take into account we can also use a service mesh that enables managed observable and secure communication between individual services note learn more about rest graphql grpc https karanpratapsingh com courses system design rest graphql grpc and how they compare with each other how is the service expected to work here s how our service is expected to work uber working https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter v uber uber working png 1 customer requests a ride by specifying the source destination cab type payment method etc 2 ride service registers this request finds nearby drivers and calculates the estimated time of arrival eta 3 the request is then broadcasted to the nearby drivers for them to accept or deny 4 if the driver accepts the customer is notified about the live location of the driver with the estimated time of arrival eta while they wait for pickup 5 the customer is picked up and the driver can start the trip 6 once the destination is reached the driver will mark the ride as complete and collect payment 7 after the payment is complete the customer can leave a rating and feedback for the trip if they like location tracking how do we efficiently send and receive live location data from the client customers and drivers to our backend we have two different options pull model the client can periodically send an http request to servers to report its current location and receive eta and pricing information this can be achieved via 
something like long polling https karanpratapsingh com courses system design long polling websockets server sent events long polling push model the client opens a long lived connection with the server and once new data is available it will be pushed to the client we can use websockets https karanpratapsingh com courses system design long polling websockets server sent events websockets or server sent events sse https karanpratapsingh com courses system design long polling websockets server sent events server sent events sse for this the pull model approach is not scalable as it will create unnecessary request overhead on our servers and most of the time the response will be empty thus wasting our resources to minimize latency using the push model with websockets https karanpratapsingh com courses system design long polling websockets server sent events websockets is a better choice because then we can push data to the client once it s available without any delay given the connection is open with the client also websockets provide full duplex communication unlike server sent events sse https karanpratapsingh com courses system design long polling websockets server sent events server sent events sse which are only unidirectional additionally the client application should have some sort of background job mechanism to ping gps location while the application is in the background note learn more about long polling websockets server sent events sse https karanpratapsingh com courses system design long polling websockets server sent events ride matching we need a way to efficiently store and query nearby drivers let s explore different solutions we can incorporate into our design sql we already have access to the latitude and longitude of our customers and with databases like postgresql https www postgresql org and mysql https www mysql com we can perform a query to find nearby driver locations given a latitude and longitude x y within a radius r sql select * from locations where lat between x - r and x + r and long between y - r and y + r however this is not scalable and performing this query on large datasets will be quite slow geohashing geohashing courses sytem design geohashing and quadtrees geohashing is a geocoding https en wikipedia org wiki address geocoding method used to encode geographic coordinates such as latitude and longitude into short alphanumeric strings it was created by gustavo niemeyer https twitter com gniemeyer in 2008 geohash is a hierarchical spatial index that uses base 32 alphabet encoding the first character in a geohash identifies the initial location as one of the 32 cells this cell will also contain 32 cells this means that to represent a point the world is recursively divided into smaller and smaller cells with each additional bit until the desired precision is attained the precision factor also determines the size of the cell geohashing https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter iv geohashing and quadtrees geohashing png for example san francisco with coordinates 37.7564, -122.4016 can be represented in geohash as 9q8yy9mf now using the customer s geohash we can determine the nearest available driver by simply comparing it with the driver s geohash for better performance we will index and store the geohash of the driver in memory for faster retrieval quadtrees a quadtree courses sytem design geohashing and quadtrees quadtrees is a tree data structure in which each internal node has exactly four children they
are often used to partition a two dimensional space by recursively subdividing it into four quadrants or regions each child or leaf node stores spatial information quadtrees are the two dimensional analog of octrees https en wikipedia org wiki octree which are used to partition three dimensional space quadtree https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter iv geohashing and quadtrees quadtree png quadtrees enable us to search points within a two dimensional range efficiently where those points are defined as latitude longitude coordinates or as cartesian x y coordinates we can save further computation by only subdividing a node after a certain threshold quadtree subdivision https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter iv geohashing and quadtrees quadtree subdivision png quadtree courses sytem design geohashing and quadtrees quadtrees seems perfect for our use case we can update the quadtree every time we receive a new location update from the driver to reduce the load on the quadtree servers we can use an in memory datastore such as redis https redis io to cache the latest updates and with the application of mapping algorithms such as the hilbert curve https en wikipedia org wiki hilbert curve we can perform efficient range queries to find nearby drivers for the customer what about race conditions race conditions can easily occur when a large number of customers will be requesting rides simultaneously to avoid this we can wrap our ride matching logic in a mutex https en wikipedia org wiki lock computer science to avoid any race conditions furthermore every action should be transactional in nature for more details refer to transactions https karanpratapsingh com courses system design transactions and distributed transactions https karanpratapsingh com courses system design distributed transactions how to find the best drivers nearby once we have a list of nearby drivers from the quadtree servers we can perform some sort of ranking based on parameters like average ratings relevance past customer feedback etc this will allow us to broadcast notifications to the best available drivers first dealing with high demand in cases of high demand we can use the concept of surge pricing surge pricing is a dynamic pricing method where prices are temporarily increased as a reaction to increased demand and mostly limited supply this surge price can be added to the base price of the trip for more details learn how surge pricing works https www uber com us en drive driver app how surge works with uber payments handling payments at scale is challenging to simplify our system we can use a third party payment processor like stripe https stripe com or paypal https www paypal com once the payment is complete the payment processor will redirect the user back to our application and we can set up a webhook https en wikipedia org wiki webhook to capture all the payment related data notifications push notifications will be an integral part of our platform we can use a message queue or a message broker such as apache kafka https kafka apache org with the notification service to dispatch requests to firebase cloud messaging fcm https firebase google com docs cloud messaging or apple push notification service apns https developer apple com documentation usernotifications which will handle the delivery of the push notifications to user devices for more details refer to the whatsapp https 
karanpratapsingh com courses system design whatsapp notifications system design where we discuss push notifications in detail detailed design it s time to discuss our design decisions in detail data partitioning to scale out our databases we will need to partition our data horizontal partitioning aka sharding https karanpratapsingh com courses system design sharding can be a good first step we can shard our database either based on existing partition schemes https karanpratapsingh com courses system design sharding partitioning criteria or regions if we divide the locations into regions using let s say zip codes we can effectively store all the data in a given region on a fixed node but this can still cause uneven data and load distribution we can solve this using consistent hashing https karanpratapsingh com courses system design consistent hashing for more details refer to sharding https karanpratapsingh com courses system design sharding and consistent hashing https karanpratapsingh com courses system design consistent hashing metrics and analytics recording analytics and metrics is one of our extended requirements we can capture the data from different services and run analytics on the data using apache spark https spark apache org which is an open source unified analytics engine for large scale data processing additionally we can store critical metadata in the views table to increase data points within our data caching in a location services based platform caching is important we have to be able to cache the recent locations of the customers and drivers for fast retrieval we can use solutions like redis https redis io or memcached https memcached org but what kind of cache eviction policy would best fit our needs which cache eviction policy to use least recently used lru https en wikipedia org wiki cache replacement policies least recently used lru can be a good policy for our system in this policy we discard the least recently used key first how to handle cache miss whenever there is a cache miss our servers can hit the database directly and update the cache with the new entries for more details refer to caching https karanpratapsingh com courses system design caching identify and resolve bottlenecks uber advanced design https raw githubusercontent com karanpratapsingh portfolio master public static courses system design chapter v uber uber advanced design png let us identify and resolve bottlenecks such as single points of failure in our design what if one of our services crashes how will we distribute our traffic between our components how can we reduce the load on our database how to improve the availability of our cache how can we make our notification system more robust to make our system more resilient we can do the following running multiple instances of each of our services introducing load balancers https karanpratapsingh com courses system design load balancing between clients servers databases and cache servers using multiple read replicas for our databases multiple instances and replicas for our distributed cache exactly once delivery and message ordering is challenging in a distributed system we can use a dedicated message broker https karanpratapsingh com courses system design message brokers such as apache kafka https kafka apache org or nats https nats io to make our notification system more robust next steps congratulations you ve finished the course now that you know the fundamentals of system design here are some additional resources distributed systems https www 
youtube com watch v ueamflpzzhe list plekd45zvjcdfuev ohr hdufe97ritdib by dr martin kleppmann system design interview an insider s guide https www amazon in system design interview insiders second dp b08cmf2cqf microservices https microservices io by chris richardson serverless computing https en wikipedia org wiki serverless computing kubernetes https kubernetes io it is also recommended to actively follow engineering blogs of companies putting what we learned in the course into practice at scale microsoft engineering https engineering microsoft com google research blog http googleresearch blogspot com netflix tech blog http techblog netflix com aws blog https aws amazon com blogs aws facebook engineering https www facebook com engineering uber engineering blog http eng uber com airbnb engineering http nerds airbnb com github engineering blog https github blog category engineering intel software blog https software intel com en us blogs linkedin engineering http engineering linkedin com blog paypal developer blog https medium com paypal engineering twitter engineering https blog twitter com engineering last but not least volunteer for new projects at your company and learn from senior engineers and architects to further improve your system design skills i hope this course was a great learning experience i would love to hear feedback from you wishing you all the best for further learning references here are the resources that were referenced while creating this course cloudflare learning center https www cloudflare com learning ibm blogs https www ibm com blogs fastly blogs https www fastly com blog ns1 blogs https ns1 com blog grokking the system design interview https www designgurus io course grokking the system design interview grokking microservices design patterns https www designgurus io course grokking microservices design patterns system design primer https github com donnemartin system design primer aws blogs https aws amazon com blogs architecture patterns by microsoft https learn microsoft com en us azure architecture patterns martin fowler https martinfowler com pagerduty resources https www pagerduty com resources vmware blogs https blogs vmware com learning all the diagrams were made using excalidraw https excalidraw com and are available here https github com karanpratapsingh system design tree main diagrams | architecture distributed-systems system-design system-design-interview interview tech engineering interview-preparation scalability microservices | os |
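To ground the ride-matching discussion above, here is a minimal TypeScript sketch of an in-memory, cell-based nearby-driver index. It is only a sketch under stated assumptions: `cellKey` is a crude stand-in for a real base-32 geohash prefix (or a quadtree lookup), and every name in it is illustrative rather than taken from the course.

```tsx
// Hypothetical sketch of geohash-style ride matching using an in-memory
// index keyed by a fixed-precision cell; not a full geohash implementation.
type DriverId = string;

const driversByCell = new Map<string, Set<DriverId>>();

// Coarse cell key: truncate lat/lng to a grid, standing in for a geohash
// prefix of the desired precision.
function cellKey(lat: number, lng: number, precision = 100): string {
  return `${Math.floor(lat * precision)}:${Math.floor(lng * precision)}`;
}

// Called on every driver location update pushed over the websocket.
function updateDriverLocation(id: DriverId, lat: number, lng: number): void {
  const key = cellKey(lat, lng);
  if (!driversByCell.has(key)) driversByCell.set(key, new Set());
  driversByCell.get(key)!.add(id);
}

// Nearby lookup: check the customer's cell; a real implementation would
// also scan the 8 neighboring cells and rank drivers by rating and distance.
function nearbyDrivers(lat: number, lng: number): DriverId[] {
  return [...(driversByCell.get(cellKey(lat, lng)) ?? [])];
}
```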
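The data-partitioning sections of both designs fall back on consistent hashing to even out load; the sketch below shows only the core idea of a hash ring, assuming a toy 32-bit string hash and no virtual nodes, so it is a teaching aid rather than a production implementation.

```tsx
// Minimal consistent-hashing ring; node names are hypothetical, and real
// systems add virtual nodes and a stronger hash function.
function hash32(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) h = (h * 31 + s.charCodeAt(i)) >>> 0;
  return h;
}

class HashRing {
  private ring: { point: number; node: string }[] = [];

  addNode(node: string): void {
    this.ring.push({ point: hash32(node), node });
    this.ring.sort((a, b) => a.point - b.point);
  }

  // The first ring point clockwise from the key's hash owns the key;
  // assumes at least one node has been added.
  nodeFor(key: string): string {
    const h = hash32(key);
    const owner = this.ring.find((e) => e.point >= h) ?? this.ring[0];
    return owner.node;
  }
}

const ring = new HashRing();
["db-1", "db-2", "db-3"].forEach((n) => ring.addNode(n));
console.log(ring.nodeFor("trip:42")); // assignment stays mostly stable as nodes join or leave
```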
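Both designs also answer the cache-eviction question with least recently used (LRU); a compact version can be built on a Map's insertion-order iteration, as below (capacity and key types are illustrative):

```tsx
// Compact LRU cache: Map iterates in insertion order, so re-inserting on
// access keeps the least recently used entry at the front for eviction.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // evict the least recently used entry (first in iteration order)
      this.map.delete(this.map.keys().next().value as K);
    }
    this.map.set(key, value);
  }
}
```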
dataset-builder | dataset builder a script to help you quickly build custom computer vision datasets for classification and detection it allows you to define your classes and then fetches images from flickr it then organizes the folders and clean the file names if you re preparing the dataset for a detection or a segmentation task the script opens up makesense ai uploads the images with the corresponding label list so that you can start annotating once the annotation is done your labels can be exported and you ll be ready to train your awesome models requirements install flick scraper dependencies git clone https github com ultralytics flickr scraper cd flickr scraper pip install u r requirements txt request a flickr api key and secret https www flickr com services apps create apply create a config yaml file inside src that looks like this yaml key xxxxxxxxxxxxxxx secret xxxxxxxxxxxxxxx selenium pip install u selenium chromedriver 77 0 3865 40 in case you wish to run maskesense locally bash clone repository git clone https github com skalskip make sense git navigate to main dir cd make sense install dependencies npm install serve with hot reload at localhost 3000 npm start example when you run the script you can specify the following arguments output directory the root folder when images are downloaded limit the maximum number of downloaded images per category delete history whether you choose to erase previous downloads or not task classification detection or segmentation driver path to chrome driver bash python dataset builder py limit 20 delete history yes once the script runs you ll be asked to define your classes or queries img src images screenshot png here s what the output looks like after the download img src images output png width 350 object detection with make sense this only works if you choose a detection or segmentation task make sense is an awesome open source webapp that lets you easily label your image dataset for tasks such as localization you can check it out here https www makesense ai you can also clone it and run it locally for better performance https github com skalskip make sense in order to use this tool i ll be running it locally and interface with it using selenium once the dataset is downloaded selenium opens up a chrome browser upload the images to the app and fill in the label list this ultimately allows you to annotate demo youtube video p align center a href https www youtube com watch v qxlvmr9mrp4 img src https img youtube com vi qxlvmr9mrp4 0 jpg a p todo grin please feel free to contribute report any bugs in the issue section or request any feature you d like to see shipped accelerate the download of images via multiprocessing apply a quality check on the images integrate automatic tagging using pre trained networks license please be aware that this code is under the gpl3 license you must report each utilisation of this code to the author of this code ahmedbesbes please push your code using this api on a forked github repo public | computer-vision dataset-creation detection selenium segmentation classification | ai |
Employee-Database-SQL | employee database analysis the project consist on three parts data modeling inspect the csvs and sketch out an erd of the tables data engineering create a table schema for each of the six csv files data analysis sql queries to obtain the following details of each employee employee number last name first name gender and salary employees who were hired in 1986 manager of each department with the following information department number department name the manager s employee number last name first name and start and end employment dates department of each employee with the following information employee number last name first name and department name all employees whose first name is hercules and last names begin with b all employees in the sales department including their employee number last name first name and department name all employees in the sales and development departments including their employee number last name first name and department name frequency count of employee last names in descending order i e how many employees share each last name features data modeling engineering and analysis use of sqlalchemy for the connection to the database built with postgresql python outcomes here you can check some outcomes of the project however all the results are contained in the files feel free to check them out entity relationship diagram image3 png images image3 png bar chart of average salary by title image2 png images image2 png histogram fo salary ranges from employees image1 png images image1 png | server |
|
HR-Analysis- | hr analysis employee database your first major task is a research project on employees of the corporation from the 1980s and 1990s all that remain of the database of employees from that period are six csv files designed the tables to hold data in the csvs imported the csvs into a sql database and used the database to query the data for insights on the employees data modeling inspected the csvs and sketched out an erd of the tables data engineering used the information in the table schema for each of the six csv files imported each csv file into the corresponding sql table data analysis created queries to conduct the following analysis organied employee data by employee number last name first name sex and salary listed the first name last name and hire date for employees who were hired in 1986 highlighted the manager of each department with the following information department number department name the manager s employee number last name first name examined the department of each employee with the following information employee number last name first name and department name queried the first name last name and sex for employees whose first name is hercules and last names begin with b list of all employees in the sales department including their employee number last name first name and department name list of all employees in the sales and development departments including their employee number last name first name and department name how many employees share each last name | sql database data-analysis | server |
CpeDatabase | cpedatabase cpe database system project for software engineering | server |
|
scale | scale https raw githubusercontent com telekom scale main assets scale 20 20the 20telekom 20digital 20design 20system png scale is the digital design system for telekom products and experiences by default scale is built to align with our corporate brand and design but allows for easy customization to fit your particular product it helps you build your digital products faster and create superior experiences with ease with production ready components for code and design a central library and comprehensive documentation scale has everything you need currently scale is an open beta scale components are customizable and written in typescript if you want to represent the corporate identity of a separate brand you need to replace the telekom default theme see detailed instructions below scale badge https img shields io badge telekom scale 23e20074 svg license mpl 2 0 https img shields io badge license mpl 202 0 brightgreen svg license github code size in bytes https img shields io github languages code size telekom scale svg style flat square github repo size https img shields io github repo size telekom scale svg style flat square welcome to scale access the comprehensive documentation for scale https telekom github io scale are you a designer we provide a comprehensive sketch library for designers building telekom software all components in the sketch library are also available to your developers as code making the handover very smooth and straightforward for more information access the scale website https telekom github io scale path docs setup info getting started for designers page customizing scale for open source software although the code for scale is free and available under the mpl 2 0 license deutsche telekom fully reserves all rights to the telekom brand to prevent users from getting confused about the source of a digital product or experience there are strict restrictions on using the telekom brand and design even when built into code that we provide for any customization other than explicitly for the telekom you must replace the deutsche telekom default theme to use scale as open source software and customize it with a neutral theme please follow the instructions for our open source version open source version open source version by following the instructions for the open source version you obtain source code packages that use a neutral theme and are fully covered by the mpl 2 0 license setup with npm npm install telekom scale components neutral next to install the version prior to dark mode https github com telekom scale releases tag v3 0 0 beta 53 do npm install telekom scale components neutral without next to use the components you need to load a css file and some javascript the css file includes the fonts and design tokens setup with plain html html link rel stylesheet href node modules telekom scale components neutral dist scale components scale components css script type module src node modules telekom scale components neutral dist scale components scale components esm js script setup with a bundler or es modules javascript import telekom scale components neutral dist scale components scale components css import applypolyfills definecustomelements from telekom scale components neutral loader applypolyfills then definecustomelements window npm packages package name description telekom scale components neutral stencil components telekom scale components react neutral component proxies for react telekom scale components vue neutral component proxies for vue telekom scale 
components angular neutral component proxies for angular telekom scale design tokens neutral design tokens deprecated since v3 0 0 beta x please use telekom scale components neutral directly support for custom elements is already great deprecated since v3 0 0 beta 100 in favor of telekom design tokens https www npmjs com package telekom design tokens using the source code directly if you want to use the source code remove the following folders these folders contain all the protected brand and design assets of the telekom and are not available under the mpl 2 0 license folder content assets scale key visual packages components src components telekom telekom components packages components src telekom telekom fonts icons packages components src html telekom telekom code examples packages design tokens src telekom telekom design tokens packages visual tests visual tests storybook vue telekom branded storybook telekom version please note that the use of the telekom brand and design assets including but not limited to the logos the color magenta the typeface and icons as well as the footer and header components are not available for free use and require deutsche telekom s express permission for use in commerce setup with npm install the scale component library in your project with npm or yarn npm install telekom scale components next to install the version prior to dark mode https github com telekom scale releases tag v3 0 0 beta 53 do npm install telekom scale components without next setup with plain html html link rel stylesheet href node modules telekom scale components dist scale components scale components css script type module src node modules telekom scale components dist scale components scale components esm js script setup with a bundler or es modules javascript import telekom scale components dist scale components scale components css import applypolyfills definecustomelements from telekom scale components loader applypolyfills then definecustomelements window npm packages package name description telekom scale components stencil components telekom scale components react component proxies for react telekom scale components vue component proxies for vue telekom scale components angular component proxies for angular telekom scale design tokens telekom design tokens deprecated since v3 0 0 beta x please use telekom scale components directly support for custom elements is already great check out the info relative to frameworks in the documentation https telekom github io scale deprecated since v3 0 0 beta 100 in favor of telekom design tokens https www npmjs com package telekom design tokens using the source code directly simply clone download this repository and use the source code as is monorepo packages overview package name description components stencil components components angular component proxies for angular auto generated components react component proxies for react auto generated components vue component proxies for vue auto generated design token design tokens storybook vue our storybook visual tests visual snapshot testing contributing code of conduct this project has adopted the contributor covenant https www contributor covenant org in version 2 0 as our code of conduct please see the details in our code of conduct md code of conduct md all contributors must abide by the code of conduct how to contribute we always welcome and encourage contributions and feedback for more information on how to contribute the project structure as well as additional information see our 
contribution guidelines contributing md by participating in this project you agree to abide by its code of conduct code of conduct md at all times contributors our commitment to open source means that we are enabling even encouraging all interested parties to contribute and become part of its developer community licensing copyright c 2021 egor kirpichev and contributors deutsche telekom ag licensed under the mozilla public license 2 0 mpl 2 0 the license you may not use this file except in compliance with the license you may obtain a copy of the license by reviewing the file license license in the repository unless required by applicable law or agreed to in writing software distributed under the license is distributed on an as is basis without warranties or conditions of any kind either express or implied see the license license for the specific language governing permissions and limitations under the license | web-components design-system typescript ui-toolkit monorepo stenciljs stenciljs-components sketch-generation | os |
microfe-client | microfe microfe short for micro frontends a naive infrastructure meta framework implementation for micro frontends this project intends to provide the necessary tooling to achieve independent apps loaded separately and run on different parts on a single web page in complete isolation for detailed information on the topic can be found micro frontends org https micro frontends org motivation when developing microservices there are lots of tools and libraries to help developers to focus the effort on the things needs to be done instead of fighting against a monolithic monster for now micro frontends idea is still premature and it needs time to grow something easy to use my intention is to contribute to this discussion and also provide necessary tooling and a sample architecture for developers who would like to give it a try providing an easy to use infrastructure for individuals and companies can be considered as an ultimate goal who will may can use microfe ideally microfe is not suitable for small teams and for them trying to use it would not be necessary for this kind of teams refactoring their monolithic fe apps would be more productive instead of using microfe to divide a relatively big app into smaller pieces and trying to maintain each piece if the project contains at least two independent teams which are responsible for the same monolithic app then microfe can be beneficial because microfe gives the opportunity of working on independent tech stack by each team it can provide isolation and managed communication channels between micro apps on micro frontends while companies growing they usually move from one team to two or more and they start to divide the code base and on the backend side microservice architecture has lots of benefits to scale the company up on the frontend side the code base becomes a growing monolith even if it is written in a modular fashion so scaling a front end team is not so easy and problems start to appear lack of communication between teams conflicting merges hard to change tech stacks hard to update dependencies and the list goes on similar to microservices the micro frontends provides the opportunity to isolate code bases and make the teams free to use any code standards and tech stack and focus on relatively small parts of the application goals isolated and independent apps a way to have a unified ui inter app communication i e authentication easy to maintain apps not to break already available build environments for major frameworks react angular vue freedom of tech stack choice requirements to run the microfe locally you need to clone and run micro fe registry https github com onerzafer micro fe registry the documentation for micro fe registry can be found under its own repository usage currently there is no npm package provided and the usage is not recommended at this phase yet if you are willing to experiment by yourself clone both repositories for micro fe registry part follow the instructions on its own repository then execute following commands bash npm install npm start this command will open your browser on http localhost 8080 and you will be able to see the page is running if you see just blank page be sure your micro fe registry installation is up and running and if it is running already please check if the requested micro apps are available on the registry folder with requested names if you have still problems of running please open an issue i will be happy to help you top level architecture the microfe library basically has 4 
different main parts appsmanager loader router and store it also provides some helper functions and classes bootstrapper microfe decorator and provider these all parts of the microfe library can function with a specific micro apps wich implements microfe interface definition of a microfe app a microfe app should implement the following interface typescript interface microapp name string deps string initialize args appsmanager appsmanager config configinterface key string any any void initialize function may return the instance of the app bootstrapper the responsibility of bootstrapper is initializing the appsmanager and all other micro apps provided inside the library so it can be considered as the entry point of the microfe library the signature for bootstrapper can be described like this typescript const bootstrap routes route config configinterface microapps microapp void to boostrap the microfe meta framework the following example can be used as a refrence typescript import microfe bootstrap route configinterface from lib microfe deps layoutapp class main constructor console log initialised const routes route path redirectto angular path angular microapp demoangular tagname demo angular path react microapp reactdemo tagname react demo path static microapp staticapp tagname static app path microapp notfoundapp tagname not found app const config configinterface registryapi http localhost 3000 registry registrypublic http localhost 3000 bootstrap routes config main appsmanager the main functionality of appsmanager is creating the dependency tree and when all of the dependencies of a micro app are ready it instantiates the micro app by providing the dependencies instances the public api for appsmanager can be summarized as follows typescript interface appsmanager register app microapp void subscribe fn notfoundapps microapp void unsubscribe void appsmanager passes the config and itself as default dependency to all of the instances of provided micro apps which means all have the access to appsmanager and its public api alternatively appsmanager can be accessed from window global as appsmanager appsmanager is the only part which does not implement the microapp interface the rest of the library actually is a collection of micro apps loader when registered by bootstrapper the loader requires config and waits until it is provided with the config loader receives the micro fe registry public urls after getting the config appsmanagers instantiates the loader on constructer loader subscribes to appsmanagers and start the not found micro apps when a new not found micro app available the loader parse the micro app url by combining the name of the micro app and public url of micro fe registry injects it to the dom as a remote script naturally the browser loads the micro app from given url the loader can be a dependency and it has only one public function typescript const loader fetchmicroapp name string void router unlike the common routers the microfe router has limited functionality it is capable of solving the first part of the declared urls this implementation assumes the rest of the url will be resolved by the responsible micro app if the router can resolve the url from the browser location it triggers the loader fetchmicroapp function with the name of resolved micro app so it has two dependencies routes and loader route the routes object is an array of the route objects which has the following interface typescript interface route path string tagname string redirectto string microapp string micro 
router the router outlet when router instantiates it register a web component called micro router this is the expected place for all other micro apps loads on route hit the usage is pretty simple and available for all micro apps living on the client at the moment html micro router micro router currently it has no targetting of sub routes which means all of the micro router tags will display the same target micro app so current recommendations are using only one micro router the page in the future some sub routes can be targetted to some named micro router tags micro link router also provides a simple navigation element with no design all micro apps will be able to access it any time since it is provided as a web component like micro router micro link has one attribute which is href and if the given path is the current route it assigns itself automatically active class so no need to observe history and match correct path to put active class to the links html micro link href some cool page some cool page micro link when we navigate to some cool page this micro link above will be marked as active store with the assumption of only big teams and big code bases will need the microfe and nearly all of the already managing app state the microfe library provides a global shared inter app state this state can be used as a shared event bus or shared global state by nature this store is reactive and powered by rxjs yet it still has the similar functionalities of redux library typescript interface action type string key string any interface state key string any interface reducer action action state state state void interface reducertreepiece key string reducer reducertreepiece interface microappstore addreducer reducertreepiece reducertreepiece void dispatch action action void select selector string observable state the main issue with the microappstore is the reducers may arrive on different times the select function is pretty useful on this case because if the selected reducer is not available it sibly emits undefined and when the reducer arrives it emmits to all subscribers the related state typescript const todos microappsstore select todos todos subscribe todos console log todos immediatelly logs undefined const todosreducer state action action return state microappsstore addreducer todos todorecucer at this point todos subscribe will receive as todos and will log microfe decorator this decorator can be used as an helper for casting any class to a micro app typescript microfe deps layoutapp export class main private layoutapp constructor layoutapp this layoutapp layoutapp this render private render this layoutapp somelayoutappfunction the code block above will be equavalent to following code javascript name main deps layoutapp initialize function layoutapp layoutapp somelayoutappfunction provider the provider is a helper function to provide objects as micro app so any static data can be provided to other micro apps with provide function typescript const languageen hello hello const languageenprovider provide languageen then languageenprovider can be passed down to all micro apps which has the dependency as follows typescript bootstrap routes config main languageenprovider license mit https choosealicense com licenses mit | front_end |
|
cvkeep-frontend | cv keep front end cv keep is a free and open source platform intended to hold online resum s it s completely free pretty and simple you can use the self hosted version or clone the entire project and use for your own purposes including comercial ones without any kind of charge or legal implications if you intend to have an online resum hosted by cv keep just go to https cvkeep com and create your account technical docs self hosting this is the front end repository of the platform since it is free and open source you can clone it and do what you want since you use your own brand for documentation about architecture stack advanced usage development contribution and self hosting please click here https cv keep github io cvkeep docs | front_end |
|
tensorflow-without-a-phd | table width 100 tr td width 50 h2 featured code sample h2 b a href tensorflow planespotting tensorflow planespotting a b br code from the google cloud next 2018 session tensorflow deep learning and modern convnets without a phd other samples from the tensorflow without a phd series are in this repository too td width 50 a href https youtu be kc4201o83w0 img alt tensorflow deep learning and modern convnets without a phd src tensorflow planespotting img next2018thumb jpg a td tr table br tensorflow and deep learning without a phd series by martin gorner https twitter com martin gorner a crash course in six episodes for software developers who want to learn machine learning with examples theoretical concepts and engineering tips tricks and best practices to build and train the neural networks that solve your problems table width 100 tr td width 50 img alt tensorflow and deep learning without a phd src docs images flds1 png td td width 50 div align center a href https youtu be u4algiomyp4 video a a href https docs google com presentation d 1tvixw6itiz8igjp6u17tcgofrlsahwqmmowjlgqy9co pub slide id p slides a a href https codelabs developers google com codelabs cloud tensorflow mnist 0 codelab a a href tensorflow mnist tutorial code a br br div p the basics of building neural networks for software engineers neural weights and biases activation functions supervised learning and gradient descent tips and best practices for efficient training learning rate decay dropout regularisation and the intricacies of overfitting dense and convolutional neural networks this session starts with low level tensorflow and also has a sample of high level tensorflow code using layers and datasets code sample mnist handwritten digit recognition with 99 accuracy duration 55 min p td tr tr td width 50 div align center a href https youtu be vq2nnj4g6n0 t 76m video a a href https docs google com presentation d 18mizndrcoxb7g tccl2ezoels5udvacuxngznlnmole pub slide id g1245051c73 0 25 slides a a href tensorflow mnist tutorial readme batchnorm md code a br br div p what is batch normalisation how to use it appropriately and how to see if it is working or not code sample mnist handwritten digit recognition with 99 5 accuracy duration 25 min p td td width 50 img alt the superpower batch normalization src docs images flds2 png td tr tr td border 0 width 50 img alt tensorflow deep learning and recurrent neural networks without a phd src docs images flds3 png td td border 0 width 50 div align center a href https youtu be ftuwdxuffi8 video a a href https docs google com presentation d 18mizndrcoxb7g tccl2ezoels5udvacuxngznlnmole pub slide id p slides a a href tensorflow rnn tutorial codelab a a href https github com martin gorner tensorflow rnn shakespeare code a br br div p rnn basics the rnn cell as a state machine training and unrolling backpropagation through time more complex rnn cells lstm and gru cells application to language modeling and generation tensorflow apis for rnns code sample rnn generated shakespeare play duration 55 min p td tr tr td width 50 div align center a href https youtu be kc4201o83w0 video a a href https docs google com presentation d 19u0tm0jhl5tpzyarlilvy4qlsudbfnnx2hwsvzsfpi0 pub slides a a href tensorflow planespotting code a br br div p convolutional neural network architectures for image processing convnet basics convolution filters and how to stack them learnings from the inception model modules with parallel convolutions 1x1 convolutions a simple modern convnet 
architecture squeezenet convenets for detection the yolo you look only once architecture full scale model training and serving with tensorflow s estimator api on google cloud ml engine and cloud tpus tensor processing units application airplane detection in aerial imagery duration 55 min p td td width 50 img alt tensorflow deep learning and modern convnets without a phd src docs images flds4 png td tr tr td border 0 width 50 img alt tensorflow deep learning and modern rnn architectures without a phd src docs images flds5 png td td border 0 width 50 div align center a href https youtu be pzozmxcr37i video a a href https docs google com presentation d 17glpozfb l3wcr8fnejnjd9tei igtq1yqixzctor14 pub slides a a href https github com conversationai conversationai models tree master attention tutorial code a br br div p advanced rnn architectures for natural language processing word embeddings text classification bidirectional models sequence to sequence models for translation attention mechanisms this session also explores tensorflow s powerful seq2seq api applications toxic comment detection and langauge translation co author nithum thain duration 55 min p td tr tr td width 50 div align center a href https youtu be t1a3ntttvba video a a href https docs google com presentation d 1qlvvgkxzlm6 ooz4 zooab0wth2idhbfvubhsmvmk9i pub slides a a href tensorflow rl pong code a br br div p a neural network trained to play the game of pong from just the pixels of the game uses reinforcement learning and policy gradients the approach can be generalized to other problems involving a non differentiable step that cannot be trained using traditional supervised learning techniques a practical application neural architecture search neural networks designing neural networks co author yu han liu duration 40 min p td td width 50 img alt tensorflow and deep reinforcement learning without a phd src docs images flds6 png td tr table br br br table width 75 tr td colspan 4 quick access to all code samples td tr tr td width 33 b a href tensorflow mnist tutorial tensorflow mnist tutorial a b br dense and convolutional neural network tutorial td td width 33 b a href tensorflow rnn tutorial tensorflow rnn tutorial a b br recurrent neural network tutorial using temperature series td td width 33 b a href tensorflow rl pong tensorflow rl pong a b br pong with reinforcement learning td tr tr td width 33 b a href tensorflow planespotting tensorflow planespotting a b br airplane detection model td td width 33 b a href https github com conversationai conversationai models tree master attention tutorial conversationai attention tutorial a b br toxic comment detection with rnns and attention td tr table br br br disclaimer this is not an official google product but sample code provided for an educational purpose | ai |
|
plinth-18-teaser | plinth 18 teaser teaser for plinth the techno management literary fest the lnm institute of information technology jaipur live hosts the project is hosted on http karanagarwal me plinth 18 teaser description for the loading animation we wrote the svg paths for coming soon in the same font as the standard font for plinth then we used anime js for animating and synchronizing the fill in and fill out animations for every letter for the main site we used animation frames to run the animation where the stars are approaching the black hole and it is getting bigger clicking on the screen generates some more stars from the point of click finally when the size of the black hole reaches a certain radius a callback triggers an event which loads the logo and details for the fest image of plinth teaser teaser png the above image is a still form the site technologies used anime js for the loader animations node js for server side programming | server |
|
cloud-platform-chaos | cloud platform chaos moj cloud platform chaos engineering | cloud |
Learn-Natural-Language-Processing-Curriculum | learn natural language processing curriculum: this is the curriculum for learn natural language processing by siraj raval on youtube. course objective: this is the curriculum for this video (https youtu be gazfsfcijxq) on learn natural language processing by siraj raval on youtube. after completing this course, start your own startup, do consulting work, or find a full-time job related to nlp. remember to believe in your ability to learn: you can learn nlp, you will learn nlp, and if you stick to it, eventually you will master it. find a study buddy: join the nlp curriculum channel in our slack channel to find one (http wizards herokuapp com). components each week: video lectures, reading assignments, project(s). course length: 8 weeks, 2-3 hours of study per day. tools used: python, pytorch, nltk. prerequisites: learn python (https www edx org course introduction python data science 2), statistics (http web mit edu csvoss public usabo stats handout pdf), probability (https static1 squarespace com static 54bf3241e4b0f0d81bf7ff36 t 55e9494fe4b011aed10e48e5 1441352015658 probability cheatsheet pdf), calculus (http tutorial math lamar edu pdf calculus cheat sheet all pdf), linear algebra (https www souravsengupta com cds2016 lectures savov notes pdf). week 1 - language terminology, preprocessing techniques. description: overview of nlp (pragmatics, semantics, syntax, morphology), text preprocessing (stemming, lemmatization, tokenization, stopword removal). video lectures: https web stanford edu jurafsky slp3 videos 1 2 5, https www youtube com watch v hyt bzlyvdu list pldcmcggul9rxtez1rsy6x5nhlbji8z3gz. reading assignments: ch 1 2 of speech and language processing (3rd ed) slides. project: look at 1 1 to 3 4 to learn nltk (https github com hb20007 hands on nltk tutorial), then use nltk to perform stemming, lemmatization, tokenization, and stopword removal on a dataset of your choice. week 2 - language models and lexicons, pre deep learning. description: lexicons (pre deep learning), statistical language model (pre deep learning), hmm, topic modeling w/ lda. video lectures: https courses cs washington edu courses csep517 17sp lectures 2 6. reading assignments: 4, 6, 7, 8, 9, 10 from the uwash course; extra: lda blog post (https medium com lettier how does lda work ill explain using emoji 108abf40fa7d). project: https github com treb1en hiddenmarkovmodel pytorch - build a hidden markov model for weather prediction in pytorch. week 3 - word embeddings (word, sentence, and document). video lectures: http web stanford edu class cs224n index html schedule, lectures 1 5. reading assignments: suggested readings from course. project: 3 assignments - visualize and implement word2vec, create a dependency parser, all in pytorch; they are assignments from the stanford course. week 4-5 - deep sequence modeling. description: sequence-to-sequence models (translation, summarization, question answering), attention-based models, deep semantic similarity. video lectures: https www coursera org learn language processing week 4. reading assignments: read this on deep semantic similarity models (https kishorepv github io dssm), ch 10 of the deep learning book on sequence modeling (http www deeplearningbook org contents rnn html). project: 3 assignments - create a translator and a summarizer, all seq2seq models in pytorch. week 6 - dialogue systems. description: speech recognition, dialog managers, nlu. video lectures: https www coursera org learn language processing week 5. reading assignments: ch 24 of this book: https web stanford edu jurafsky slp3 24 pdf. project: create a dialogue system using pytorch https github com ywk991112 pytorch chatbot
and a task oriented dialogue system using dialogflow to order food week 7 transfer learning video lectures my videos on bert and gpt 2 how to build biomedical startup https www youtube com watch v bdxfvr1gpsu https www youtube com watch v j9kbz5i8gdm https www youtube com watch v 0n95f eqzdw transfer learning with bert gpt 2 elmo reading assignments http ruder io nlp imagenet https lilianweng github io lil log 2019 01 31 generalized language models html http jalammar github io illustrated bert project play with this https github com huggingface pytorch pretrained bert examples pick 2 models use it for one of 9 downstream tasks compare their results week 8 future nlp description visual semantics deep reinforcement learning video lectures cmu video https www youtube com watch v isxzsaelqx0 module 5 6 of this https www edx org course natural language processing nlp 3 reading assignments https cs stanford edu people karpathy cvpr2015 pdf hilarious https medium com yoav goldberg an adversarial review of adversarial generation of natural language 409ac3378bd7 project policy gradient text summarization https github com yaserkl rlseq2seq policy gradient w self critic learning and temporal attention and intra decoder attention reimplement in pytorch | ai |
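For the week-1 project above (stemming, lemmatization, tokenization, and stopword removal with NLTK), a minimal sketch could look like this; the sample sentence is arbitrary and the `nltk.download` calls assume the resources are not yet installed.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize

# one-time downloads of the NLTK resources used below
for pkg in ("punkt", "stopwords", "wordnet"):
    nltk.download(pkg, quiet=True)

text = "The striped bats were hanging on their feet and eating the best batches of flies."

tokens = word_tokenize(text.lower())             # tokenization
tokens = [t for t in tokens if t.isalpha()]      # drop punctuation tokens
stop = set(stopwords.words("english"))
tokens = [t for t in tokens if t not in stop]    # stopword removal

stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
print([stemmer.stem(t) for t in tokens])           # stemming
print([lemmatizer.lemmatize(t) for t in tokens])   # lemmatization
```

Running the last two lines side by side on the same dataset makes the week-1 contrast concrete: the stemmer chops suffixes, while the lemmatizer maps tokens to dictionary forms.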
learning-blockchain | learning blockchain: resources to learn blockchain technology and its applications. table of contents: how to collaborate; basics; introductory courses; advanced courses; tutorials; articles and blogs; books; frameworks, libraries and sdks; communities; podcasts; talks. how to collaborate: just send a pull request or open an issue (https github com nqcm learning blockchain issues) to send more useful links and i'll update the list. basics: blockchain glossary for beginners - https blockchainhub net blockchain glossary; blockchain 101, a beginners guide - https medium com sinafl blockchain 101 a beginners guide 410d4d93d635; but how does bitcoin actually work - https www youtube com watch v bbc nxj3ng4; a practical introduction to blockchain with python - http adilmoujahid com posts 2018 03 intro blockchain bitcoin python; an introduction to ethereum - https www toptal com ethereum hiring guide. introductory courses: basics of cryptography from coursera - https www coursera org learn crypto; blockchain fundamentals by berkeley - https www edx org professional certificate uc berkeleyx blockchain fundamentals; bitcoin and cryptocurrencies by berkeley - https www edx org course cryptocurrencies bitcoin and the crypto space; blockchain fundamentals - https www pluralsight com courses blockchain fundamentals; blockchain theory 101 - https www udemy com blockchain theory 101 siteid jvfxdtr9v80 e55ogko5gxrnglqhbsvzvq lsnpubid jvfxdtr9v80; introduction to cryptocurrencies and blockchain - https www udemy com introduction to cryptocurrencies siteid jvfxdtr9v80 z 1qle6xbadd6jijag9h8q lsnpubid jvfxdtr9v80. advanced courses: blockchain a-z, learn how to build your first blockchain - https www udemy com build your blockchain az; build a blockchain and a cryptocurrency from scratch - https www udemy com build blockchain; code your own cryptocurrency on ethereum - https www udemy com code your own cryptocurrency; blockchain principles and practices - https www pluralsight com courses blockchain principles practices; ethereum decentralized application design development - https www udemy com ethereum dapp; blockchain for business, an introduction to hyperledger technologies - https www edx org course blockchain business introduction linuxfoundationx lfs171x 0. tutorials: build your first blockchain app using ethereum smart contracts and solidity - https www youtube com watch v coq5dg8wm2o; ibm blockchain 101, quick start guide for developers - https developer ibm com technologies blockchain tutorials cl ibm blockchain 101 quick start guide for developers bluemix trs; build your own blockchain, a python tutorial - http ecomunsing com build your own blockchain; learn blockchains by building one - https hackernoon com learn blockchains by building one 117428612f46; blockchain developers essentials - https www youtube com watch v ydsjpirpmgm list plqeivdgmajcu i5rsqcdujkfukuxphhk; official ethereum tutorial - https github com ethereum wiki wiki dapp using meteor; ethereum development walkthrough - https hackernoon com ethereum development walkthrough part 1 smart contracts b3979e6e573e; full stack hello world voting ethereum dapp tutorial - https medium com mvmurthy full stack hello world voting ethereum dapp tutorial part 1 40d2d0d807c2; learning solidity tutorials on youtube - https www youtube com playlist list pl16wqdaj66scodl6xifbke xqg2gw avg; beginners guide to smart contracts in solidity - https www youtube com watch v r ciemcfkis list plqeivdgmajcwnazlelxklzhs5a71sxzw0; create your own ethereum blockchain - https www youtube com watch v skxyynmjauq list plqeivdgmajcvyh3hh29lgbpxog9s82o6a. articles and blogs: beginner's guide series on cryptoassets - https medium com linda xie beginners guide series on cryptoassets d897535d887; designing a decentralized profile dapp - https uxdesign cc designing a decentralized profile dapp ab12ead4ab56; dangerous solidity hacks - https hackernoon com hackpedia 16 solidity hacks vulnerabilities their fixes and real world examples f3210eba5148. books: mastering bitcoin - https github com bitcoinbook bitcoinbook; solidity programming essentials - https www amazon com gp product 1788831381. frameworks, libraries and sdks: truffle - https truffleframework com; openzeppelin - https openzeppelin com sdk. communities: r/blockchain - https www reddit com r blockchain; ethereum subreddit - https www reddit com r ethdev; ethereum solidity on gitter - https gitter im ethereum solidity; ethereum tutorials on gitter - https gitter im ethereum tutorials; ethereum web3 js on gitter - https gitter im ethereum web3 js. podcasts: future thinkers - https futurethinkers org decentralization; epicenter - https epicenter tv; the let's talk bitcoin network - https itunes apple com us podcast the lets talk bitcoin network id640581455 mt 2; coin mastery - https itunes apple com us podcast coin mastery building your cryptocurrency empire id1251624136 mt 2; crypto radio - http cryptoradio io. talks: ted talk, the blockchain explained simply - https www youtube com watch v kp hgpqvlpa; ted talk, the potential for blockchain - https www ted com talks mike schwartz the potential of blockchain; ted talk, how the blockchain will radically transform the economy - https www ted com talks bettina warburg how the blockchain will radically transform the economy; ted talk, how the blockchain is changing money and business - https www ted com talks don tapscott how the blockchain is changing money and business; ted talk, we've stopped trusting institutions and started trusting strangers - https www ted com talks rachel botsman we ve stopped trusting institutions and started trusting strangers language en; an intro to crypto building blocks - https www youtube com watch time continue 58 v 2dgdgwyjok4 | blockchain |
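Several of the tutorials above ("a practical introduction to blockchain with python", "learn blockchains by building one", "build your own blockchain, a python tutorial") start from the same core idea: blocks chained by hashes. This is an illustrative stand-alone sketch of that idea, not code from any of the linked tutorials.

```python
import hashlib
import json
import time

def hash_block(block):
    # deterministic sha-256 over the block's serialized contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(data, prev_hash):
    return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}

# each block commits to the hash of its predecessor
chain = [new_block("genesis", "0" * 64)]
for payload in ("tx: alice -> bob", "tx: bob -> carol"):
    chain.append(new_block(payload, hash_block(chain[-1])))

def is_valid(chain):
    # recompute every link; tampering with any earlier block breaks the chain
    return all(chain[i]["prev_hash"] == hash_block(chain[i - 1]) for i in range(1, len(chain)))

print(is_valid(chain))   # True
chain[1]["data"] = "tx: alice -> mallory"
print(is_valid(chain))   # False - the tampered block no longer matches its recorded hash
```

Everything else in the listed resources (proof of work, consensus, smart contracts) layers on top of this tamper-evidence property.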
QtMobileApp | the following features are implemented: 1. stackview; 2. drawer; 3. header; 4. moving back to the previous screen; 5. resolving an html file and displaying it in a webview; 6. resolving a network url using qnetworkaccessmanager (can access a web service); 7. handling error conditions. pending features: 1. scaling of images based on dpi | qml qt5 qtquick mobile-app cross-platform android-application ios-app | front_end |
Algebra-FE | algebra fe: materials for the algebra front-end developer course | front_end |
KAKAOS | kakaos os: kakaos 11000 90; mcu: stm32f103, stm32f407; gcc: gcc-arm-none-eabi 7-2018-q2; os 100k, ipc 40k; os bsp; start kernel; windows/linux: windows keil ide c windows project, linux make; linux stm32f103: make stm32f103 config, make (makefile, linux shell); stm32f103zet6: suspend 6.94us, sleep 12.35us (ucosiii 15.3us / 16.3us); kakaos png testcase; ipc: mcb (p, v); mqb (msg send, msg receive); mutex (mutex lock, mutex try lock, mutex unlock); sleep; suspend; task creat ready / task init ready; exec; 64 (0-63); system time display; set time; timer enable / timer disable; c: ka printf; ka strlen, ka strcpy; buddy / slab; ka malloc, ka free; buddy; init mp, create mp, mp alloc, mp free; shell: 1. shell (shell c), tab; 2. shell insmod; module tool (so, ko, 2 mcu); ko; main: module init, module exit; rt thread; mount; device register (vfs: dev, cat, echo); fs register, mount fat | os |
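The kakaos row above lists a fixed-block memory-pool api (init mp / create mp / mp alloc / mp free) next to its buddy and slab allocators. The real implementation is C inside the rtos; the python sketch below only illustrates the fixed-block pool concept, and every name and size in it is invented for the example.

```python
class MemoryPool:
    """Illustrative fixed-block pool: O(1) alloc/free via a free list."""

    def __init__(self, block_size, block_count):
        # each bytearray stands in for a slice of one static buffer
        self.blocks = [bytearray(block_size) for _ in range(block_count)]

    def alloc(self):
        # pop a free block in O(1); an rtos would return NULL when exhausted
        return self.blocks.pop() if self.blocks else None

    def free(self, block):
        # constant-time release; fixed sizes mean no fragmentation or coalescing
        self.blocks.append(block)

pool = MemoryPool(block_size=64, block_count=8)
buf = pool.alloc()
pool.free(buf)
```

The appeal for a small kernel is determinism: unlike a general-purpose heap, every pool operation takes bounded time, which is why rtoses commonly offer pools alongside buddy/slab allocators.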
azure-search-openai-demo-csharp | page type sample languages csharp products ai services azure blob storage azure container apps azure cognitive search azure openai aspnet core blazor defender for cloud azure monitor chatgpt enterprise data with azure openai and cognitive search net github workflow status https img shields io github actions workflow status azure samples azure search openai demo csharp dotnet build yml label build 20 26 20test logo github style for the badge open in github codespaces https img shields io static v1 style for the badge label github codespaces message open color brightgreen logo github https github com codespaces new hide repo select true ref main repo 624102171 machine standardlinux32gb devcontainer path devcontainer 2fdevcontainer json location westus2 open in remote containers https img shields io static v1 style for the badge label remote 20 20containers message open color blue logo visualstudiocode https vscode dev redirect url vscode ms vscode remote remote containers cloneinvolume url https github com azure samples azure search openai demo csharp this sample demonstrates a few approaches for creating chatgpt like experiences over your own data using the retrieval augmented generation pattern it uses azure openai service to access the chatgpt model gpt 35 turbo and azure cognitive search for data indexing and retrieval the repo includes sample data so it s ready to try end to end in this sample application we use a fictitious company called contoso electronics and the experience allows its employees to ask questions about the benefits internal policies as well as job descriptions and roles rag architecture docs appcomponents version 4 png for more details on how this application was built check out transform your business with smart net apps powered by azure and chatgpt blog post https aka ms build dotnet ai blog build intelligent apps with net and azure build session https build microsoft com sessions f8f953f3 2e58 4535 92ae 5cb30ef2b9b0 we want to hear from you are you interested in building or currently building intelligent apps take a few minutes to complete this survey take the survey https aka ms dotnet build oai survey features voice chat chat and q a interfaces explores various options to help users evaluate the trustworthiness of responses with citations tracking of source content etc shows possible approaches for data preparation prompt construction and orchestration of interaction between model chatgpt and retriever cognitive search settings directly in the ux to tweak the behavior and experiment with options chat screen docs chatscreen png application architecture user interface the application s chat interface is a blazor webassembly https learn microsoft com aspnet core blazor application this interface is what accepts user queries routes request to the application backend and displays generated responses backend the application backend is an asp net core minimal api https learn microsoft com aspnet core fundamentals minimal apis overview the backend hosts the blazor static web application and what orchestrates the interactions among the different services services used in this application include azure cognitive search https learn microsoft com azure search search what is azure search indexes documents from the data stored in an azure storage account this makes the documents searchable using vector search https learn microsoft com azure search search get started vector capabilities azure openai service https learn microsoft com azure ai 
services openai overview provides the large language models to generate responses semantic kernel https learn microsoft com semantic kernel whatissk is used in conjunction with the azure openai service to orchestrate the more complex ai workflows getting started account requirements in order to deploy and run this example you ll need azure account if you re new to azure get an azure account for free https aka ms free and you ll get some free azure credits to get started azure subscription with access enabled for the azure openai service you can request access https aka ms oaiapply you can also visit the cognitive search docs https azure microsoft com free cognitive search to get some free azure credits to get you started azure account permissions your azure account must have microsoft authorization roleassignments write permissions such as user access administrator https learn microsoft com azure role based access control built in roles user access administrator or owner https learn microsoft com azure role based access control built in roles owner warning br by default this sample will create an azure container app and azure cognitive search resource that have a monthly cost as well as form recognizer resource that has cost per document page you can switch them to free versions of each of them if you want to avoid this cost by changing the parameters file under the infra folder though there are some limits to consider for example you can have up to 1 free cognitive search resource per subscription and the free form recognizer resource only analyzes the first 2 pages of each document cost estimation pricing varies per region and usage so it isn t possible to predict exact costs for your usage however you can try the azure pricing calculator https azure microsoft com pricing calculator for the resources below azure container apps https azure microsoft com pricing details container apps azure openai service https azure microsoft com pricing details cognitive services openai service azure form recognizer https azure microsoft com pricing details form recognizer azure cognitive search https azure microsoft com pricing details search azure blob storage https azure microsoft com pricing details storage blobs azure monitor https azure microsoft com pricing details monitor project setup you have a few options for setting up this project the easiest way to get started is github codespaces since it will setup all the tools for you but you can also set it up locally local environment if desired github codespaces you can run this repo virtually by using github codespaces which will open a web based vs code in your browser open in github codespaces https img shields io static v1 style for the badge label github codespaces message open color brightgreen logo github https github com codespaces new hide repo select true ref main repo 624102171 machine standardlinux32gb devcontainer path devcontainer 2fdevcontainer json location westus2 vs code remote containers a related option is vs code remote containers which will open the project in your local vs code using the dev containers https marketplace visualstudio com items itemname ms vscode remote remote containers extension open in remote containers https img shields io static v1 style for the badge label remote 20 20containers message open color blue logo visualstudiocode https vscode dev redirect url vscode ms vscode remote remote containers cloneinvolume url https github com azure samples azure search openai demo csharp local environment install the 
following prerequisites azure developer cli https aka ms azure dev install net 7 https dotnet microsoft com download git https git scm com downloads powershell 7 pwsh https github com powershell powershell for windows users only important br ensure you can run pwsh exe from a powershell command if this fails you likely need to upgrade powershell docker https www docker com products docker desktop important br ensure docker is running before running any azd provisioning deployment commands then run the following commands to get the project on your local environment 1 run azd auth login 1 clone the repository or run azd init t azure search openai demo csharp 1 run azd env new azure search openai demo csharp deploying from scratch important br ensure docker is running before running any azd provisioning deployment commands execute the following command if you don t have any pre existing azure services and want to start from a fresh deployment 1 run azd up this will provision azure resources and deploy this sample to those resources including building the search index based on the files found in the data folder for the target location the regions that currently support the model used in this sample are east us 2 east us or south central us for an up to date list of regions and models check here https learn microsoft com azure cognitive services openai concepts models if you have access to multiple azure subscriptions you will be prompted to select the subscription you want to use if you only have access to one subscription it will be selected automatically note br this application uses the gpt 35 turbo model when choosing which region to deploy to make sure they re available in that region i e eastus for more information see the azure openai service documentation https learn microsoft com azure cognitive services openai concepts models gpt 35 models 1 after the application has been successfully deployed you will see a url printed to the console click that url to interact with the application in your browser it will look like the following output from running azd up assets endpoint png note br it may take a few minutes for the application to be fully deployed use existing resources if you have existing resources in azure that you wish to use you can configure azd to use those by setting the following azd environment variables 1 run azd env set azure openai service name of existing openai service 1 run azd env set azure openai resource group name of existing resource group that openai service is provisioned to 1 run azd env set azure openai chatgpt deployment name of existing chatgpt deployment only needed if your chatgpt deployment is not the default chat 1 run azd env set azure openai embedding deployment name of existing embedding model deployment only needed if your embedding model deployment is not the default embedding 1 run azd up note br you can also use existing search and storage accounts see infra main parameters json for list of environment variables to pass to azd env set to configure those existing resources deploying or re deploying a local clone of the repo important br ensure docker is running before running any azd provisioning deployment commands run azd up deploying your repo using app spaces note br make sure you have azd supported bicep files in your repository and add an initial github actions workflow file which can either be triggered manually for initial deployment or on code change automatically re deploying with the latest changes to make your repository compatible 
with app spaces you need to make changes to your main bicep and main parameters file to allow azd to deploy to an existing resource group with the appropriate tags 1 add azure resource group to main parameters file to read the value from environment variable set in github actions workflow file by app spaces json resourcegroupname value azure resource group 2 add azure tags to main parameters file to read the value from environment variable set in github actions workflow file by app spaces json tags value azure tags 3 add support for resource group and tags in your main bicep file to read the value being set by app spaces bicep param resourcegroupname string param tags string 4 combine the default tags set by azd with those being set by app spaces replace tags initialization in your main bicep file with the following bicep var basetags azd env name environmentname var updatedtags union empty tags base64tojson tags basetags make sure to use updatedtags when assigning tags to resource group created in your bicep file and update the other resources to use basetags instead of tags for example json resource rg microsoft resources resourcegroups 2021 04 01 name empty resourcegroupname resourcegroupname abbrs resourcesresourcegroups environmentname location location tags updatedtags running locally important br ensure docker is running before running any azd provisioning deployment commands 1 run azd auth login 1 after the application deploys set the environment variable azure key vault endpoint you can find the value in the azure your environment name env file or the azure portal 1 run the following net cli command to start the asp net core minimal api server client host dotnetcli dotnet run project app backend minimalapi csproj urls http localhost 7181 navigate to http localhost 7181 and test out the app sharing environments run the following if you want to give someone else access to the deployed and existing environment 1 install the azure cli https learn microsoft com cli azure install azure cli 1 run azd init t azure search openai demo csharp 1 run azd env refresh e environment name note that they will need the azd environment name subscription id and location to run this command you can find those values in your azure env name env file this will populate their azd environment s env file with all the settings needed to run the app locally 1 run pwsh scripts roles ps1 this will assign all of the necessary roles to the user so they can run the app locally if they do not have the necessary permission to create roles in the subscription then you may need to run this script for them just be sure to set the azure principal id environment variable in the azd env file or in the active shell to their azure id which they can get with az account show clean up resources run azd down using the app in azure navigate to the azure container app deployed by azd the url is printed out when azd completes as endpoint or you can find it in the azure portal when running locally navigate to http localhost 7181 for the client app and http localhost 7181 swagger for the open api server page once in the web app on the voice chat page select the voice settings dialog and configure text to speech preferences you can either type messages to interact with blazor clippy or select the speak toggle button to use speech to text as your input try different topics in chat context for chat try follow up questions clarifications ask to simplify or elaborate on answer etc explore citations and sources click on the settings icon 
to try different options tweak prompts etc enabling optional features enabling application insights to enable application insights and the tracing of each request along with the logging of errors set the azure use application insights variable to true before running azd up 1 run azd env set azure use application insights true 1 run azd up to see the performance data go to the application insights resource in your resource group click on the investigate performance blade and navigate to any http request to see the timing data to inspect the performance of chat requests use the drill into samples button to see end to end traces of all the api calls made for any chat request tracing screenshot docs transaction tracing png to see any exceptions and server errors navigate to the investigate failures blade and use the filtering tools to locate a specific exception you can see python stack traces on the right hand side enabling authentication by default the deployed azure container app will have no authentication or access restrictions enabled meaning anyone with routable network access to the container app can chat with your indexed data you can require authentication to your azure active directory by following the add container app authentication https learn microsoft com azure container apps authentication azure active directory tutorial and set it up against the deployed container app to then limit access to a specific set of users or groups you can follow the steps from restrict your azure ad app to a set of users https learn microsoft com azure active directory develop howto restrict your app to a set of users by changing assignment required option under the enterprise application and then assigning users groups access users not granted explicit access will receive the error message aadsts50105 your administrator has configured the application app name to block users unless they are specifically granted assigned access to the application productionizing this sample is designed to be a starting point for your own production application but you should do a thorough review of the security and performance before deploying to production here are some things to consider openai capacity the default tpm tokens per minute is set to 30k that is equivalent to approximately 30 conversations per minute assuming 1k per user message response you can increase the capacity by changing the chatgptdeploymentcapacity and embeddingdeploymentcapacity parameters in infra main bicep to your account s maximum capacity you can also view the quotas tab in azure openai studio https oai azure com to understand how much capacity you have azure storage the default storage account uses the standard lrs sku to improve your resiliency we recommend using standard zrs for production deployments which you can specify using the sku property under the storage module in infra main bicep azure cognitive search if you see errors about search service capacity being exceeded you may find it helpful to increase the number of replicas by changing replicacount in infra core search search services bicep or manually scaling it from the azure portal azure container apps by default this application deploys containers with 0 5 cpu cores and 1gb of memory the minimum replicas is 1 and maximum 10 for this app you can set values such as containercpucorecount containermaxreplicas containermemory containerminreplicas in the infra core host container app bicep file to fit your needs you can use auto scaling rules or scheduled scaling rules and 
scale up the maximum minimum https learn microsoft com azure container apps scale app based on load authentication by default the deployed app is publicly accessible we recommend restricting access to authenticated users see enabling authentication enabling authentication above for how to enable authentication networking we recommend deploying inside a virtual network if the app is only for internal enterprise use use a private dns zone also consider using azure api management apim for firewalls and other forms of protection for more details read azure openai landing zone reference architecture https techcommunity microsoft com t5 azure architecture blog azure openai landing zone reference architecture ba p 3882102 loadtesting we recommend running a loadtest for your expected number of users resources revolutionize your enterprise data with chatgpt next gen apps w azure openai and cognitive search https aka ms entgptsearchblog azure cognitive search https learn microsoft com azure search search what is azure search azure openai service https learn microsoft com azure cognitive services openai overview azure ai openai nuget package https www nuget org packages azure ai openai original blazor app https github com ievangelist blazor azure openai note br the pdf documents used in this demo contain information generated using a language model azure openai service the information contained in these documents is only for demonstration purposes and does not reflect the opinions or beliefs of microsoft microsoft makes no representations or warranties of any kind express or implied about the completeness accuracy reliability suitability or availability with respect to the information contained in this document all rights reserved to microsoft faq question why do we need to break up the pdfs into chunks when azure cognitive search supports searching large documents answer chunking allows us to limit the amount of information we send to openai due to token limits by breaking up the content it allows us to easily find potential chunks of text that we can inject into openai the method of chunking we use leverages a sliding window of text such that sentences that end one chunk will start the next this allows us to reduce the chance of losing the context of the text | azd-templates chatgpt csharp openai dotnet azure | ai |
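The retrieval-augmented generation pattern described above (cognitive search retrieves relevant chunks, the chat model answers from them with citations) is language-agnostic even though this repo implements it in C# with semantic kernel. Below is only a schematic python outline of the three steps; `search_client` and `chat_model` are stand-ins for whatever sdk clients you actually use, not azure apis.

```python
def answer(question: str, search_client, chat_model, top_k: int = 3) -> str:
    # 1. retrieve: ask the index for the chunks most relevant to the question
    chunks = search_client.search(question, top=top_k)  # stand-in client

    # 2. augment: build a grounded prompt that carries the sources along
    context = "\n\n".join(f"[{c['source']}] {c['text']}" for c in chunks)
    prompt = (
        "Answer using ONLY the sources below and cite them like [source].\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

    # 3. generate: the model answers from the retrieved context
    return chat_model.complete(prompt)  # stand-in client
```

The chunking faq above exists precisely because of step 2: the retrieved text has to fit the model's token budget, so documents are pre-split with a sliding window.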
FreeRTOS_Tutorials | freertos tutorials: all my code samples for freertos tutorials - https coder137 github io docs freertos tutorial. tip me via paypal: https www paypal me niketnaidu; contribute through patreon: https www patreon com niketnaidu | freertos c esp32 esp-idf | os |
awesome-universal-react | awesome universal react awesome https awesome re badge svg https awesome re a collection of awesome universal react and react native frameworks libraries design systems apps and resources what is universal react universal react apps are cross platform react and react native apps using react native for web to use the same api for ui components they run on ios android and web they share the same navigation styles state management and business logic but they run natively on each platform and respect the platform conventions and best practices frameworks frameworks libraries libraries design systems design systems apps apps resources resources starter kits starter kits contributing contributing license license frameworks expo https github com expo expo an open source platform for making universal native apps with react expo runs on android ios and the web next js https github com vercel next js tree canary examples with react native web the react framework react native for web https github com necolas react native web cross platform react ui packages solito https github com nandorojo solito react native next js unified libraries ui components expo image https github com expo expo tree sdk 49 packages expo image a cross platform performant image component for react native and expo zeego https github com nandorojo zeego menus for react native done right burnt https github com nandorojo burnt crunchy toasts for react native react native bottom sheet https github com gorhom react native bottom sheet a performant interactive bottom sheet with fully configurable options infinite scroll list https github com showtime xyz showtime frontend tree staging packages design system infinite scroll list flashlist on native react virtual on web tab view https github com showtime xyz showtime tab view a react native tabview component that support collapse header and custom refresh control powered by reanimated gesture handler universal tooltip https github com alantoa universal tooltip cross platform tooltip component for react native powered by expo modules renderers react native skia https github com shopify react native skia high performance react native graphics using skia react three fiber https github com pmndrs react three fiber a react renderer for three js others react native reanimated https github com software mansion react native reanimated react native s animated library reimplemented moti https github com nandorojo moti the react native web animation library powered by reanimated 3 react native gesture handler https github com software mansion react native gesture handler declarative api exposing platform native touch and gesture system to react native react native mmkv https github com mrousavy react native mmkv the fastest key value storage for react native 30x faster than asyncstorage design systems tamagui https github com tamagui tamagui style react native and web with an optimizing compiler nativewind https github com marklawlor nativewind react native utility first universal design system powered by tailwind css gluestack ui https github com gluestack gluestack ui universal headless components for react native next js react with beautiful optional styling universal ui https github com showtime xyz showtime frontend tree staging packages design system universal ui components using tailwind apps bluesky https github com bluesky social social app open source showtime https github com showtime xyz showtime frontend open source beatgig https beatgig com closed source 
diversified https www diversified fi closed source resources talks is react native next js production ready https www youtube com watch v h1gswxa3qfw fernando rojo building cross platform apps with react native next js https www youtube com watch v 0ffviusoutu fernando rojo zero to 10 million with react native next js https www youtube com watch v 0lnbdrwejta fernando rojo how to build a universal design system https www youtube com watch v sy4brqmrgjc or how to build a universal design system https www youtube com watch v cdl3eh3vuha axel delafosse twitter lite react native and progressive web apps https www youtube com watch v tffn39llo u nicolas gallagher building a universal css in js library https www youtube com watch v eftcek8axtu list plzivgyydqbh5d8y m3bcz68qnsazkgntn index 32 sanket sahu move fast and build things with zod expo next js https www youtube com live njhgs erqbo si qm1kygqjqsfl gij t 13 thorr stevens guides use next js with expo for web https docs expo dev guides using nextjs get started with expo and next js tamagui next js guide https tamagui dev docs guides next js get started with tamagui and next js how to choose cross platform tech https dev to codinsonn why use react native over flutter a recap 57b0 when expo next js makes sense over other strategies learning more about react native as a web developer native for react developers a webinar by lydia hallie and evan bacon https www youtube com watch v wdcfcnxmru learn about the different apis between react dom and react native layout with flexbox https reactnative dev docs flexbox learn about how flexbox is used in react native starter kits solito nativewind example monorepo https github com nandorojo solito tree master example monorepos with tailwind get started with expo and next js solito and nativewind universal medusa https github com bidah universal medusa multi platform e commerce development using react native next js medusa js t4 stack https github com timothymiller t4 app interactive cli to start a full stack typesafe universal expo next js app on cloudflare s edge platform tamagui takeout https tamagui dev takeout takeout is a template repo with a github bot that lets us send prs easily thanks to a pluggable well isolated architecture aetherspace https github com aetherspace green stack starter demo readme expo next js template repo zod for single sources of truth automated storybook docs and write once data resolvers for rest graphql contributing default to github links when possible sort by popularity and relevance no emojis license cc0 1 0 universal license | cross-platform expo nextjs react react-dom react-native react-native-web universal | os |
accessibility-scraper | accessibility scraper: scripts to scrape a site and detect information about the accessibility and other technology it's using. originally written at nhs hack day oxford 2013 for the nhs web site technology audit project team. we aim to detect as many of the following features as possible. done - useful files on site: robots txt, humans txt, sitemaps xml; server: https available, https by default, are we redirected. todo - useful files on site: xml site map, favicon, apple touch icon, etc. (might check for standard ones from wordpress etc.); server: server tech from headers, hosting company from whois of ip address; html: doctype, html tags/attributes in use (html5 etc.), alt tags on images, screen reader hints (aria attributes), use of flash, look for cms signatures (wordpress etc.), cdn for images, ie conditional comments, js use (jquery etc.), use of google translate, welsh/other language content, initial redirect, html validity, rss links, tables for layout; multiple passes: anything randomized each hit, different site if you send a mobile user agent; css: css selectors that are used, media query use, other css use, font sizes, browser detection hacks being used, css valid, css minified; whole content: size of download for initial view, download time; other: whole site scanning, friendly urls (could scan sitemap if available), pdfs to download, word docs to download, any ajax use. setup: requires python with lxml and cssselect - sudo easy install lxml; sudo easy install cssselect | server |
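Since the setup above asks for python with lxml and cssselect, here is a minimal sketch of two of the listed checks: one "done" item (useful files on site: robots txt) and one accessibility item (alt tags on images). The url is a placeholder, and a real crawler would add error handling, timeouts, and politeness delays.

```python
import urllib.request
from lxml import html  # pip install lxml cssselect

url = "https://example.com/"  # placeholder target site

# useful files on site: does robots.txt exist?
try:
    with urllib.request.urlopen(url + "robots.txt") as resp:
        print("robots.txt status:", resp.status)
except Exception as exc:
    print("robots.txt missing:", exc)

# accessibility: how many images lack alt attributes?
with urllib.request.urlopen(url) as resp:
    tree = html.fromstring(resp.read())
imgs = tree.cssselect("img")
missing = [img for img in imgs if not img.get("alt")]
print(f"{len(missing)} of {len(imgs)} images lack alt text")
```

Most of the other todo items (server tech from headers, doctype, cms signatures) follow the same shape: fetch the page once, then run cheap checks over the headers or the parsed tree.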
Hands-On-Web-Development-with-Flask | hands-on web development with flask, published by packt. the code for this repository is under development. this is an example blog application that includes authentication (using db, oauth, and openid) and role-based access control | front_end |
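A role-based access-control check of the kind this row mentions usually reduces to a decorator around a view. The sketch below is a generic flask illustration, not code from the packt book; the `g.current_user` lookup and the role names are placeholders for whatever the real auth layer (db, oauth, openid) provides.

```python
from functools import wraps
from flask import Flask, abort, g

app = Flask(__name__)

def role_required(role):
    # decorator factory: reject requests whose user lacks the given role
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            user = getattr(g, "current_user", None)  # placeholder: set by your auth layer
            if user is None or role not in user.get("roles", ()):
                abort(403)  # anonymous, or authenticated but unauthorized
            return view(*args, **kwargs)
        return wrapped
    return decorator

@app.route("/admin")
@role_required("admin")
def admin_dashboard():
    return "admin-only content"
```

Keeping the check in one decorator means the oauth/openid sign-in code only has to populate the user's roles once, and every protected view stays a one-line policy declaration.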
address_bloc | readme: this readme would normally document whatever steps are necessary to get the application up and running. things you may want to cover: ruby version; system dependencies; configuration; database creation; database initialization; how to run the test suite; services (job queues, cache servers, search engines, etc.); deployment instructions. address bloc | server |
Database | database: related to all kinds of database projects that i made during engineering school | server |
deep-vision-processing | deep vision processing build https github com cansik deep vision processing actions workflows build yml badge svg https github com cansik deep vision processing actions workflows build yml deep computer vision algorithms for processing https processing org the idea behind this library is to provide a simple way to use inference machine learning algorithms for computer vision tasks inside processing mainly portability and easy to use are the primary goals of this library starting with version 0 6 0 cuda inferencing support is built into the library windows linux caution the api is still in development and can change at any time pose readme pose jpg lightweight openpose example install it is recommended to use the contribution manager in the processing app to install the library image https user images githubusercontent com 5220162 118391536 05b1ea80 b635 11eb 9704 2c5b780008df png manual download the latest https github com cansik deep vision processing releases tag v0 8 1 alpha prebuilt version from the release https github com cansik deep vision processing releases sections and install it into your processing library folder usage the base of the library is the deepvision class it is used to download the pretrained models and create new networks java import ch bildspur vision import ch bildspur vision network import ch bildspur vision result deepvision vision new deepvision this usually it makes sense to define the network globally for your sketch and create it in setup the create method downloads the pre trained weights if they are not already existing the network first has to be created and then be setup java yolonetwork network void setup create the network download the pre trained models network vision createyolov3 load the model network setup set network settings optional network setconfidencethreshold 0 2f by default the weights are stored in the library folder of processing if you want to download them to the sketch folder use the following command java download to library folder vision storenetworksglobal download to sketch networks vision storenetworksinsketch each network has a run method which takes an image as a parameter and outputs a result you can just throw in any pimage and the library starts processing it java pimage myimg loadimage hello jpg resultlist objectdetectionresult detections network run myimg please have a look at the specific networks for further information or at the examples examples opencl backend support with version 0 8 1 by default if opencl is enabled it will be used as backend if cuda is enabled too cuda will be preferred it is possible to force the cpu backend by setting the following option java deepvision vision new deepvision this vision setusedefaultbackend true cuda backend support with version 0 6 0 it is possible to download the cuda bundled libraries https github com cansik deep vision processing releases tag v0 8 1 alpha this enables to run most of the dnn s on cuda enabled graphics cards for most networks this is necessary to run them in real time if you have the cuda bundled version installed and run deep vision on a linux or windows with an nvidia graphics card you are able to enable the cuda backend java second parameter enablecudabackend enables cuda deepvision vision new deepvision this true if the second parameter is unset the library will check if a cuda enabled device is available and enables the backend likewise it is possible to check if cuda backend has been enabled by the following method java println is 
cuda enabled vision iscudabackendenabled if cuda is enabled but the hardware does not support it processing will show you a warning and run the networks on the cpu networks here you find a list of implemented networks object detection yolov3 tiny yolov3 tiny prn efficientnetb0 yolov3 yolov3 openimages dataset yolov3 spp spatial pyramid pooling https stackoverflow com a 55014630 1138326 yolov3 yolov4 yolov4 tiny yolov5 https github com ultralytics yolov5 n s m l x yolo fastest xl https github com dog qiuqiu yolo fastest ssdmobilenetv2 handtracking based on ssdmobilenetv2 textboxes ultra light fast generic face detector 1mb rfb 30 fps on cpu ultra light fast generic face detector 1mb slim 40 fps on cpu cascade classifier object segmentation mask r cnn object recognition tesseract lstm keypoint detection facial landmark detection single human pose detection based on lightweight openpose classification mnist cnn fer emotion age net gender net depth estimation midasnet image processing style transfer multiple networks for x2 x3 x4 superresolution the following list shows the networks that are on the list to be implemented already in progress yolo 9k not supported by opencv multi human pose detection currently struggling with the partial affinity fields help textboxes crnn https github com bgshih crnn pixellink https github com zjulearning pixel link object detection locating one or multiple predefined objects in an image is the task of the object detection networks yolo readme yolo jpg yolo example the result of these networks is usually a list of objectdetectionresult java objectdetectionnetwork net vision createyolov3 net setup detect new objects resultlist objectdetectionresult detections net run image for objectdetectionresult detection detections println detection getclassname t detection getconfidence every object detection result contains the following fields getclassid id of the class the object belongs to getclassname name of the class the object belongs to getconfidence how confident the network is on this detection getx x position of the bounding box gety y position of the bounding box getwidth width of the bounding box getheight height of the bounding box yolo paper https pjreddie com darknet yolo yolo a very fast and accurate single shot network the pre trained model is trained on the 80 classes coco dataset there are three different weights models available in the repository yolov3 tiny very fast but trading performance for accuracy yolov3 spp original model using spatial pyramid pooling https stackoverflow com a 55014630 1138326 yolov3 608 yolov4 608 yolov4 tiny 416 yolov5n 640 yolov5s 640 yolov5m 640 yolov5l 640 yolov5x 640 java setup the network yolonetwork net vision createyolov4 yolonetwork net vision createyolov4tiny yolonetwork net vision createyolov3 yolonetwork net vision createyolov3spp yolonetwork net vision createyolov3tiny yolonetwork net vision createyolov5n yolonetwork net vision createyolov5s yolonetwork net vision createyolov5m yolonetwork net vision createyolov5l yolonetwork net vision createyolov5x set confidence threshold net setconfidencethreshold 0 2f basic example yolo examples yolodetectobjects webcam example yolo examples yolowebcamexample realsense example yolo examples realsenseyolodetector yolov5 since version 0 9 0 yolov5 is implemented as well it uses the pre trained models converted into the onnx format at the moment yolov5 does not work well with the implemented nms to adjust the settings of the nms use the following functions set confidence 
threshold net setconfidencethreshold 0 2f set confidence threshold net set 0 2f set the iou threshold overlapping of the bounding boxes net setnmsthreshold 0 4f set how many objects should be taken into account for nms 0 means all objects net settopk 100 ssdmobilenetv2 paper https arxiv org abs 1512 02325 this network is a single shot detector based on the mobilenetv2 architecture it is pre trained on the 90 classes coco dataset and is really fast java ssdmobilenetwork net vision createmobilenetv2 webcam example mobilenet examples mobilenetobjectdetectorwebcam handtracking project https github com victordibia handtracking this is a pre trained ssd mobilenetv2 network to detect hands java ssdmobilenetwork net vision createhanddetector hand detector webcam example examples handdetectorwebcam textboxes paper https arxiv org abs 1611 06779 textboxes is a scene text detector in the wild based on ssd mobilenet it is able to detect text in a scene and return its location java textboxesnetwork net vision createtextboxesdetector ultra light fast generic face detector project https github com linzaer ultra light fast generic face detector 1mb ulfg face detector is a very fast cnn based face detector which reaches up to 40 fps on a macbook pro the face detector comes with four different pre trained weights rfb640 rfb320 more accurate but slower detector slim640 slim320 less accurate but faster detector java ulfgfacedetectionnetwork net vision createulfgfacedetectorrfb640 ulfgfacedetectionnetwork net vision createulfgfacedetectorrfb320 ulfgfacedetectionnetwork net vision createulfgfacedetectorslim640 ulfgfacedetectionnetwork net vision createulfgfacedetectorslim320 the detector detects only the frontal face part and not the complete head most algorithms that run on results of face detections need a rectangular detection shape face detector example examples facedetectorexample face detector webcam example examples facedetectorcnnwebcam cascade classifier paper https citeseerx ist psu edu viewdoc summary doi 10 1 1 110 4868 the cascade classifier detector is based on boosting and very common as pre processor for many classifiers java cascadeclassifiernetwork net vision createcascadefrontalface face detector haar webcam example examples facedetectorhaarwebcam object recognition tbd keypoint detection tbd classification tbd depth estimation midasnet towards robust monocular depth estimation mixing datasets for zero shot cross dataset transfer readme midasnet jpg midasnet image processing tbd pipeline it is possible to create network pipelines to use for example a face detection network and different classifier for each face this is not yet documented so you have to check out the test code humanattributespipelinetest java l36 l41 https github com cansik deep vision processing blob master src test java ch bildspur vision test humanattributespipelinetest java l36 l41 build install jdk 8 because of processing jdk 11 for processing 4 run gradle to build a new release package under release deepvision zip bash windows gradlew bat releaseprocessinglib mac unix gradlew releaseprocessinglib cuda support to build with cuda support enable the property cuda bash gradlew bat releaseprocessinglib pcuda pdisable fatjar this will take several minutes and result in a 5 3 gb folder disable fatjar prevents form creating a fatjar which would be too big to be zipped platform specific to build only on a specific platform use the property javacppplatform bash builds with support for all platforms gradlew bat releaseprocessinglib 
pjavacppplatform linux x86 64 macosx x86 64 macosx arm64 windows x86 64 linux armhf linux arm64. faq: why is xy network not implemented? please open an issue if you have a cool network that could be implemented, or just contribute a pr. why is it not possible to train my own network? the idea was to give artists and makers a simple tool to run networks inside of processing; to train a network needs a lot of specific knowledge about neural networks (cnn in specific). of course it is possible to train your own yolo or ssdmobilenet and use the weights with this library; check out the following example for detecting face masks: cansik yolo mask detection https github com cansik yolo mask detection. is it compatible with greg borenstein's opencv for processing (https github com atduskgreg opencv processing)? no, opencv for processing uses the direct opencv java bindings instead of javacv; please only include either one library, because processing gets confused if two opencv packages are imported. about: maintained by cansik (https github com cansik) with the help of the following dependencies: bytedeco javacv https github com bytedeco javacv, atduskgreg opencv processing https github com atduskgreg opencv processing. stock images from the following people have been used: yoga jpg by yogendra singh from pexels; office jpg by fauxels (https www pexels com fauxels) from pexels; faces png by shvetsa (https www pexels com shvetsa) from pexels; hand jpg by thought catalog on unsplash; sport jpg by john torcasio on unsplash; sticker jpg by claudio schwarz purzlbaum on unsplash; children jpg by sandeep kr yadav (https unsplash com fiftymm) | deep-neural-networks computer-vision processing pose-estimation machine-learning classification inference-engine cuda-support | ai |
Mobile-Application-Development | mobile application development: course material for mobile application development, integrated digital media, tandon school of engineering, nyu. class 1a - modern javascript: https github com borg mobile application development blob master classes class 201a 20 20modern 20javascript md; class 1b - modern javascript: https github com borg mobile application development blob master classes class 201b 20 20modern 20javascript md; class 2a - git basics: https github com borg mobile application development blob master classes class 202a 20 20git 20basics md; class 2b - react and jsx: https github com borg mobile application development blob master classes class 202b 20 20react 20and 20jsx md; class 3a - react native: https github com borg mobile application development blob master classes class 203a 20 20react 20native md; class 3b - react native: https github com borg mobile application development blob master classes class 203b 20 20react 20native md; class 4a - navigation and structure: https github com borg mobile application development blob master classes class 204a 20 20navigation 20and 20structure md; class 5a - global state, local storage: https github com borg mobile application development blob master classes class 205 20 20global 20state md; class 5b - hooks: https github com borg mobile application development blob master classes class 205b 20 20hooks md; class 6 - midterm presentation: https github com borg mobile application development blob master classes class 206 20 20midterm 20presentation 20discussion md; class 7a - firebase app with google sign-in: https github com borg mobile application development blob master classes class 207a 20 20firebase 20app md; class 8a - firebase, storing data and images: https github com borg mobile application development blob master classes class 208a 20 20firebase 20continued md. weekly schedule: part i - introduction to mobile development with react native: 01 24 22 / 01 26 22 class 1 - modern javascript; 01 31 22 / 02 02 22 class 2 - understanding react; 02 07 22 / 02 09 22 class 3 - react beyond the browser, react native. part ii - application architecture: 02 14 22 / 02 16 22 class 4 - component-driven application architecture for mobile; 02 21 22 no class; 02 23 22 / 02 28 22 class 5 - global state, local storage, hooks; 03 02 22 / 03 07 22 class 6 - midterm presentation lab, ux libraries; 03 09 22 dev support; 03 14 22 - 03 20 22 spring break; 03 21 22 / 03 23 22 midterm presentations. part iii - advanced concepts: 03 28 22 / 03 30 22 class 7 - user authentication; 04 04 22 / 04 06 22 class 8 - database integration; 04 11 22 / 04 13 22 class 9 - animations; 04 18 22 / 04 20 22 class 10 - looking under the hood in xcode. part iv - building, packaging, shipping: 04 25 22 / 04 27 22 class 11 - building and releasing mobile apps; 05 02 22 / 05 04 22 class 12 - final project proposal review; 05 09 22 final project presentations | front_end |
Eyes-Position-Estimator-Mediapipe | eyes position estimator mediapipe: this is an eye-tracking project. here we will use computer vision techniques to extract the eyes; mediapipe python modules will provide the landmarks. eyes position estimation demo (without voice): https user images githubusercontent com 66181793 134312531 d8f6b068 13d9 4590 a1d2 232ea4cf5681 mp4. iris tracking mediapipe opencv python video tutorial: https youtu be dnkavdeqh y (demo: https user images githubusercontent com 66181793 150673670 7b12506f 67d6 4540 96f7 ea6233c01bd6 mp4). todo: eyes tracking part 1 (video tutorial: https youtu be jfobb6arc4) - [x] detecting face landmarks with mediapipe; [x] detecting eyes landmarks; [x] draw eyes, eyebrows, lips and face oval with transparent shapes. eyes tracking part 2 (video tutorial: https youtu be xijd43rbi 4) - [x] detecting the blinks; [x] counting blinks. eyes tracking part 3 (video tutorial: https youtu be y mctkv41rk) - [x] extracting eyes from the frame using masking techniques; [x] thresholding eyes to get black and white pixels separated; [x] dividing each eye into three (3) pieces: right piece, center piece, left piece; [x] counting the black pixels in each piece and estimating position. eyes tracking part 4 (video tutorial: https youtu be oagu20kurqw) - [x] voice position indicator using pygame. blog post: https aiphile blogspot com 2021 08 eyes tracking mediapipe part1 html. face mesh points (image: mesh image png); face mesh map points (image: mesh map jpg). if you found this helpful, please star it. you can watch my video tutorials on computer vision topics on my youtube channel: https www youtube com c aiphile. i am available for paid work on fiverr: https www fiverr com aiphile. join me on social media: youtube (https www youtube com c aiphile), github (https github com asadullah dal17), medium (https medium com asadullah92c), fiverr (https www fiverr com aiphile), instagram (https www instagram com aiphile17) | ai |
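Part 3 above estimates the eye position by thresholding the eye crop and counting black pixels in three vertical pieces. Here is a stripped-down numpy/opencv sketch of just that counting step; the grayscale eye crop and the threshold value are assumed inputs, and the right/center/left order mirrors the row's own naming for a webcam image.

```python
import cv2
import numpy as np

def eye_position(eye_gray, thresh=70):
    # thresholding: iris/pupil pixels go black (0), sclera goes white (255)
    _, bw = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY)

    # dividing the eye into three (3) pieces: right, center, left
    w = bw.shape[1] // 3
    pieces = (bw[:, :w], bw[:, w:2 * w], bw[:, 2 * w:])

    # counting the black pixels in each piece; the iris sits in the darkest one
    black = [int(np.sum(p == 0)) for p in pieces]
    return ("right", "center", "left")[int(np.argmax(black))], black
```

Because the pupil is the darkest region, whichever third holds the most black pixels after thresholding is taken as the gaze direction; a fixed threshold is the simplest choice, though lighting changes usually call for calibrating it per session.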
|
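a minimal python sketch of the part-3 technique the eyes-position-estimator entry above describes (threshold the eye crop, divide it into three horizontal pieces, compare black-pixel counts). this is illustrative only, not code from the repository; the threshold value and the toy input are assumptions.

```python
import numpy as np

def estimate_eye_position(eye_gray, threshold=70):
    """estimate gaze direction from a grayscale eye crop (2d numpy array)."""
    # thresholding: pixels darker than the threshold count as iris/pupil (black)
    binary = (eye_gray < threshold).astype(np.uint8)
    h, w = binary.shape
    piece = w // 3
    # divide the eye into three pieces and count the black pixels in each
    right = int(binary[:, :piece].sum())
    center = int(binary[:, piece:2 * piece].sum())
    left = int(binary[:, 2 * piece:].sum())
    # the piece holding most of the dark (iris) pixels gives the position
    position = max((right, "right"), (center, "center"), (left, "left"))[1]
    return position, (right, center, left)

if __name__ == "__main__":
    fake_eye = np.full((20, 30), 255, dtype=np.uint8)  # bright sclera everywhere
    fake_eye[5:15, 2:8] = 10                           # dark iris blob in the first piece
    print(estimate_eye_position(fake_eye))             # -> ('right', (60, 0, 0))
```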
webdev-podcasts | webdev podcasts if you have a podcast that you listen to that you think should be included open a pull request or issue my personal top 5 1 shoptalk 2 soft skills engineering 3 javascript jabber 4 whiskey web and whatnot 5 syntax all shows adventures in angular br https devchat tv adventures in angular br a weekly show dedicated to the angularjs framework the bike shed br http bikeshed fm br on the bike shed hosts derek prior sean griffin laila winner and guests discuss their development experience and challenges with ruby rails javascript and whatever else is drawing their attention admiration or ire this week boagworld br https boagworld com the call kent podcast br https kentcdodds com calls br a great little call in podcast usually between 8 12 minutes where kent c dodds answers questions about javascript and the web the changelog br http thechangelog com br the changelog is a collection of several podcasts the main podcast the changelog is one of the best podcasts out there it s a weekly podcast that covers the intersection of software development and open source it s hosted by adam stacoviak and jerod santo codenewbie br http www codenewbie org podcast br stories from people on their coding journey codepen radio br https blog codepen io radio br a podcast all about what it s like running a small web software business the good the bad and the ugly this show is currently on hiatus codingblocks br https www codingblocks net category podcast br coding blocks is the podcast and website for learning how to become a better software developer compressedfm br http compressed fm br a weekly podcast about web design and development from james q quick and amy dutton corecursive the stories behind the code br https corecursive com br interviews with the people behind the code each episode someone shares a fascinating story behind a piece of software being built dev interrupted br https devinterrupted com podcasts developer tea br https developertea com br a great podcast for shorter listening times a podcast for developers designed to fit inside your tea break engineering unblocked br https www unblocked fm br hosted by rebecca murphey engineering unlocked is a podcast dedicated to conversations with software leaders navigating the challenges of scale complexity and growth the freecodecamp podcast br https www freecodecamp org news tag podcast br stories about software development and the people who code hosted by quincy larson front end happy hour br http frontendhappyhour com br a conversation of web development topics by seniors from big companies netflix linkedin windows evernote etc front end fire br https podcasts apple com us podcast front end fire id1700169000 br a weekly podcast hosted by tj vantoll paige niedringhaus jack herrington the way to stay up to date with the latest and greatest in the front end world future of coding br https futureofcoding org br conversations with the people shaping the future of computing hosted by jimmy miller and ivan reese giant robots smashing into other giant robots br http giantrobots fm hanselminutes br http hanselminutes com js party br https changelog com jsparty br a community celebration of javascript and the web hosts mikeal rogers rachel white and alex sexton javascript jabber br http javascriptjabber com br a technical discussion of javascript related topics things like node js web frameworks json coffeescript event and object models and much more masters of scale br https mastersofscale com br how do companies grow from zero 
to a gazillion legendary silicon valley investor entrepreneur reid hoffman tests his theories with famous founders react round up br https topenddevs com podcasts react round up br a weekly discussion of the react ecosystem hosted by jack herrington paige niedringhaus and tj vantoll the react show br https thereactshow com br a podcast focused on react programming in general and the intersection of programming and the rest of the world rework podcast br https 37signals com podcast br a podcast from 37signals the makers of basecamp about a better way to work and run your business they bring you stories and unconventional wisdom from basecamp s co founders and other business owners shoptalk br http shoptalkshow com br a live web design and development podcast by chris coyier and dave rupert soft skills engineering br https softskills audio br a weekly advice podcast for software developers this podcast is hosted by dave smith and jamison dance who answer questions about all the non technical things that go along with being a software developer syntax br https syntax fm br a podcast hosted by wes bos and scott tolinski the tech leader s playbook br https podcasts apple com us podcast the tech leaders playbook id1690263628 br this podcast is for new and aspiring technology leaders who want to learn the secrets of great leaders and how they built high performing teams thoughtworks br https www thoughtworks com en us insights podcasts br a community of passionate individuals whose purpose is to revolutionize software design creation and delivery while advocating for positive social change whiskey web and whatnot br https whiskeywebandwhatnot fm br whiskey web and whatnot is an informal whiskey fueled fireside chat with your favorite web devs we discuss all things web development including javascript typescript emberjs react astro solidjs css html web3 and more but we also get to know the human side of developers and their hobbies outside of work hosted by robbiethewagner and charles william carpenter iii of ship shape https shipshape io | front_end
|
Portfolio | introduction: 10 years in it, and a firm believer in the growth mindset and the elastic mind. i am currently learning new skills through rice university's cyber security certificate program, as well as earning a bachelor's degree in cloud computing from western governors university online. my interests include cloud computing (aws, azure), threat analysis, penetration testing, and network automation (software defined networks). things i do for fun: scripting, coding, and doing ctf (capture the flag) cyber security challenges. ctf's i have worked on: 1 picoctf 2 hacktivities. my mentality: assume the system has already been breached; defend the data and ensure confidentiality, integrity, and accessibility. disclaimer: all information on this website is for educational purposes only. any action taken using the information on this website is strictly at your own risk; the user understands that i am not liable for any losses and/or damages that may occur with the use of the information on this website. | cloud
|
django-inventory-post | django inventory django based inventory and asset control screenshot http img814 imageshack us img814 5088 screenshot1fz png screenshot2 http img443 imageshack us img443 1486 screenshot2wu png features object oriented approach to asset and inventory management csv import utility per asset or per item type photos and information match suppliers to item types site wide search capability user defined states broken in repairs etc for assets an item can be defined as a supply to another item assign assets to one or more individuals user photos group assets inventories or user per locations purchase request and purchase orders requirements django a high level python web framework that encourages rapid development and clean pragmatic design pil the python imaging library django pagination django photologue powerful image management for the django web framework or execute pip install r requirements production txt to install the dependencies automatically installation check the install file in the docs folder or if you are brave copy the file install sh https github com rosarior django inventory blob master misc install sh file to your computer and execute it this script has only been tested under ubuntu maverick amd64 w apache2 bash revise it before running it author roberto rosario twitter http twitter com siloraptor e mail roberto rosario gonzalez at gmail | os |
|
EmbeddedSystemTeachingModels | embeddedsystemteachingmodels: models built to demystify complex embedded system design concepts, specifically for electrical, electronics and instrumentation engineering students. summary: hardware models built for the following courses: mcoa506l/mcoa506p real time embedded systems, eee4020 embedded system design. these prototype models are integrated into the teaching of the embedded system design course. the prototypes are designed to drive students to a deeper understanding and integration of the diverse theoretical concepts that often come from different disciplines, such as digital/analog electronics, sensors, control systems, communication and programming. rather than proposing an experiment for a particular course within an embedded system engineering curriculum, these prototypes show how an experiment can be tailored to the needs and diverse backgrounds of both undergraduate and graduate students. image https github com kskumaree embeddedsystemteachingmodels assets 33861944 07c7743d b643 48b4 9453 fd45725d604d | os
|
react-dsfr | p align center img src https github com codegouvfr react dsfr releases download assets dsfr react repo card png p p align center i a href https www systeme de design gouv fr french state design system a react toolkit i br br a href https github com codegouvfr react dsfr actions img src https github com codegouvfr react dsfr workflows ci badge svg branch main a a href https www npmjs com package codegouvfr react dsfr img src https img shields io npm v codegouvfr react dsfr logo npm a a href https bundlephobia com package codegouvfr react dsfr img src https img shields io bundlephobia minzip codegouvfr react dsfr a a href https github com codegouvfr react dsfr blob main license img src https img shields io npm l codegouvfr react dsfr a p p align center a href https components react dsfr codegouv studio components documentation a a href https react dsfr codegouv studio guides a a href https stackblitz com edit nextjs j2wba3 file pages index tsx playground a p version française du readme ici: https github com codegouvfr react dsfr blob main readme fr md. warning: this design system is only meant to be used for official french public service websites; its main purpose is to make it easy to identify governmental websites for citizens. see terms (https www systeme de design gouv fr utilisation et organisation perimetre d application). this module is an advanced toolkit that leverages gouvfr dsfr (https github com gouvernementfr dsfr), the vanilla js/css implementation of the dsfr. a href https youtu be 5q88jgxuay4 img width 1712 alt image src https user images githubusercontent com 6702424 224423044 c1823249 eab6 4844 af43 d059c01416af png a while this module is written in typescript, using typescript in your application is optional, but recommended as it comes with outstanding benefits to both you and your codebase: x fully typesafe, well documented api x always up to date with the latest dsfr evolutions; code and types generated from gouvfr dsfr (https www npmjs com package gouvfr dsfr dist dsfr css) x exactly the same look and feel as with gouvfr dsfr (https www npmjs com package gouvfr dsfr) x no white flash when reloading in an ssr setup (https github com codegouvfr codegouvfr react dsfr issues 2 issuecomment 1257263480) x most components are server component ready; the others are labeled with use client x perfect integration with all major react frameworks: next js (pagesdir and appdir), create react app, vue (https guides react dsfr fr) x almost all the components (https www systeme de design gouv fr elements d interface) are implemented (https components react dsfr codegouv studio) x tree shakable distribution: cherry pick the components you import, it's not all in one big js bundle x optional integration with mui (https mui com): if you use mui components, they will be automatically adapted to look like dsfr components (https www systeme de design gouv fr elements d interface); see documentation (https guides react dsfr fr mui integration) x enables css in js by providing a usecolors hook that exposes the correct color options and decisions for the currently enabled color scheme x opt-in i18n: built-in text can be displayed in multiple languages, and users can provide extra translations x support for routing libraries (https guides react dsfr fr routing) like react router. need ready-to-use dsfr compliant login and register pages? check out keycloak theme dsfr (https github com codegouvfr keycloak theme dsfr). p align center a href https guides react dsfr fr get started a p governance: this module is a product of etalab's free and open source software pôle (https code gouv fr en mission). this project is co-maintained by public servants from various french administrations: joseph garrone (garronej), julien bouquillon (revolunet, dnum des ministères sociaux), dylan decrulle (ddecrulle, insee).

development:

```bash
git clone https://github.com/codegouvfr/react-dsfr
cd react-dsfr
yarn

# starting storybook
yarn storybook

# starting the test apps
yarn start-cra            # for testing in a create react app setup
yarn start-vite           # for testing in a vite setup
yarn start-next-pagesdir  # for testing in a next.js 13 pagesdir setup (the default setup)
yarn start-next-appdir    # for testing in a next.js 13 appdir setup

# run all unit tests (test/runtime)
yarn test
# run only test/runtime/cssVariable.test.ts, for example (debugging while unit testing)
npx vitest -t "resolution of css variables"
```

want to contribute? thank you! see the contribution guide (https github com codegouvfr react dsfr blob main contributing md): how to publish a new version on npm, how to release a beta version. this repo was bootstrapped from garronej ts ci (https github com garronej ts ci); have a look at the documentation of this starter for understanding the lifecycle of this repo. use cases: a few projects that use codegouvfr react dsfr: https code gouv fr sill, https immersion facile beta gouv fr, https egapro travail gouv fr, https maisondelautisme gouv fr, https refugies info fr, https www mediateur public fr, https signal conso gouv fr, https observatoire numerique gouv fr, https github com baseadressenationale adresse data gouv fr, https github com disic observatoire numerique gouv fr, https github com disic monfranceconnect, https github com inseefr lunatic dsfr, https github com eig6 geocommuns lidarviz front, https github com eig6 geocommuns geocommuns core, https github com socialgouv bpco site, https github com eig6 artificia predictia front, https github com baseadressenationale bal admin, https github com etalab sill web, https github com inclusion numerique mediature, https territoiresentransitions fr. maybe keep in mind that the project has been released recently, so it's only natural that there are only a few projects in production using it. | dsfr react typescript design-system storybook | os
async-webinar | usage: install leiningen (http leiningen org), then run the following from the project directory:

```shell
lein cljsbuild auto webinar
```

this will start the auto build process. open index.html in your favorite browser to try out the examples. you can edit src/webinar/core.cljs in your preferred text editor and refresh the browser to view your changes. | front_end
|
cvi | e gov common visual identity cvi this repository includes a shared css library preprocessed with sass along with html examples based on it it also houses a user interface ui kit in angular and a typed tree shakable icon library the css has been architectured to be independent of any specific javascript framework this makes it possible for contributors to easily integrate component libraries from other frameworks such as react or vue js the angular ui library has also been constructed free of any dependencies on design systems frameworks like bootstrap or google s material this ensures a reduction in dependence on third party vendors quick start documentation the css framework and angular component library utilize storybook https storybook js org for comprehensive documentation and seamless usage you can also play with the library in a live app like environment at codesandbox io https codesandbox io p github ekateriinal angular cvi starter to get started you can access the installation instructions and documentation on e gov cvi s storybook https e gov github io cvi https e gov github io cvi the styles in this repository take inspiration from the initial veera design system which you can find here https www figma com file nncv5krqdrks8mock1zbqu veera design system please note e gov cvi is not veera we ve taken inspiration from their work but our approach and implementation are unique to discuss any issues suggestions or questions join our public cvi signal group https signal group cjqkiii854res5vfiq8oqw5fwms2 fy8cjtem1rsji9fssplehc4dnwxgkcfqf34ymqjjdi installation instructions follow these steps to install and integrate our library 1 add the public koodivaramu registry to your project by following the instructions provided here https koodivaramu eesti ee e gov cvi packages 385 in short you will need to update your npmrc file be sure to choose the second radio button in the koodivaramu ui project level registry setup 2 install the necessary package to your project css framework use the command npm i save egov cvi styles angular components use the command npm i save egov cvi ng react components use the command npm i save egov cvi react icons use the command npm i save egov cvi icons 3 after installation import the dependencies into your project you ll find dedicated instructions for this in the documentation for the css framework https e gov github io cvi path docs styles how to install how to install angular components https e gov github io cvi path docs angular installation installation and icons https e gov github io cvi path docs icons how to use page packages and artifacts repository the built packages are published to the public koodivaramu repository from where you can download and add them to your application you can access it via the following link koodivaramu repository https koodivaramu eesti ee e gov cvi packages in addition the storybook docker image is also published to the koodivaramu repository storybook docker image https koodivaramu eesti ee e gov cvi container registry contributing if you want to contribute to the common visual identity component library follow these steps 1 create a fork of the repository 2 make changes in your own fork 3 create a pull request back to this repository feel free to use the library sandbox available at codesandbox io https codesandbox io p github ekateriinal angular cvi starter to verify issues or play with existing components for more detailed instructions follow the link below github contribution guide https docs github com en get 
started quickstart contributing to projects. adding/updating packages: when adding or upgrading peer dependencies, ensure that they are also updated in the following files: libs/<lib-name>/package.json (for peerDependencies) and libs/<lib-name>/ng-package.json (for dependencies to be packaged with the library). commit message format: the project follows the conventional commit format (https www conventionalcommits org) convention and uses the semver nx plugin (https github com jscutlery semver) for versioning, so be sure to use the appropriate commit messages. code style: angular selectors use the cvi + component selector + local selector name naming convention for content projection selectors (https angular io guide content projection). for example, to introduce a content selector that inserts custom content before a title in a panel (e.g. a fictional panelcomponent with the cvi-ng-panel selector), appropriate code would be <ng-content select="[cvi-ng-panel-before-title]"></ng-content>. running the storybook: run the following command to build the documentation and start the storybook locally: npm run storybook. running storybook locally in docker: to run storybook locally using docker, follow these steps: 1 build the docker image with the following command: docker build -f libs/storybook/Dockerfile -t cvi-storybook . 2 start the storybook container with the following command: docker compose up storybook 3 stop the container with the following command: docker compose down 4 open the storybook interface at http://localhost:3005 in your web browser. understand your workspace: to see a diagram of the dependencies of your projects, run the following command: nx graph. this will provide you with an overview of your workspace and how the different projects and libraries are interconnected. using the nx build system: to learn more about the nx build system, check out the following resources: nx documentation (https nx dev), getting started with nx (https nx dev getting started intro), concepts (https nx dev concepts). running cypress tests against storybook: to run cypress tests against storybook, make sure that storybook is up and running (see the previous section), then run the following command: npm run cypress-ui. this will open up the cypress visual testing tool; select e2e testing to view all component tests. note that cypress tests use angular components in iframe windows, which is why storybook needs to be up and running. running chromatic tests: the project uses automatic screenshot testing via chromatic. 1 to run the tests, use the following command in your terminal: npm run chromatic 2 open the resulting url and review the visual changes. chromatic ci also runs on every push; the action always passes even when visual changes are detected, except for cases when a story is broken. contributors and reviewers should check the results of the action and accept or decline them in the chromatic ui by following a link in the build log. publishing to chromatic also makes it possible to share a storybook link for a specific branch (even a non-pushed one, if the local npm command is used) in this format: https branch 6373995e3f280e239470296d chromatic com. thanks: a href https www chromatic com img src https user images githubusercontent com 321738 84662277 e3db4f80 af1b 11ea 88f5 91d67a5e59f6 png width 153 height 30 alt chromatic a thanks to chromatic (https www chromatic com) for providing the visual testing platform that helps us review ui changes and catch visual regressions. | angular components design-system estonia | os
awesome-react-design-systems | awesome react design systems awesome https awesome re badge flat svg https awesome re a design system is a collection of reusable components guided by clear standards that can be assembled together to build any number of applications will fanguy a comprehensive guide to design systems https www invisionapp com inside design guide to design systems a curated list of design systems made up of reusable react https reactjs org components contents react design systems react design systems react native design systems react native design systems misc misc hr react design systems ant design https ant design github https github com ant design ant design ant financial a design system with values of nature and determinacy for better user experience of enterprise applications atlaskit http atlaskit atlassian com bitbucket https bitbucket org atlassian atlaskit mk 2 atlassian atlassian s official ui library built according to the atlassian design guidelines backpack https backpack github io github https github com skyscanner backpack skyscanner backpack is a collection of design resources reusable components and guidelines for creating skyscanner products blueprint http blueprintjs com github https github com palantir blueprint palantir a react based ui toolkit for the web canvas https canvas hubspot com github https github com hubspot canvas hubspot hubspot canvas is the design system that we at hubspot use to build our products carbon design system http carbondesignsystem com github https github com carbon design system carbon components ibm the carbon design system is integrating the new ibm design ethos and language cf ui https cloudflare github io cf ui github https github com cloudflare cf ui cloudflare a set of packages used to build uis at cloudflare using projects such as react garden https garden zendesk com react components github https github com zendeskgarden react components zendesk garden our curated collection of ui goodness grommet http grommet io github https github com grommet grommet hewlett packard grommet provides all the guidance components and design resources you need to take your ideas from concept to a real application hack club design system https design hackclub com github https github com hackclub design system hack club a collection of react components designed for speed consistency and best practices lightning design system https react lightningdesignsystem com github https github com salesforce design system react salesforce a react implementation of the lightning design system material ui https www material ui com github https github com mui org material ui google react components that implement google s material design mineral ui https mineral ui com github https github com mineral ui mineral ui ca technologies an open source design system created to simplify building appealing modern software experiences mongodb design http mongodb design github https github com mongodb design mongodb design for mission critical applications pivotal ui https styleguide pivotal io github https github com pivotal cf pivotal ui pivotal a collection of react components that are styled for the pivotal brand plasma http plasma guide github https github com wework plasma wework a design system for building internal business tools at wework pluralsight design system https design system pluralsight com github https github com pluralsight design system pluralsight the ui building blocks for creating a cohesive design across pluralsight products polaris https 
polaris shopify com github https github com shopify polaris shopify our design system helps us work together to build a great experience for all of shopify s merchants priceline one https pricelinelabs github io design system github https github com pricelinelabs design system priceline a design system focused on speed consistency and best practices ring ui http www jetbrains org ring ui index html github https github com jetbrains ring ui jetbrains this collection of ui components aims to provide all of the necessary building blocks for web based products built inside jetbrains seek style guide https seek oss github io seek style guide github https github com seek oss seek style guide seek living style guide for seek powered by react webpack css modules and less sparebank 1 s design system https design sparebank1 no github https github com sparebank1 designsystem sparebank a common language across disciplines to ensure consistent design in our solutions spark design system https sparkdesignsystem com github https github com sparkdesignsystem spark design system quicken loans a system of patterns and components used to create the user interface for the quicken loans family of fintech products swarm design system https meetup github io swarm design system github https github com meetup swarm design system meetup a set of ui components ready for use by designers and engineers to quickly ship new products and features uniform http uniform hudl com hudl the system includes components visual guidelines language and additional resources to help you build more cohesive product interfaces react native design systems react native elements https react native training github io react native elements github https github com react native training react native elements react native training cross platform react native ui toolkit nativebase https nativebase io github https github com geekyants nativebase geekyants essential cross platform ui components for react native vue native shoutem ui https shoutem github io docs ui toolkit introduction github https github com shoutem ui shoutem shoutem ui toolkit enables you to build professionally looking react native apps with ease teaset github https github com rilyu teaset rilyu a ui library for react native misc react material design icons https jxnblk com rmdi github https github com jxnblk rmdi brent jackson built with pixo styled components and styled system | design design-systems react react-native awesome-list | os |
Hospital-management-system | hospital management system hospital management system | server |
|
map-reduce-bloom-filter | map reduce bloom filter: project for the cloud computing course at the university of pisa (msc in computer engineering and in artificial intelligence and data engineering). the aim of this project is to design a mapreduce application to build and test bloom filters, a space-efficient probabilistic structure used for membership testing, over the movies of the imdb dataset. the mapreduce application has to be implemented both with hadoop 3.1.3 (using java 1.8) and spark 2.4.4 (using python 3.6). the configured cluster was composed of four ubuntu vms deployed on the cloud infrastructure of the university of pisa. credits to riccardo sagramoni, veronica torraca, fabiano pilia and emanuele tinghi. structure of the github repository: dataset (imdb dataset file), docs (project specification and report), hadoop-bloom-filter (java code for the hadoop application), sh-scripts (linux scripts to automate the execution of the applications), spark-bloom-filter (python code for the spark application), util (python scripts that set the dataset up for the applications). project structure on the linux virtual machines: 0-launch-partition-dataset.sh, 1-launch-linecount.sh, 2a-launch-hadoop-bloomfilter.sh, 2b-launch-hadoop-tester.sh, 3a-launch-spark-bloomfilter.sh, 3b-launch-spark-tester.sh, data/imdb.tsv, hadoop-bloom-filter-1.0-snapshot.jar, bloom-filter.properties, spark (bloomfilters_builder.py, bloomfilters_tester.py, bloomfilters_util.py), util (count_number_of_keys.py, split_dataset.py) | cloud
|
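for readers unfamiliar with the structure the map-reduce-bloom-filter entry above builds, here is a plain-python bloom filter sketch. it is illustrative only, not the repository's hadoop or spark code; the sizing formulas m = -n*ln(p)/(ln 2)^2 and k = (m/n)*ln 2 are the standard textbook choices, and the salted-md5 hashing scheme is an assumption for the example.

```python
import hashlib
import math

class BloomFilter:
    def __init__(self, n_items, fp_rate=0.01):
        # standard sizing: m bits and k hash functions for a target false-positive rate
        self.m = math.ceil(-n_items * math.log(fp_rate) / math.log(2) ** 2)
        self.k = max(1, round(self.m / n_items * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, key):
        # derive k bit positions from salted md5 digests of the key
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

bf = BloomFilter(n_items=1000, fp_rate=0.01)
bf.add("tt0111161")        # an imdb title id
print("tt0111161" in bf)   # always True: a bloom filter has no false negatives
print("tt9999999" in bf)   # almost surely False; rarely a false positive
```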
Optic-Disc-Segmentation-Based-on-K-means-for-Glaucoma-Detection | optic disc segmentation based on k-means for glaucoma detection: this repository accompanies a paper for the course information processing technology on internet of things; the paper's title is optic disc segmentation based on k-means for glaucoma detection. there are three python files in this repository: 1 kmeans.py runs the k-means algorithm; 2 crop.py preprocesses the image to a fixed size, that is 1000x1000; 3 briandcontr.py adjusts the brightness and contrast. for access to the images used in the paper, please contact yufan wang at yeahyuki@163.com; because the ownership of these images is not mine, i have to get permission to send them to others. thank you for your understanding and cooperation. | server
|
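a minimal sketch of k-means based segmentation in the spirit of the kmeans.py step described in the entry above, assuming opencv and numpy are installed. it is not the paper's actual pipeline; the choice of k=3 and the "brightest cluster = optic disc" heuristic are assumptions for illustration, as is the input file name.

```python
import cv2
import numpy as np

def segment_optic_disc(image_bgr, k=3):
    """cluster pixel colors with k-means and keep the brightest cluster as a mask."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    # the optic disc is typically the brightest region of a fundus image
    brightest = int(np.argmax(centers.sum(axis=1)))
    mask = (labels.reshape(image_bgr.shape[:2]) == brightest).astype(np.uint8) * 255
    return mask

if __name__ == "__main__":
    img = cv2.imread("fundus.png")  # hypothetical pre-cropped 1000x1000 image
    if img is not None:
        cv2.imwrite("disc_mask.png", segment_optic_disc(img))
```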
dtc-de-zc-week4-redshift | dtc de zc week4 redshift repository to use dbt cloud with redshift for week 4 of the data engineering zoomcamp | cloud |
|
TTK4155-Byggern-Lab | ttk4155 byggern lab: this repository is the documented software and hardware designed by håken sivesindtajet lunn and alexander navasca skinstad in ttk4155 embedded and industrial computer systems design (the byggern lab) at ntnu. node 1 (atmega162) firmware contains: usart driver with printf, parallel bus driver for sram, adc and oled controller, graphical user interface (gui), spi driver, canbus driver (external mcp2515 can controller). node 2 (atsam3x8e) firmware contains: usart driver and printf (code provided by ntnu), canbus driver (code provided by ntnu, with small adjustments by us), pwm driver, timers, pi controlled motor driver. demo: https youtu be sb7sin5 m7u. please feel free to use and learn from our project. | os
|
lens | prisma lens: prisma lens is a design system, guidelines and component library for the family of prisma projects and products. both this document and the actual artefacts are a living system that aims to evolve incrementally and often. principles: set up for change (changes to the system should never be dreaded, and small incremental changes should never be expensive to make; the properties built with the system should support this philosophy); highly atomic (the individual components of the system should be as simple and generic as possible, but not more; there should never be a component with only one usage site in the system); visually low level (the system should be recognizable at the level of typography and spacing, to make sure we have flexible theming options); code as source of truth (it's more likely that there's going to be some sort of syncing happening from github/react elements to figma than the other way around; what's shipped, i.e. what's in this repository, is the system and also the deliverable, and it is on the designers to make tools to support that easier). local development: prisma lens uses storybook (https storybook js org) as a preview mechanism for local development: npm run storybook. artifacts: the base theme should loosely follow the theme spec (https system ui com theme) with the following elements: color primitives (spectrums), usage colors, typography, spacing; individual websites and products should extend this theme with overrides, ideally at the level of usage values and not primitives. iconography: feather (https feathericons com) icons for now. react components: built with styled-components and styled-normalize. | os
|
bio-nlp | basic overview: natural language processing tasks focused on the biological domain, available from a restful api. drugs: retrieves the drugs in a text, along with their anatomical therapeutic chemical classification (atc) code. web page available at https librairy github io bio nlp. natural language processing api available at http librairy linkeddata es bio nlp. this is an example of a curl query:

```sh
curl -d '{"text": "however clinical trials investigating the efficacy of several agents including remdesivir and chloroquine are underway in china"}' \
     -H "Content-Type: application/json" \
     -X POST https://librairy.linkeddata.es/bio-nlp/drugs
```

and the answer is:

```json
[
  {"name": "remdesivir", "atc-parent": "J05AB"},
  {"name": "chloroquine", "atc-code": "P01BA01", "atc-parent": "P01BA", "cui": "C0008269", "level": 5}
]
```

biomedical literature api available at http librairy linkeddata es bio api. some examples: most frequent drugs used in the experiments: https://librairy.linkeddata.es/bio/api/drugs; grouped by therapeutic group (atc-4): https://librairy.linkeddata.es/bio/api/drugs?level=4; considered together with lopinavir: https://librairy.linkeddata.es/bio/api/drugs?keywords=lopinavir; used as viral vaccines: https://librairy.linkeddata.es/bio/api/drugs?keywords=viral%20vaccines&level=5; used to handle fever: https://librairy.linkeddata.es/bio/api/drugs?keywords=fever&level=5; most frequent diseases considered in the corpus: https://librairy.linkeddata.es/bio/api/diseases; grouped by symptoms: https://librairy.linkeddata.es/bio/api/diseases?level=2; treated by chloroquine: https://librairy.linkeddata.es/bio/api/diseases?keywords=chloroquine; appearing along with hallucination: https://librairy.linkeddata.es/bio/api/diseases?keywords=hallucination; texts about covid-19 and inflammation difficulties: https://librairy.linkeddata.es/bio/api/texts?keywords=covid19,inflammation; texts about hydroxychloroquine: https://librairy.linkeddata.es/bio/api/texts?keywords=hydroxychloroquine | ai
|
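the same drugs query as the curl example in the bio-nlp entry above, sent from python with the requests library. the endpoint and payload come straight from the readme; the assumption that the response parses as a json list of drug objects follows the sample answer shown there.

```python
import requests

resp = requests.post(
    "https://librairy.linkeddata.es/bio-nlp/drugs",
    json={"text": "however clinical trials investigating the efficacy of several "
                  "agents including remdesivir and chloroquine are underway in china"},
    timeout=30,
)
resp.raise_for_status()
for drug in resp.json():  # expected entries such as remdesivir and chloroquine
    print(drug.get("name"), drug)
```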
homebrew-nlp | homebrew nlp: a homebrew keg that specializes in natural language processing. homebrew install:

```console
brew tap uetchy/nlp
```

package index: chasen, darts, hts-engine-api, julius, julius-dictation-kit, julius-grammar-kit, knp, open-jtalk. contributing: 1 fork it 2 create your feature branch (git checkout -b my-new-feature) 3 commit your changes (git commit -am 'add some feature') 4 push to the branch (git push origin my-new-feature) 5 create a new pull request | natural-language-processing homebrew nlp | ai
g10-keypad-to-lcd | pshs mc grade 10 keypad to lcd pager c arduino documentation of my pshs mc grade 10 2015 16 embedded systems program design course project on c arduino | os |
|
IoTgo | iotgo

introduction: iotgo is an open source iot platform. like wordpress, zencart and all other open source software, you can deploy your own iotgo cloud service. we at itead studio are committed to providing a complete set of hardware for iotgo, with open source hardware designs and open source firmware. the overall iotgo system architecture, including iotgo, iotgo-compatible apps and iotgo-compatible devices, is illustrated by the following graph: iotgo system architecture (docs iotgo arch png). single board microcontroller (like arduino) developers, single board computer (like raspberry pi) developers and other embedded system developers can use the iotgo device api to connect their devices to iotgo and then easily control their devices by utilizing the iotgo web app. note: we also provide iotgo-compatible device libraries which wrap the iotgo device api; please refer to the iotgo arduino library (https github com itead iteadlib arduino iotgo) and the iotgo segnix library (https github com itead segnix tree master libraries itead iotgo) for details. web developers and mobile developers can use the iotgo web api to build various apps that manage devices connected to iotgo; to control those devices, the iotgo device api can be used. in one word: we want to provide cloud capability for device developers, and device capability for app developers. for more detailed information and a working iotgo cloud service, please head over to iotgo iteadstudio com (http iotgo iteadstudio com).

future plan: iotgo is not an ordinary iot cloud platform. we designed this platform to be open, simple and easy to use, so everyone can handle the hardware, software and website design at the same time. however, this platform is still very primitive; for now it only supports three simple device types. we know what we provide is not enough to satisfy your needs; that's why we set up this future plan to improve iotgo step by step. let's see what we will do in the near future:
1. improve ui design: display device connecting status and last connect time on the device detail page (connecting status added)
2. support gps device: receive device gps information and display the exact location on google map
3. add the functions of brightness control and rgb adjustment for the light device
4. show power consumption information and the control function for the switch device
5. store historic data collected from all kinds of sensors
6. provide a websocket interface and support bidirectional communication between iotgo and devices (done; currently it is only enabled for indie devices)
7. provide android app code (done; please head over to iotgo android app, https github com itead iotgo android app)
if you have any advice, please contact us. we sincerely appreciate it.

installation (automatically, almost): if you just want to get a feel of iotgo or deploy it for internal use, we recommend the iotgo docker image (https registry hub docker com u humingchun iotgo), which can set up an iotgo instance with only 4 commands and within 3 minutes (depending on your internet bandwidth). note: the iotgo docker image should not be used in a production environment, because it lacks several security features such as google recaptcha. update: here is a video demonstration of the installation process (http v youku com v show id xmtm1njq2njcwna html). prerequisite: docker (https www docker com), an open platform for distributed applications for developers and sysadmins. install iotgo:

```bash
sudo docker pull dockerfile/mongodb
sudo docker pull humingchun/iotgo
sudo docker run -d --name mongodb dockerfile/mongodb mongod --smallfiles
sudo docker run -d -p 80:80 --name iotgo --link mongodb:mongodb humingchun/iotgo node /opt/iotgo/bin/www
```

and that's all! you can now access iotgo at your linux box's ip address and port 80. if you want to use another port instead of 80, change the -p option in the last command from 80:80 to any other port, such as -p 3000:80. the admin panel is at http://linuxboxip:linuxboxport/admin, and the default admin account is iotgo@iteadstudio.com (the corresponding password is "password"). if you want to change the default account and password, you can use sudo docker exec -i -t iotgo /bin/bash to log in to the iotgo docker container, and use a text editor (vi, for example) to change the admin information in the config.js file.

installation (manually): installing iotgo manually takes some effort, but it also means everything is under control. prerequisites: git (http git scm com), a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency; mongodb (https www mongodb org), an open source document database, the leading nosql database; node.js (http nodejs org), an asynchronous javascript event driven framework (yes, javascript on the server); forever (https www npmjs org package forever), for running node applications as system services; bower (http bower io), a package manager for the web, optimized for the front end. on centos 6:

```bash
yum install git
yum install mongodb
yum install npm
npm install forever -g
npm install bower
```

install iotgo. get the iotgo source code from github.com:

```bash
git clone https://github.com/itead/IoTgo.git
```

change directory to the downloaded iotgo and install dependencies:

```bash
cd IoTgo
npm install
```

change directory to the iotgo web app frontend and install dependencies:

```bash
cd public/frontend
bower install
```

change directory to the iotgo web app backend and install dependencies:

```bash
cd ../backend
bower install
```

change directory back to the iotgo root:

```bash
cd ../..
```

configure iotgo. copy config.js.sample to config.js, which is the actual configuration file being used during the iotgo boot process:

```bash
cp config.js.sample config.js
```

edit config.js and change the corresponding fields to reflect your hosting environment:

```js
module.exports = {
  host: 'iotgo.iteadstudio.com',        // hostname of iotgo
  db: {
    uri: 'mongodb://localhost/iotgo',   // mongodb database address
    options: {
      user: 'iotgo',                    // mongodb database username
      pass: 'iotgo'                     // mongodb database password
    }
  },
  jwt: {
    secret: 'jwt secret'                // shared secret to encrypt json web token
  },
  admin: ['iotgo@iteadstudio.com', 'password'], // administrator account of iotgo
  page: {
    limit: 50,                          // default query page limit
    sort: -1                            // default query sort order
  },
  recaptcha: {
    secret: 'recaptcha secret key',     // https://developers.google.com/recaptcha/intro
    url: 'https://www.google.com/recaptcha/api/siteverify'
  }
};
```

edit public/frontend/views/signup.html and add your recaptcha site key (applied from google):

```html
<div ng-model="response" class="form-group" g-recaptcha
     g-recaptcha-sitekey="your recaptcha site key goes here"></div>
```

iotgo as system service: to manage iotgo like a system service, such as sudo service iotgo start (start iotgo) and sudo service iotgo stop (stop iotgo), and to make iotgo start automatically during os boot, we can create init scripts utilizing forever to monitor iotgo. the following init script is a working example. if you want to use it, please put the script in the /etc/init.d folder and change the file permission to 755. you may also need to change NAME, NODE_PATH and APPLICATION_PATH to reflect your hosting environment.

```bash
sudo touch /etc/init.d/iotgo
sudo chmod 755 /etc/init.d/iotgo
sudo update-rc.d iotgo defaults
```

note: please refer to node js and forever as a service, simple upstart and init scripts for ubuntu (https www exratione com 2013 02 nodejs and forever as a service simple upstart and init scripts for ubuntu) for detailed explanations of the script.

```bash
#!/bin/bash
#
# an init.d script for running a node.js process as a service, using forever as
# the process monitor. for more configuration options associated with forever,
# see: https://github.com/nodejitsu/forever
#
# this was written for debian distributions such as ubuntu, but should still
# work on redhat, fedora or other rpm-based distributions, since none of the
# built-in service functions are used; so information is provided for both.

NAME="itead-iotgo"
NODE_BIN_DIR="/usr/bin:/usr/local/bin"
NODE_PATH="/home/itead/iotgo/node_modules"
APPLICATION_PATH="/home/itead/iotgo/bin/www"
PIDFILE="/var/run/iotgo.pid"
LOGFILE="/var/log/iotgo.log"
MIN_UPTIME="5000"
SPIN_SLEEP_TIME="2000"

PATH="$NODE_BIN_DIR:$PATH"
export NODE_PATH=$NODE_PATH

start() {
    echo "starting $NAME"
    forever --pidFile $PIDFILE -a -l $LOGFILE --minUptime $MIN_UPTIME \
            --spinSleepTime $SPIN_SLEEP_TIME start $APPLICATION_PATH 2>&1 > /dev/null
    RETVAL=$?
}

stop() {
    if [ -f $PIDFILE ]; then
        echo "shutting down $NAME"
        forever stop $APPLICATION_PATH 2>&1 > /dev/null
        rm -f $PIDFILE
        RETVAL=$?
    else
        echo "$NAME is not running"
        RETVAL=0
    fi
}

restart() {
    stop
    start
}

status() {
    echo `forever list` | grep -q "$APPLICATION_PATH"
    if [ "$?" -eq "0" ]; then
        echo "$NAME is running"
        RETVAL=0
    else
        echo "$NAME is not running"
        RETVAL=3
    fi
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    status)  status ;;
    restart) restart ;;
    *)       echo "usage: {start|stop|status|restart}"; exit 1 ;;
esac
exit $RETVAL
```

running iotgo: to run iotgo, you can start it in console mode:

```bash
DEBUG=iotgo ./bin/www
```

to run iotgo on another port instead of 80, you can use the PORT environment variable:

```bash
PORT=3000 DEBUG=iotgo ./bin/www
```

to run iotgo as a system service:

```bash
sudo service iotgo start
```

web api: iotgo provides a restful web api (http en wikipedia org wiki representational state transfer) to interact with clients (web app, mobile app, desktop app, etc). the general process is as follows: the client sends an http request to iotgo; if it is a post request, the data must be coded in json format (http en wikipedia org wiki json) and carried in the request body. iotgo does some validation against the request: if the validation fails, iotgo will reply with a proper response code and reason; if the validation succeeds, iotgo will continue processing the request and reply with a 200 ok status code and the process result encoded in json format. the client then checks the response from iotgo: if the status code is not 200 ok, the request is probably illegal or badly formed; if the status code is 200 ok but the data (json format) has an error property, the request still fails, and the value of the error property is the reason of the failure; if the status code is 200 ok and there is no error property in the data, the request succeeds, so finally extract the data and do whatever you want. iotgo is also using json web token (https tools ietf org html draft ietf oauth json web token 31) to protect the web api, so most of these web api requests must carry an authorization header with the json web token obtained from a register or login request:

```
Authorization: Bearer eyj0exaioijkv1qilcjhbgcioijiuzi1nij9.eyjfawqioii1ndyxnjq1ngm4odiznzflmwmxotcynmyilcjlbwfpbci6imhvbgx5lmhlqgl0zwfklmnjiiwiy3jlyxrlzef0ijoimjaxnc0xms0xmvqwmtoymdoymc4ynjfaiiwiyxbpa2v5ijoimtu3odnmzdytmdc1ms00odbmltllmzatnwzmztnhnwm4mtm1iiwiawf0ijoxnde1njczntexfq.e-gi5n8aigveba5s6vyg9ceacsgnafucscisyq2kxoi
```

user api

api/user/register: register an account on iotgo. authorization not required. request method: post. request body:

```json
{
  "email": "iotgo@iteadstudio.com",
  "password": "password",
  "response": "the user response token provided by the recaptcha to the user"
}
```

response body:

```json
{
  "jwt": "eyj0exaioijkv1qilcjhbgcioijiuzi1nij9.eyjlbwfpbci6inrlc3raaxrlywquy2milcjfawqioii1ndy1ytvmmddmzgrlyjkwnjlhzdjlzdqilcjjcmvhdgvkqxqioiiymde0ltexlte0vda2ojq5ojiwljgymloilcjhcglrzxkioijindvjmwu2ms05njrhltrhzdmtowi5zc0wyjk3ywm5nwzlmtqilcjpyxqioje0mtu5ndc3njb9.rh8bla7kps4r74djwkcnhtm1etyqfxmsil1irabrowi",
  "user": {
    "email": "iotgo@iteadstudio.com",
    "createdAt": "2014-11-24T06:49:20.822Z",
    "apikey": "b45c1e61-964a-4ad3-9b9d-0b97ac95fe14"
  }
}
```

api/user/login: log in to iotgo using email address and password. authorization not required. request method: post. request body:

```json
{"email": "iotgo@iteadstudio.com", "password": "password"}
```

response body: the same jwt-plus-user object as the register response above.

api/user/password: change the password for the user identified by the json web token. authorization required. request method: post. request body:

```json
{"oldpassword": "old password", "newpassword": "new password"}
```

response body:

```json
{}
```

device api (user)

api/user/device: create a new device by using a post request; get the list of devices owned by the user by using a get request. authorization required. request method: post. request body:

```json
{"name": "switch", "group": "itead", "type": "01"}
```

response body:

```json
{"name": "switch", "group": "itead", "type": "01", "deviceid": "0100000001", "apikey": "b45c1e61-964a-4ad3-9b9d-0b97ac95fe14", "createdAt": "2014-11-24T02:27:41.363Z"}
```

request method: get. response body:

```json
[{"name": "switch", "group": "itead", "type": "01", "deviceid": "0100000001", "apikey": "b45c1e61-964a-4ad3-9b9d-0b97ac95fe14", "createdAt": "2014-11-24T02:27:41.363Z"}]
```

api/user/device/:deviceid: get detailed device information by using a get request; update device name and group by using a post request; delete the device by using a delete request. authorization required. request method: get. response body:

```json
{"name": "switch", "group": "itead", "type": "01", "deviceid": "0100000001", "apikey": "b45c1e61-964a-4ad3-9b9d-0b97ac95fe14", "createdAt": "2014-11-24T02:27:41.363Z", "params": {"switch": "on"}, "lastModified": "2014-11-27T02:27:41.363Z"}
```

request method: post. request body:

```json
{"name": "new name", "group": "new group"}
```

response body:

```json
{"name": "new name", "group": "new group", "type": "01", "deviceid": "0100000001", "apikey": "b45c1e61-964a-4ad3-9b9d-0b97ac95fe14", "createdAt": "2014-11-24T02:27:41.363Z", "params": {"switch": "on"}, "lastModified": "2014-11-27T02:27:41.363Z"}
```

request method: delete. response body: the same (updated) device object as in the post response above.

api/user/device/add: add an indie device, which is manufactured by itead or itead-licensed manufacturers. authorization required. request method: post. request body:

```json
{"name": "lamp", "group": "indie", "deviceid": "0280000001", "apikey": "f44eeb0b-8a9e-4454-ad51-89beb93b072e"}
```

response body:

```json
{"name": "lamp", "group": "indie", "deviceid": "0280000001", "apikey": "f44eeb0b-8a9e-4454-ad51-89beb93b072e", "createdAt": "2014-11-27T02:49:42.000Z", "params": {}}
```

admin api

api/admin/login: log in to the iotgo admin area using email address and password. authorization not required. request method: post. request body:

```json
{"email": "admin@iteadstudio.com", "password": "password"}
```

response body:

```json
{"jwt": "(a json web token, as in the user login response above)", "user": {"email": "admin@iteadstudio.com", "isAdmin": true}}
```

api/admin/users: get the list of registered users on iotgo. authorization required. request method: get. response body:

```json
[{"email": "humingchun@gmail.com", "apikey": "ea62c15b-d194-4b16-a56e-7ad8433c5477", "createdAt": "2014-11-27T02:50:10.000Z"}]
```

api/admin/users/:apikey: get detailed user information by using a get request; delete the user and related devices by using a delete request. authorization required. both the get and delete requests respond with the user object shown above.

api/admin/devices: get the list of created/added devices on iotgo. authorization required. request method: get. response body:

```json
[{"name": "switch", "group": "itead", "type": "01", "deviceid": "0100000001", "apikey": "b45c1e61-964a-4ad3-9b9d-0b97ac95fe14", "createdAt": "2014-11-24T02:27:41.363Z"}]
```

api/admin/devices/:deviceid: get detailed device information. authorization required. request method: get. response body:

```json
{"name": "switch", "group": "itead", "type": "01", "deviceid": "0100000001", "apikey": "b45c1e61-964a-4ad3-9b9d-0b97ac95fe14", "createdAt": "2014-11-24T02:27:41.363Z", "params": {"switch": "on"}, "lastModified": "2014-11-27T02:27:41.363Z"}
```

api/admin/factorydevices: get issued licenses for licensing indie devices on iotgo. authorization required. request method: get. response body:

```json
[
  {"name": "itead", "type": "01", "deviceid": "0180000001", "apikey": "94b38bbe-57c8-49bf-a6c4-2871ee5bb873", "createdAt": "2014-11-27T02:50:20.000Z"},
  {"name": "itead", "type": "01", "deviceid": "0180000002", "apikey": "938a8f4f-9f0f-424b-b5ac-f58b8f7a539c", "createdAt": "2014-11-27T02:50:20.000Z"}
]
```

api/admin/factorydevices/create: generate new licenses for indie devices. authorization required. request method: post. request body:

```json
{"name": "itead", "type": "02", "qty": 2}
```

response body:

```json
[
  {"name": "itead", "type": "02", "deviceid": "0280000001", "apikey": "37e45852-a381-4243-8bfe-cc3c4c2becab", "createdAt": "2014-11-27T03:00:00.000Z"},
  {"name": "itead", "type": "02", "deviceid": "0280000002", "apikey": "41556a98-7685-424f-bc27-74bf712108b2", "createdAt": "2014-11-27T03:00:00.000Z"}
]
```

device api: iotgo provides a device api to interact with devices. the device api is also intended to be used by clients (web app, mobile app, desktop app, etc) to control connected devices. the device api is json based, which means all request and response data is enclosed in json format. iotgo currently supports 3 kinds of request: register (get the apikey of the current user who owns the device; only applies to indie devices), update (update device status to iotgo), query (get device status from iotgo).

wire protocol

register request:

```json
{"action": "register", "deviceid": "01ad0253f2", "apikey": "123e4567-e89b-12d3-a456-426655440000"}
```

note: the register request is only for indie devices, not diy devices, so the deviceid and apikey above are generated by iotgo for licensed manufacturers. response:

```json
{"error": 0, "deviceid": "01ad0253f2", "apikey": "6ba7b810-9dad-11d1-80b4-00c04fd430c8"}
```

error 0 denotes a successful response. if the error property is not 0, then an error occurred and a reason property will exist with detailed error information; this is also true for update and query requests. note: the apikey in a successful response is that of the user who currently owns the indie device.

update request:

```json
{"action": "update", "deviceid": "01ad0253f2", "apikey": "123e4567-e89b-12d3-a456-426655440000", "params": {"switch": "on"}}
```

note: the params property is a json object which contains all status information of the device. response:

```json
{"error": 0, "deviceid": "01ad0253f2", "apikey": "123e4567-e89b-12d3-a456-426655440000"}
```

note: deviceid and apikey have the same values as they do in the request.

query request:

```json
{"action": "query", "deviceid": "01ad0253f2", "apikey": "123e4567-e89b-12d3-a456-426655440000", "params": ["switch"]}
```

note: the params property is an array containing the status names to be queried; an empty array denotes querying all status. response:

```json
{"error": 0, "deviceid": "01ad0253f2", "apikey": "123e4567-e89b-12d3-a456-426655440000", "params": {"switch": "on"}}
```

note: deviceid and apikey have the same values as they do in the request.

transfer protocol: device api requests and responses can be carried by any reliable transfer protocol, and iotgo supports both http and websocket. we strongly recommend websocket over http, because websocket enables iotgo to push device status updates to both the actual device and the device owner's clients.

http 1.0: the device api access point is http iotgo iteadstudio com api http. http request header:

```
POST /api/http HTTP/1.0
Host: iotgo.iteadstudio.com
Content-Type: application/json
Content-Length: 116
```

note: the host header must be present even if http 1.0 itself does not require it.

http 1.1: the device api access point is http iotgo iteadstudio com api http. http request header:

```
POST /api/http HTTP/1.1
Host: iotgo.iteadstudio.com
Content-Type: application/json
Content-Length: 116
```

websocket: the device api access point is ws iotgo iteadstudio com api ws.

supported browsers: the iotgo web app currently supports the current and prior major release of chrome, firefox, internet explorer and safari on a rolling basis, which means ie6/ie7/ie8/ie9 will not work properly.

support. license: mit (https github com itead iotgo blob master license) | server
E_Commerce_app_using_Redux | getting started with create react app this project was bootstrapped with create react app https github com facebook create react app available scripts in the project directory you can run npm start runs the app in the development mode open http localhost 3000 http localhost 3000 to view it in your browser the page will reload when you make changes you may also see any lint errors in the console npm test launches the test runner in the interactive watch mode see the section about running tests https facebook github io create react app docs running tests for more information npm run build builds the app for production to the build folder it correctly bundles react in production mode and optimizes the build for the best performance the build is minified and the filenames include the hashes your app is ready to be deployed see the section about deployment https facebook github io create react app docs deployment for more information npm run eject note this is a one way operation once you eject you can t go back if you aren t satisfied with the build tool and configuration choices you can eject at any time this command will remove the single build dependency from your project instead it will copy all the configuration files and the transitive dependencies webpack babel eslint etc right into your project so you have full control over them all of the commands except eject will still work but they will point to the copied scripts so you can tweak them at this point you re on your own you don t have to ever use eject the curated feature set is suitable for small and middle deployments and you shouldn t feel obligated to use this feature however we understand that this tool wouldn t be useful if you couldn t customize it when you are ready for it learn more you can learn more in the create react app documentation https facebook github io create react app docs getting started to learn react check out the react documentation https reactjs org code splitting this section has moved here https facebook github io create react app docs code splitting https facebook github io create react app docs code splitting analyzing the bundle size this section has moved here https facebook github io create react app docs analyzing the bundle size https facebook github io create react app docs analyzing the bundle size making a progressive web app this section has moved here https facebook github io create react app docs making a progressive web app https facebook github io create react app docs making a progressive web app advanced configuration this section has moved here https facebook github io create react app docs advanced configuration https facebook github io create react app docs advanced configuration deployment this section has moved here https facebook github io create react app docs deployment https facebook github io create react app docs deployment npm run build fails to minify this section has moved here https facebook github io create react app docs troubleshooting npm run build fails to minify https facebook github io create react app docs troubleshooting npm run build fails to minify | server |
|
cuml | div align left img src img rapids logo png width 90px nbsp cuml gpu machine learning algorithms div cuml is a suite of libraries that implement machine learning algorithms and mathematical primitives functions that share compatible apis with other rapids https rapids ai projects cuml enables data scientists researchers and software engineers to run traditional tabular ml tasks on gpus without going into the details of cuda programming in most cases cuml s python api matches the api from scikit learn https scikit learn org for large datasets these gpu based implementations can complete 10 50x faster than their cpu equivalents for details on performance see the cuml benchmarks notebook https github com rapidsai cuml tree branch 23 04 notebooks tools as an example the following python snippet loads input and computes dbscan clusters all on gpu using cudf python import cudf from cuml cluster import dbscan create and populate a gpu dataframe gdf float cudf dataframe gdf float 0 1 0 2 0 5 0 gdf float 1 4 0 2 0 1 0 gdf float 2 4 0 2 0 1 0 setup and fit clusters dbscan float dbscan eps 1 0 min samples 1 dbscan float fit gdf float print dbscan float labels output 0 0 1 1 2 2 dtype int32 cuml also features multi gpu and multi node multi gpu operation using dask https www dask org for a growing list of algorithms the following python snippet reads input from a csv file and performs a nearestneighbors query across a cluster of dask workers using multiple gpus on a single node initialize a localcudacluster configured with ucx https github com rapidsai ucx py for fast transport of cuda arrays python initialize ucx for high speed transport of cuda arrays from dask cuda import localcudacluster create a dask single node cuda cluster w one worker per device cluster localcudacluster protocol ucx enable tcp over ucx true enable nvlink true enable infiniband false load data and perform k nearest neighbors search cuml dask estimators also support dask array as input python from dask distributed import client client client cluster read csv file in parallel across workers import dask cudf df dask cudf read csv path to csv fit a nearestneighbors model and query it from cuml dask neighbors import nearestneighbors nn nearestneighbors n neighbors 10 client client nn fit df neighbors nn kneighbors df for additional examples browse our complete api documentation https docs rapids ai api cuml stable or check out our example walkthrough notebooks https github com rapidsai cuml tree branch 23 04 notebooks finally you can find complete end to end examples in the notebooks contrib repo https github com rapidsai notebooks contrib supported algorithms category algorithm notes clustering density based spatial clustering of applications with noise dbscan multi node multi gpu via dask hierarchical density based spatial clustering of applications with noise hdbscan k means multi node multi gpu via dask single linkage agglomerative clustering dimensionality reduction principal components analysis pca multi node multi gpu via dask incremental pca truncated singular value decomposition tsvd multi node multi gpu via dask uniform manifold approximation and projection umap multi node multi gpu inference via dask random projection t distributed stochastic neighbor embedding tsne linear models for regression or classification linear regression ols multi node multi gpu via dask linear regression with lasso or ridge regularization multi node multi gpu via dask elasticnet regression lars regression experimental logistic regression 
multi node multi gpu via dask glm demo https github com daxiongshu rapids demos naive bayes multi node multi gpu via dask stochastic gradient descent sgd coordinate descent cd and quasi newton qn including l bfgs and owl qn solvers for linear models nonlinear models for regression or classification random forest rf classification experimental multi node multi gpu via dask random forest rf regression experimental multi node multi gpu via dask inference for decision tree based models forest inference library fil k nearest neighbors knn classification multi node multi gpu via dask ucx https github com rapidsai ucx py uses faiss https github com facebookresearch faiss for nearest neighbors query k nearest neighbors knn regression multi node multi gpu via dask ucx https github com rapidsai ucx py uses faiss https github com facebookresearch faiss for nearest neighbors query support vector machine classifier svc epsilon support vector regression svr preprocessing standardization or mean removal and variance scaling normalization encoding categorical features discretization imputation of missing values polynomial features generation and coming soon custom transformers and non linear transformation based on scikit learn preprocessing time series holt winters exponential smoothing auto regressive integrated moving average arima supports seasonality sarima model explanation shap kernel explainer based on shap https shap readthedocs io en latest shap permutation explainer based on shap https shap readthedocs io en latest execution device interoperability run estimators interchangeably from host cpu or device gpu with minimal code change demo https docs rapids ai api cuml stable execution device interoperability html other k nearest neighbors knn search multi node multi gpu via dask ucx https github com rapidsai ucx py uses faiss https github com facebookresearch faiss for nearest neighbors query installation see the rapids release selector https docs rapids ai install selector for the command line to install either nightly or official release cuml packages via conda or docker build install from source see the build guide build md contributing please see our guide for contributing to cuml contributing md references the rapids team has a number of blogs with deeper technical dives and examples you can find them here on medium https medium com rapids ai tagged machine learning for additional details on the technologies behind cuml as well as a broader overview of the python machine learning landscape see machine learning in python main developments and technology trends in data science machine learning and artificial intelligence 2020 https arxiv org abs 2002 04803 by sebastian raschka joshua patterson and corey nolet please consider citing this when using cuml in a project you can use the citation bibtex bibtex article raschka2020machine title machine learning in python main developments and technology trends in data science machine learning and artificial intelligence author raschka sebastian and patterson joshua and nolet corey journal arxiv preprint arxiv 2002 04803 year 2020 contact find out more details on the rapids site https rapids ai community html div align left img src img rapids logo png width 265px div open gpu data science the rapids suite of open source software libraries aim to enable execution of end to end data science and analytics pipelines entirely on gpus it relies on nvidia cuda primitives for low level compute optimization but exposing that gpu parallelism and high bandwidth 
memory speed through user friendly python interfaces p align center img src img rapids arrow png width 80 p | machine-learning-algorithms machine-learning cuda gpu nvidia | ai |
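As a quick illustration of the scikit-learn-matching API claim above, the sketch below fits a linear regression on the GPU. It assumes a CUDA-capable GPU with cuml and cupy installed; the toy data is made up.

import cupy as cp
from cuml.linear_model import LinearRegression  # mirrors sklearn's estimator API

X = cp.array([[1.0], [2.0], [3.0], [4.0]], dtype=cp.float32)
y = cp.array([2.0, 4.0, 6.0, 8.0], dtype=cp.float32)

model = LinearRegression()
model.fit(X, y)  # same fit/predict surface as scikit-learn, running on GPU data
print(model.predict(cp.array([[5.0]], dtype=cp.float32)))  # ~[10.0]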
trackformer | trackformer multi object tracking with transformers this repository provides the official implementation of the trackformer multi object tracking with transformers https arxiv org abs 2101 02702 paper by tim meinhardt https dvl in tum de team meinhardt alexander kirillov https alexander kirillov github io laura leal taixe https dvl in tum de team lealtaixe and christoph feichtenhofer https feichtenhofer github io the codebase builds upon detr https github com facebookresearch detr deformable detr https github com fundamentalvision deformable detr and tracktor https github com phil bergmann tracking wo bnw as the paper is still under submission this repository will continuously be updated and might at times not reflect the current state of the arxiv paper https arxiv org abs 2012 01866 div align center img src docs mot17 03 sdp gif alt mot17 03 sdp width 375 img src docs mots20 07 gif alt mots20 07 width 375 div abstract the challenging task of multi object tracking mot requires simultaneous reasoning about track initialization identity and spatiotemporal trajectories we formulate this task as a frame to frame set prediction problem and introduce trackformer an end to end mot approach based on an encoder decoder transformer architecture our model achieves data association between frames via attention by evolving a set of track predictions through a video sequence the transformer decoder initializes new tracks from static object queries and autoregressively follows existing tracks in space and time with the new concept of identity preserving track queries both decoder query types benefit from self and encoder decoder attention on global frame level features thereby omitting any additional graph optimization and matching or modeling of motion and appearance trackformer represents a new tracking by attention paradigm and yields state of the art performance on the task of multi object tracking mot17 and segmentation mots20 div align center img src docs method png alt trackformer casts multi object tracking as a set prediction problem performing joint detection and tracking by attention the architecture consists of a cnn for image feature extraction a transformer encoder for image feature encoding and a transformer decoder which applies self and encoder decoder attention to produce output embeddings with bounding box and class information div installation we refer to our docs install md docs install md for detailed installation instructions train trackformer we refer to our docs train md docs train md for detailed training instructions evaluate trackformer in order to evaluate trackformer on a multi object tracking dataset we provide the src track py script which supports several datasets and splits interchangle via the dataset name argument see src datasets tracking factory py for an overview of all datasets the default tracking configuration is specified in cfgs track yaml to facilitate the reproducibility of our results we provide evaluation metrics for both the train and test set mot17 private detections python src track py with reid center mot17 mota idf1 mt ml fp fn id sw train 74 2 71 7 849 177 7431 78057 1449 test 74 1 68 0 1113 246 34602 108777 2829 center public detections dpm frcnn sdp python src track py with reid tracker cfg public detections min iou 0 5 obj detect checkpoint file models mot17 deformable multi frame checkpoint epoch 50 pth center mot17 mota idf1 mt ml fp fn id sw train 64 6 63 7 621 675 4827 111958 2556 test 62 3 57 6 688 638 16591 192123 4018 center 
mot20 private detections python src track py with reid dataset name mot20 all obj detect checkpoint file models mot20 crowdhuman deformable multi frame checkpoint epoch 50 pth center mot20 mota idf1 mt ml fp fn id sw train 81 0 73 3 1540 124 20807 192665 1961 test 68 6 65 7 666 181 20348 140373 1532 center mots20 python src track py with dataset name mots20 all obj detect checkpoint file models mots20 train masks checkpoint pth our tracking script only applies mot17 metrics evaluation but outputs mots20 mask prediction files to evaluate these download the official motchallengeevalkit https github com dendorferpatrick motchallengeevalkit center mots20 smotsa idf1 fp fn ids train test 54 9 63 6 2233 7195 278 center demo to facilitate the application of trackformer we provide a demo interface which allows for a quick processing of a given video sequence ffmpeg i data snakeboard snakeboard mp4 vf fps 30 data snakeboard 06d png python src track py with dataset name demo data root dir data snakeboard output dir data snakeboard write images pretty div align center img src docs snakeboard gif alt snakeboard demo width 600 div publication if you use this software in your research please cite our publication inproceedings meinhardt2021trackformer title trackformer multi object tracking with transformers author tim meinhardt and alexander kirillov and laura leal taixe and christoph feichtenhofer year 2022 month june booktitle the ieee conference on computer vision and pattern recognition cvpr | ai |
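For readers decoding the tables above: MOTA combines false positives, false negatives, and identity switches into one score, MOTA = 1 - (FN + FP + IDSW) / GT, where GT is the total number of ground-truth boxes. A small sanity-check sketch; the GT count here is an assumed round number for illustration, not taken from the repository.

def mota(fp, fn, id_sw, num_gt):
    # Multiple-object tracking accuracy, per the standard CLEAR-MOT definition.
    return 1.0 - (fn + fp + id_sw) / num_gt

# MOT17 train numbers from the private-detections table above; ~337k ground-truth
# boxes is an assumption for illustration.
print(f"{mota(7431, 78057, 1449, 337_000):.3f}")  # prints ~0.742, matching the table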
|
tap_watir | tapwatir watir for testing your mobile devices powered by appium installation add this line to your application s gemfile ruby gem tap watir and then execute bundle or install it yourself as gem install tap watir usage for right now this is how you access a chrome browser locally appium url http localhost 4723 wd hub caps platformname android platformversion 8 1 devicename nexus browsername chrome browser tapwatir mobilebrowser new url appium url desired capabilities caps development to get the specs to run install android studio create a virtual device named nexus using android 8 1 install appium desktop v1 6 2 download chromedriver 2 34 https chromedriver storage googleapis com index html path 2 34 and specify its location in appium desktop advanced tab start the appium server contributing bug reports and pull requests are welcome on github at https github com watir tap watir license the gem is available as open source under the terms of the mit license https opensource org licenses mit | front_end |
|
FreeRTOS-mirror | getting started the freertos org https www freertos org website contains a freertos kernel quick start guide https www freertos org freertos quick start guide html a list of supported devices and compilers https www freertos org rtos ports html the api reference https www freertos org a00106 html and many other resources getting help you can use your github login to get support from both the freertos community and directly from the primary freertos developers on our active support forum https forums freertos org the faq https www freertos org faq html provides another support resource cloning this repository this repo uses git submodules https git scm com book en v2 git tools submodules to bring in dependent components note if you download the zip file provided by the github ui you will not get the contents of the submodules the zip file is also not a valid git repository to clone using https git clone https github com freertos freertos git recurse submodules using ssh git clone git github com freertos freertos git recurse submodules if you have downloaded the repo without using the recurse submodules argument you need to run git submodule update init recursive repository structure this repository contains the freertos kernel a number of supplementary libraries and a comprehensive set of example applications many libraries including the freertos kernel are included as git submodules from their own git repositories kernel source code and example projects freertos source contains the freertos kernel source code submoduled from https github com freertos freertos kernel freertos demo contains pre configured example projects that demonstrate the freertos kernel executing on different hardware platforms and using different compilers supplementary library source code and example projects freertos plus source contains source code for additional freertos component libraries as well as select partner provided libraries these subdirectories contain further readme files and links to documentation freertos plus demo contains pre configured example projects that demonstrate the freertos kernel used with the additional freertos component libraries previous releases releases https github com freertos freertos releases contains older freertos releases | freertos mirror deprecated | os |
Human-Face-Spoofing-Detection | human face spoofing detection publication link https ieeexplore ieee org document 9225495 human face and spoofing detection are of prime importance in many security verification and law enforcement applications recently biometric identification has played a key role here face recognition is one such system widely used however such a system has disadvantages face spoofing is an attempt to deceive a face recognition system using a substitute for another person s face usually their photo or a 3d mask in such a system a person might easily log in to another person s account by wearing a 3d mask that looks very original to prevent such attacks a real human face detection system has to be in place https github com sm823zw human face spoofing detection blob main images intro png this paper https ieeexplore ieee org document 9225495 proposes a spoofing detection system using convolutional neural networks cnn cnns are well equipped to extract vital features on their own from images without the need to manually select and extract features from them the cnn classifier was implemented on a raspberry pi embedded system device for real time detection it should accurately predict whether or not a real human face is present in the frame of the recognizing detecting camera the cnn model is very lightweight and fast to load and run and is ideally designed for real time detection in embedded hardware the cnn architecture used https github com sm823zw human face spoofing detection blob main images cnn jpg performance on samples of hkbu mars dataset face samples face samples https github com sm823zw human face spoofing detection blob main images face png no face samples no face samples https github com sm823zw human face spoofing detection blob main images noface png | cnn face-spoofing-detection tensorflow computer-vision | os |
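A minimal sketch of the kind of small binary CNN described above, using tf.keras. The layer sizes and the 64x64 input are assumptions for illustration; the actual architecture is the one shown in the linked cnn.jpg, not this one.

import tensorflow as tf
from tensorflow.keras import layers, models

# Small, lightweight binary classifier: real face vs. spoof.
# All layer widths here are illustrative assumptions.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the face is real
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()

A model this small keeps load and inference times low, which is the stated reason for running it on a Raspberry Pi class device.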
fe-attitude | https yanyue404 github io fe attitude sup maybe sup details summary toc summary pre code articles blog css design pattern docs esnext interview javascript javascript algorithms javascript components mindmapping must write js nodejs project guide react site source learning staging tampermonkey test typescript utils vue code pre details online https yanyue404 github io blog online https yanyue404 github io fe attitude vue https github com yanyue404 vue mini vue https github com yanyue404 mini vue vue leetcode https github com yanyue404 leetcode 200 h5 editer npm issues2md https github com yanyue404 issues2md about export github issues for bloggers to markdown file fe cli https github com yanyue404 fe cli a front end automation tool build your own project beyond ui https github com yanyue404 beyond ui vue2 rainbow shared https github com yanyue404 rainbow shared rainbow vue shared vite plugin spritesmith2 https github com yanyue404 vite plugin spritesmith2 nuxt web update notification https github com yanyue404 nuxt web update notification project rainbow common https github com rainbow design rainbow common a modern javascript utility and components library npm git node crawler https github com yanyue404 node crawler node my bookmarks https github com yanyue404 my bookmarks my favorite collection dev admin https github com yanyue404 dev admin admin tools for development https yanyue404 github io dev admin nuxt issue blog https github com yanyue404 nuxt issue blog vue nuxt vue boilerplate https github com yanyue404 vue boilerplate done fe boilerplates https github com rainbow design fe boilerplates admin cli done wxapp template https github com rainbow design wxapp template todo rollup cli rainbow shared tampermonkey cli taro cli vite cli | javascript docs nodejs esnext vue | front_end |
DBLab-Spring2023 | dblab spring2023 repository for the materials of the course named database lab at the university of guilan department of computer engineering presented in spring 2023 | database dbms postgres postgresql sql | server |
cv-snippets | cv snippets my code snippets for computer vision some web applications are published here https kamino410 github io cv snippets camera calibration monocular stereo camera calibration using checker board camera distortion simulator simulator of camera s intrinsic and distortion parameters coreml sample software of machine learning using coreml to run playground you have to download model files into playground resources inception v3 https docs assets developer apple com coreml models inceptionv3 mlmodel https docs assets developer apple com coreml models inceptionv3 mlmodel yolo v3 https docs assets developer apple com coreml models image objectdetection yolov3 yolov3 mlmodel https docs assets developer apple com coreml models image objectdetection yolov3 yolov3 mlmodel yolo v3 tiny https docs assets developer apple com coreml models image objectdetection yolov3tiny yolov3tiny mlmodel https docs assets developer apple com coreml models image objectdetection yolov3tiny yolov3tiny mlmodel edsdk simple sample code to take a picture using canon edsdk https kamino hatenablog com entry canon edsdk https kamino hatenablog com entry canon edsdk gamma estimation estimate the gamma value of a display from captured camera images geometry voronoi diagram delaunay diagram graycode obtain the projector camera relationship using a gray code pattern https kamino hatenablog com entry opencv graycode https kamino hatenablog com entry opencv graycode map by lut warp an image based on a given look up table media gif animation etc nn fitting non linear regression using a shallow neural network nn warp image 2d non linear regression for image warping using a shallow neural network optimization numerical optimization using eigen scipy optitrack c sample to get tracking information from the optitrack motive api plotly visualization of 3d data using plotly py https kamino hatenablog com entry python 3d visualization https kamino hatenablog com entry python 3d visualization pointgray c sample code to control a pointgrey camera projector calibration calibrate a projector s intrinsic and extrinsic parameters using 2d 3d correspondence protobuf c js examples to use protobuf unity opencvtranslate br script to reflect opencv extrinsic parameters to the object opencvcameraparams br script to reflect opencv intrinsic parameters to the camera native plugin br sample project for a native plugin using cmake https kamino hatenablog com entry unity import opencv camera params https kamino hatenablog com entry unity import opencv camera params web apps driven by opencv js or webgl ximea simple sample code to get an image from a ximea camera | ai |
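The gamma-estimation snippet listed above fits a display's response curve from displayed and captured intensities. A rough numpy-only sketch of the idea, assuming the captured intensity follows I_cam = a * I_disp**gamma so the exponent can be recovered by a linear fit in log-log space; the sample values below are made-up placeholders.

import numpy as np

disp = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])       # displayed intensity
cam = np.array([0.006, 0.03, 0.13, 0.33, 0.62, 1.0])  # captured intensity

# log(cam) = gamma * log(disp) + log(a), so the slope of the fit is gamma.
gamma, log_a = np.polyfit(np.log(disp), np.log(cam), 1)
print(f"estimated gamma: {gamma:.2f}")  # ~2.2 for a typical display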
|
Object-sorting-using-Robotic-arm-and-Image-processing | object sorting using robotic arm and image processing watch the a href https youtu be t3t1tssweeq video a of the robotic arm in action overview 1 the object should be placed as shown in the video in between the clamp in front of the camera which will be sorted depending upon the color of the object the robotic arm will place them at three different angles at 90 180 and 270 degrees the usb camera which is connected to the raspberry pi continuously scans the live feed for a colored object 2 raspberry pi will detect the color of the object using image processing the colors used in this project are red green and yellow colors can be added by modifying the code 3 rpi is connected to an arduino uno board using two jumper wires which will send the information to the arduino board using a 2 bit communication method the color is represented by binary numbers for example red is represented as 10 green as 01 and yellow as 11 where 1 is high 5 volt and 0 is low 0 volt 4 the two wires will be connected from gpio pins 11 and 13 of the rpi to two digital pins of the arduino 6 and 7 using simple jumper wires 5 the robotic arm will perform the operation depending upon the color 6 arduino will control three servo motors and the motor control ic of the clamp hardware used 1 raspberry pi 2 2 arduino uno 3 usb camera 4 motor control ic for clamp in robotic hand 5 three servo motors software used 1 python 2 opencv 3 arduino 4 numpy how to run the robot 1 ensure the two jumper cables are connected to the desired pins by referring to the source code 2 connect a 5v dc supply to power the motors 3 run classmoto ino on the arduino board which controls three servo motors and a dc motor 4 run colordetect py on the raspberry pi which is connected with a usb camera | robotic-arm robotics-programming image-processing image-recognition raspberry-pi-3 raspberry-pi-camera opencv python arduino robotics servo-motor numpy | ai |
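A sketch of the 2-bit color signalling described in step 3, assuming the RPi.GPIO library, BOARD pin numbering, and the pin and bit assignments from the text (pins 11 and 13; red = 10, green = 01, yellow = 11). Which physical pin carries which bit is an assumption here; check the repository's colordetect.py for the actual mapping.

import RPi.GPIO as GPIO

PIN_HIGH_BIT, PIN_LOW_BIT = 11, 13  # RPi pins named in the text (assumed bit order)
CODES = {"red": (1, 0), "green": (0, 1), "yellow": (1, 1)}

GPIO.setmode(GPIO.BOARD)
GPIO.setup([PIN_HIGH_BIT, PIN_LOW_BIT], GPIO.OUT)

def send_color(color):
    # Drive the two jumper wires to the Arduino for the detected color.
    hi, lo = CODES[color]
    GPIO.output(PIN_HIGH_BIT, GPIO.HIGH if hi else GPIO.LOW)
    GPIO.output(PIN_LOW_BIT, GPIO.HIGH if lo else GPIO.LOW)

send_color("red")  # the Arduino reads 10 on its digital pins 6 and 7
GPIO.cleanup()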
GHS-Bugs | vulnerabilities in green hills integrity rtos in this report we are reporting vulnerabilities within integrity rtos 5 0 4 we will utilize the following vulnerabilities to bypass interpeak ipshell jail in order to directly talk to the integrity stack overflow in ipwebs cve 2019 7714 interpeka ipwebs which is being used as a webserver in green hills integrity 5 0 4 has a problem when parsing http headerlines during authentication the ipwebs allocate 60 bytes of buffer to parse http authentication header however when copying the authentication header to parse it does not check the size of the header leading to a basic buffer overflow in the ipwebs the variable auth outbuf with the fixed size of 60 bytes is declared however the auth str variable will get copied to the auth outbuf without being checked leading to a buffer overflow the assembly code of the function is the following asm bank a 018e2e64 s u b r o u t i n e bank a 018e2e64 bank a 018e2e64 attributes bp based frame bank a 018e2e64 bank a 018e2e64 t webserver basic auth check code xref sub 18e4b58 ec p bank a 018e2e64 mov r12 sp bank a 018e2e68 stmfd sp r5 r6 r8 r9 r11 r12 lr pc bank a 018e2e6c sub r11 r12 4 bank a 018e2e70 sub sp sp 0x20 bank a 018e2e74 mov r6 r0 bank a 018e2e78 mov r0 sp bank a 018e2e7c mov r2 0x20 bank a 018e2e80 mov r1 0 bank a 018e2e84 bl ipcom memset bank a 018e2e88 ldr r1 aauthorization authorization bank a 018e2e8c mov r0 r6 content bank a 018e2e90 bl t parse http header content bank a 018e2e94 mov r3 r0 bank a 018e2e98 mov r5 r3 bank a 018e2e9c cmp r5 0 bank a 018e2ea0 beq loc 18e2f48 bank a 018e2ea4 mov r1 r5 bank a 018e2ea8 ldr r0 abasic basic bank a 018e2eac mov r2 5 bank a 018e2eb0 bl t strncmp bank a 018e2eb4 mov r3 r0 bank a 018e2eb8 cmp r3 0 bank a 018e2ebc bne loc 18e2f58 bank a 018e2ec0 add r5 r5 5 bank a 018e2ec4 b loc 18e2ecc bank a 018e2ec8 bank a 018e2ec8 bank a 018e2ec8 loc 18e2ec8 code xref t webserver basic auth check 80 j bank a 018e2ec8 add r5 r5 1 bank a 018e2ecc bank a 018e2ecc loc 18e2ecc code xref t webserver basic auth check 60 j bank a 018e2ecc ldrb r3 r5 bank a 018e2ed0 cmp r3 0x20 bank a 018e2ed4 beq loc 18e2edc bank a 018e2ed8 b loc 18e2ee8 bank a 018e2edc bank a 018e2edc bank a 018e2edc loc 18e2edc code xref t webserver basic auth check 70 j bank a 018e2edc ldrb r3 r5 bank a 018e2ee0 cmp r3 0 bank a 018e2ee4 bne loc 18e2ec8 bank a 018e2ee8 bank a 018e2ee8 loc 18e2ee8 code xref t webserver basic auth check 74 j bank a 018e2ee8 mov r1 sp dest bank a 018e2eec mov r0 r5 src bank a 018e2ef0 mov r2 0x20 bank a 018e2ef4 bl t decode base64 into bank a 018e2ef8 mov r1 r0 bank a 018e2efc cmp r1 0 bank a 018e2f00 bne loc 18e2f58 bank a 018e2f04 mov r0 sp str bank a 018e2f08 mov r1 0x3a some len bank a 018e2f0c bl ipcom strchr bank a 018e2f10 mov r1 r0 bank a 018e2f14 cmp r1 0 bank a 018e2f18 beq loc 18e2f58 bank a 018e2f1c mov r0 sp str bank a 018e2f20 mov r1 0x3a some len bank a 018e2f24 bl ipcom strchr bank a 018e2f28 mov r12 r0 bank a 018e2f2c mov r5 0x20 bank a 018e2f30 strb r5 r12 bank a 018e2f34 add r3 r6 0x198 bank a 018e2f38 add r2 r6 0x178 bank a 018e2f3c ldr r1 a32s32s 32s 32s bank a 018e2f40 mov r0 sp bank a 018e2f44 bl t sprintf bank a 018e2f48 bank a 018e2f48 loc 18e2f48 code xref t webserver basic auth check 3c j bank a 018e2f48 add r1 r6 0x198 bank a 018e2f4c add r0 r6 0x178 bank a 018e2f50 bl check creds bank a 018e2f54 b loc 18e2f5c bank a 018e2f58 bank a 018e2f58 bank a 018e2f58 loc 18e2f58 code xref t webserver basic auth check 58 j bank a 018e2f58 t webserver 
basic auth check 9c j bank a 018e2f58 ldr r0 0xfffffbdc bank a 018e2f5c bank a 018e2f5c loc 18e2f5c code xref t webserver basic auth check f0 j bank a 018e2f5c ldmdb r11 r5 r6 r8 r9 r11 sp lr bank a 018e2f60 bx lr bank a 018e2f60 end of function t webserver basic auth check bank a 018e2f60 bank a 018e2f64 interpeak ipcomshell pwd command handler format string vulnerability cve 2019 7712 in the function handler for printing the current working directory the directory path is used as the first argument to printf this leads to a user supplied format string being executed interpeak ipcomshell print prompt heap overflow vulnerability cve 2019 7713 there is a heap overflow vulnerability in the ipcomshell used in green hills integrity rtos v5 0 4 while it is not documented inside the helpall command provided by the ipcomshell typing prompt new prompt allows the user to set the prompt looking at the implementation generating the shell output we can see those different modifiers are interpreted i print ip address p print shell process name p print shell process id w and w print working directory the function printing the shell prompt allows the use of custom modifiers to display information like process ids or the current ip address or the current working directory the expansion of those modifiers can trigger a heap based buffer overflow and also leaks process address information potentially valuable to an attacker this may result in memory corruption a crash or an info leak interpeak ipcomshell undocumented prompt command format string vulnerability cve 2019 7711 the undocumented shell command prompt sets the user controlled shell s prompt value which is used as a format string input to printf resulting in an information leak interpeak ipcomshell process greetings format string vulnerability cve 2019 7715 the main shell handler function uses the value of the environment variable ipcom shell greeting as the first argument to printf setting the variable using the sysvar command results in an information leak credit tobias scharnowski and ali abbasi of ruhr university bochum | os |
|
Placements_App | placements app for mangalore institute of technology and engineering this is a mobile application designed to help students and staff of mangalore institute of technology and engineering mite view the list of students who have successfully secured jobs through campus placements it uses an excel sheet as the database to store and retrieve the placement details of mite students and is developed using java and android studio features view a list of all placed students search for a specific student by name view the company name job profile and year of passing out for each placed student installation 1 clone the repository git clone https github com varshithvhegde placements app git 2 open the project in android studio 3 connect an android device or set up an emulator 4 click on the run button in android studio to install and run the app on the device contribution if you d like to contribute to the project please fork the repository and make the changes in your forked repository then create a pull request to this repository | server |
|
bezier-react | bezier react storybook https shields io badge storybook white logo storybook style flat https main 62bead1508281287d3c94d25 chromatic com version https img shields io github package json v channel io bezier react filename packages 2fbezier react 2fpackage json circleci https dl circleci com status badge img gh channel io bezier react tree main svg style svg https dl circleci com status badge redirect gh channel io bezier react tree main codecov https codecov io gh channel io bezier react branch main graph badge svg token bwctdh41fd https codecov io gh channel io bezier react monorepo for bezier react packages bezier react and related packages about this repo name description bezier react packages bezier react react components library that implements the bezier design system bezier icons packages bezier icons icon library that implements the bezier design system bezier codemod packages bezier codemod codemod transformations to help upgrade apps using the bezier design system bezier figma plugin packages bezier figma plugin figma plugin that helps build the bezier design system and increase productivity commands install dependencies bash yarn install build workspaces bash yarn build build a specific workspace bash yarn build filter workspace run storybook bash yarn dev other commands command description yarn test tests all workspaces yarn lint lints all workspaces yarn typecheck type checks all workspaces yarn clean remove generated files yarn update snapshot update test snapshots of bezier react contributing see contribution guide github contributing md maintainers this package is mainly contributed by channel corp although feel free to contribute or raise concerns | react react-components design-system figma-plugin icons typescript bezier-react component-library figma storybook | os |
PCBDesign | pcbdesign this repository contains my learning journey with pcb and electronics hardware circuit design for embedded systems internet of things iot and everything electronics and hardware | os |
|
BikeStore | technicalassessment this project was generated with angular cli https github com angular angular cli version 7 0 7 development server run ng serve for a dev server navigate to http localhost 4200 the app will automatically reload if you change any of the source files code scaffolding run ng generate component component name to generate a new component you can also use ng generate directive pipe service class guard interface enum module build run ng build to build the project the build artifacts will be stored in the dist directory use the prod flag for a production build running unit tests run ng test to execute the unit tests via karma https karma runner github io running end to end tests run ng e2e to execute the end to end tests via protractor http www protractortest org further help to get more help on the angular cli use ng help or go check out the angular cli readme https github com angular angular cli blob master readme md | server |
|
CS-4476-6476-Computer-Vision | cs 4476 6476 computer vision | ai |
|
University-Database | university database example this database model was created for the database management course project of ege university department of computer engineering project instructions had various data requirements and integrity challenges and it aimed to gain the following relational data model enhanced er modeling effective modeling advanced sql database views database triggers database constraints data integrity data requirements your database should keep track of the curriculums of each of the following departments in turkey respectively computer engineering software engineering artificial intelligence engineering each curriculum is composed of its own courses of type mandatory optional technical or non technical your design should have entities like chair faculty member professor associate professor assistant professor instructor research assistant your design should be able to store the instructors of the courses with their section information you should store information about the research areas of the faculty members their m sc and ph d theses you should also store the keywords associated with each course and these keywords should be related to the research areas of the faculty members to find whether a given course is instructed by the most matched faculty member for each course and curriculum you should store a computed value of this matching criteria project requirements given design img width 620 alt givendesign src https github com muhammetsanci university database assets 77257193 7e7f11b7 8da4 401e b84b 8ef610ebcf5f requirements write a brief explanation using your own words about the given design write an analysis report what is the aim of your design what are the main entities what are the characteristics of each entity what relationships exist among the entities what are the constraints related to entities and relationships among them create an eer diagram try to use enhanced extended features of er modeling the most important point of your design is how to extend the original design and generate added value you can define new entities where interaction and integration are required at this point your creativity has an artistic significance write down the data requirements for the eer diagram convert the eer diagram into a relational model using the methodology that will be introduced in your course write down the appropriate ddl statements for creating the database and its relational model you can select any of the dbms you wish populate the database you just created again using an sql script file loaded with sample tuples write down 3 triggers for 3 different tables triggers should be meaningful write down 3 check constraints and 3 assertions check constraints and assertions should be meaningful write 10 select statements for the database you have implemented conclusion you can find the enhanced er diagram in the repository you can see the roadmap and development process in the project report domain and relational model diagrams are in the report feel free to implement this model in your sql server you can follow these steps 1 create the database with the desired name 2 run the schema sql file to create the tables 3 run the insertstatements sql file to fill the tables with data 4 run the relationalconstraints sql file to add referential integrity constraints 5 run the views sql file to create views 6 run the triggers sql file to create triggers these steps have been ordered meaningfully following them in an unordered way may cause errors for example data in the insert statements depend on each other because of referential integrity so running this query series in one compile after the relational constraints may cause errors because of the dependencies it is important to fill in this controlled data before adding these constraints | constraints database er-model query sql trigger view | server |
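To make the ordering requirement concrete, here is a minimal sketch that executes the listed SQL files in the stated order, assuming a PostgreSQL server and the psycopg2 driver; the connection string is a placeholder, and the file names are the ones given in the steps above.

import psycopg2

# Order matters: schema first, then data, and only then the referential
# integrity constraints, exactly as the steps above prescribe.
steps = ["schema.sql", "insertstatements.sql", "relationalconstraints.sql",
         "views.sql", "triggers.sql"]

conn = psycopg2.connect("dbname=university user=postgres")  # placeholder DSN
with conn:  # commits on success, rolls back on error
    with conn.cursor() as cur:
        for path in steps:
            with open(path) as f:
                cur.execute(f.read())
conn.close()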
cs624-examples | cs624 examples hos examples for cs 624 full stack development mobile app students can find the examples that will be used for their practice in lectures or hos hands on skills sessions if you find any mistakes please contact chungsam cityu edu module 01 cloud based development environment module 02 components and styling module 03 react native fundamentals i module 04 react native fundamentals ii module 05 react native design i module 06 react native design ii module 07 react native navigation i module 08 react native navigation ii module 09 react native networking i module 10 react native networking ii | front_end |
|
ui-library | div align center img src https avatars githubusercontent com u 19564969 v 4 alt moja global logo height auto width 200 br h1 moja global ui library h1 p repository for the ui library a vue js user interface library for moja global projects p p align center a href https github com moja global ui library graphs contributors alt github contributors img src https img shields io github contributors moja global ui library svg a a href https github com moja global ui library issues alt github issues by label img src https img shields io github issues moja global ui library a a href https mojaglobal slack com alt slack img src https img shields io badge slack join chat brightgreen svg a a href https twitter com mojaglobal alt twitter follow img src https img shields io twitter follow mojaglobal svg label follow style social a a href https github com moja global ui library blob main license alt license img src https img shields io github license moja global ui library svg a a href https www npmjs com package moja global mojaglobal ui alt npm package img src https badge fury io js moja global 2fmojaglobal ui svg a p div introduction moja global http moja global is a collaboration under the linux foundation https linuxfoundation org that aims for the widest possible collaboration on and use of credible tools to better manage the land sector the flagship software is the full lands integration tool flint https github com moja global flint a tool to estimate emissions and sinks of greenhouse gasses from forestry and agriculture moja global user interface ui library aims to bring forward an intuitive consistent and easy to use interface that can help our developers within the user interface working group and users to quickly accomplish their tasks the ui library aims to considerably improve our design development workflow and meet the acceptable web accessibility requirements for our potential users a ui library helps us mitigate popular ui development issues like inconsistent user experience performance issues accessibility requirements and more technologies vue js https vuejs org storybook https storybook js org babel https babeljs io eslint https eslint org npm https www npmjs com components components planned implemented accordion alert button card datepicker dropdown footer modal navbar slider sponsor toggle find detailed description on our storybook setup https moja global ui library vercel app path story components accordion primary installation to setup the project on your local environment follow the given steps 1 fork the moja global ui library https github com moja global ui library repository 2 clone the repository bash git clone https github com username ui library git cd ui library replace the username with your github username if you ve already forked the repo you ll want to ensure your fork is configured and that it s up to date this will save you the headache of potential merge conflicts to sync your fork with the latest changes sh git checkout main git fetch upstream git merge upstream main 3 install the required dependencies sh yarn 4 to run the storybook locally sh yarn storybook go to localhost 6006 to view the storybook contributing moja global welcomes contributions to the community website if you have an idea for a new feature or a bug fix please submit an issue or pull request our planned features can be found on our issue tracker https github com mojaglobal ui library issues if you have any questions please reach out to us on slack https mojaglobal slack com adding 
new components 1 replace component name with your own component name sh cd mojaglobal ui mkdir component name cd component name 2 add all the files related to the component in this directory 3 export your component in ui library mojaglobal ui src components index js 4 add a story related to your component in the storybook 5 add the details for the component on the readme md file to add new features or fix bugs open the corresponding file and do the required changes in the file as well as in its corresponding story license mozilla public license 2 0 https github com moja global ui library blob main license | os |
|
WebDevelopment | h1 align center web development h1 img src readme images webdevelopment png width 100 alt lean in web development br br div h1 img src readme images send png width 25px overview h1 div this repository focuses on providing study material and resources for web deveopment cohort 2020 2021 also it includes the contributions and submissions for the tasks by mentees this repository aims to provide the required resources to all the readers under lean in igdtuw cohort 2020 2021 br br div h1 img src readme images content writing png width 25px contents h1 div sessions topic description orientation web development https github com lean in igdtuw webdevelopment blob master lean 20in 20web d 20orientation pdf slide for orientation session 1 html5 https github com lean in igdtuw webdevelopment tree master session1 html5 basics of html5 session 2 git https github com lean in igdtuw webdevelopment tree master session2 git git installation commands session 3 css3 https github com lean in igdtuw webdevelopment tree master session3 css3 basics of css3 session 4 webpage with css https github com lean in igdtuw webdevelopment tree master session 204 20 20webpage 20with 20css live session on css3 webpage session 5 flask https github com lean in igdtuw webdevelopment tree master session5 introtoflask introduction to flask session 6 webpage with css 2 https github com lean in igdtuw webdevelopment tree master session 206 20 20webpage 20with 20css 20 202 webpage built live using css3 session 7 bootstrap https github com lean in igdtuw webdevelopment tree master session 207 20 20bootstrap basic features of bootstrap quiz 1 html css git https github com lean in igdtuw webdevelopment blob master webdevelopmentcircle quizzes docx quiz on html5 css3 and git quiz 2 flask https github com lean in igdtuw webdevelopment blob master quiz 202 20 20flask pdf quiz on flask mentees contribution tasks https github com lean in igdtuw webdevelopment tree master tasks includes all the tasks submitted by our mentees br br div h1 technology stack h1 p align center code img src https img icons8 com color 48 000000 html 5 png code code img src https img icons8 com color 48 000000 css3 png code code img src readme images download png width 5 code code img src https img icons8 com color 50 000000 bootstrap png code code img src https img icons8 com color 64 fa314a figma png width 5 code code img src https img icons8 com color 64 000000 git png width 5 code code img src https img icons8 com color 64 000000 github png width 5 code code img src https img icons8 com nolan 64 flask png width 5 code code img src https img icons8 com windows 64 26e07f node js png width 5 code code img src https img icons8 com ios filled 50 26e07f jquery png width 5 code code img src https img icons8 com dusk 64 4a90e2 php logo png width 5 code p div br ul li features frontend webdevelopment backend webdevelopment ui ux designing li li we at leanin work for the betternment of society by helping our mentees providing best possible resources for growing developer technologies li li projects covered ol nbsp li introductory webpage about yourself li nbsp li sports webpage li nbsp li how to create a quiz in javascript li nbsp li landing page the right ui ux features li nbsp li design the hero section of the tourism page of a state or a country li nbsp li building a responsive website using pure css of your favourite anime cartoon movie tv show character li nbsp li restaurant marketing page food delivery app marketing page climate change aid ngo 
fashion apparel website skin care products website li nbsp li food delivery website audio products shopping pet adoption space books movies and resources banking website travelling website blog dynamic news website dynamic web game portfolio li ol li br li we will be happy to help you for your queries related to web development li ul br hr br br div h1 img src readme images working woman png width 35px mentors h1 div table style table layout fixed width 100 tr td align center a href https github com ishikabansal04 style text align center img src https avatars githubusercontent com u 49348251 s 400 u 8749987d8f8a9339868e9c0ee24e36253057e147 v 4 width 200px br sub b ishika bansal b sub a td td align center a href https github com shruthi019 style text align center img src https raw githubusercontent com shruthi019 shruthi019 github io master images profile png width 200px br sub b shruthi rao b sub a td td align center a href https github com pooja gera style text align center img src https avatars githubusercontent com u 55778515 v 4 width 200px br sub b pooja gera b sub a td td align center a href https github com titiksha01 style text align center img src https avatars githubusercontent com u 57208352 v 4 width 200px br sub b titiksha sharma b sub a td td align center a href https github com mansi35 style text align center img src https avatars githubusercontent com u 53896251 v 4 width 200px br sub b mansi sharma b sub a td tr table br hr br br div h1 img src readme images girl png width 32px mentees contributors h1 div h2 align center batch 2020 h2 table style table layout fixed width 100 tr td align center a href https github com anushkajain6459 style text align center img src https avatars githubusercontent com u 54318538 v 4 width 200px br sub b anushka jain b sub a td td align center a href https github com nightingirl style text align center img src https avatars githubusercontent com u 55595569 v 4 width 150px br sub b hiteshi saini b sub a td td align center a href https github com kritikaparihar04 style text align center img src https avatars githubusercontent com u 71223572 v 4 width 200px br sub b kritika parihar b sub a td td align center a href https github com megha raghav style text align center img src https avatars githubusercontent com u 71562767 v 4 width 200px br sub b megha raghav b sub a td td align center a href https github com prachiagarwal19 style text align center img src https avatars githubusercontent com u 70996961 v 4 width 200px br sub b prachi agarwal b sub a td tr tr td align center a href https github com preetigupta999 style text align center img src https avatars githubusercontent com u 65416320 v 4 width 150px br sub b preeti gupta b sub a td td align center a href https github com pnim28 style text align center img src https avatars githubusercontent com u 64418754 v 4 width 200px br sub b prerna nim b sub a td td align center a href https github com priyanshi sharma 142 style text align center img src https avatars githubusercontent com u 71590973 v 4 width 200px br sub b priyanshi sharma b sub a td td align center a href https github com riddhikwatra style text align center img src https avatars githubusercontent com u 62344381 v 4 width 200px br sub b riddhi kwatra b sub a td td align center a href https github com sapu30 style text align center img src https avatars githubusercontent com u 71216375 v 4 width 200px br sub b sapna kumari b sub a td tr tr td align center a href https github com shriya rai style text align center img src https 
avatars githubusercontent com u 48272365 v 4 width 200px br sub b shriya rai b sub a td td align center a href https github com sonali12920 style text align center img src https avatars githubusercontent com u 55687908 v 4 width 200px br sub b sonali b sub a td td align center a href https github com nutmegg style text align center img src https avatars githubusercontent com u 66939460 v 4 width 200px br sub b srishti chanda b sub a td td align center a href https github com vanichitkara style text align center img src https avatars githubusercontent com u 64951124 v 4 width 200px br sub b vani chitkara b sub a td td align center a href https github com yachika30 style text align center img src https avatars githubusercontent com u 66946112 v 4 width 200px br sub b yachika b sub a td tr table br hr br br h2 align center batch 2021 h2 table style table layout fixed width 100 tr td align center a href https github com ana git1906 style text align center img src https avatars githubusercontent com u 80242874 v 4 width 200px br sub b anandita khanooja b sub a td td align center a href https github com anusha 03 style text align center img src https avatars githubusercontent com u 75381481 v 4 width 200px br sub b anusha dixit b sub a td td align center a href https github com kaizengirl1111 style text align center img src https avatars githubusercontent com u 73153808 v 4 width 200px br sub b avni uplabdhee b sub a td td align center a href https github com maneet057 style text align center img src https avatars githubusercontent com u 77922626 v 4 width 200px br sub b maneet kaur b sub a td td align center a href https github com mansimandal style text align center img src https avatars githubusercontent com u 79393175 v 4 width 200px br sub b mansi mandal b sub a td tr tr td align center a href https github com navya32 style text align center img src https avatars githubusercontent com u 72815394 v 4 width 200px br sub b navya gupta b sub a td td align center a href https github com punervasingh style text align center img src https avatars githubusercontent com u 79405229 v 4 width 200px br sub b punerva singh b sub a td td align center a href https github com rochita sharma style text align center img src https avatars githubusercontent com u 69753947 v 4 width 200px br sub b rochita sharma b sub a td td align center a href https github com sakshi e glitch style text align center img src https avatars githubusercontent com u 79405440 v 4 width 200px br sub b sakshi pandey b sub a td td align center a href https github com riddhic15 style text align center img src https avatars githubusercontent com u 58457452 v 4 width 200px br sub b riddhi chaudhary b sub a td tr table br hr br br div h1 img src readme images send png width 25px find us elsewhere h1 div br p align center a href https www instagram com leanin igdtuw img src readme images instagram png width 15 alt instagram a a href https twitter com leaninigdtuw img src readme images twitter png width 13 alt twitter a a href https www linkedin com company lean in igdtuw mycompany img src readme images linkedin png width 14 alt linkedin a a href https www youtube com channel uccle63el4b2mmztmsatvksg img src readme images youtube png width 14 alt youtube a a href https github com lean in igdtuw img src readme images github png width 13 alt github a a href mailto leaninginigdtuw gmail com img src readme images gmail png width 12 alt gmail a p hr br br p align center img src http forthebadge com images badges built with love svg img src http 
forthebadge com images badges built by developers svg p br p align center show some if you like our work give some p | html5 css3 flask javascript git bootstrap nodejs | front_end |
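For readers new to the Flask sessions listed above, a minimal app of the kind introduced in such a session looks like the sketch below. It is purely illustrative; the actual course material lives in the linked session folders.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A single route returning plain text, the usual first Flask example.
    return "hello from the web development circle"

if __name__ == "__main__":
    app.run(debug=True)  # serves on http://127.0.0.1:5000 by default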
causal-text-papers | papers about causal inference and language a collection of papers and codebases about influence causality and language pull requests welcome table of contents 1 datasets and simulations datasets and simulations 2 learning resources and blog posts learning resources and blog posts 3 causal inference with text variables causal inference with text variables 1 text as treatment text as treatment 2 text as mediator text as mediator 3 text as outcome text as outcome 4 text as confounder text as confounder 3 causality to improve nlp causality to improve nlp 1 causal interpretations and explanations causal interpretations and explanations 2 sensitivity and robustness sensitivity and robustness 4 applications in the social sciences applications in the social sciences 1 linguistics linguistics 2 marketing marketing 3 persuasion argumentation persuasion argumentation 4 mental health mental health 5 psychology psychology 6 economics economics 7 bias and fairness bias and fairness 8 social media social media 9 law law 10 online hate speech online hate speech 5 potential connections to language potential connections to language 1 vectorized treatments vectorized treatments datasets and simulations type description code semi simulated given text amazon reviews extracts treatments 0 or 5 stars and confounds product type then samples outcomes sales conditioned on the extracted treatments and confounds git https github com rpryzant causal text blob main src simulation py fully synthetic samples outcomes treatments and confounds from binomial distributions then words from a uniform distribution conditioned on those sampled variables git https github com zachwooddoughty emnlp2018 causal blob master datasets py learning resources and blog posts title description code text and causal inference a review of using text to remove confounding from causal estimates https arxiv org pdf 2005 00649 pdf br katherine a keith david jensen and brendan o connor survey of studies that use text to remove confounding also highlights numerous open problems in the space of text and causal inference text feature selection for causal inference http ai stanford edu blog text causal inference br reid pryzant and dan jurafsky blog post about text as treatment operationalized through lexicons git https github com rpryzant causal selection econometrics meets sentiment an overview of methodology and applications https doi org 10 1111 joes 12370 br andres algaba david ardia keven bluteau samuel borms and kris boudt survey summarizing various methods to transform alternative data with a focus on text into a variable and use it in econometric models includes applications throughout git https github com sborms econometrics meets sentiment causal inference with text variables text as treatment title description code causal effects of linguistic properties https arxiv org pdf 2010 12919 pdf br reid pryzant dallas card dan jurafsky victor veitch dhanya sridhar develops an adjustment procedure for text based causal inference with classifier based treatments proves bounds on the bias git https github com rpryzant causal text challenges of using text classifiers for causal inference https arxiv org pdf 1810 00956 pdf br zach wood doughty ilya shpitser mark dredze looks at different errors that can stem from estimating treatment labels with classifiers proposes adjustments to account for said errors git https github com zachwooddoughty emnlp2018 causal deconfounded lexicon induction for interpretable social science https nlp
stanford edu pubs pryzant2018lexicon pdf br reid pryzant kelly shen dan jurafsky stefan wager looks at effect of text as manifested in lexicons or individual words proposes algorithms for estimating effects and evaluating lexicons git https github com rpryzant causal attribution how to make causal inferences using texts https arxiv org pdf 1802 02163 pdf br naoki egami christian j fong justin grimmer margaret e roberts and brandon m stewart also text as outcome covers assumptions needed for text as treatment and concludes that you should use a train test set discovery of treatments from text corpora https www aclweb org anthology p16 1151 pdf br christian fong justin grimmer propose a new experimental design and statistical model to simultaneously discover treatments in corpora and estimate causal effects for these discovered treatments the effect of wording on message propagation topic and author controlled natural experiments on twitter https arxiv org pdf 1405 1438 pdf br chenhao tan lillian lee and bo pang controls for confounding by looking at tweets containing the same url and written by the same user but employing different wording when do words matter understanding the impact of lexical choice on audience perception using individual treatment effect estimation https arxiv org abs 1811 04890 br zhao wang and aron culotta measure effect of words on reader s perception multiple quasi experimental methods compared git https github com tapilab aaai 2019 words text as mediator title description code adapting text embeddings for causal inference https arxiv org pdf 1905 12741 pdf br victor veitch dhanya sridhar and david blei also text as confounder adapts bert embeddings for causal inference by predicting propensity scores and potential outcomes alongside masked language modeling objective tensorflow https github com blei lab causal text embeddings br pytorch https github com rpryzant causal bert pytorch operationalizing complex causes a pragmatic view of mediation https arxiv org abs 2106 05074 br limor gultchin david watson matt kusner and ricardo silva can also be viewed as text as treatment develops a notion of pragmatic mediation which enables causal effect estimation when complex objects such as text image or genomics are involved across various intervention regimes identification of pragmatic mediators has an interpretability benefit which could guide the development of new interventions git https github com limorigu complexcauses text as causal mediators research design for causal estimates of differential treatment of social groups via language aspects https aclanthology org 2021 cinlp 1 2 pdf br katherine a keith douglas rice and brendan o connor proposes a causal research design for observational nonexperimental data to estimate the natural direct and indirect effects of social group signals e g race or gender on speakers responses with separate aspects of language as causal mediators text as outcome title description code estimating causal effects of tone in online debates https arxiv org pdf 1906 04177 pdf br dhanya sridhar and lise getoor also text as confounder looks at effect of reply tone on the sentiment of subsequent responses in online debates git https github com dsridhar91 debate causal effects how judicial identity changes the text of legal rulings https papers ssrn com sol3 papers cfm abstract id 2620781 br michael gill and andrew hall looks at how the random assignment of a female judge or a non white judge affects the language of legal rulings measuring
semantic similarity of clinical trial outcomes using deep pre trained language representations https www sciencedirect com science article pii s2590177x19300575 br anna koroleva sanjay kamath patrick paroubek text as confounder title description code causalnlp a practical toolkit for causal inference with text https arxiv org abs 2106 08043 br arun s maiya also text as treatment describes a toolkit for causal inference with text largely based on meta learners includes support for encoding text as a controlled for variable using traditional bow features in addition to a pytorch implementation of causal bert originally from r pryzant also includes convenience methods for easily transforming text into traditional numerical or categorical variables for use as treatment confounder outcome in a causal analysis e g sentiment topic emotion etc git https github com amaiya causalnlp text and causal inference a review of using text to remove confounding from causal estimates https arxiv org pdf 2005 00649 pdf br katherine a keith david jensen and brendan o connor survey of studies that use text to remove confounding also highlights numerous open problems in the space of text and causal inference adjusting for confounding with text matching https scholar princeton edu sites default files bstewart files textmatching preprint pdf br margaret e roberts brandon m stewart and richard a nielsen estimate a low dimensional summary of the text and condition on this summary via matching to remove confounding proposes a method of text matching topical inverse regression matching that matches both on the topical content and propensity score matching with text data an experimental evaluation of methods for matching documents and of measuring match quality https arxiv org pdf 1801 00644 br reagan mozer luke miratrix aaron russell kaufman l jason anastasopoulos characterizes and empirically evaluates a framework for matching text documents that decomposes existing methods into the choice of text representation and the choice of distance metric learning representations for counterfactual inference https arxiv org pdf 1605 03661 pdf also at http www jmlr org proceedings papers v48 johansson16 pdf br fredrik johansson uri shalit david sontag one of their semi synthetic experiments has news content as a confounder conceptualizing treatment leakage in text based causal inference https arxiv org pdf 2205 00465 pdf br adel daoud connor t jerzak and richard johansson characterize the problem of the leakage of treatment signal when controlling for text based confounders which may lead to issues in identification and estimation simulation study on how treatment leakage leads to issues with the estimation of the average treatment effect ate and how to mitigate this bias with text pre processing by assuming separability causality to improve nlp causal interpretations and explanations title description code towards trustworthy explanation on causal rationalization https arxiv org abs 2306 14115 br wenbo zhang tong wu yunlong wang yong cai hengrui cai this paper utilizes probability of causation to improve nlp self explaining models git https github com onepounchman causal retionalization causalm causal model explanation through counterfactual language models https arxiv org pdf 2005 13407 pdf br amir feder nadav oved uri shalit and roi reichart suggested a method for
generating causal explanations through counterfactual language representations git https github com amirfeder causalm causal mediation analysis for interpreting neural nlp the case of gender bias https arxiv org pdf 2004 12265 pdf br jesse vig sebastian gehrmann yonatan belinkov sharon qian daniel nevo yaron singer and stuart shieber uses causal mediation analysis to interpret nlp models git https github com sebastiangehrmann causalmediationanalysis causal bert language models for causality detection between events expressed in text https arxiv org pdf 2012 05453 pdf br vivek khetan roshni ramnani mayuresh anand subhashis sengupta andrew e fano this paper investigates the language model s capabilities for identification of causal association among events expressed in natural language text using only the sentence context sentence context combined with event information and by leveraging masked event context with in domain and out of domain data distribution sensitivity and robustness title description code robustness to spurious correlations in text classification via automatically generated counterfactuals https arxiv org abs 2012 10040 br zhao wang and aron culotta matching to identify causal terms then generate counterfactuals for training git https github com tapilab aaai 2021 counterfactuals identifying spurious correlations for robust text classification https arxiv org abs 2010 02458 br zhao wang and aron culotta matching to identify spurious word features git https github com tapilab emnlp 2020 spurious discovering and controlling for latent confounds in text classification using adversarial domain adaptation https epubs siam org doi pdf 10 1137 1 9781611975673 34 br virgile landeiro tuan tran and aron culotta control for unobserved confounders in text classification robust text classification under confounding shift https jair org index php jair article view 11248 br virgile landeiro and aron culotta control for changing confounders in text classification git https github com tapilab aaai 2016 robust learning the difference that makes a difference with counterfactually augmented data https arxiv org abs 1909 12434 br divyansh kaushik eduard hovy zachary c lipton introducing methods and resources for training models less sensitive to spurious patterns git https github com acmi lab counterfactually augmented data explaining the efficacy of counterfactually augmented data https arxiv org abs 2010 02114 br divyansh kaushik amrith setlur eduard hovy zachary c lipton explaining the efficacy of counterfactually augmented data for training models less sensitive to spurious patterns git https github com acmi lab counterfactually augmented data applications in the social sciences linguistics title description code decoupling entrainment from consistency using deep neural networks https arxiv org abs 2011 01860 br andreas weise rivka levitan isolated the individual style of a speaker when modeling entrainment in speech estimating causal effects of exercise from mood logging data https linqs soe ucsc edu sites default files papers sridhar causalml18 1 pdf br dhanya sridhar aaron springer victoria hollis steve whittaker lise getoor confounder text of mood triggers confounding adjustment method propensity score matching marketing title description code predicting sales from the language of product descriptions https nlp stanford edu pubs pryzant2017sigir pdf br reid pryzant young joo chung and dan jurafsky found features of product descriptions most predictive of sales while controlling for brand
price git https github com rpryzant causal attribution interpretable neural architectures for attributing an ad s performance to its writing style https nlp stanford edu pubs pryzant2018emnlp pdf br reid pryzant kazoo sone and sugato basu found features of ad copy most predictive of high ctr while controlling for advertiser and targeting git https github com rpryzant deconfounded lexicon induction tree master text performance attribution persuasion argumentation title description code influence via ethos on the persuasive power of reputation in deliberation online https arxiv org pdf 2006 00707 pdf br emaad manzoor george h chen dokyun lee michael d smith controls for unstructured argument text using neural models of language in the double machine learning framework healthcare title description code mimicause representation and automatic extraction of causal relation types from clinical notes https aclanthology org 2022 findings acl 63 br vivek khetan md imbesat rizvi jessica huber paige bartusiak bogdan sacaleanu andrew fano this work proposes annotation guidelines develops an annotated corpus and provides baseline scores to identify types and direction of causal relations between a pair of biomedical concepts in clinical notes communicated implicitly or explicitly identified either in a single sentence or across multiple sentences mental health title description code the language of social support in social media and its effect on suicidal ideation risk https www ncbi nlm nih gov pmc articles pmc5565730 br munmun de choudhury and emre kiciman confounder previous text written in a reddit forum confounding adjustment method stratified propensity scores matching discovering shifts to suicidal ideation from mental health content in social media https dl acm org doi pdf 10 1145 2858036 2858207 casa token zjklrg8laosaaaaa ecs8hsunryeued de6dx15 nprz1 mmjixfaexlpr25wwz6ywzqcjuzqwjjqiyibegxztokuld1h br munmun de choudhury emre kiciman mark dredze glen coppersmith mrinal kumar confounder user s previous posts and comments received confounding adjustment method stratified propensity scores matching psychology title description code increasing vegetable intake by emphasizing tasty and enjoyable attributes a randomized controlled multisite intervention for taste focused labeling https journals sagepub com doi pdf 10 1177 0956797619872191 br bradley turnwald jaclyn bertoldo margaret perry peggy policastro maureen timmons christopher bosso priscilla connors robert valgenti lindsey pine ghislaine challamel christopher gardner alia crum did rct on cafeteria food labels observing effect on how much of those foods students took a social media study on the effects of psychiatric medication use https www aaai org ojs index php icwsm article download 3242 3110 br koustuv saha benjamin sugar john torous bruno abrahao emre kiciman munmun de choudhury confounder users previous posts on twitter confounding adjustment method stratified propensity score matching economics title description code a deep causal inference approach to measuring the effects of forming group loans in online non profit microfinance platform https arxiv org pdf 1706 02795 br thai t pham and yuanyuan shen confounder microloan descriptions on kiva confounding adjustment method iptw and tmle on embeddings bias and fairness title description code unsupervised discovery of implicit gender bias https arxiv org pdf 2004 08361 pdf propensity score matching and adversarial learning to get a model to focus on bias instead of other artifacts tweetment
effects on the tweeted experimentally reducing racist harassment https link springer com article 10 1007 s11109 016 9373 5 br kevin munger did rct sending de escalation messages to racist twitter users changing the from user and observing effects on downstream behavior social media title description code estimating the effect of exercising on users online behavior http ls3 rnet ryerson ca wiki images e e0 ossm2017 amin pdf br seyed amin mirlohi falavarjani hawre hosseini zeinab noorian ebrahim bagheri confounder pre treatment topical interest shift confounding adjustment method matching on topic models distilling the outcomes of personal experiences a propensity scored analysis of social media https dl acm org doi pdf 10 1145 2998181 2998353 casa token u8icshz uguaaaaa i9qcf0uceh lykhte9aa5rnmxflvqfpw0tihtush lkmdv1f1o9ko9jpil nb8cx5rbtf4nn5jgq br alexandra olteanu onur varol emre kiciman confounder past word use on twitter confounding adjustment method stratified propensity score matching using longitudinal social media analysis to understand the effects of early college alcohol use http kiciman org wp content uploads 2018 10 college alcohol tweets icwsm18e pdf br emre kiciman scott counts melissa gasser confounder previous posts on twitter confounding adjustment method stratified propensity score matching using matched samples to estimate the effects of exercise on mental health from twitter https ojs aaai org index php aaai article view 9205 9064 br virgile landeiro and aron culotta confounder gender location profile confounding adjustment method matching git https github com tapilab aaai 2015 matching extraction of explicit and implicit cause effect relationships in patient reported diabetes related tweets from 2017 to 2021 deep learning approach https medinform jmir org 2022 7 e37201 br adrian ahne vivek khetan xavier tanner md imbessat hasan rizvi thomas czernichow francisco orchard charline bour andrew fano guy fagherazzi a cause effect dataset was manually labeled and augmented using active learning first sentences containing causal information causal sentences were detected by fine tuning a bertweet model secondly cause effect pairs were identified in causal sentences with several models tested lastly in a semi supervised approach cause effect pairs were aggregated to form a cause effect network which was visualised in d3 law title description code everything has a cause leveraging causal inference in legal text analysis https aclanthology org 2021 naacl main 155 pdf br xiao liu da yin yansong feng yuting wu dongyan zhao built causal graphs from legal descriptions automatically and disambiguated similar charges with the built graphs treatment confounders factors from legal descriptions git https github com xxxiaol gci online hate speech title description code a survey of online hate speech through the causal lens https arxiv org abs 2109 08120 br antigoni m founta lucia specia a survey of studies that measure causal effects related to online hate speech the survey also highlights potential knowledge gaps and issues and provides suggestions on how to further extend the causal perspective of hate speech robust cyberbullying detection with causal interpretation https dl acm org doi pdf 10 1145 3308560 3316503 br lu cheng ruocheng guo huan liu proposes a principled framework to identify and block the influence of plausible latent confounders on cyberbullying detection prevalence and psychological effects of hateful speech in online college communities https dl acm org doi pdf 10 1145
3292522 3326032 br koustuv saha eshwar chandrasekharan munmun de choudhury measure psychological effect of exposure to hate speech on reddit communities as increase in levels of stress confounders subreddit user activity confounding adjustment method propensity score matching potential connections to language vectorized treatments title description code graph intervention networks for causal effect estimation https arxiv org pdf 2106 01939 pdf br jean kaddour qi liu yuchen zhu matt j kusner ricardo silva generalizes the robinson decomposition as used in r learner or generalized random forests to vectorized treatments e g text images graphs git https github com jeankaddour gin | causality natural-language-processing | ai |
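The fully synthetic setup described in the datasets table of the causal-text-papers entry above (outcomes, treatments, and confounds drawn from binomial distributions, with words drawn uniformly conditioned on those variables) can be sketched in a few lines of numpy. This is an illustration only, not the code from the linked datasets py: the function name, probabilities, vocabulary slicing, and document length below are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_docs=1000, vocab_size=40):
    # Bernoulli (binomial n=1) confound, treatment, and outcome; treatment
    # and outcome probabilities depend on the sampled confound.
    z = rng.binomial(1, 0.5, size=n_docs)            # confound
    t = rng.binomial(1, np.where(z == 1, 0.8, 0.3))  # treatment | confound
    y = rng.binomial(1, 0.2 + 0.3 * t + 0.3 * z)     # outcome | treatment, confound
    # Words drawn uniformly from a vocabulary slice chosen by (t, z), so the
    # text carries signal about the sampled variables.
    docs = []
    for ti, zi in zip(t, z):
        lo = (2 * ti + zi) * (vocab_size // 4)
        docs.append(rng.integers(lo, lo + vocab_size // 4, size=20))
    return docs, t, y, z
```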
azure-iotedge | welcome to the home of azure iot edge a product built from the open source iot edge project https github com azure iotedge azure iot edge moves cloud analytics and custom business logic to devices so that your organization can focus on business insights instead of data management enable your solution to truly scale by configuring your iot software deploying it to devices via standard containers and monitoring it all from the cloud documentation documentation can be found at https docs microsoft com azure iot edge issues issues can be filed in the issues section https github com azure iotedge issues of the iot edge github repo azure iot edge is built from the open source iot edge project development and bug fixing happens in the repo of the open source project feature requests feature requests can be filed on the azure iot edge user voice page https feedback azure com forums 907045 azure iot edge data telemetry iot edge sends basic metadata to microsoft about the host device this data may be used to improve microsoft products and services this data includes cpu architecture total memory virtualized machine kernel name kernel release kernel version os name os version os variant os build system manufacturer system product to opt out set the disabledeviceanalyticsmetadata environment variable on edgeagent https github com azure iotedge blob main doc environmentvariables md to learn more about microsoft policies review the privacy statement https go microsoft com fwlink linkid 521839 clcid 0x409 | server |
|
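For the azure-iotedge entry above: the readme says telemetry can be disabled by setting the disabledeviceanalyticsmetadata environment variable on edgeAgent. Module environment variables are set in the deployment manifest; the fragment below, written as a Python dict purely for illustration, shows roughly where that setting would go. The surrounding manifest shape here is from memory of typical manifests and should be verified against the linked documentation.

```python
# Illustrative fragment of an IoT Edge deployment manifest (shape assumed,
# not taken from the readme above -- check the official docs).
edge_agent_fragment = {
    "systemModules": {
        "edgeAgent": {
            "type": "docker",
            "env": {
                # Opt out of the device metadata telemetry described above.
                "DisableDeviceAnalyticsMetadata": {"value": "true"}
            },
        }
    }
}
```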
news-similarity | news similarity github project with all code needed for the final degree project news similarity with natural language processing it tries to establish a distance between the content of politics news articles so that a network of similarities can be built this project has been organised in four python libraries detailed here newsparser defines classes feed and entry to extract entries from an rss feed saving all the necessary metadata and the article text in the central database newsfilter defines classes and methods to filter those entries that should not be considered in the system such as entries with a broken link that had no title that had no meaningful content etc newstagger defines a flask http server and its pages to allow easy creation of a tagged dataset for the creation of the system newsbreaker defines functions and classes that inherit from the entry class in newsparser and allow easy access to its content its counters of words its what who where vectors and some methods to compute the distance between two of them all python code is python 3 all data used for this project will be stored in a zip file in the v1 release on this github page except for the wikipedia articles database which the code can download automatically and would have taken a lot of space for more details on this system check the project report report pdf to check the project visualisations online please go here http aparafita github io news similarity | ai
|
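For the news-similarity entry above: the readme describes entries that expose word counters and methods to compute a distance between two of them. The sketch below shows one plausible shape for such an API; the class name, fields, and the plain cosine distance are assumptions for illustration, not the project's actual newsbreaker code (which also uses what/who/where vectors).

```python
from collections import Counter
from math import sqrt

class Entry:
    """Illustrative stand-in for the Entry subclass described above."""
    def __init__(self, text):
        self.words = Counter(text.lower().split())

    def distance(self, other):
        # Cosine distance over word counts, in [0, 1]; smaller means
        # the two articles use more similar vocabulary.
        common = set(self.words) & set(other.words)
        dot = sum(self.words[w] * other.words[w] for w in common)
        norm_a = sqrt(sum(c * c for c in self.words.values()))
        norm_b = sqrt(sum(c * c for c in other.words.values()))
        if norm_a == 0 or norm_b == 0:
            return 1.0
        return 1.0 - dot / (norm_a * norm_b)
```

For example, Entry("markets fell today").distance(Entry("markets rallied today")) returns about 0.33 under this sketch.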
LockE | locke ios app controller to drive the rover for project locke from the engineering explorers program the app will be available for download on the app store instructions for using the app the app was designed to communicate with an arduino via the sh hc 08 bluetooth module from the landing screen users can press search in the upper right hand corner to view pairable bluetooth low energy modules the user then chooses to connect to a sh hc 08 module and presses done the user is taken back to the landing screen which is also a controller for the rover the 4 side buttons are meant to control the motors collection servos the middle button will trigger the ir led to fire | os
|
go-graphql-react | go graphql react a template web app with a react typescript frontend and a go graphql backend that uses sqlboiler https github com volatiletech sqlboiler alongside gqlgen https github com 99designs gqlgen for a fast database schema driven development style stack golang backend server postgres database sqlboiler https github com volatiletech sqlboiler to generate go orm models based on database tables gqlgen https github com 99designs gqlgen to generate a graphql api based on the models gorilla sessions https github com gorilla sessions for authentication sign up log in log out resolvers are implemented react typescript frontend apollo graphql client development sqlboiler and gqlgen work really well together to generate nearly an entire graphql api from the database 1 add postgres table definitions to the migrations folder 1 generate orm based on tables in the database make db 1 add a graphql schema in the api folder that reflects the generated orm 1 generate a graphql server from the schema that binds to the orm make graphql bonus complete 2 4 with make models 1 fill in the graphql resolvers using the orm in internal resolvers running 1 create a new file env sh at the top level directory copy in env example sh and fill in the missing variables 1 start the react app make run client and start the backend make run server migrations are run automatically with golang migrate https github com golang migrate migrate | sqlboiler gqlgen graphql golang typescript | server |
awesome-web-components | awesome web components badges github license https github com saadpasta awesome web components blob master license md github stars github forks web component and snippets for every front end developer react snippets javascript snippets html and css snippets contribution do you have javascript code that you want to share with everybody have you developed an awesome card and want to help others please don t hesitate to open a pull request https github com saadpasta web components pulls contributions of any kind are welcomed contribution guide fork this repository to your github account clone the forked repo from your account using bash git clone https github com your username web components git create a folder and put your files inside the folder try to include readme md in your folder to help other developers create an awesome merge request contributors thanks goes to these wonderful people emoji key https allcontributors org docs en emoji key saad pasta https github com saadpasta code saeeddev http linkedin com in saeeddev code hashir shoaib https www facebook com hashir shoaeb documentation mohamed abdel nasser https www linkedin com in mohamedsgap code zaid akhter https github com zaidakhterr code hanzla https hanzla now sh code license this project is licensed under the mit license see the license license file for details | snippets javascript javascipt html5 html-css-javascript css3 react reactjs frontend front-end-development | front_end
modernWebDevGenerator | modern web dev generator badges npm version and downloads https www npmjs com package generator modern web dev build status https travis ci org dsebastien modernwebdevgenerator coverage status https coveralls io github dsebastien modernwebdevgenerator branch master dependency and devdependency status https david dm org dsebastien modernwebdevgenerator gitter https gitter im dsebastien modernwebdevgenerator license license md about modernwebdevgenerator is a yeoman http yeoman io generator that will help you quickly get up and running with modernwebdevbuild https github com dsebastien modernwebdevbuild projects created with this yeoman generator will be able to directly leverage the awesome gulp based build provided by the modernwebdevbuild https github com dsebastien modernwebdevbuild project which includes many tasks and features out of the box e g transpilation of typescript es2015 to es5 sass transpilation to css minification bundling code quality code style checks sourcemaps support for unit testing this project comes with a fully working angular 2 configuration this generator includes all the folders files listed by modernwebdevbuild https github com dsebastien modernwebdevbuild as mandatory as well as the recommended ones so as to promote good practices readme md files are placed in multiple locations to describe what to put where provide some guidance design guidelines the generated projects also include a working setup of angular 2 this might later move to a separate sub generator a root component app core boot ts a home page app pages home ts a basic component router configuration a good html5 boilerplate https html5boilerplate com a good sass styling starting point an embedded folder structure and design guidelines componentization separation of concerns naming conventions a set of typescript code style quality rules tslint json a set of es2015 compliant code style quality rules jscsrc and jshintrc the general idea is that you can remove anything you don t need assuming it s not in the list of mandatory folders files of modernwebdevbuild https github com dsebastien modernwebdevbuild otherwise you ll break the build any feedback contributions are welcome to improve the project so don t hesitate this project is available as an npm package check out the usage instructions below demo a video demo of how to install use this project and the modern web dev build is available at http www youtube com watch v wc5itinyobw background please note that this project is heavily inspired from google s web starter kit https github com google web starter kit
html5 boilerplate https html5boilerplate com brad frost s atomic design http bradfrost com blog post atomic web design nicolas gallagher s suit css https github com suitcss countless blog articles dan wahlin https twitter com danwahlin s typescript posts course introduction to typescript https www edx org course introduction typescript microsoft dev201x 0 a gazillion gulp articles many others i m forgetting status roadmap check out the issues labels milestones to get an idea of what s next for existing features refer to the previous sections check out the change log changelog md usage in order to use this generator you first need to install yeoman bash npm install global yo once yeoman is installed you can install this generator bash npm install global generator modern web dev you will also need to install gulp globally bash npm install global gulp create a new folder go into it then invoke the generator by running the following command bash yo modern web dev once you ve answered all the questions the generator will do its thing once done you ll be able to run the development web server and start hacking away using bash npm start enjoy note that the modernwebdevbuild https www npmjs com package modern web dev build project has other tricks in store for you be sure to check out the docs https www npmjs com package modern web dev build options there are two main approaches to use this generator interactive mode the generator asks you all the questions batch mode you provide the information directly to the generator in practice nothing prevents you from mixing both though if you pass a setting directly to the generator it will not prompt you for that value you can list all the options with a brief description using yo modern web dev help by default the generator will install all project dependencies not the global requirements listed in the usage section you can skip the installation of the project dependencies by passing the skip install option the generator will check for updates once in a while but you can disable the update check by passing the following flag no update notifier generated projects dev dependencies gulp http gulpjs com javascript task runner babel https babeljs io es2015 to es5 transpiler needed so that the gulp configuration file can be written using es2015 gulpfile babel js nodemon https www npmjs com package nodemon monitoring of certain files used by npm scripts defined in package json https www npmjs com package nodemon generated projects runtime dependencies the following dependencies are managed by jspm in the jspm section of the package json file angular 2 https angular io rxjs https github com reactive extensions rxjs reactive extensions forget about promises and use observable the future of async in javascript normalize css nicolas gallagher s normalize css alternative to css resets https www npmjs com package normalize css generated projects configuration files the project includes multiple configuration files for more details about the configuration files check out the modernwebdevbuild https github com dsebastien modernwebdevbuild s documentation here s some high level information about these babelrc babel configuration file gulpfile babel js gulp s configuration file this is where the modern web dev build tasks are loaded package json npm s configuration file this is where all dependencies are defined project ones under jspm and build related ones under devdependencies more information https docs npmjs com files package json dockerignore files that are ignored
by docker when creating images editorconfig helps configure code style for various text editors more information here http editorconfig org gitattributes allows you to define git attributes per path more information here http git scm com docs gitattributes gitignore configures files folders that are ignored by git jscsrc configuration file for jscs it defines the js code style more information http jscs info overview html options note that it is configured to use es next es2015 rules reference http jscs info rules html news https github com jscs dev node jscs blob master changelog md jshintrc jshint configuration rules reference http jshint com docs options more information http jshint com docs jshintignore stuff that jshint should ignore travis yml travis ci configuration files more information http docs travis ci com user build configuration dockerfile and dockerfiledev docker configuration files used to describe how docker images should be created for this project more information https www docker com and http docs docker com reference builder jspm conf js jspm systemjs configuration file karma conf js karma test runner configuration file runondocker sh and rundevondocker sh build scripts that create run docker images tsconfig json typescript compiler configuration contains all compiler options code style rules and file selection exclusion rules bypassed by the gulp typescript plugin http json schemastore org tsconfig https github com microsoft typescript wiki tsconfig json typings json typings configuration file list of typescript type definitions files to retrieve tslint json typescript code style configuration more information https www npmjs com package tslint makefile for nix aficionados adding project dependencies as you go along you ll surely need to add new dependencies for your application if the dependency you want to add is required at runtime then you should use jspm to add it installing a dependency with jspm is as simple as jspm install x for more information about jspm check out the official site http jspm io contributing take a look at the project s open issues https github com dsebastien modernwebdevgenerator issues and milestones https github com dsebastien modernwebdevgenerator milestones if you know what to do then fork the project create a feature branch in your fork rebase if needed to keep the project history clean commit your changes push to github try and flood me with pull requests building from source if you want to build from source you need to install nodejs 4 2 and npm 3 clone this git repository install gulp npm install global gulp run npm run setup start hacking releasing a version commit all changes to include in the release edit the version in package json respect semver update changelog md commit git tag version git push tags draft the release on github add description etc npm publish authors sebastien dubois blog https www dsebastien net twitter https twitter com dsebastien github https github com dsebastien license this project and all associated source code is licensed under the terms of the mit license https en wikipedia org wiki mit license | yeoman-generator yeoman angular | front_end
weekly | logo assets img logo png badges segmentfault https segmentfault com u boohee juejin https juejin im user 5b1de502e51d4506cf10bc34 issues https github com booheefe weekly issues watch https github com booheefe weekly watchers star https github com booheefe weekly stargazers weekly 2019 04 23 android https github com booheefe weekly issues 36 2019 04 18 socket io https github com booheefe weekly issues 35 2019 03 30 https github com booheefe weekly issues 33 2019 03 14 https github com booheefe weekly issues 31 2019 01 14 https github com booheefe weekly issues 30 2018 12 21 promise https github com booheefe weekly issues 29 2018 12 16 babel7 https github com booheefe weekly issues 28 2018 12 10 redux https github com booheefe weekly issues 27 2018 11 27 canvas https github com booheefe weekly issues 26 2018 11 20 webpack webpack https github com booheefe weekly issues 25 2018 11 05 https github com booheefe weekly issues 24 2018 11 05 webpack https github com booheefe weekly issues 23 2018 11 05 react vue https github com booheefe weekly issues 22 2018 10 26 react https github com booheefe weekly issues 21 2018 10 23 https github com booheefe weekly issues 20 2018 10 15 koa https github com booheefe weekly issues 19 2018 10 15 https github com booheefe weekly issues 18 2018 09 25 https github com booheefe weekly issues 17 2018 09 18 echarts bizcharts g2 https github com booheefe weekly issues 16 2018 09 09 https github com booheefe weekly issues 15 2018 09 07 react 0 1 github star https github com booheefe weekly issues 14 2018 09 04 7 js https github com booheefe weekly issues 13 2018 08 13 vue ui https github com booheefe weekly issues 12 2018 08 06 chrome devtools performance https github com booheefe weekly issues 11 2018 07 29 iconfont https github com booheefe weekly issues 10 2018 07 22 vue cli https github com booheefe weekly issues 9 2018 07 15 javascript https github com booheefe weekly issues 8 2018 07 08 try catch https github com booheefe weekly issues 7 2018 07 06 web https github com booheefe weekly issues 6 2018 06 24 0 npm https github com booheefe weekly issues 5 2018 06 16 vue https github com booheefe weekly issues 4 2018 06 11 https github com booheefe weekly issues 3 2018 06 08 10 js https github com booheefe weekly issues 2 2018 06 07 javascript https github com booheefe weekly issues 1 weekly issue template https github com booheefe weekly issues new template new article md | weekly font-end | front_end
cloud-mobile-end2end-sample | mahlwerk cloud mobile end to end sample reuse status https api reuse software info github com sap samples cloud mobile end2end sample description sap mobile services provides multiple offerings for you to mobilize your data however when the options are aplenty choosing the correct offering becomes crucial thus we have defined a custom use case and built mobile solutions using all of our offerings experiencing these applications on your own devices will help you identify the right product for you use case mahlwerk is a coffee machine vendor and sells the machines through retail stores mahlwerk wants to use mobile technologies to coordinate its services with the customer salesperson and technician mahlwerk description image images mahlwerk png personas overview mahlwerk personas image images personas png user stories mahlwerk user story image images user story png architecture mahlwerk architecture image images architecture png prerequisites 1 set up sap btp trial account https developers sap com tutorials hcp create trial account html 2 access sap mobile services https developers sap com tutorials fiori ios hcpms setup html 3 set up sap btp sdk for android https developers sap com tutorials cp sdk android wizard app html 4 set up sap btp sdk for ios https developers sap com group ios sdk setup html 5 set up for the sap mobile development kit https developers sap com group mobile dev kit setup html 6 set up sap mobile cards https developers sap com tutorials cp mobile cards setup html download and installation to download and install the applications follow the instructions given in the readme file of the applications 1 backend odata service 2 salesperson mobile development kit app 3 customer orders mobile cards 4 customer machine mobile cards 5 customer tickets mobile cards 6 technician android app 7 technician ios app known issues 1 mobile cards take some time to load 2 after every few days the odata backend resets the data present in it 3 once the data resets the technician app both android and ios crashes when you try to open the pear computing services repair task how to obtain support open an issue https github com sap samples cloud mobile end2end sample issues in this repository if you find a bug or have questions about the content contributing the repository is open for contribution to contribute to the repository you can create a fork and then create a pull request with all your changes the administrator of the repository will look into the pull request and will merge your changes to do the following new features are planned 1 user propagation in salesperson s and technician s app currently the salesperson s and technician s application shows a default user name and not of the one who has actually logged in the upcoming version will have user propagation implemented and the user who is logged in will see their details 2 secure backend the authentication feature will be added into the backend to make it more secure the feature to connect your odata backend with the sql or hana database will also be provided 3 customization of mobile cards the mobile cards application will be
customized to give it a look and feel of mahlwerk s application 4 listing job details in technician s app in the technician s application to resolve any task the feature to list the jobs steps and tools required by the technician will be added license copyright c 2021 sap se or an sap affiliate company all rights reserved this project is licensed under the apache software license version 2 0 except as noted otherwise in the license licenses apache 2 0 txt file | sample sap-cloud-platform mobile-services sap-btp-sdk-for-android sap-btp-sdk-for-ios sap-btp sap-mobile-backend mobile-cards odata-backend android sap-mobile-services | front_end |
Class-EmbeddedSystemDesign | class embeddedsystemdesign program for embedded system design class | os |
|
Python-Machine-Learning-Cookbook | python machine learning cookbook python machine learning cookbook by packt publishing instructions and navigation this is the code repository for python machine learning cookbook https www packtpub com big data and business intelligence python machine learning cookbook utm source github utm medium repository utm campaign 9781786464477 published by packt it contains all the supporting project files necessary to work through the book from start to finish the code files are organized according to the chapters in the book these code samples will work on any machine running linux mac os x or windows even though they are written and tested on python 2 7 you can easily run them on python 3 x with minimal changes to run the code samples you need to install scikit learn http scikit learn org stable install html numpy http www scipy org scipylib download html scipy http www scipy org install html and matplotlib http matplotlib org downloads html for chapter 6 you will need to install nltk http www nltk org install html and gensim https radimrehurek com gensim install html to run the code in chapter 7 you need to install hmmlearn http hmmlearn readthedocs org en latest and python speech features http python speech features readthedocs org en latest for chapter 8 you need to install pandas http pandas pydata org getpandas html and pystruct https pystruct github io installation html chapter 8 also makes use of hmmlearn for chapters 9 and 10 you need to install opencv http opencv org downloads html for chapter 11 you need to install neurolab https pythonhosted org neurolab install html description machine learning is becoming increasingly pervasive in the modern data driven world it is used extensively across many fields like search engines robotics self driving cars and so on during the course of this book you will learn how to use python to build a wide variety of machine learning applications to solve real world problems you will understand how to deal with different types of data like images text audio and so on we will explore various techniques in supervised and unsupervised learning we will learn machine learning algorithms like support vector machines random forests hidden markov models conditional random fields deep neural networks and many more we will discuss visualization techniques that can be used to interact with your data using these algorithms we will discuss how to build recommendation engines perform predictive modeling build speech recognizers perform sentiment analysis on text data develop face recognition systems and so on you will understand what algorithms to use in a given context with the help of this exciting recipe based guide you will learn how to make informed decisions about the type of algorithms you need to use and learn how to implement those algorithms to get the best possible results stuck while making sense of images text speech or some other form of data this guide on applying machine learning techniques to each of these will come to your rescue the code is well commented so you will be able to get it up and running easily the book contains all the relevant explanations of the algorithms that are used to build these applications there is a lot of debate going on between python 2 x and python 3 x while we believe that the world is moving forward with better versions coming out a lot of developers still enjoy using python 2 x a lot of operating systems have python 2 x built into them it also helps in maintaining compatibility
with python libraries that haven t been ported to python 3 x keeping that in mind the code in this book is oriented towards python 2 x we have tried to keep all the code as agnostic as possible to the python versions so that python 3 x users won t face too many issues we are focused on utilizing the machine learning libraries in the best possible way in python related python machine learning products python machine learning https www packtpub com big data and business intelligence python machine learning utm source github utm medium repository utm campaign 9781783555130 mastering python machine learning https www packtpub com big data and business intelligence mastering python machine learning utm source github utm medium repository utm campaign 9781783555130 opencv with python by example https www packtpub com application development opencv python example utm source github utm medium repository utm campaign 9781785283932 download a free pdf if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost simply click on the link to claim your free pdf https packt link free ebook 9781786464477 | ai
|
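To give a flavor of the recipes the Python-Machine-Learning-Cookbook entry above describes (classifiers such as support vector machines built with scikit-learn), here is a minimal, self-contained example. It is not taken from the book's code files, and it uses the modern sklearn.model_selection import; the book's Python 2.7-era code imports train/test splitting from the older sklearn.cross_validation module.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Minimal supervised-learning recipe: split a toy dataset, fit an SVM,
# and report held-out accuracy.
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=5)

classifier = SVC(kernel="rbf", gamma="auto")
classifier.fit(X_train, y_train)
print("test accuracy:", classifier.score(X_test, y_test))
```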
Q-Blockchain | q blockchain what you need and notes for your questions and help reach me on telegram https t me h ecre mccg4zta0 i have been following q blockchain since its unrewarded testnet think of what you do here as registration registration ends on december 31 the testnet starts on january 1 and runs until march 31 so it means keeping a node running for a long stretch the team says in its article that this will be met with rewards i do not think that is guaranteed the risk is yours joining this testnet or not is entirely your own decision there will reportedly be kyc once the testnet ends before the reward period there are key details in the article https medium com q blockchain q blockchain validator onboarding program part 1 validator incentivized testnet 567ef6e4002e decide for yourself how sensible joining is project discord discord channel https discord gg prkzrahj fork and star this repo from the top right and do not forget to open a pull request for anything you find missing system requirements note no official info i tested manually on hetzner 3 cpu cores are safe if you have them otherwise 2 cpu 2 gb ram we set the variables choose a password and edit the part that says password password choosepassword echo export password yourpassword home bash profile source home bash profile run the updates one by one in some updates press y and enter at the y n prompts sudo apt update sudo apt upgrade apt install docker compose apt install npm apt install screen sudo apt get update sudo apt install jq sudo apt install apt transport https ca certificates curl software properties common y curl fssl https download docker com linux ubuntu gpg sudo apt key add sudo add apt repository deb arch amd64 https download docker com linux ubuntu focal stable sudo apt get install docker ce docker ce cli containerd io docker compose plugin sudo apt get install docker compose plugin we create the binary and the pwd file enter the commands one by one and edit the password part git clone https gitlab com q dev testnet public tools git cd testnet public tools testnet validator mkdir keystore cd keystore echo yourpassword pwd txt create the wallet and save its details then get tokens for the newly created 0x wallet faucet https faucet qtestnet org cd docker run entrypoint rm v pwd data it qblockchain q client testnet geth account new datadir data password data keystore pwd txt we will edit the configuration file enter the key without the 0x prefix in the address field exit with ctrl x y enter then cp env example env nano env image https user images githubusercontent com 101149671 206860212 79018b15 b65d 4291 8054 8785b0078153 png same procedure edit the address and password fields address is your address without 0x and the password is the one we set above exit with ctrl x y enter then nano config json image https user images githubusercontent com 101149671 206860284 853e9661 3f8a 4d0d b343 9adf93ff62ea png let us stake our tokens if this command does not work it means the configuration files env and config json above were filled in incompletely docker run rm v pwd data v pwd config json build config json qblockchain js interface testnet validators js now we create the private key cd cd testnet public tools chmod x run js tools in docker sh run js tools in docker sh npm install do not forget to edit the 0x wallet and password parts here at the end of this process a folder named pk will be created detach with ctrl a d once npm finishes chmod x extract geth private key js node extract geth private key 0xwallet testnet validator yourpassword connect to your server with winscp or mobaxterm the file will be under root testnet public tools js tools clicking into it gives us a key image https user images githubusercontent com 101149671 206860533 1c06a2ed 4f60 42b9 95e6 2ad3429a5127 png now we need a metamask wallet you can use your testnet wallet for this or create a new one click profile at the top right and choose import account enter the key we just took from the pk folder and create the account image https user images githubusercontent com 101149671 206860604 caebf5ca f43d 4efd 9ce1 cf6a3e87fab2 png then apply here https itn qdev li make sure your testnet wallet is correct you will see a screen like this image https user images githubusercontent com 101149671 206860707 60d24966 f27c 4348 90b1 1fd45428df8a png this part is critical and important cd cd testnet public tools cd testnet validator nano docker compose yaml go to the comma after geth leave a space add a quote mark enter the ethstats command from the form then add another quote mark and leave a space example geth ethstats itn ruesvalidator 9 qstats testnet stats qtestnet org exit with ctrl x y enter image https user images githubusercontent com 101149671 206860778 bd49a825 7c2c 4d68 b5c8 b7a3dd2a2cf4 png we start it screen s q docker compose up d docker compose logs f let us check on the explorer explorer https stats qtestnet org it is a bit slow and heavy expect to wait roughly half an hour for your color to turn green over time you go red yellow green if it shows you as still looking for peers wait a little it will peer up if you have trouble finding your validator name on the explorer press ctrl f type your name and find it then hover over the circle next to your name marked below and click click to pin so your name stays at the top kkk https user images githubusercontent com 98269269 207414985 60d423e6 facb 4292 be91 999209e9fe29 png if you hit the error below while filling out the form you need to change the identity name you used or check your addresses you can fix it by changing the characters you used or by verifying your validator address dff https user images githubusercontent com 98269269 207157285 76e4d6b2 bf65 4155 84b7 59f36fbae211 jpg | blockchain |
|
BayesianLearning | bayesianlearning deepbayesianlearning great course http deepbayes ru materials roadmap https www metacademy org roadmaps rgrosse bayesian machine learning course bayesian methods in machine learning http www cse wustl edu garnett cse515t advanced machine learning http www seas harvard edu courses cs281 machine learning and probabilistic model http www cedar buffalo edu srihari cse574 index html probabilistic graphical models https sailinglab github io pgm spring 2019 lectures book pattern recognition and machine learning prml by christopher m bishop bayesian reasoning and machine learning brml by david barber freely available online http www cs ucl ac uk staff d barber brml machine learning a probabilistic perspective mlapp by kevin p murphy generalized linear model http data princeton edu wws509 notes tools http mc stan org deep learning bayesian reasoning and deep learning http blog shakirm com wp content uploads 2015 10 bayes deep pdf discussion on bayesian and deep learning https www quora com why are very few schools involved in deep learning research why are they still hooked on bayesian methods deep learning course nlp http deeplearning cs cmu edu unsupervised learning http ufldl stanford edu tutorial cnn http vision stanford edu teaching cs231n reinforcement learning http www0 cs ucl ac uk staff d silver web teaching html deep reinforcement learning http rll berkeley edu deeprlcourse video https www youtube com playlist list pl6xpj9i5qxyecohn7tqghaj6naprnmubh more advanced topic modeling http mimno infosci cornell edu info6150 bayesian analysis for nlp http www cs columbia edu scohen bayesian advanced nlp bayesian methods https courses engr illinois edu cs598jhm sp2013 index html inference exact inference exact to get the posterior distribution belief propagation for trees variable elimination algorithm junction tree algorithm approximate inference approximate the posterior distribution variational inference mean field approximation structured variational approximation expectation propagation variational bayes for bayesian model markov random field variational message passing loopy belief propagation sampling methods approximate samples from the posterior distribution markov chain monte carlo gibbs sampling rejection sampling particle filtering maximum likelihood expectation maximization gradient descent related to gradient descent stochastic gradient variational inference hoffman m bach f and blei d online learning for latent dirichlet allocation stochastic gradient mcmc algorithms stochastic gradient riemannian langevin dynamics on the probability simplex | ai
|
cloud-devops | paguos https circleci com gh paguos cloud devops svg style svg https circleci com gh paguos cloud devops cloud devops engineering repository for material for the udacity nano degree of cloud devops engineering cloudformation the following shortcuts to interact with cloud formation are available using make create delete list update the commands support the cnf stack flag e g sh make create cnf stack network file structure each project needs the following files templates stored in the cnf project name stacks directory parameters stored in the cnf project name configs directory micro services at scale using k8s ml k8s project ml k8s readme md | cloud
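as an illustration of what such a shortcut might expand to, a hedged sketch assuming the cnf/project/stacks and cnf/project/configs layout described above; the stack name and file names are made up for the example and are not taken from the repo

```sh
# illustrative only: "make create cnf-stack=network" could wrap the AWS CLI like this
aws cloudformation create-stack \
  --stack-name network \
  --template-body file://cnf/network/stacks/network.yml \
  --parameters file://cnf/network/configs/network.json
```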
|
2017Spring | deep learning for computer vision cs 763 module spring 2017 course information please refer to the a href https www cse iitb ac in ajitvr cs763 spring2017 cs763 spring 2017 a course page for general information on this page you will find specific information for the lectures that will be taught by a href http www cse iitb ac in ajain arjun jain a ul li b instructor b a href http www cse iitb ac in ajain arjun jain a li b office b 216 cse new building li b email b i ajain cse dot iitb dot ac dot in i li b instructor office hours in room 216 cse new building b arjun is on campus only on fridays and saturdays for the 3 weeks during this course meet him after class or fix an appointment over email ul topics to be covered tentative ul li the data driven paradigm feed forward networks back propagation and chain rule li cnns and their building blocks relu maxpool convolution crossentropy etc li vanishing gradients residual blocks visualizing and understanding cnns li cnn applications cnn compression and if time permits siamese and triplet networks ul learning materials and textbooks ul li lecture slides that will be regularly posted li a href http www deeplearningbook org deep learning a by ian goodfellow and yoshua bengio and aaron courville freely downloadable li all a href https github com facebook itorch itorch a notebooks for topics covered in class can be found a href https github com cs763 dl 2017spring tree master notebooks here a ul tutorials and other useful resources ul li a href http tylerneylon com a learn lua learn lua in 15 minutes a li a href https github com torch torch7 blob master doc tensor md torch s tensor class a li a href https github com torch torch7 blob master doc maths md torch s math library a li a href https github com terryum awesome deep learning papers awesome deep learning papers a ul assignment we will use kaggle to host the leader board for the assignment you will be able to submit your model predictions a href https inclass kaggle com c cse763 cifar10 on kaggle a using your iitb ac in email ids and check where you stand as compared to others in real time complete details about the assignment can be found a href https github com cs763 dl 2017spring blob master assignment cs763 assignment 4 pdf here a sample data can be downloaded from a href https github com cs763 dl 2017spring raw master assignment cs763deeplearninghw tar gz here a due date 14 april 2017 23 55 lecture schedule table tbody tr th date th th topics th th slides th th itorch notebooks th tr tr td 24 03 2017 td td ul li brief history achievements of dl based algorithms li li the data driven paradigm li li classification using k nearest neighbor algorithm li li classification using a linear classifier li li optimization using vanilla gradient descent li li feed forward networks li ul td td a href https github com cs763 dl 2017spring blob master slides lec 1 pdf slides a td td ul li a href https github com cs763 dl 2017spring blob master notebooks nn ipynb vanilla knns a li li a href https github com cs763 dl 2017spring blob master notebooks manual update ipynb numerical gradients a li ul td tr tr td 25 03 2017 td td ul li chain rule backward propagation li li torch7 nn module li li activation functions li ul td td a href https github com cs763 dl 2017spring blob master slides lec 2 pdf slides a td td ul li a href https github com cs763 dl 2017spring blob master notebooks relu ipynb relu activation function a li ul td tr tr td 31 03 2017 td td ul li fully connected layer li li
convolution layer li li fully connected as convolution li ul td td a href https github com cs763 dl 2017spring blob master slides lec 3 pdf slides a td td ul li a href https github com cs763 dl 2017spring blob master notebooks linear ipynb fully connected layer a li li a href https github com cs763 dl 2017spring blob master notebooks convolution ipynb convolution layer a li ul td tr tr td 01 04 2017 td td ul li max pool layer li li cross entropy layer li li dropout layer li li data preprocessing and augmentation li li weight initialization li li baby sitting the learning process li li hyper parameter optimization li ul td td a href https github com cs763 dl 2017spring blob master slides lec 4 pdf slides a td td ul li a href https github com cs763 dl 2017spring blob master notebooks maxpool ipynb max pool layer a li li a href https github com cs763 dl 2017spring blob master notebooks dropout ipynb dropout layer a li li a href https github com cs763 dl 2017spring blob master notebooks weightinit ipynb weight initialization a li ul td tr tr td 07 04 2017 td td ul li cnn case studies li myth buster you need a lot of data to train convnets false li visualizing and understanding convnets ul td td a href https github com cs763 dl 2017spring blob master slides lec 5 pdf slides a td td td tr tr td 08 04 2017 td td ul li visualizing and understanding convnets cont li art fun with nns neuralart deepdream li cnn compression ul td td a href https github com cs763 dl 2017spring blob master slides lec 6 pdf slides a td td td tr tbody table | ai
|
NLPF | natural language processing fundamentals sessions 1 fundamental concepts python examples https github com satuelisa nlpf blob main nlpf 01 p ipynb r examples https github com satuelisa nlpf blob main nlpf 01 r ipynb 2 topic detection r examples https github com satuelisa nlpf blob main nlpf 02 r ipynb 3 sentiment analysis r examples https github com satuelisa nlpf blob main nlpf 03 r ipynb 4 tagging python examples https github com satuelisa nlpf blob main nlpf 04 p ipynb r examples https github com satuelisa nlpf blob main nlpf 04 r ipynb 5 word networks python examples https github com satuelisa nlpf blob main nlpf 05 p ipynb 6 correction prediction python examples https github com satuelisa nlpf blob main nlpf 06 p ipynb 7 stemming python examples https github com satuelisa nlpf blob main nlpf 07 p ipynb 8 vectorization r examples https github com satuelisa nlpf blob main nlpf 08 r ipynb 9 chatbots python examples https github com satuelisa nlpf blob main nlpf 09 p ipynb 10 speech python examples https github com satuelisa nlpf blob main nlpf 10 p ipynb tools and libraries for each library that requires installation the parentheses indicate the sessions that employ the package a python https www python org - create a new colab in python https colab research google com notebook create true gutenbergpy s01 nltk s01 wordcloud s01 matplotlib s01 numpy s01 pandas s01 speechrecognition s10 pyaudio s10 pyttsx3 s10 scipy s10 ffmpeg python s10 b r https www r project org - create a new colab in r https colab research google com notebook create true language r gutenbergr s01 tidytext s01 ggplot2 s01 quanteda s01 quanteda textplot s01 tm s02 reshape s02 reshape2 s02 topicmodels s02 wordcloud s02 rcolorbrewer s02 textdata s03 reshape2 s03 igraph s04 stopwords s08 plot matrix s08 proxy s08 word2vec s08 plot3d s08 nbclust s08 factoextra s08 you can either run things on an online environment like google colab or install both of these open source tools on your own computer note that some installable packages come pre installed for the colab python environment like pandas and numpy but need to be installed with pip if you set up your own environment
corpus a collection of textual data that contains strings possibly with associated metadata stopword a word the presence of which is deemed meaningless in a given context term document matrix a matrix in which each row represents a document and each column represents a term with the cells indicating the frequency of occurrence of each term in each document session 2 topic detection tf idf term frequency versus inverse document frequency a matrix that assigns higher weight to terms that are not frequent across all of the documents the idf is the natural logarithm of the fraction of the total number of documents divided by the number of documents that contain a term lda latent dirichlet allocation a topic modeling algorithm that represents a document as a mixture of topics and a topic as a mixture of words session 3 sentiment analysis lexicon a set of words a vocabulary unigram a unit of language that is a single word session 4 tagging part of speech pos lexical category word class the grammar classes of words such as nouns adverbs verbs adjectives etc bigram a two word sequence n gram a sequence of n words session 5 word networks wordnet a graph format thesaurus of relationships between english words hyponym a more specific synonym of a word hypernym a more general synonym of a word meronym component of a concept holonym container of a concept antonym the counterpart of a word the contrary version vertical horizontal positive negative session 6 correction prediction edit distance the total cost of alterations that need to be made on a string to convert it into another one session 7 stemming stemming removal of affixes suffixes mostly sometimes prefixes to cut down all variants of a word to their common core normalization a process of regularizing text in some way such as making all of it lowercase lemmatization taking each conjugated plural capitalized word into the form in which it would appear in a dictionary session 8 vectorization cosine similarity the dot product of two numerical vectors divided by the product of their norms pmi pointwise mutual information a measure to quantify how much more often two words appear together than one would expect if they were ordered at random lemma the dictionary form of a word wordform the specific variant of a word such as the conjugated form that may not be a lemma as such polysemous having more than one meaning word embedding a vectorization of a text that attempts to capture semantics based on word context such as word2vec skipgram window a token subsequence of a determined length skipgram probability the probability relative frequency of two tokens appearing together in a skipgram window session 9 chatbots reflection a word pair in which one serves as a response to the other in the sense that if the point of view of the speaker is reversed the substitution maintains consistency this is my dog your dog is cute rule based chatbot one that picks a response to an incoming message based on a set of rules often expressed in terms of regular expressions self learning chatbot one that uses machine learning to determine responses bag of words representing a part of text sentence document etc as the set of words it contains or a binary representation thereof session 10 speech text to speech have the computer read out loud a text given as input speech to text have the computer create a string from a recording live or file of spoken language other nlp courses at mcgill comp 345 ling 345 https docs google com document d 1pnwegzftyb mb g35thwgvozaeme7puuroot3jvj53c edit
comp 445 ling 445 https www cs mcgill ca media academic courses 118 comp 445 pdf comp 550 https www cs mcgill ca jcheung teaching fall 2017 comp550 lectures comp 599 ling 484 https mcgill nlp github io teaching comp599 ling782 484 f22 quantitative analysis http people linguistics mcgill ca morgan qmld book | ai |
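a small python sketch tying together the session 2 tf idf definition (idf as the natural logarithm of total documents over documents containing the term) and the session 8 cosine similarity, on a toy corpus; the documents and numbers are illustrative and not taken from the course notebooks

```python
import math
from collections import Counter

# toy corpus: three tokenized "documents"
docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "and", "a", "dog"]]
N = len(docs)

# document frequency, then idf = ln(N / df) per the session 2 definition
df = Counter(term for doc in docs for term in set(doc))
idf = {term: math.log(N / df[term]) for term in df}

def tfidf(doc):
    """Term frequency times inverse document frequency, as a sparse dict vector."""
    tf = Counter(doc)
    return {term: tf[term] * idf[term] for term in tf}

def cosine(u, v):
    """Dot product divided by the product of the norms (session 8)."""
    dot = sum(u.get(t, 0.0) * v.get(t, 0.0) for t in set(u) | set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

print(cosine(tfidf(docs[0]), tfidf(docs[1])))  # overlapping vocabularies give a positive score
```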
|
neptune-web | circleci https circleci com gh transferwise neptune web svg style shield https circleci com gh transferwise neptune web neptune web neptune is the design system built by and used at transferwise this is the neptune framework for web the react components and css provide a way to build high quality consistent user experiences on the web with ease this is the neptune web monorepo that houses our component library css documentation and more documentation visit the docs https transferwise github io neptune web for information on getting started and to discover what is available usage guidelines we are working on platform agnostic usage guidelines for components find them on github https github com transferwise neptune get started visit the docs https transferwise github io neptune web about home or read more in the separate packages react components https github com transferwise neptune web blob master packages components css https github com transferwise neptune web blob master packages css contribution we love contributions read the guide https github com transferwise neptune web blob master contributing md to get started | os
|
newport-design-system | vlocity newport design system welcome to the vlocity newport design system brought to you by vlocity https vlocity com tailored for building vlocity newport apps using the newport design system markup and css framework results in uis that reflect the vlocity newport look and feel includes storybook js previewer to help you customize and rebrand all of vlocity s newport based templates in one place want to see the project hosted live go to http newport vlocinternal com http newport vlocinternal com pre requisites note you ll need to use the command line to work with newport if you re not familiar with the command line we recommend following the short git tower command line 101 tutorial https www git tower com learn git ebook en command line appendix command line 101 you ll need the following installed install git https git scm com downloads install node js https nodejs org en download install gulp cli after installing the above open your command prompt and run npm install global gulp cli quick start clone the project with bash git clone https github com vlocityinc newport design system git change into the newport design system folder using bash cd newport design system optional switch to the right branch for your version of the salesforce package for example bash git checkout ins 108 0 install the dependencies by running bash npm install finally you can launch storybook previewer by running bash npm start preview in storybook docs previewer v1 png having trouble getting these steps to work on your machine follow the troubleshooting troubleshooting guide below docs for more in depth documentation please check out the documentation section in storybook browser compatibility we support the latest versions of all browsers and ie 11 generating the zip to deploy when you have an updated version of newport that you re happy with and want to test in an org you can run the following command bash npm run build npm run dist this will generate a zipped up version to be uploaded into salesforce in the dist folder in your workspace if you also want to deploy it to an org then run it with the following env variables bash sf username myusername email com sf password mypassword sf loginurl myloginurl npm run dist if the sf loginurl argument is not passed then it defaults to https login salesforce com troubleshooting npm and node js the vlocity newport design system uses npm to manage dependencies please install node js https nodejs org and try running npm install again if node js is already installed make sure you re running v8 or up javascript and compilation issues javascript dependencies sometimes get out of sync and inexplicable bugs start to happen follow these steps to give a fresh start to your development environment 1 the installed npm version must be at least v3 10 you can update your npm with npm install npm g sudo may be required 2 re install dependencies rm rf node modules npm install 3 npm start if this did not work try running npm cache clean and repeat the above steps licenses originally forked from salesforce lightning design system https lightningdesignsystem com source code is licensed under bsd 3 clause https git io sfdc license all icons and images are licensed under creative commons attribution noderivatives 4 0 https github com vlocityinc newport design system blob master license icons images txt the lato font is licensed under the sil open font license https github com vlocityinc newport design system blob master license font txt got feedback please open a new a
href https github com vlocityinc newport design system issues github issue a | os |
|
fswd-22 | fswd 22 a repository for general purpose code in the full stack web development course 2022 https www codingshuttle com courses full stack web development course mern stack | front_end |
|
LLM-and-Law | llm and law awesome https awesome re badge svg https github com zjunlp modeleditingpapers license mit https img shields io badge license mit green svg https opensource org licenses mit https img shields io github last commit jeryi sun llm and law color green https img shields io badge prs welcome red this repository is dedicated to summarizing papers related to large scale language models and the field of law applications of large language models in legal tasks 1 legal prompt engineering for multilingual legal judgement prediction 2 can gpt 3 perform statutory reasoning 3 legal prompting teaching a language model to think like a lawyer 4 large language models as fiduciaries a case study toward robustly communicating with artificial intelligence through legal standards 5 chatgpt goes to law school 6 chatgpt professor of law 7 chatgpt generative ai systems as quasi expert legal advice lawyers case study considering potential appeal against conviction of tom hayes 8 words are flowing out like endless rain into a paper cup chatgpt law school assessments 9 chatgpt by openai the end of litigation lawyers 10 law informs code a legal informatics approach to aligning artificial intelligence with humans 11 chatgpt may pass the bar exam soon but has a long way to go for the lexglue benchmark paper https arxiv org pdf 2304 12202v1 pdf 12 how ready are pre trained abstractive models and llms for legal case judgement summarization paper https arxiv org pdf 2306 01248v1 pdf 13 explaining legal concepts with augmented large language models gpt 4 paper https arxiv org pdf 2306 09525v1 pdf 14 garbage in garbage out zero shot detection of crime using large language models paper http arxiv org pdf 2307 06844v1 15 legal summarisation through llms the prodigit project paper https arxiv org pdf 2308 04416v1 pdf 16 black box analysis gpts across time in legal textual entailment task paper https arxiv org pdf 2309 05501v1 pdf 17 policygpt automated analysis of privacy policies with large language models 18 reformulating domain adaptation of large language models as adapt retrieve revise paper https browse arxiv org pdf 2310 03328v1 pdf 19 precedent enhanced legal judgment prediction with llm and domain model collaboration paper https arxiv org pdf 2310 09241 pdf legal problems of large language models 1 towards winoqueer developing a benchmark for anti queer bias in large language models 2 persistent anti muslim bias in large language models 3 understanding the capabilities limitations and societal impact of large language models 4 the dark side of chatgpt legal and ethical challenges from stochastic parrots and hallucination 5 the gptjudge justice in a generative ai world paper https papers ssrn com sol3 papers cfm abstract id 4460184 6 is the u s legal system ready for ai s challenges to human values paper http arxiv org pdf 2308 15906v1 data resources for large language models in law 1 cail2018 a large scale legal dataset for judgment prediction 2 when does pretraining help assessing self supervised learning for law and the casehold dataset of 53 000 legal holdings 3 lecard a legal case retrieval dataset for chinese law system 4 lexfiles and legallama facilitating english multinational legal language model development 5 legal extractive summarization of u s court opinions 6 awesome chinese legal resources github https github com pengxiao song awesome chinese legal resources 7 multilegalpile a 689gb multilingual legal corpus paper https arxiv org pdf 2306 02069v1 pdf 8 the cambridge law corpus a 
corpus for legal ai research law llms 1 lawgpt zh github github com liuhc0428 law gpt 2 lawgpt github https github com pengxiao song lawgpt 3 lawyer llama github https github com andrewzhe lawyer llama 4 lexilaw github https github com cshaitao lexilaw 5 lexgpt 0 1 pre trained gpt j models with pile of law paper https arxiv org pdf 2306 05431v1 pdf 6 towards the exploitation of llm based chatbot for providing legal support to palestinian cooperatives paper https arxiv org pdf 2306 05827v1 pdf 7 chatlaw open source legal large language model with integrated external knowledge bases paper http arxiv org pdf 2306 16092v1 8 disc lawllm github https github com fudandisc disc lawllm evaluation dataset 1 measuring massive multitask chinese understanding paper https arxiv org pdf 2304 12986 pdf the survey paper is shown in paper http arxiv org abs 2303 09136 acknowledgement please cite the following papers as the references if you use our codes or the processed datasets bib article sun2023short title a short survey of viewing large language models in legal aspect author sun zhongxiang journal arxiv preprint arxiv 2303 09136 year 2023 | ai |
|
cgdb | cgdb cgdb is a very lightweight console frontend to the gnu debugger it provides a split screen interface showing the gdb session below and the program s source code above the interface is modelled after vim s so vim users should feel right at home using it screenshot downloads and documentation are available from the home page https cgdb github io official source releases are available here https cgdb me files build instructions dependencies you must have the following packages installed sh autoconf automake aclocal autoheader libtool flex bison gcc g c11 c 11 support preparing the configure run autogen sh in the current working directory to generate the configure script running configure make and make install you can run configure from within the source tree however i usually run configure from outside the source tree like so mkdir build cd build cgdb configure prefix pwd prefix make srj4 make install cgdb is a c11 c 11 project just like gdb since the standard is relatively new your gcc g may support it out of the box or may require the std c11 and std c 11 flags you can see how to set these flag in the below configure invocation i typically enable more error checking with the build tools like so yflags wno deprecated cflags std c11 g o0 wall wextra wshadow pedantic wno unused parameter cxxflags std c 11 g o0 wall wextra wshadow werror pedantic wmissing include dirs wno unused parameter wno sign compare wno unused but set variable wno unused function wno variadic macros cgdb configure prefix pwd prefix if you like to have a silent build and the libtool link lines are bothering you you can set this environment variable to suppress libtools printing of the link line libtoolflags silent | front_end |
|
pos-portal | matic pos proof of stake portal contracts build status https circleci com gh maticnetwork pos portal workflows ci badge svg smart contracts that power the pos proof of stake based bridge mechanism for matic network https matic network audits hexens audits matic pos upd pdf halborn audits pos portal halborn audit 07 07 2021 pdf certik audits matic audit certik report pdf peckshield audits pos portal peckshield audit 30 07 2021 pdf usage install package from npm using bash npm i maticnetwork pos portal develop make sure you ve nodejs npm installed bash user pos portal anjan node version v12 18 1 user pos portal anjan npm version 6 14 5 clone repository install all dependencies bash git clone https github com maticnetwork pos portal cd pos portal npm i compile all contracts bash npm run template process npm run build if you prefer not using docker for compiling contracts consider setting docker false in truffle config js js file truffle config js 127 solc 128 version 0 6 6 129 docker false for deploying all contracts in pos portal we need to have at least two chains running simulating rootchain ethereum childchain polygon there are various ways of building this multichain setup though two of them are most commonly used 1 with matic cli 2 without matic cli matic cli is a project which makes setting up all components of the ethereum polygon multichain ecosystem easier three components matic cli sets up for you ganache simulating rootchain heimdall validator node of polygon bor block production layer of polygon i e childchain you may want to check matic cli https github com maticnetwork matic cli 1 with matic cli assuming you ve installed matic cli set up a single node local network by following this guide https github com maticnetwork matic cli usage it s a good time to start all components separately as mentioned in the matic cli readme this should give you rpc listen addresses for both rootchain read ganache and childchain read bor which need to be updated in pos portal truffle config js also note the mnemonic you used when setting up the local network we ll make use of it for migrating pos portal contracts matic cli generates localnet config contractaddresses json given you decided to put the network setup in the localnet directory which contains deployed plasma contract addresses we re primarily interested in plasma rootchain deployed on rootchain as the name suggests aka the checkpoint contract and the statereceiver contract deployed on bor these two contract addresses need to be updated here migrations config js you may not need to change the statereceiver field because that s where bor deploys the respective contract by default the plasma rootchain contract address is required for setting the checkpoint manager in the pos rootchainmanager contract during migration pos rootchainmanager will talk to the checkpointer contract for verifying pos exit proof js file migrations config js module exports plasmarootchain 0x fill it up aka checkpointer statereceiver 0x0000000000000000000000000000000000001001 now you can update the preferred mnemonic to be used for migration in truffle config truffle config js js file truffle config js 29 const mnemonic process env mnemonic preferred mnemonic also consider updating network configurations for root child in truffle config js js make sure host port of rpc matches properly that s where all the following transactions will be sent 52 root host localhost port 9545 network id match any network skipdryrun true gas 7000000 gasprice 0 child host localhost port 8545 network id match any network skipdryrun true gas 7000000 gasprice 0 67 now
start migration which is a 4 step operation migration step effect migrate 2 deploys all rootchain contracts on ganache migrate 3 deploys all childchain contracts on bor migrate 4 initialises rootchain contracts on ganache migrate 5 initialises childchain contracts on bor bash assuming you re in root of pos portal npm run migrate runs all steps you ve deployed all contracts required for pos portal to work properly all these addresses are put into contractaddresses json which you can make use of for interacting with them if you get into any problem during deployment it s a good idea to take a look at truffle config js or package json and attempt to modify the fields that need to be modified migration files are kept here migrations 1 2 3 4 5 js 2 without matic cli you can always independently start a ganache instance to act as rootchain bor node as childchain without using matic cli but in this case no heimdall nodes will be there depriving you of statesync checkpointing etc where validator nodes are required start rootchain by bash npm run testrpc rpc on localhost 9545 default now start childchain requires docker bash npm run bor rpc on localhost 8545 default if you ran a bor instance before a dead docker container might still be lying around clean it using the following command bash npm run bor clean optional run testcases bash npm run test deploy contracts on local ganache bor instance bash npm run migrate this should generate contractaddresses json which contains all deployed contract addresses use it for interacting with those production use this guide for deploying contracts in ethereum mainnet 1 moonwalker needs rabbitmq and local geth running bash docker run d p 5672 5672 p 15672 15672 rabbitmq 3 management npm run testrpc 2 export env vars bash export mnemonic export from export provider url export root chain id export child chain id export plasma root chain export gas price 3 compile contracts bash npm run template process root chain id root chain id child chain id child chain id npm run build 4 add root chain contract deployments to queue bash npm run truffle exec moonwalker migrations queue root deployment js 5 process queue rerun if interrupted bash node moonwalker migrations process queue js 6 extract contract addresses from moonwalker output bash node moonwalker migrations extract addresses js 7 deploy child chain contracts bash npm run truffle migrate network mainnetchild f 3 to 3 8 add root chain initializations to queue bash node moonwalker migrations queue root initializations js 9 process queue rerun if interrupted bash node moonwalker migrations process queue js 10 initialize child chain contracts bash npm run truffle migrate network mainnetchild f 5 to 5 11 register state sync register rootchainmanager and childchainmanager on statesender set statesenderaddress on rootchainmanager grant state syncer role on childchainmanager command scripts management scripts bash npm run truffle exec scripts update implementation js network network name new address transfer proxy ownership and admin role set list of contract addresses and new owner address in 6 change owners js migration script set mnemonic and api key as env variables bash npm run change owners network network name | matic pos web3 blockchain | blockchain
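the migrations config js fragment above is flattened by extraction; restored to runnable shape it reads roughly as follows, with the checkpointer address kept exactly as the guide's own placeholder

```js
// migrations/config.js -- sketch restored from the flattened fragment above
module.exports = {
  plasmaRootChain: '0x...', // fill it up -- aka the checkpointer contract
  stateReceiver: '0x0000000000000000000000000000000000001001',
}
```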
|
mobile-backend-apis-with-node-server | to accompany my course on mobile backend development with node https www udemy com developing and deploying mobile backend apis with nodejs create config js based on config example js remember to never commit secret keys to version control | front_end
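the actual contents of config example js are not shown in the readme; the commit-a-template, git-ignore-the-real-config pattern it describes usually looks something like this sketch, in which every key name is hypothetical

```js
// config.example.js -- hypothetical template; commit this file,
// copy it to config.js (git-ignored) and fill in real values there
module.exports = {
  mongoUri: 'mongodb://localhost:27017/app', // hypothetical key
  jwtSecret: 'replace-me',                   // never commit the real secret
}
```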
|
Internet-Engineering | rocketask web app simple single page app for managing daily tasks p align middle float left img src https raw githubusercontent com maikelsofly internet engineering master docs screen1 jpg width 700 p | os |
|
open-enterprise | open enterprise note open enterprise is the new name of that planning suite build status https img shields io travis autarklabs open enterprise svg style flat square https travis ci org autarklabs open enterprise coverage status https img shields io coveralls github autarklabs open enterprise svg style flat square https coveralls io github autarklabs open enterprise markdownlint disable md033 p align center a href development setup development setup a a href app overview app overview a a href contact contact a p markdownlint enable md033 open enterprise is a collection of aragon apps that enable organizations to curate issues collectively budget and design custom reward and bounty programs if you are interested in viewing app demos or want to install them to your rinkeby organizations learn more here https www autark xyz apps release status the apps are currently on rinkeby and undergoing a security audit and ux enhancements the apps will be released to mainnet in q4 2019 development setup node js lts or greater required you can use a tool like asdf https asdf vm com or nvm https github com nvm sh nvm to manage versions of node you must install aragon cli v6 3 2 globally npm i g aragon cli 6 3 2 bash bootstrap project dependencies npm i start a local blockchain and deploy aragon dao kit with all apps npm start develop single app react frontend npm run dev projects develop single app with backend and aragon wrapper npm run start dot current app name aliases address allocations dot projects rewards extra tips individual development is ultra fast thanks to parcel and hot module replacement start the dao kit to manage smart contracts interactions between all planning apps and aragon official apps token manager and voting right now the start script spawns a local blockchain needed to publish the apps before deploying the dao kit template with all of them detailed information in the development notes md docs development notes md document app overview open enterprise is a collection of five aragon apps that supports the following allocations the allocations app is used to propose a financial allocation meant to be distributed to multiple parties allocation proposals are forwarded to the dot voting app the percentage of the allocation amount distributed to each party is determined based on the results of the dot vote address book maintain a list of ethereum addresses mapped to human readable names the address book will enable a more user friendly way to access and review common addresses a dao uses for allocations and dot voting projects allocate funding to multiple github issues in a single action and collectively curate issues curate issues token holders will be able to curate prioritize the top issues that should be developed issue curation proposals are forwarded to the dot voting app fund issues fund issues in a bulk fashion with the possibility to require dao approval before funding is allocated dot voting dot voting is used to cast votes for allocation or issue curation proposals members can vote on how to distribute an allocation across distinct entities or prioritize a list of github issues by specifying a percentage of votes per option rewards distributes payments to token holders based on the number of tokens one has earned in a specific cycle of time one time reward or based on the total tokens one holds dividend review more details https www autark xyz apps contact we can be found in the autark community keybase channel https keybase io team autark community if you have
any questions or want to get involved in our development please drop in special thanks special thanks to the aragon network for funding our work with three grants to date nest https blog aragon one introducing aragon nest 1aa8c91c0566 agp 19 https github com aragon agps blob master agps agp 19 md and agp 73 https github com aragon agps blob master agps agp 73 md | os |
|
GPT-CLS-CARP | div align center img src assets carp header v3 jpg width 800 div paper link https arxiv org abs 2305 08377 br if you find this repo helpful please cite the following latex article sun2023text title text classification via large language models author sun xiaofei and li xiaoya and li jiwei and wu fei and guo shangwei and zhang tianwei and wang guoyin journal arxiv preprint arxiv 2305 08377 year 2023 for any questions please feel free to post github issues br overview in this paper we introduce clue and reasoning prompting carp which is a progressive reasoning strategy tailored to addressing the complex linguistic phenomena involved in text classification carp first prompts llms to find superficial clues e g keywords tones semantic relations references etc based on which a diagnostic reasoning process is deduced for final decisions to further address the limited token issue carp uses a model fine tuned on the supervised dataset for knn demonstration search in in context learning allowing the model to take advantage of both the llm s generalization ability and the task specific evidence provided by the full labeled dataset br examples of prompts under zero shot and few shot k 1 settings are shown in the following br div align left img src assets carp prompts png width 900 div data and trained models name link fullset google drive subset google drive ft model google drive setup environment before running this project you need to create a conda environment and install the required packages br bash conda create n gpt env python 3 7 conda activate gpt env pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 f https download pytorch org whl torch stable html cd gpt cls carp pip install r requirements txt i https mirrors aliyun com pypi simple after that please execute the following commands in the terminal to download nltk s dependency files bash conda activate gpt env python3 import nltk nltk download punkt supervised roberta we release code and scripts for fine tuning roberta large on five text classification datasets including sst 2 agnews r8 r52 and mr zero shot in context learning scripts for reproducing our experimental results can be found in the scripts dataset name gpt3 zeroshot folder where dataset name takes a value in sst2 agnews r8 r52 mr br note that you need to change data dir output dir to your own dataset path bert model path and log path respectively br for example running scripts sst2 gpt3 zeroshot carp davinci003 sh will start prompting gpt 3 in the zero shot setting and save intermediate logs to output dir few shot in context learning scripts for reproducing our experimental results can be found in the scripts dataset name retriever type gpt3 fewshot folder where dataset name takes a value in sst2 agnews r8 r52 mr and retriever type in ft retriever knn simcse retriever knn random demo br note that you need to change data dir output dir to your own dataset path bert model path and log path respectively br for example running scripts sst2 gpt3 fewshot carp davinci003 sh will start prompting gpt 3 in the few shot setting and save intermediate logs to output dir results experimental results for the supervised baseline roberta large the zero shot setting and the few shot setting with the ft retriever are shown in the following table more results e g few shot in context learning with the simcse retriever can be found in the paper https arxiv org abs 2305 08377
| dataset | sst-2 | agnews | r8 | r52 | mr | average |
| --- | --- | --- | --- | --- | --- | --- |
| roberta-large | 95.99 | 95.55 | 97.76 | 96.42 | 91.16 | 95.38 |
| zero-shot vanilla | 91.55 | 90.72 | 90.19 | 89.06 | 88.69 | 90.04 |
| zero-shot cot | 92.11 | 91.25 | 90.48 | 91.24 | 89.37 | 90.89 |
| zero-shot carp | 93.01 | 92.60 | 91.75 | 91.80 | 89.94 | 91.82 |
| few-shot (ft-retriever, k=16) vanilla | 94.01 | 94.14 | 95.57 | 95.79 | 90.90 | 94.08 |
| few-shot (ft-retriever, k=16) cot | 95.48 | 94.89 | 95.59 | 95.89 | 90.17 | 94.40 |
| few-shot (ft-retriever, k=16) carp | 96.80 | 95.99 | 98.29 | 96.82 | 91.90 | 95.97 |
| ai
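a hedged sketch of the knn demonstration search described above: embed the labeled training set with the fine tuned encoder, pick the k nearest neighbours of a test input by cosine similarity, and splice them into a clue and reasoning style prompt; the embedding step is left abstract, and the field names and prompt wording are illustrative rather than the repo's actual interfaces

```python
import numpy as np

def knn_demos(query_vec, train_vecs, train_examples, k=16):
    """Return the k training examples closest to the query by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    m = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    sims = m @ q                       # cosine similarity against every training example
    return [train_examples[i] for i in np.argsort(-sims)[:k]]

def build_prompt(demos, test_text):
    """Assemble retrieved demonstrations into a clue-then-reasoning-then-label prompt."""
    shots = "\n\n".join(
        f"input: {d['text']}\nclues: {d['clues']}\nreasoning: {d['reasoning']}\nlabel: {d['label']}"
        for d in demos
    )
    return f"{shots}\n\ninput: {test_text}\nclues:"
```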