names (stringlengths 1–98) | readmes (stringlengths 8–608k) | topics (stringlengths 0–442) | labels (stringclasses 6 values) |
---|---|---|---|
studentdirectory | studentdirectory this is a backend server api for a student directory application that utilizes crud operations to manipulate information using a postgresql database this app is a form of human resource management tool and can be used to manage students that are newcomers graduating legally changing their name getting married etc names emails and birthdates can be updated technologies used java spring boot spring boot web spring boot jpa tomcat postgresql maven | java postgresql spring-boot | server |
frontend-test | front end engineer challenge this challenge has been designed to assess the ability of a front end candidate to solve real world problems using our current technology stack while the difficulties arising during this project are real the project itself is a mock and will not be used by us for business purposes submission instructions 1 fork this repository on github 2 complete the project as described below within your fork 3 keep the commit history don t squash 4 push all of your changes to your fork on github and open a descriptive pull request 5 a cup of coffee or beer project description make a simplified slack clone structure 1 system login only sign in and use fake data user 2 sidebar on the left list of channels add remove button list of users direct messages 3 chat window on the right list of messages each message avatar username timestamp message edit remove button comment box implementation instructions 1 use placeholders both ui elements and actions reducers in place of features you didn t have time to implement overall code organization and project structure are more important than implementation details 2 simplistic design will be completely acceptable don t waste much time on it 3 mock channels and users 4 mock short history of conversations in channels direct messages 5 store everything in memory only no need to persist data but mock ajax calls and make these calls asynchronous 6 strive for good commit messages https github com erlang otp wiki writing good commit messages essential technology stack 1 react https facebook github io react jsx 2 redux http redux js org mobx https mobx js org zustand https github com pmndrs zustand or react context api https es reactjs org docs context html storage 3 react router https github com reactjs react router route 4 moment https momentjs com docs date fns https date fns org etc date 5 any ui framework such as antd https ant design docs react introduce material https mui com material ui getting started installation etc remarks use promises https developer mozilla org en us docs web javascript reference global objects promise for ajax no need to implement backend mock response data and simulate latency e g with settimeout the correct use of hooks or custom hooks will be valued if used use correct spelling evaluation criteria in order of importance 1 code organization 2 code readability including comments 3 stick to the tech stack described above 4 commit history structure and quality 5 completeness 6 test coverage | front_end |
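The core technical requirement in the row above is mocking AJAX calls with simulated latency. The challenge itself expects JavaScript promises and settimeout; purely as a language-neutral illustration of that pattern, here is a minimal sketch in Python with made-up channel data and delays:

```python
import asyncio
import random

# Made-up in-memory "backend" data standing in for real channels.
MOCK_CHANNELS = [
    {"id": 1, "name": "general"},
    {"id": 2, "name": "random"},
]

async def fetch_channels():
    """Simulate an AJAX call: resolve with mock data after artificial latency."""
    await asyncio.sleep(random.uniform(0.1, 0.5))  # fake network delay
    return MOCK_CHANNELS

print(asyncio.run(fetch_channels()))
```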
hasura-deno | let me tell you about hasura and deno i recently used them for app development and i wanted to spread the method let us start by exploring what these tools and products are and how they can be useful for app development let us start with hasura image https user images githubusercontent com 11357486 186637730 5c80a8eb cdbd 4d62 8f34 462d70d50372 png hasura calls itself a graphql engine that makes your data instantly accessible through a real time graphql api so you can create applications connect to your database and have rest and graphql endpoints that provide a unified connected real time secured graphql api for your data for front end developers it can be a back end for front ends we can see a ui that consists of api data actions remote schemas events and monitoring from the beginning the important part would be how to connect to our database to start using the other features image https user images githubusercontent com 11357486 186637816 936a07cf 82f9 4f95 9736 2b40ffc6cd73 png if we click on the data tab we can observe which databases are already connected to the server image https user images githubusercontent com 11357486 186637849 d55e8d35 9d45 462b 8251 7889fe50823f png we can see the default database and we can also connect to another database or create a new one heroku has cloud offerings with free developer friendly tiers for hobby projects that you can use or simply move to another cloud provider when the hobby project moves into the production stage image https user images githubusercontent com 11357486 186637921 512b91c1 b168 4848 b5b3 8ca4ef0780d8 png if we click on the free heroku database button we can create a new postgresql database in heroku image https user images githubusercontent com 11357486 186637975 437978a4 0e44 42a0 bd30 516399138b7d png when this is done we can observe in the data section of the hasura ui that our new database is present when we open it and connect to it we can create new tables and columns image https user images githubusercontent com 11357486 186638023 2ec6f071 549b 425d b196 3ee3c6527826 png image https user images githubusercontent com 11357486 186638144 36babab9 218f 4af3 adcd c8c769bacfeb png hasura provides a very comfortable ui to create database columns that are very common in front end development such as a unique random id of the row it s a very simple interface to generate very common things like timestamp columns that can be just added to that particular table and now if you are satisfied with the table you can stop here but let us say we want to have some kind of column for data we create a new column that will expect data in a particular format such as text hasura will notify us that we have successfully created a table image https user images githubusercontent com 11357486 186638262 a30f9f4c bf33 4a50 8690 06493b12df62 png image https user images githubusercontent com 11357486 186638387 10fd6a5e d756 4086 8008 cbd667692275 png after successfully creating the table we see the modify section of this particular table in case we see any errors or adjustments we would like to make to this table app development consists of accessing the data from the databases through query languages one of them is sql another is graphql which is the main selling point of hasura as it connects with a postgres database to generate a graphql schema and endpoint for that database image https user images githubusercontent com 11357486 186638512 132efa19 473a 4d4d 945f 9f271f029270 png you can
explore the generated graphql endpoint through the graphiql tool that lets you create graphql queries on the fly and see responses as json image https user images githubusercontent com 11357486 186638620 413e2467 0c15 4538 addb 59263562c757 png after this brief introduction to hasura i would like to move on to the second tool that we can use in our creation deno which is basically a runtime for our application because we have a back end and a database but now we want a way to interact with that application and with that database and its back end through some kind of front end whether it s a cli application a front end in a browser application a mobile application or some other kind of front layer for applications where javascript and our skills as front end developers can be used and leveraged image https user images githubusercontent com 11357486 186638744 4d3b2db7 c56a 47d5 8878 fe3a6bc30eec png let s focus on the deployment aspect of that runtime which officially can be done through their cloud provider service called deno deploy which provides instant deployments in 34 regions worldwide with zero configuration zero maintenance on the development side and support for typescript web assembly modules and ecmascript modules we can use this for this project to see already existing projects and to create new projects that would be the next step for us to interact with the data whenever we are ready we can create a new project hello world which is just a server as well as an example of a playground with jsx so you can observe how to develop locally and deploy globally in this case that would be the hello world and hello team image https user images githubusercontent com 11357486 186638837 63063785 3a87 404e 8f04 e8019de8047c png when we open a new browser we can see the hello world immediately in the web browser we can now try to watch the soundtracks that we added before as data in hasura via a soundtracks url and we can see the answers from our database immediately image https user images githubusercontent com 11357486 186638953 26deda59 98c7 4a1f 8dfe 693be92157cc png | server |
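To make the graphiql step in the row above concrete, here is a minimal sketch of querying a Hasura GraphQL endpoint from Python. The endpoint URL and the "soundtracks" table name are assumptions based on the walkthrough, and an admin-secret header may be required depending on the instance:

```python
import requests

# Hypothetical endpoint; replace with your Hasura project's /v1/graphql URL.
HASURA_URL = "https://your-project.hasura.app/v1/graphql"

# Query against the schema Hasura auto-generates for an assumed "soundtracks" table.
query = """
query {
  soundtracks {
    id
    name
  }
}
"""

resp = requests.post(
    HASURA_URL,
    json={"query": query},
    # headers={"x-hasura-admin-secret": "..."},  # uncomment if your instance requires it
)
resp.raise_for_status()
print(resp.json())
```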
CloudCV-Old | cloudcv note this repository is deprecated now we will start working on the revamped version of cloudcv website very soon join the chat at https gitter im batra mlp lab cloudcv https badges gitter im join 20chat svg https gitter im batra mlp lab cloudcv utm source badge utm medium badge utm campaign pr badge utm content badge build status https travis ci org cloud cv cloudcv svg branch master https travis ci org cloud cv cloudcv requirements status https requires io github cloud cv cloudcv requirements svg branch master https requires io github cloud cv cloudcv requirements branch master large scale distributed computer vision as a cloud service we are witnessing a proliferation of massive visual data unfortunately scaling existing computer vision algorithms to large datasets leaves researchers repeatedly solving the same algorithmic and infrastructural problems our goal is to democratize computer vision one should not have to be a computer vision deep learning and distributed computing expert to have access to state of the art distributed computer vision algorithms we provide access to state of the art distributed computer vision algorithms as a cloud service through a web interface and apis researchers students and developers will be able to access these distributed computer vision algorithms and the computation power through a small number of clicks and minimal lines of code instructions to get started with cloudcv development to set up project cloudcv on your local machine you need to install docker https docs docker com mac first after installing docker on your machine just follow the instructions given in the next section steps for setting up the development environment 1 run the following git clone specify a directory of your choosing if you like git clone https github com batra mlp lab cloudcv git cloudcv 2 run virtualenv on the git cloned directory to set up the python virtual environment virtualenv cloudcv 3 cd into the name of the directory into which you cloned the git repository cd cloudcv 4 activate the virtual environment it is recommended to use a virtual environment source bin activate 5 change directory to docker and run the bash script to build the docker containers cd docker build sh 6 run the bash script to start the server run server sh run 80 443 when the image building completes then you can visit 127 0 0 1 http 127 0 0 1 and check if the cloudcv server is running or not 7 now for setting up workers just run the command worker cpu 8 now visit http 127 0 0 1 http 127 0 0 1 in your browser and you should be all set additional information whenever you want to stop the docker containers then run the command stopcontainer to remove all the images run the command docker rm docker ps a q to make yourself familiar with the codebase check the file directorydocumentation txt https github com batra mlp lab cloudcv blob master directorydocumentation txt for any other queries open issues or you can chat with developers at our gitter channel official documentation available at this link http batra mlp lab github io cloudcv | ai |
lightning-trader | lightning trader sample trader desktop application built with react http facebook github io react and the lightning design system www lightningdesignsystem com check out this video for a quick walkthrough video http img youtube com vi 53otiny4gn4 0 jpg http www youtube com watch v 53otiny4gn4 the back end is built with node js and socket io experience the application the application is hosted live here http lightning trader herokuapp com http lightning trader herokuapp com deploying your own instance follow the steps below to deploy your own instance 1 make sure you are logged in to the heroku dashboard https dashboard heroku com you can quickly create a free account if you don t have one 2 click the button below to deploy the application on heroku deploy https www herokucdn com deploy button png https heroku com deploy your own instance of the application is automatically deployed local installation follow the instructions below if you prefer to install the application on your local machine 1 clone this repository or download and unzip this https github com ccoenraets lightning trader archive master zip zip file 1 navigate to the lightning trader directory and install the project dependencies npm install 1 type the following command to build the client application npm run build client the project is written using ecmascript 6 including ecmascript 6 modules 1 type the following command to start the server npm start 1 open a browser and access http localhost 3000 http localhost 3000 using the socket io feed by default the application uses a mock feed simulated at the client side to use the actual socket io feed 1 open js app js 1 comment out the module import for the mock client side feed import as feed from services feed mock 1 uncomment the module import for the real socket io feed import as feed from services feed socketio 1 rebuild the client npm run build client | os |
Intro_to_Data_Science | introduction to data science this is the repository for the course introduction to data science offered by the department of information technologies åbo akademi university finland | server |
LLM-T2T | llm t2t data and code for paper large language models are effective generators evaluators and feedback providers for faithful table to text generation | ai |
Smart-Garage-System | smart garage system design and simulation of an embedded smart garage system this repository contains the on paper design and a simulation using the proteus software of a smart garage system it is an application of the knowledge gained over the entirety of the course microprocessors programming and interfacing it is a system designed around an intel 8086 processor and various other supplementary chips the code of the system has been written in assembler emu8086 format and assembled into bin files using the same datasheets of the various devices and components used in the system are located in the manuals directory | os |
web-ui | super dispatch ui main https github com superdispatch ui workflows main badge svg branch main https github com superdispatch ui actions codecov https codecov io gh superdispatch ui branch master graph badge svg https codecov io gh superdispatch ui this project is using percy io for visual regression testing https percy io static images percy badge svg https percy io super dispatch ui superdispatch ui https github com superdispatch ui tree master packages ui ui components superdispatch dates https github com superdispatch ui tree master packages dates date and time components superdispatch phones https github com superdispatch ui tree master packages phones phone number components superdispatch forms https github com superdispatch ui tree master packages forms ui date time and phone number component adapters to work with forms publishing to npm 1 open an npm https www npmjs com account if you don t have one 2 ask frontend chapter lead to give access to superdispatch organization in npm 3 enable 2fa for your npm account 4 run npm login from your terminal and login into your account 5 run yarn release 6 make sure the tag is created on github and packages published into npm 7 update all usages of packages in product repositories component boilerplate generation with plop in our project we use plop a micro generator framework that helps developers to generate boilerplate code snippets plop is flexible and simple to use and it works by using generators configured in a plopfile js to generate component use command bash yarn run generate 1 plop will ask you what type of component you d like to generate 2 next plop will ask you for the name of your component 3 plop will generate all boilerplate files for you 4 open up the new files and you ll see that they re pre populated with the code needed to get started on your new component story and tests 5 from here you can start developing your component adding new states and props to the storybook story and writing tests troubleshooting tsc error output file has not been built from source file if error points at superdispatch package open respective package directory and run yarn tsc command tsc error on yarn run release or lerna publish commands run logged tsc command try solution of error described above tsc error output file | react material-ui | os |
notebooks | transformers notebooks this repository contains the example code from our o reilly book natural language processing with transformers https www oreilly com library view natural language processing 9781098136789 img alt book cover height 200 src images book cover jpg id book cover getting started you can run these notebooks on cloud platforms like google colab https colab research google com or your local machine note that most chapters require a gpu to run in a reasonable amount of time so we recommend one of the cloud platforms as they come pre installed with cuda running on a cloud platform to run these notebooks on a cloud platform just click on one of the badges in the table below this table is automatically generated do not fill manually chapter colab kaggle gradient studio lab introduction open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 01 introduction ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 01 introduction ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 01 introduction ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 01 introduction ipynb text classification open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 02 classification ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 02 classification ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 02 classification ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 02 classification ipynb transformer anatomy open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 03 transformer anatomy ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 03 transformer anatomy ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 03 transformer anatomy ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 03 transformer anatomy ipynb multilingual named entity recognition open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 04 multilingual ner ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 04 multilingual ner ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 04 multilingual ner ipynb open in sagemaker studio lab https 
studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 04 multilingual ner ipynb text generation open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 05 text generation ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 05 text generation ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 05 text generation ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 05 text generation ipynb summarization open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 06 summarization ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 06 summarization ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 06 summarization ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 06 summarization ipynb question answering open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 07 question answering ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 07 question answering ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 07 question answering ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 07 question answering ipynb making transformers efficient in production open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 08 model compression ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 08 model compression ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 08 model compression ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 08 model compression ipynb dealing with few to no labels open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 09 few to no labels ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 09 few to no labels ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers 
notebooks blob main 09 few to no labels ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 09 few to no labels ipynb training transformers from scratch open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 10 transformers from scratch ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 10 transformers from scratch ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 10 transformers from scratch ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 10 transformers from scratch ipynb future directions open in colab https colab research google com assets colab badge svg https colab research google com github nlp with transformers notebooks blob main 11 future directions ipynb kaggle https kaggle com static images open in kaggle svg https kaggle com kernels welcome src https github com nlp with transformers notebooks blob main 11 future directions ipynb gradient https assets paperspace io img gradient badge svg https console paperspace com github nlp with transformers notebooks blob main 11 future directions ipynb open in sagemaker studio lab https studiolab sagemaker aws studiolab svg https studiolab sagemaker aws import github nlp with transformers notebooks blob main 11 future directions ipynb end of table nowadays the gpus on colab tend to be k80s which have limited memory so we recommend using kaggle https www kaggle com docs notebooks gradient https gradient run notebooks or sagemaker studio lab https studiolab sagemaker aws these platforms tend to provide more performant gpus like p100s all for free note some cloud platforms like kaggle require you to restart the notebook after installing new packages running on your machine to run the notebooks on your own machine first clone the repository and navigate to it bash git clone https github com nlp with transformers notebooks git cd notebooks next run the following command to create a conda virtual environment that contains all the libraries needed to run the notebooks bash conda env create f environment yml note you ll need a gpu that supports nvidia s cuda toolkit https developer nvidia com cuda toolkit to build the environment currently this means you cannot build locally on apple silicon chapter 7 question answering has a special set of dependencies so to run that chapter you ll need a separate environment bash conda env create f environment chapter7 yml once you ve installed the dependencies you can activate the conda environment and spin up the notebooks as follows bash conda activate book or conda activate book chapter7 jupyter notebook faq when trying to clone the notebooks on kaggle i get a message that i am unable to access the book s github repository how can i solve this issue this issue is likely due to a missing internet connection when running your first notebook on kaggle you need to enable internet access in the settings menu on the right side how do you select a gpu on kaggle you can enable gpu usage by selecting gpu as accelerator in the settings menu on the right side citations if you d like to cite this book 
you can use the following bibtex entry book tunstall2022natural title natural language processing with transformers building language applications with hugging face author tunstall lewis and von werra leandro and wolf thomas isbn 1098103246 url https books google ch books id 7hhyzgeacaaj year 2022 publisher o reilly media incorporated | ai |
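As a quick smoke test that the environment from the row above works, here is a minimal sketch using the transformers pipeline API. This is not code from the book's notebooks; the example sentence is made up, and the default checkpoint is downloaded on first run:

```python
from transformers import pipeline

# Default text-classification pipeline; downloads a sentiment model on first use.
classifier = pipeline("text-classification")
print(classifier("Transformers are remarkably versatile for NLP tasks!"))
```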
gamut | gamut the component library design system for codecademy circleci https circleci com gh codecademy gamut svg style svg circle token 3d9adfca5a8b44e7297ceb18e032e89a11d223a2 https circleci com gh codecademy gamut this repository is a monorepo that we manage using lerna https lerna js org that means that we publish several packages to npm from the same codebase including gamut kit we provide a single package to manage the versions of a few core dependencies gamut gamut icons gamut illustrations gamut patterns gamut styles since these packages are highly intertwined we suggest only installing codecademy gamut kit when your app needs all of these gamut kit include in your application instead of the individual packages to simplify version management packages gamut kit readme md npm version https badge fury io js 40codecademy 2fgamut kit svg https badge fury io js 40codecademy 2fgamut kit 1 run yarn add codecademy gamut kit 2 add each of the managed packages to your peer dependencies this is required for enabling intellisense for these packages and does not have any effect on version resolution json peerdependencies codecademy gamut codecademy gamut icons codecademy gamut patterns codecademy gamut illustrations codecademy gamut styles codecademy gamut tests codecademy variance individual packages gamut our react ui component library packages gamut readme md npm version https badge fury io js 40codecademy 2fgamut svg https badge fury io js 40codecademy 2fgamut gamut styles utility styles for gamut components and codecademy apps packages gamut styles readme md npm version https badge fury io js 40codecademy 2fgamut styles svg https badge fury io js 40codecademy 2fgamut styles gamut icons svg icons for gamut components and codecademy apps packages gamut icons readme md npm version https badge fury io js 40codecademy 2fgamut icons svg https badge fury io js 40codecademy 2fgamut icons variance typescript css in js utility library packages variance readme md npm version https badge fury io js 40codecademy 2fvariance svg https badge fury io js 40codecademy 2fvariance styleguide styleguide documentation storybook development sandbox packages styleguide readme md local development 1 run yarn in the root directory 1 run yarn build to build all of the packages certain packages like gamut icons need to be built to function in storybook running the storybook styleguide 1 run yarn start to start the storybook server 1 add new stories to packages styleguide stories 1 stories are written using storybook s component story format https storybook js org docs formats component story format publishing modules 1 make your changes in a feature branch and get another engineer to review your code 1 after your code has been reviewed and tested you can merge your branch into main 1 make sure to update your pr title and add a short description of your changes for the changelog see the pr title guide https github com codecademy gamut pr title guide 1 to merge your changes add the ship it label to your pull request 1 once your branch is merged into main it will be published automatically by circleci 1 you can find the new version number on npmjs com package package name or find it in that package s package json on the main branch publishing an alpha version of a module every pr that changes files in a package publishes alpha releases that you can use to test your changes across applications note in case an alpha build is not published upon opening of the pr or draft pr re run the build test check and that will re run 
the alpha build publishing flows 1 create a pr or draft pr this will kickoff a circle ci workflow which will publish an alpha build this will appear in github as the deploy 1 after the alpha build is published the codecademydev bot should comment on your pr with the names of the published alpha packages br img width 290 height auto src https user images githubusercontent com 4298857 114948632 3fa88a80 9e04 11eb 89ef d016a1c9c572 png 1 install this version of the package in your application you wish to test your changes on working with pre published changes note due to the inconsistencies of symlinks in a lerna repo instead of using yarn link we recommend using the npm link better package with the copy flag to copy packages into your local repo s node modules directory initial setup 1 ensure you have npm link better installed npm install g npm link better 1 ensure you ve built the entire gamut repo since you last synced yarn build instructions for each of your local gamut packages e g gamut you ll need to do 2 things to get it working in your project 1 make sure your package changes have been built into the gamut packages package dist folder yarn build br or br yarn build watch not all packages support this yet 1 copy that built dist folder to your project s node modules codecademy package folder bash cd myprojectrepo npm link better copy watch path to gamut packages package note the watch flag will automatically copy your package into node modules everytime it is built details summary example workflow summary let s say we are making changes to the gamut package and our app that uses the gamut package uses yarn start to build serve and watch our app for changes let s also assume these two repos are sibling directories inside of a folder called repos repos gamut my app we would run the following commands in 3 separate shells bash shell 1 auto build gamut changes cd repos gamut packages gamut yarn build watch shell 2 auto copy built gamut changes to my app cd repos my app npm link better copy watch gamut packages gamut shell 3 auto update app when anything changes cd repos my app yarn start this would allow us to make a change in our gamut package and see that change automatically reflected in our local app in the browser details details summary troubleshooting summary if you see compilation issues in your project s dev server after running npm link better you may have to restart your app s dev server if you are seeing compilation issues in a gamut package you may need to rebuild the whole repository via bash yarn build details details summary instructions for using yarn link instead not recommended summary for quicker development cycles it s possible to run a pre published version of gamut in another project we do that using symlinks the following instructions assume you have set up and built gamut 1 cd path to gamut packages gamut 1 yarn link 1 cd path to other repo 1 yarn link codecademy gamut 1 yarn install if your other project uses react you must link that copy of react in gamut 1 cd path to other repo 1 cd node modules react 1 yarn link 1 cd path to gamut packages gamut 1 yarn link react 1 yarn build see the docs https reactjs org warnings invalid hook call warning html duplicate react for more information for why you have to do this details br adding a new package 1 create a new directory at packages package name package json 1 use yarn lerna create to create the new package copying values from existing package json s when unsure also copy the publishconfig field to let your published 
package be public by default 1 create a minimal amount of source code in the new package example a simple tsconfig json with a index ts exporting a single object 1 run yarn lerna bootstrap from the repository root 1 send a feat pr adding that package 1 once merged message out in our frontend slack channel to other gamut developers to re run yarn lerna bootstrap after they merge from main notes if your package will be used in other packages in the monorepo you may need to set up aliases in jest and storybook so that they can be run without building your package first you can find these aliases in jest config js jest config js and the styleguide storybook config packages styleguide storybook main ts nx this monorepo uses nx https nx dev to cache previous builds locally and in ci the config for nx is located at nx json nx json along with project json files for each package for new packages please use an nx generator plugin to create your initial package this will ensure that all of the configuration for linting testing is set up correctly pr title guide your pr title should follow the conventional commits https www conventionalcommits org en v1 0 0 format because we automatically squash merge pull requests you ll need to format your pr title to match these guidelines since the title will become the commit message your individual commits will affect the alpha version number but not the final version once you merge to main this title format will be linted in the conventional pr title status check and prevent merging if you do not follow the correct format pr title format when you click squash and merge the title should follow this format type scope message examples fix fixes a bug in some component test adds test to component with a scope feat button sparkles an awesome feature for the button component breaking change feat button fire deleted the button component check out the conventional commits https www conventionalcommits org en v1 0 0 page for more detailed options type the type determines what kind of version bump is needed a fix will create a patch release while a feat will create a minor release major version updates require a special syntax that is described below type must be one of the following options standard types feat a new feature fix a bug fix style changes that do not affect the meaning of the code white space formatting missing semi colons etc docs documentation only changes perf a code change that improves performance refactor a code change that neither fixes a bug nor adds a feature test adding missing tests or correcting existing tests ci changes to our ci configuration files and scripts build changes that affect the build system or external dependencies scope a scope is optional and consists of a noun describing a section of the codebase surrounded by parentheses e g feat button breaking changes adding an exclamation point after your type before the colon will indicate that your pr contains a breaking change and increment the major version number of the modules you changed examples feat made a breaking change in the button component feat button made a breaking change in the button component you should do this if your changes introduce any incompatibilities with previous versions of the module this will indicate to package consumers that they need to refactor their usage of the module to upgrade breaking changes release process because gamut is a separate repository from its consumers it can be tricky to coordinate technically breaking changes if your changes will require
changes in any downstream repositories 1 create a pr in gamut to create alpha package versions 2 create prs in the repositories using those alpha package versions 3 update each downstream pr description to link to the gamut pr and vice versa 4 once all prs have been approved merge your gamut pr first 5 update your repository prs to use the new non alpha package versions once published 6 merge your repository prs this process minimizes the likelihood of accidental breaking changes in gamut negatively affecting development on our other repositories body optional extra description for your changes this goes in the description for your pr between the changelog description comment tags in the pr template if you include the text breaking change in your description it will trigger a major version bump we prefer to use the feat syntax for breaking changes described above publishing storybook storybook is built and published automatically when there are merges into the main branch | codecademy design-system react | os |
GAN_Review | this repo contains gans review for topics of computer vision and time series news 2021 07 11 our preprint generative adversarial networks in time series a survey and taxonomy eoin brophy and zhengwei wang and qi she and tomas e ward https arxiv org pdf 2107 11098 pdf is out this work is currently in progress 2021 02 14 our paper generative adversarial networks in computer vision a survey and taxonomy zhengwei wang and qi she and tomas e ward https dl acm org doi abs 10 1145 3439723 arxiv version https arxiv org pdf 1906 01529 pdf has been published at acm computing surveys and we will continue to polish this work into the 5th version details of selected papers and code can be found in the gan cv folder https github com sheqi gan review tree master gan cv 2020 11 24 our paper generative adversarial networks in computer vision a survey and taxonomy zhengwei wang and qi she and tomas e ward https arxiv org pdf 1906 01529 pdf gets accepted into acm computing surveys and we will continue to polish this work into the 5th version 2020 06 20 we have updated our 4th version of gan survey for computer vision paper it includes more recent gans proposed at cvpr iccv 2019 2020 more intuitive visualization of gan taxonomy 2020 10 04 gans related to our latest paper will be updated shortly generative adversarial networks in computer vision image gan cv pic gans taxonomy png a survey and taxonomy of the recent gans development in computer vision please refer to the details in recent review paper generative adversarial networks in computer vision a survey and taxonomy zhengwei wang and qi she and tomas e ward https dl acm org doi abs 10 1145 3439723 arxiv version https arxiv org pdf 1906 01529 pdf we also provide a list of papers related to gans on computer vision in the gan cv csv file if you find this useful in your research please consider citing article wang2021generative title generative adversarial networks in computer vision a survey and taxonomy author wang zhengwei and she qi and ward tomas e journal acm computing surveys csur volume 54 number 2 pages 1 38 year 2021 publisher acm new york ny usa we have classified the two gan variant research lines based on recent gan developments below we provide a summary and the demo code of these models we have tested the codes below and tried to summarize some lightweight and easy to reuse modules of state of the art gan architectures architecture variant gans lapgan https github com jimfleming lapgan tensorflow https github com aaronyalai generative adversarial networks pytorch pytorch dcgan https github com carpedm20 dcgan tensorflow tensorflow https github com last one dcgan pytorch pytorch began https github com carpedm20 began tensorflow tensorflow https github com anantzoid began pytorch pytorch progan https github com tkarras progressive growing of gans tensorflow https github com nashory pggan pytorch pytorch sagan https github com brain research self attention gan tensorflow https github com heykeetae self attention gan pytorch biggan https github com taki0112 biggan tensorflow tensorflow https github com ajbrock biggan pytorch pytorch your local gan https github com giannisdaras ylg tensorflow https github com 188zzoon your local gan pytorch autogan https github com vita group autogan pytorch msg gan https github com akanimax msg stylegan tf tensorflow https github com akanimax msg gan v1 pytorch loss variant gans wgan https github com chengbinjin wgan tensorflow tensorflow https github com zeleni9 pytorch wgan
pytorch wgan gp https github com changwoolee wgan gp tensorflow tensorflow https github com caogang wgan gp pytorch lsgan https github com xudonmao lsgan tensorflow https github com meliketoy lsgan pytorch pytorch f gan https github com lynnho f gan tensorflow tensorflow ugan https github com gokul uf tf unrolled gan tensorflow https github com andrewliao11 unrolled gans pytorch ls gan https github com maple research lab lsgan gp alt tensorflow https github com maple research lab glsgan gp pytorch mrgan https github com wiseodd generative models tree master gan mode regularized gan tensorflow and pytorch geometric gan https github com lim0606 pytorch geometric gan pytorch rgan https github com alexiajm relativisticgan tensorflow and pytorch sn gan https github com taki0112 spectral normalization tensorflow tensorflow https github com christiancosgrove pytorch spectral normalization gan pytorch realnessgan https github com taki0112 realnessgan tensorflow tensorflow https github com kam1107 realnessgan pytorch sphere gan https github com taki0112 spheregan tensorflow tensorflow https github com dotori hj spheregan pytorch implementation pytorch self supervised gan https github com zhangqianhui self supervised gans tensorflow https github com vandit15 self supervised gans pytorch pytorch gan review for time series a survey and taxonomy of the recent gans development in time series please refer to the details in recent review paper generative adversarial networks in time series a survey and taxonomy eoin brophy and zhengwei wang and qi she and tomas e ward https arxiv org pdf 2107 11098 pdf this work is currently in progress if you find this useful in your research please consider citing article brophy2021generative title generative adversarial networks in time series a survey and taxonomy author brophy eoin and wang zhengwei and she qi and ward tomas journal arxiv preprint arxiv 2107 11098 year 2021 datasets unlike computer vision having lots of well known and large scale benchmarking datasets time series benchmarking datasets are limited due to generalization and some privacy issues especially for clinical data below we provide some resources of well known time series datasets hopefully it is useful feel free to suggest any well known time series datasets to this repo by opening new issue we will review it and add it to the list we hope this can help push the time series research forward oxford man institute realised library updated daily https realized oxford man ox ac uk real multivariate time series dataset contains 2 689 487 instances and 5 attributes eeg motor movement imagery dataset 2004 https physionet org content eegmmidb 1 0 0 real multivariate time series contains 1 500 instances and 64 attributes ecg 200 2001 http www timeseriesclassification com description php dataset ecg200 real univariate time series contains 200 instance and 1 attribute epileptic seizure recognition dataset 2001 https archive ics uci edu ml datasets epileptic seizure recognition real multivariate time series dataset contains 11 500 instances and 179 attributes twoleadecg 2015 http www timeseriesclassification com description php dataset twoleadecg real multivariate time series dataset contains 1 162 instances and 2 attributes mimic iii clinical database 2016 https physionet org content mimiciii 1 4 real integer categorical multivariate time series mimic iii clinical database demo 2019 https physionet org content mimiciii demo 1 4 real integer categorical multivariate time series epilepsiae project database 
http www epilepsiae eu project outputs european database on epilepsy real multivariate time series dataset contains 30 instances physionet cinc https physionet org news post 231 lots of clinical data for challenging competition wrist ppg during exercise 2017 https physionet org content wrist 1 0 0 real multivariate time series dataset contains 19 instances and 14 attributes mit bih arrhythmia database 2001 https physionet org content mitdb 1 0 0 real multivariate time series dataset contains 201 instances and 2 attributes kdd cup dataset https kdd org kdd cup lots of real integer categorical multivariate time series datasets pems database updated daily https dot ca gov programs traffic operations mpr pems source real integer categorical multivariate time series datasets nottingham music database http abc sourceforge net nmd special text format time series discrete variant gans seqgan https arxiv org pdf 1609 05473 pdf tensorflow https github com lantaoyu seqgan pytorch https github com suragnair seqgan quant gan https arxiv org pdf 1907 06673 pdf code to be added continuous variant gans c rnn gan https arxiv org pdf 1611 09904 pdf tensorflow https github com olofmogren c rnn gan pytorch https github com cjbayron c rnn gan pytorch rcgan https arxiv org pdf 1706 02633 pdf tensorflow https github com ratschlab rgan sc gan https www springerprofessional de en continuous patient centric sequence generation via sequentially 16671112 code to be added nr gan https dl acm org doi abs 10 1145 3366174 3366186 code to be added time gan https papers nips cc paper 2019 file c9efe5f26cd17ba6216bbe2a7d26d490 paper pdf tensorflow https github com jsyoon0823 timegan sigcwgan https arxiv org pdf 2006 05421 pdf pytorch https github com sigcgans conditional sig wasserstein gans dat cgan https arxiv org pdf 2009 12682 pdf code to be added synsiggan https www mdpi com 2079 7737 9 12 441 code to be added | gan generative-adversarial-network deep-learning gans tensorflow | ai |
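For readers new to the surveyed material, a minimal vanilla GAN training loop in PyTorch may help anchor the taxonomy above. This is a toy sketch on synthetic 2-d data, not the demo code of any listed variant, and all dimensions and hyperparameters are arbitrary:

```python
import torch
import torch.nn as nn

# Toy generator/discriminator pair; layer sizes are arbitrary.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # synthetic "real" samples
    fake = G(torch.randn(64, 16))           # generator output from noise

    # Discriminator step: push real -> 1 and fake -> 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```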
uber-data-engineering | uber data engineering here is my latest data engineering project leveraging the power of google cloud and mage i recently completed an end to end data engineering project using the uber dataset and i m thrilled with the outcomes here s a glimpse into the architecture i built for this project data ingestion i started by uploading the uber dataset to google cloud storage ensuring secure and reliable storage of the raw data data transformation with mage and google compute engine to perform the data transformation i utilized mage an open source data pipeline tool for transforming and integrating data mage s distributed processing capabilities allowed me to efficiently clean transform and enrich the dataset ensuring its quality and integrity data warehousing the cleaned and transformed data was then loaded into google bigquery a scalable and high performance data warehousing solution enabling fast and interactive analysis analytics and insights leveraging the querying power of bigquery and sql i conducted extensive analysis on the uber dataset i uncovered valuable insights related to user behavior demand patterns and performance metrics facilitating data driven decision making visualization with looker to make the insights easily accessible and visually appealing i used looker a robust data visualization and business intelligence platform looker allowed me to create interactive dashboards and visualizations enabling stakeholders to gain intuitive and actionable insights | cloud |
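To illustrate the analytics step of the pipeline described above, here is a minimal sketch of querying BigQuery from Python. The project, dataset, and table names are hypothetical, and GCP credentials must already be configured:

```python
from google.cloud import bigquery

# Hypothetical project name; requires configured GCP credentials.
client = bigquery.Client(project="my-uber-project")

# Hypothetical dataset/table; aggregate trips and fares by payment type.
query = """
    SELECT payment_type, COUNT(*) AS trips, AVG(fare_amount) AS avg_fare
    FROM `my-uber-project.uber_dataset.fact_table`
    GROUP BY payment_type
    ORDER BY trips DESC
"""

for row in client.query(query).result():
    print(row.payment_type, row.trips, row.avg_fare)
```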
EasyLLM | easyllm running large language model easily faster and low cost installation shell pip install jllm data handling this step is optional but recommended especially when your data are too big to be loaded to cpu memory at once conversion convert the raw data to token ids stored in parquet files shell python m jllm raw to ids tokenizer baichuan inc baichuan 13b chat i dataset0 jsonl o dataset0 baichuan 13b chat note samples of pre train dataset should be separated by n n in text files or be the value of key text in jsonl files fine tune dataset s format should be system content user content assistant content in each row of jsonl files key system is not necessary shuffle if you have multiple datasets you shouldn t skip this step it could shuffle all the datasets globally by rows like spark https spark apache org doing firstly move all the datasets stored in parquet folders into one directory such as datasets shell datasets dataset0 baichuan 13b chat dataset0 00000 dataset0 00000 00000 gzip parquet dataset0 00000 00001 gzip parquet dataset0 00001 dataset0 00001 00000 gzip parquet dataset0 00001 00001 gzip parquet dataset1 baichuan 13b chat dataset1 00000 dataset1 00000 00000 gzip parquet dataset1 00000 00001 gzip parquet dataset1 00001 dataset1 00001 00000 gzip parquet dataset1 00001 00001 gzip parquet then run the following command to shuffle the rows inner each dataset and distribute them to new blocks num block is recommended to be the multiple of next step s repartition number shell python m jllm shuffle datasets d datasets output shuffled datasets num block 4 every dataset would be shuffled and placed in shuffled datasets with several times of num block parquet files shell shuffled datasets dataset0 baichuan 13b chat 00000 dataset0 baichuan 13b chat 00000 00000 gzip parquet dataset0 baichuan 13b chat 00000 00001 gzip parquet dataset0 baichuan 13b chat 00000 00002 gzip parquet dataset0 baichuan 13b chat 00000 00003 gzip parquet dataset1 baichuan 13b chat 00000 dataset1 baichuan 13b chat 00000 00000 gzip parquet dataset1 baichuan 13b chat 00000 00001 gzip parquet dataset1 baichuan 13b chat 00000 00002 gzip parquet dataset1 baichuan 13b chat 00000 00003 gzip parquet repartition optional but recommended 1b token ids in parquet files take up to 2g of hard disk at most but require approximately 10g of cpu memory setting num partition according to the cpu memory of each worker shell python m jllm repartition datasets shuffled datasets num partition 4 the datasets will be shell shuffled datasets 5984729befe338e6a7 part 00000 dataset0 baichuan 13b chat 00000 00000 gzip parquet dataset1 baichuan 13b chat 00000 00000 gzip parquet 5984729befe338e6a7 part 00001 dataset0 baichuan 13b chat 00000 00001 gzip parquet dataset1 baichuan 13b chat 00000 00001 gzip parquet 5984729befe338e6a7 part 00002 dataset0 baichuan 13b chat 00000 00002 gzip parquet dataset1 baichuan 13b chat 00000 00002 gzip parquet 5984729befe338e6a7 part 00003 dataset0 baichuan 13b chat 00000 00003 gzip parquet dataset1 baichuan 13b chat 00000 00003 gzip parquet data info note you can also use spark to shuffle the data if you have and want model training here are two training examples zero shell deepspeed h hostfile train zero py model openlm research open llama 13b train data dataset0 jsonl pipeline parallelism shell deepspeed h hostfile module jllm train pipe model baichuan inc baichuan 13b chat train data shuffled datasets pipe parallel size 8 per device train batch size 2 gradient accumulation steps 32 ds config ds config py 
checkpoint checkpoint max num checkpoints 2 note arguments train data and eval data also support jsonl files run python m jllm train pipe h for more arguments generally every gpu process reads one piece of data that means one worker with 8 gpus will need to allocate a total of 8x cpu memory for data but now they need just 1x if these gpus belong to one pipeline under my special optimizations in this project i strongly recommend training your model with faster and low cost pipeline parallelism rather than zero the pipeline engine can directly load and save model s weights in huggingface s format it can also load parameters from the checkpoint if you want to resume after an interruption any configs related to training shouldn t be modified checkpoint conversion convert model s weights in checkpoint to hf format shell deepspeed module jllm ckpt to hf model baichuan inc baichuan 13b chat pipe parallel size 8 ckpt checkpoint hf baichuan 13b chat finetune if your model doesn t have any lora weights you can also convert the checkpoint without gpus by shell python m jllm nolora ckpt to hf model baichuan inc baichuan 13b chat ckpt checkpoint hf baichuan 13b chat finetune supported models model pipeline stages training speed tokens s llama 13b 8 82540 54 baichuan 13b 8 67174 40 qwen 7b 4 122033 10 qwen 14b 8 75915 26 note the training speed of each model was measured on 64 nvidia a100 pcie 40gb gpus with data type of bfloat16 and batch token size of 4m batch size seq length inference vllm is used here for inference batch inference shell python batch infer py model baichuan 13b chat finetune prompt file prompt txt api server start the server shell python server py model baichuan 13b chat finetune query the model shell curl http localhost 8000 generate h content type application json d messages user san francisco is a sampling max tokens 32 citation if you find easyllm useful or use easyllm code in your research please cite it in your publications bibtex misc easyllm author jian lu title easyllm running large language model easily faster and low cost year 2023 publisher github journal github repository howpublished url https github com janelu9 easyllm git acknowledgment this repository benefits from deepspeed https github com microsoft deepspeed flash attention https github com dao ailab flash attention git xformers https github com facebookresearch xformers and vllm https github com vllm project vllm | deepspeed llama llm pipeline zero | ai |
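As a companion to the curl example in the row above, here is a Python sketch of querying the API server. The exact JSON shape of the /generate request is an assumption reconstructed from the flattened curl command:

```python
import requests

# Assumed request schema, reconstructed from the curl example; adjust to the
# server's actual /generate contract if it differs.
resp = requests.post(
    "http://localhost:8000/generate",
    json={
        "messages": [{"user": "san francisco is a"}],
        "sampling": {"max_tokens": 32},
    },
)
print(resp.json())
```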
tw8836 | tw8836 evaluation board ver 1 0 tw8836 evaluation board mcu tw8836 rtos smallrtos | os |
NLP-CSE4022 | natural language processing this repository contains all the parallel and distributed assignments for the cse4022 lab as of fall 18 experiment 1 final submissions experiment 1 experiment 1 md import an inaugural speech using nltk corpus display top five frequent words create a word cloud result final submissions experiment 1 obama inaugural worldcloud png experiment 2 use nltk corpus to plot a conditional frequency distribution result final submissions experiment 2 experiment 2b md import and use stanford s chinese segmenter result final submissions experiment 2 experiment 2a md experiment 3 final submissions experiment 3 exercise 3 pdf exploring corpus of contemporary american english coca experiment 4 final submissions experiment 4 experiment 4 md remove stopwords from any corpus import cmu wordlist use wordnet pos tag tweets experiment 5 final submissions experiment 5 experiment 5 md get two texts remove stop words map the text to vector spaces compute cosine vec1 vec2 use scipy take a call if you should do stemming or not experiment 6 sequence tagging basic implemented a sequence tagger name entity recognition original link https github com guillaumegenthial sequence tagging credit guillaumegenthial https github com guillaumegenthial experiment 7 cogcomp using cogcomp to run nlp tools such as part of speech tagging chunking named entity recognition etc original link https github com cogcomp cogcomp nlpy credit cogcomp https github com cogcomp experiment 8 https github com jacob5412 nlp blob master programs chinese segmented application py a chinese word segmenter designed using stanford nlp a python gui based application digital assignment 1 final submissions digital assignment 1 da1 pdf academic review of apache opennlp sample code of opennlp in r authors jacob initial work jacob5412 github com jacob5412 | natural-language-processing hacktoberfest | ai |
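Experiment 5 in the row above (map two texts to vector spaces and compute cosine similarity with scipy) can be sketched as follows; the sample sentences are made up, and NLTK's punkt and stopwords resources must be downloaded first:

```python
from collections import Counter

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from scipy.spatial.distance import cosine

# nltk.download("punkt"); nltk.download("stopwords")  # first run only
STOP = set(stopwords.words("english"))

def tokens(text):
    """Lowercase, tokenize, and drop stopwords and non-alphabetic tokens."""
    return [w for w in word_tokenize(text.lower()) if w.isalpha() and w not in STOP]

doc1 = "natural language processing helps computers understand human text"
doc2 = "computers process natural language to understand text written by humans"

# Bag-of-words vectors over the shared vocabulary.
vocab = sorted(set(tokens(doc1)) | set(tokens(doc2)))
v1 = [Counter(tokens(doc1))[w] for w in vocab]
v2 = [Counter(tokens(doc2))[w] for w in vocab]

# scipy's cosine() is a distance, so similarity = 1 - distance.
print("cosine similarity:", 1 - cosine(v1, v2))
```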
cloud-refactor-interview | infrastructure refactoring

welcome to the cloud engineering technical interview. the purpose of this exercise is to see how you would approach refactoring an existing piece of code that manages aws infrastructure, based on the requirements we're setting out in the brief.

as general guidance, you're not expected to spend more than an hour on this. this exercise is designed to only require infrastructure that fits within the aws free tier; this means that, if you want to, you can try this out live in order to validate your solution. however, the purpose of the exercise is to understand how you would approach solving the brief. the solution doesn't have to be 100% correct and validated to work in aws, as long as your general approach makes sense.

prep info: in this exercise you'll be using the aws cloud development kit and its typescript bindings to create some infrastructure in aws. if you're familiar with tools like terraform, then cdk should feel familiar, except that you're using a general-purpose programming language instead of a dsl. for a short introduction to the general concepts of cdk, as well as some example code, you can read the "what is the aws cdk" https docs aws amazon com cdk v2 guide home html chapter of the documentation. it's strongly recommended to set up your editor with a typescript language server so your editor can assist in the refactoring, provide code completion, etc. the code itself contains links to the reference documentation for each object that is used.

before you get started, ensure you have: nodejs 14 and npm installed; cloned this repository; ran `npm install`. if you also want to try out your solution in aws: configure the aws cli for your account and validate it works with `aws sts get-caller-identity`.

deploy to aws:

```shell
npm run cdk bootstrap   # only required if you never used cdk in the current aws account and region
npm run cdk deploy taco-service
```

after deploying, cdk should display the taco-service lbendpoint, which is the domain name:

```shell
curl --fail http://<domain name>/folded
smoky pulled aubergine black bean
```

brief: the el chiquito, a hyper-scaling mexican start-up restaurant, serves mexican dishes using http requests. this repository contains the tacos microservice: it serves tacos in response to get requests on the /folded path. the tacos are a great success and the company raised a series a; it now wants to start serving 20 more mexican dishes.

we'd like you to go in and refactor the current taco service. we would like to end up with a reusable component that teams can use to deploy their own microservices. with that refactoring done, you should add the new burrito service, which serves burritos on post requests to the /wrapped path.

submission: you can push your solution to a public repository of your choosing and tell us where to go and get it from.

criteria: your solution should create a generic service that each team can use for their dish; the solution should allow for enough configurability to avoid teams copy-pasting the service abstraction or cloning the existing tacoservice; each microservice should have its own ecs cluster and load balancer; optional: write a test.

in the end, the solution should roughly work like this:

```shell
npm run cdk deploy taco-service      # still deploys a functioning tacos service
npm run cdk deploy burrito-service   # deploys a burrito service
curl --fail -X POST http://<burrito-service domain name>/wrapped
fajita veggies salsa roja
```

the above should serve a burrito. where to start: the entrypoint for any cdk application is in the bin directory; it'll import cdk as well as our taco service that lives in the lib directory.

useful commands:
- `npm run build`: compile typescript to js
- `npm run watch`: watch for changes and compile
- `npm run test`: perform the jest unit tests
- `npm run cdk deploy`: deploy this stack to your default aws account/region
- `npm run cdk diff`: compare deployed stack with current state
- `npm run cdk synth`: emits the synthesized cloudformation template
| cloud
|
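the brief above asks for a reusable, configurable construct; the repo itself is typescript, but the shape of that abstraction is easy to sketch in cdk's python bindings. this is a rough illustration only: class, stack and image names here are hypothetical, not the repo's actual api:

```python
from aws_cdk import App, Stack, aws_ecs as ecs, aws_ecs_patterns as patterns
from constructs import Construct

class DishService(Stack):
    """illustrative reusable stack: one ecs cluster + load balancer per dish."""

    def __init__(self, scope: Construct, service_id: str, image: str, **kwargs) -> None:
        super().__init__(scope, service_id, **kwargs)
        cluster = ecs.Cluster(self, "Cluster")  # each microservice gets its own cluster
        patterns.ApplicationLoadBalancedFargateService(
            self,
            "Service",
            cluster=cluster,
            task_image_options=patterns.ApplicationLoadBalancedTaskImageOptions(
                image=ecs.ContainerImage.from_registry(image),  # placeholder image names below
            ),
        )

app = App()
DishService(app, "taco-service", image="example/tacos")
DishService(app, "burrito-service", image="example/burritos")
app.synth()
```

with an abstraction like this, each new dish becomes a one-line stack instantiation instead of a cloned service file, which is the configurability the criteria ask for.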
itk | itk information technology kindergarten | server |
|
nodejs-express-start | nodejs express get started. install: `npm install`; run the server: `node server.js` | front_end
|
drone-cortexm | crates io https img shields io crates v drone cortexm svg https crates io crates drone cortexm maintenance https img shields io badge maintenance actively developed brightgreen svg

drone cortex-m: arm cortex-m platform crate for drone, an embedded operating system.

supported cores:

| architecture | core name | rust target | `drone_cortexm` rust flag |
| --- | --- | --- | --- |
| armv6-m | arm cortex-m0+ r0p0 | thumbv6m-none-eabi | cortexm0plus_r0p0 |
| armv6-m | arm cortex-m0+ r0p1 | thumbv6m-none-eabi | cortexm0plus_r0p1 |
| armv7-m | arm cortex-m3 r0p0 | thumbv7m-none-eabi | cortexm3_r0p0 |
| armv7-m | arm cortex-m3 r1p0 | thumbv7m-none-eabi | cortexm3_r1p0 |
| armv7-m | arm cortex-m3 r1p1 | thumbv7m-none-eabi | cortexm3_r1p1 |
| armv7-m | arm cortex-m3 r2p0 | thumbv7m-none-eabi | cortexm3_r2p0 |
| armv7-m | arm cortex-m3 r2p1 | thumbv7m-none-eabi | cortexm3_r2p1 |
| armv7e-m | arm cortex-m4 r0p0 | thumbv7em-none-eabi | cortexm4_r0p0 |
| armv7e-m | arm cortex-m4 r0p1 | thumbv7em-none-eabi | cortexm4_r0p1 |
| armv7e-m | arm cortex-m4f r0p0 | thumbv7em-none-eabihf | cortexm4f_r0p0 |
| armv7e-m | arm cortex-m4f r0p1 | thumbv7em-none-eabihf | cortexm4f_r0p1 |
| armv8-m | arm cortex-m33 r0p2 | thumbv8m.main-none-eabi | cortexm33_r0p2 |
| armv8-m | arm cortex-m33 r0p3 | thumbv8m.main-none-eabi | cortexm33_r0p3 |
| armv8-m | arm cortex-m33 r0p4 | thumbv8m.main-none-eabi | cortexm33_r0p4 |
| armv8-m | arm cortex-m33f r0p2 | thumbv8m.main-none-eabihf | cortexm33f_r0p2 |
| armv8-m | arm cortex-m33f r0p3 | thumbv8m.main-none-eabihf | cortexm33f_r0p3 |
| armv8-m | arm cortex-m33f r0p4 | thumbv8m.main-none-eabihf | cortexm33f_r0p4 |

the rust target triple and `drone_cortexm` rust flag should be set at the application level according to this table.

documentation: drone book https book drone os com; api documentation https api drone os com drone cortexm 0 15

usage: add the crate to your cargo.toml dependencies:

```toml
[dependencies]
drone-cortexm = { version = "0.15.0", features = [...] }
```

add or extend the `host` feature as follows:

```toml
[features]
host = ["drone-cortexm/host"]
```

license: licensed under either of apache license, version 2.0 (license-apache or http www apache org licenses license 2 0) or mit license (license-mit or http opensource org licenses mit), at your option. contribution: unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. | embedded asynchronous concurrency no-std hardware-support arm cortex rtos bare-metal firmware rust | os
Blockchain-Graveyard | blockchain graveyard publicly known blockchain incidents that include details of the breach or victim statements https magoo github io blockchain graveyard gh contributing this site consists of two main sections the graveyard is an enumeration of large incidents that we can see in one place the advice section is for beginners advice to the enormous security subject matter required to lock down a blockchain company graveyard all content you d want to add will live in posts and should follow similar markup only title and link are really important additions are preferred in this order of quality 1 announcements or post mortems from the primary victim email blog posts website announcements tweets 2 interviews of primary victims with commentary about the breach with journalists 3 primary victims discussing breach in public forums or communities 4 good journalism link bait opinion pieces or victimized customer rants not accepted please add any raw details about the breach into the body of the post directly quoted from the underlying link if possible advice the advice section will be pretty heavily moderated it s designed to be extremely introductory to avoid overwhelming information and fairly broad gh https magoo github io blockchain graveyard | blockchain |
|
iot-dc3-web | boom iot dc3 boom rocket star seedling. web ui for dc3 https gitee com pnoker iot dc3 https github com pnoker iot dc3

requirements: nodejs 12 / 14 / 16, git. recommended ide: webstorm or visual studio code.

npm configuration (.npmrc):

```txt
registry=https://registry.npm.taobao.org
sass_binary_site=https://npm.taobao.org/mirrors/node-sass
```

install yarn:

```bash
npm install -g yarn
```

clone:

```bash
git clone https://github.com/pnoker/iot-dc3-web.git
```

install and run:

```bash
cd iot-dc3-web
yarn install
yarn run serve
```

visual studio code settings.json:

```json
{
    "workbench.tree.indent": 16,
    "workbench.editor.wrapTabs": true,
    "workbench.editor.enablePreview": false,
    "workbench.colorTheme": "Default Dark+",
    "workbench.iconTheme": "vscode-jetbrains-icon-theme",
    "workbench.tree.renderIndentGuides": "always",
    "editor.hover.enabled": false,
    "editor.fontFamily": "Monaco, Consolas, 'Courier New', monospace",
    "editor.tabSize": 4,
    "editor.formatOnSave": true,
    "editor.formatOnPaste": true,
    "editor.codeActionsOnSave": { "source.fixAll.eslint": true },
    "editor.detectIndentation": false,
    "files.associations": { "*.ttml": "html", "*.ttss": "css", "*.wxss": "css", "*.wxml": "html" },
    "search.exclude": { "**/node_modules": true, "**/bower_components": true, "**/target": true, "**/logs": true },
    "extensions.ignoreRecommendations": true,
    "markdown.preview.openMarkdownLinks": "inEditor",
    "typescript.updateImportsOnFileMove.enabled": "always",
    "git.autofetch": true,
    "git.enableSmartCommit": true,
    "terminal.integrated.cursorBlinking": true,
    "terminal.integrated.cursorStyle": "line",
    "terminal.integrated.defaultProfile.windows": "gitbash",
    "terminal.integrated.profiles.windows": { "gitbash": { "path": "D:\\Program Files\\Git\\bin\\bash.exe", "args": ["-li"] } },
    "[json]": { "editor.defaultFormatter": "vscode.json-language-features" },
    "[jsonc]": { "editor.defaultFormatter": "vscode.json-language-features" },
    "[html]": { "editor.defaultFormatter": "vscode.html-language-features" },
    "[javascript]": { "editor.defaultFormatter": "vscode.typescript-language-features" },
    "[typescript]": { "editor.defaultFormatter": "vscode.typescript-language-features" },
    "[vue]": { "editor.defaultFormatter": "octref.vetur" },
    "[scss]": { "editor.defaultFormatter": "vscode.css-language-features" },
    "vetur.format.options.useTabs": true,
    "vetur.format.defaultFormatterOptions": {
        "js-beautify-html": { "wrap_attributes": "force-aligned" },
        "prettier": { "printWidth": 180, "semi": false, "singleQuote": true, "wrapAttributes": true, "sortAttributes": true, "eslintIntegration": true }
    }
}
```
| dc3 iot web element-plus manager typescript ui vue3 iotdc3 | server
FakeImageDetection | fake image detection using machine learning

the objective of this project is to identify fake images, i.e. images that have been digitally altered. the problem with existing fake image detection systems is that they can be used to detect only specific tampering methods, like splicing, coloring, etc. we approached the problem using machine learning and a neural network to detect almost all kinds of tampering on images.

with the latest image editing software it is possible to make alterations on an image that are too difficult for the human eye to detect. even with a complex neural network, it is not possible to determine whether an image is fake or not without identifying a common factor across almost all fake images. so instead of feeding raw pixels directly to the neural network, we gave it the error-level-analysed image.

this project provides two levels of analysis for the image. at the first level it checks the image metadata. image metadata is not all that reliable, since it can be altered using simple programs, but most of the images we come across will have unaltered metadata, which helps to identify alterations; for example, if an image is edited with adobe photoshop, the metadata will even contain the version of adobe photoshop used. at the second level, the image is converted into error-level-analysed format and resized to a 100px x 100px image. these 10,000 pixels with rgb values (30,000 inputs) are given to the input layer of a multilayer perceptron network. the output layer contains two neurons: one for fake images and one for real images. depending upon the values of these neuron outputs, along with the metadata analyser output, we determine whether the image is fake or not and how much chance there is that the given image has been tampered with.

feature engineering:
1. dr. neal krawetz proposed a method called error level analysis (ela) http www hackerfactor com papers bh usa 07 krawetz wp pdf that exploits the lossy compression of jpeg images. when an image is altered, the compression ratio of the altered portion changes with respect to other parts. a well-trained neural network can detect the anomaly and determine whether the image is fake or not.
2. the second parameter considered is the metadata of the image. a parallel module is added to the program which checks the metadata to determine the signatures of various image editing programs. since it is costly to execute a neural network, the metadata inspection considerably increases performance by detecting tampering at an early stage.

neural network structure:

| layer | neurons |
| --- | --- |
| input layer | 30,000 |
| hidden layer 1 | 5,000 (sigmoid) |
| hidden layer 2 | 1,000 (sigmoid) |
| hidden layer 3 | 100 (sigmoid) |
| output layer | 2 |

watch on youtube: watch a video https img youtube com vi mvin9hrs8uy 0 jpg https www youtube com watch v mvin9hrs8uy

tools used:
- neuroph studio http neuroph sourceforge net: an open source java neural network framework that helps to easily build and use neural networks. it also provides a direct interface for loading images.
- metadata extractor https github com drewnoakes metadata extractor: an open source java library used to extract metadata from images.
- javafx http docs oracle com javase 8 javase clienttechnologies htm: javafx is used to implement a modern user interface for the application.

flow chart (detection): img src http i imgur com tkx7uv6 png
flow chart (training): img src http i imgur com wuoo1kb png

project status:
- [x] implement metadata processing module
- [x] design a user interface
- [x] implement image feature extractor
- [x] design neural network using neuroph studio
- [x] implement neural network interface with javafx
- [x] connect neural network
- [x] integrate neural network to master
- [x] train network
- [x] test network
- [x] read feedback from user and learn instantly
- [x] add network training interface
- [x] add module to apply error level analysis on a set of images
- [x] improve look and feel
- [x] train with more data
- [x] add batch testing module
- [x] detach user interface from core
- [x] implement command line interface
- [ ] reach success rate of 90%

journal link: https www academia edu 37977449 fake image detection using machine learning

screenshots: img src https i imgur com vzfdecs png, img src https i imgur com t3tvsuj png, img src https i imgur com 0mzmffp png, img src https i imgur com z8dzhgd png, img src https i imgur com mvc9tp0 png, img src https i imgur com yhq5jgx png | machine-learning fake-images neural-network multilayer-perceptron-network neuroph-studio java-neural-network javafx | ai
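the project itself is java (neuroph + javafx), but the ela preprocessing it describes is simple enough to sketch in python; filenames and the jpeg quality below are placeholder choices:

```python
from PIL import Image, ImageChops, ImageEnhance

# rough sketch of error level analysis: resave the image as jpeg, then amplify
# the per-pixel difference; edited regions tend to recompress differently
orig = Image.open("input.jpg").convert("RGB")          # placeholder filename
orig.save("resaved.jpg", "JPEG", quality=90)           # quality is a tunable choice
resaved = Image.open("resaved.jpg")

ela = ImageChops.difference(orig, resaved)
max_diff = max(hi for _, hi in ela.getextrema()) or 1  # brightest difference over all bands
ela = ImageEnhance.Brightness(ela).enhance(255.0 / max_diff)
ela.resize((100, 100)).save("ela_100x100.png")         # 100x100, as in the readme
```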
PRODUCT-AVAILABILITY-BACKEND- | product availability product availability using app development backend | server |
|
welcome_to_TE | welcome to tileexpert front end tileexpert https jobs tile expert 1 todo fork forked https jobs tile expert ru front end react developer https jobs tile expert ru https tile expert ru p s issue pullrequest | test-tasks task tile-expert | front_end |
Ultimate-Frontend | ultimate frontend ultimate front end course on youtube | front_end |
|
front-end-develop-demo | https user gold cdn xitu io 2019 5 28 16afa1fcc4da6a40 w 3248 h 1246 f png s 595966 license readme md base babel6 react babel6 react babel7 react babel7 react git git jest jest jsdoc jsdoc lerna lerna mobx counter mobx react feature react hooks portals react mobx scss react mobx scss redux counter redux scss scss css typescript typescript webpack loader webpack loader download css css sheet git git javacript javascript javascript promise book pdf javascript promise issue template md lerna json package json senior design pattern sdk sdk workflow https user gold cdn xitu io 2019 3 11 16968712ebc59189 w 1870 h 1037 f png s 30629 video src https user gold cdn xitu io 2019 3 11 169686f66c4ee271 preload auto poster https img alicdn com tfs tb1 e8zlxtpk1rjszr0xxbewxxa 686 570 png controls video https user gold cdn xitu io 2019 2 15 168f01a09775564b w 1874 h 1098 f png s 231078 ppt im ppt ppt https share weiyun com 54jggbc https github com dkypooh front end develop demo https github com ge tbms issues star react hooks webpack git jest lerna https user gold cdn xitu io 2019 2 15 168ef583ac76baef w 1763 h 626 f png s 102917 sdk https user gold cdn xitu io 2019 2 15 168effd661702f7a w 1930 h 815 f png s 137276 eventemitter eventemitter typescript yeoman im https user gold cdn xitu io 2019 2 14 168eba75c8132ba3 w 2414 h 899 f png s 120804 xx https user gold cdn xitu io 2019 5 26 16af4721447468fe w 1421 h 859 f png s 17171 | front_end |
|
Computer_Vision_2023 | computer vision 2023 courseworks for computer vision 2023 spring term | ai |
|
California-State-Web-Template-Website | california state template website. the california state web template is an html template and website standard offered by the california department of technology to state agencies and departments within the state of california and beyond. please visit webstandards ca gov for more information on california web publishing guidelines. this repository is the state web template sample website, showcasing all of the components, functionality and implementation guidance. to download a version of the state web template without the sample content, please select a framework and visit the corresponding repository.

available state web template frameworks:
- california state web template html https github com office of digital services california state web template html
- california state web template eleventy https github com office of digital services california state web template eleventy
- california state template net core mvc https github com office of digital services california state web template net core mvc
- california state web template react https github com office of digital services california state web template react
| server
|
Machine-Learning-Plugins | machine learning plugins machine learning plugins | ai |
|
Web-development-workshop | during this course you will learn how to produce web sites that are technically sound, aesthetically cohesive and appropriately structured. this includes:
- conducting research (interviews, surveys and competitor analysis) to devise a content strategy
- using code (html, css and a dash of javascript) creatively to communicate and advocate
- building technically sound, aesthetically cohesive and appropriately structured websites
- understanding the www technical components and how they work (browsers, servers, http, dns, ip, etc.)
- structuring content with html
- styling information and interfaces with css
- hosting a website on github pages
- organising projects with git
- project management basics: listing and prioritising tasks, tracking and evaluating progress, getting things done
- serving content dynamically on demand with php and javascript
- using wordpress to manage online content
- developing wordpress templates and themes
- transferring files with ftp

plan, term 2:

| when | in class | homework | blog |
| --- | --- | --- | --- |
| wednesday 06/01 | session 01: welcome; workshop: html css recap; project: sharing is caring | sharing is caring: video tutorials of your css trick | sharing is caring |
| wednesday 13/01 | session 02: workshop github recap; tutorials on sharing is caring; flexbox | peer learning research: how does the www work | javascript for cats |
| wednesday 20/01 | session 03: peer learning how does the www work; workshop scroll magic; tutorials on sharing is caring | sharing is caring: storyboard and copy | how does the www work |
| wednesday 27/01 | session 04: wwwtf quiz; workshop wireframing; wireframes critique | sharing is caring: wireframes to html css | visualising information for advocacy |
| wednesday 03/02 | session 05: formative presentations; tutorials on sharing is caring | sharing is caring: collect your research | infographic stories |
| week 6, wednesday 17/02 | session 07: content strategy; copy writing | sharing is caring: copy writing; peer learning research: visualising information for advocacy | copywriting is interface design |
| friday 26/02 | session 08: peer learning visualising information for advocacy; tutorials on sharing is caring | sharing is caring: brain catching mock ups | infographics good and bad |
| friday 04/03 | session 09: tutorials on sharing is caring with tor; team project our space | our space: prep formative | interviewing humans |
| friday 11/03 | session 10: formative video presentations | form teams for our space; install mamp; install wp locally | interviewing your target audience |

term 3:

| when | in class | homework | blog |
| --- | --- | --- | --- |
| friday 15/04 | session 11: recap; competitor analysis; user interviews planning | qualitative research: interviews; quantitative research: questionnaire | our space interviews insights |
| friday 22/04 | session 12: personas; content strategy | peer learning research: goodui | our space personas |
| friday 29/04 | session 13: peer learning on goodui; workshop typesetting | moodboard: collect inspirations; wireframes | 10 common typography mistakes |
| friday 06/05 | session 14: front end frameworks; prototyping with html css; tutorials on our space | keep prototyping; prep formative | web design myths |
| tuesday 17/05 | session 15: formative presentations; workshop: hacking a bootstrap template and publishing your work to github pages | keep prototyping | |
| friday 20/05 | session 16: group tutorials on our space; individual tutorials on sharing is caring | our space branding guidelines | destroy the web |
| friday 27/05 | session 17: user testing; group tutorials on our space; individual tutorials on sharing is caring | face2face user testing; work on our space | |
| friday 03/06 | session 18: tutorials on our space and sharing is caring; debugging clinic | work on our space; prep summative | |
| friday 10/06 | session 19: summative presentations | summative hand-in | what have i learned |

projects:
- sharing is caring: this project is about using code (html, css and a dash of javascript) creatively to communicate and advocate a cause you care about. all the project material is here: projects sharing is caring
- content needs design: on this team project you will learn how to design systems (containers) to store and display content that doesn't yet exist, or that someone else will produce. all the project material is here: projects our space
- our space: on this team project you will design and build a website for our ravensbourne web media degree course. starting from research (interviews with staff and students, competitor analysis), you will devise a content strategy and then build a technically sound, aesthetically cohesive and appropriately structured website. all the project material is here: projects our space

learning goals: by the end of this course you will be able to:
- conduct research (interviews, surveys and competitor analysis) to devise a content strategy
- use code (html, css and a dash of javascript) creatively to communicate and advocate
- build technically sound, aesthetically cohesive and appropriately structured websites
- understand how the www works (browsers, servers, http, dns, ip, etc.)
- visualise your ideas and explore design solutions through paper and digital wireframes
- use html to structure web content appropriately and accessibly
- use css to style web pages cohesively and coherently, being informed by contemporary trends
- document your design and development process, from the exploration of ideas to their practical implementation, including successes and failures
- communicate your ideas both technically and in an engaging way
- use the git version control system, through github, to collaborate with your team and back up your project files
- make web pages dynamic using php
- manage online content using wordpress
- modify and maintain wordpress templates and themes
- upload your project files to a web server using ftp

rules of the road:
- be present. if you happen to be late (even by 5 minutes) or absent, make sure you email me about it before a session starts; we'll deduct 2% from your grade for each uncommunicated tardiness or absence (aka the 2% tardiness tax)
- participate in class debates and workshops; we'll make sure that your ideas have space to be heard and that nobody makes you feel uncomfortable about sharing them
- present your work during formative and summative assessments; if you can't make it those days, then you'll record your presentation and upload it to youtube or similar
- be responsible for what happens in class; organise with your peers to get class information and material that you may have missed
- meet the deadlines; if you submit your work after a deadline, your grade will be capped at d (bare pass)

license: https i creativecommons org l by nc sa 4 0 88x31 png http creativecommons org licenses by nc sa 4 0. this work is licensed under a creative commons attribution-noncommercial-sharealike 4.0 international license http creativecommons org licenses by nc sa 4 0 | content-strategy css html | front_end
ComputerVision_Bootcamp | computer vision course and workshop files these projects and solutions are part of my computer vision courses and workshops you can find more information here http mdfarragher com training | ai |
|
large-language-models | large language models. this repo contains the notebooks and slides for the large language models: application through production https www edx org course large language models application through production course on edx https www edx org professional certificate databricks large language models, databricks academy.

notebooks. how to import the repo into databricks:
1. you first need to add git credentials to databricks; refer to the documentation here https docs databricks com repos repos setup html add git credentials to databricks
2. click repos in the sidebar, then click add repo on the top right. img width 400 alt repo 1 src https files training databricks com images llm repo 1 png
3. clone the https url from github, or copy https github com databricks academy large language models git, and paste it into the box git repository url; the rest of the fields (i.e. git provider and repository name) will be automatically populated. click create repo on the bottom right. img width 700 alt add repo src https files training databricks com images llm add repo png

how to import the files from dbc releases on github:
1. you can download the notebooks from a release by navigating to the releases section on the github page. img width 400 alt dbc release1 src https files training databricks com images llm dbc release1 png
2. from the releases page, download the dbc file; this contains all of the course notebooks with the structure and metadata. img width 400 alt dbc release2 src https files training databricks com images llm dbc release2 png
3. in your databricks workspace, navigate to the workspace menu, click on home and select import. img width 400 alt dbc release3 src https files training databricks com images llm dbc release3 png
4. using the import tool, navigate to the location on your computer where the dbc file was downloaded in step 1. once you select the file, click import and the files will be loaded and extracted to your workspace. img width 400 alt dbc release4 src https files training databricks com images llm dbc release4 png

cluster settings. which databricks cluster should i use?
1. first, select single node. img width 500 alt single node src https files training databricks com images llm single node png
2. this courseware has been tested on databricks runtime 13.1 for machine learning https docs databricks com release notes runtime 13 1ml html. if you do not have access to a 13.1 ml runtime cluster, you will need to install many additional libraries, as the ml runtime pre-installs many commonly used machine learning packages, and this courseware is not guaranteed to run. img width 400 alt cluster src https files training databricks com images llm cluster png

for all of the notebooks except llm 04a fine tuning llms and llm04l fine tuning llms lab, you can run them on a cpu just fine. we recommend either i3.xlarge or i3.2xlarge (i3.2xlarge will have slightly faster performance). img width 400 alt cpu settings src https files training databricks com images llm cpu settings png

for these notebooks (llm 04a fine tuning llms and llm04l fine tuning llms lab) you will need the databricks runtime 13.1 for machine learning with gpu. img width 400 alt gpu src https files training databricks com images llm gpu png. select a gpu instance type of g5.2xlarge. img width 400 alt gpu settings src https files training databricks com images llm gpu settings png

install datasets and models. how do i install the datasets and models locally?
1. to improve performance of the code, we highly recommend pre-installing the datasets and models by running the llm 00a install datasets notebook. img width 400 alt install datasets file src https files training databricks com images llm installdatasets1 png
2. you should run this script before running any of the other notebooks; this can take up to 25 minutes to complete. img width 1000 alt install datasets notebook src https files training databricks com images llm installdatasets2 png

slides. where do i download course slides? please click the latest version under the releases section; you will be able to download the slides in pdf. | ai
|
LowerMachineCode-FreeRTOS-STM32F103C8T6- | my first project.

message: chip type: stm32f103c8t6; author: ld; date: 2017-11-20; os: freertos; email: adayimaxiga hotmail com

pin map:

| peripheral | signals | pins |
| --- | --- | --- |
| led | led0, led1 | pb5, pb6 |
| timer | tim1 ch1, ch2 | pa8, pa9 |
| can | can1 rx, tx | pb8, pb9 |
| adc | adc1 in9 | pb1 |
| usart | usart2 rx, tx | pa3, pa2 |

memory: total ram size = rw data + zi data; total rom size = code + ro data + rw data | os
|
Jail | jail img src https github com hoyinm14mc jail blob master icon png width 75 height 75 align right gitter https badges gitter im hoyinm14mc jail svg https gitter im hoyinm14mc jail utm source badge utm medium badge utm campaign pr badge

jail sets a new standard for player punishment systems. it makes your pocketmine https github com pmmp pocketmine mp server more secure than before, and it makes player punishment more capable than ever. with our advanced techniques, jail is the most innovative, powerful and customizable punishment system people have ever experienced.

installation: poggit ci https poggit pmmp io ci shield hoyinm14mc jail jail https poggit pmmp io p jail. jail updates frequently to keep up with the continuous change of your server software. to install the most recent stable release of jail, go to our releases https github com hoyinm14mc jail releases page to obtain the build. users are advised not to compile jail from its source code, as its stability is not 100% guaranteed.

innovative, more than a prison: with jail 1.1, player punishment is more than just commands. thanks to its extensive compatibility, various economic plugins can be installed to support bailing with money, and mines can be set so players have work to do in prison. there is a lot more for you to discover.

advanced, one mode, four operations: configuring a jail has never been easier. with its advanced set-jail mode, users can set up their three-dimensional jails with just one command and a few taps; there's no need to use ancient operations just to do simple things. learn more: https youtu be hr8xhoizd c

powerful, better than you think: jail is way smarter than you think. its penalty system automatically detects players' misbehavior in their jail and applies extra penalties. its vote-jail system puts a player into prison after a certain number of votes from other players. it has even more impressively powerful features to be unveiled.

safe and secure, the lock never breaks: malfunction of a prison's security features is not allowed. in reality, security always comes first, so jail forces players to stay in their prisons even when the prison is not sealed with blocks. controls are limited for players in jail, so they never break out of prison. learn more: https youtu be etmxwf4oivs

better design, more elegant than ever: a revolutionary, beautiful interface is shown on the screen of prisoners. it features a timer, a fee calculator and a colored bar, all vividly animated, giving you an incredibly great view.

stay updated, stay new: with jail 1.1 you never fall out of the trend. whenever your server is up on the internet, the built-in multi-channel update detector checks for updates over the air automatically.

start using jail: try out jail now by placing the latest phar file into your plugins folder. you can find the most recent release on both the poggit and github releases pages. before your first try-out, you are advised to go through the wiki https github com hoyinm14mc jail wiki, where you can learn all commands, permission nodes and the usage of specific features.

feedback: we care about how our users think about jail. give us feedback or report any bugs in the issues https github com hoyinm14mc jail issues section; we will follow up your case as soon as possible. | jail minecraft minecraft-pocket-edition pocketmine-mp | os
aws-machine-learning-university-dte | logo data mlu logo png

machine learning university: decision trees and ensemble methods class. this repository contains slides, notebooks and datasets for the machine learning university (mlu) decision trees and ensemble methods class. our mission is to make machine learning accessible to everyone. we have courses available across many topics of machine learning and believe knowledge of ml can be a key enabler for success. this class is designed to help you get started with tree-based models, learn about widely used machine learning techniques and apply them to real-world problems.

youtube: watch all class video recordings in this youtube playlist https www youtube com playlist list pl8p z6c4gcuxrj9crytu xayh3jac4x0p from our youtube channel https www youtube com channel uc12lqyqtqybxatys9aa7nuw playlists. playlist https img youtube com vi dtx1hn0frfk 0 jpg https www youtube com playlist list pl8p z6c4gcuxrj9crytu xayh3jac4x0p

course overview: there are five lectures, one final project and five assignments for this class.

- lecture 1: decision trees, impurity functions, cart algorithm, regularization. open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 1 dte lecture 1 prune ipynb
- lecture 2: bias-variance trade-off, error decomposition, extra trees algorithm (bias variance and randomized ensembles). open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 2 dte lecture 2 tree variance ipynb
- lecture 3: bootstrapping (open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 3 dte lecture 3 bootstrap ipynb); bagging (open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 3 dte lecture 3 bagging overfit ipynb); tree correlation (open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 3 dte lecture 3 tree correlation ipynb); random forests (open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 3 dte lecture 3 random forest ipynb)
- lecture 4: random forest proximities, some use cases for proximities, feature importance in trees (open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 4 dte lecture 4 permutation feature imp ipynb); feature importance in random forests (open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 4 dte lecture 4 feature importance ipynb)
- lecture 5: boosting (open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 5 dte lecture 5 boosting ipynb); gradient boosting; xgboost, lightgbm and catboost: catboost (open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 5 dte lecture 5 catboost ipynb); lightgbm (open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks lecture 5 dte lecture 5 lightgbm ipynb)

final project: open in studio lab: https studiolab sagemaker aws import github aws samples aws machine learning university dte blob main notebooks final project dte final project ipynb. practice working with a real-world computer vision dataset for the final project. the final project dataset is in the data final project folder https github com aws samples aws machine learning university dte tree main data final project. for more details on the final project, check out this notebook https github com aws samples aws machine learning university dte blob main notebooks final project dte final project ipynb

interactive visuals: interested in visual, interactive explanations of core machine learning concepts? check out our mlu-explain articles https mlu explain github io to learn at your own pace, including relevant articles for this course: decision trees https mlu explain github io decision tree, random forest https mlu explain github io random forest, and the bias-variance tradeoff https mlu explain github io bias variance

contribute: if you would like to contribute to the project, see contributing contributing md for more information.

license: the license for this repository depends on the section. the data set for the course is being provided to you by permission of amazon and is subject to the terms of the amazon license and access https www amazon com gp help customer display html nodeid 201909000; you are expressly prohibited from copying, modifying, selling, exporting or using this data set in any way other than for the purpose of completing this course. the lecture slides are released under the cc-by-sa-4.0 license. the code examples are released under the mit-0 license. see each section's license file for details. | machine-learning decision-trees tabular-data random-forest boosting xgboost lightgbm catboost | ai
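the notebooks are the authoritative material, but the bagging/random-forest idea from lecture 3 fits in a few lines of scikit-learn; a toy sketch on synthetic data, not taken from the course:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# toy illustration of the bagging/random-forest idea: many trees, each trained
# on a bootstrap sample with feature subsampling, averaged at prediction time
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(forest, X, y, cv=5).mean())  # mean cross-validated accuracy
```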
Yo | yo yo v3 yo ui yo v3 pure yo intro getting started supported browsers attention documentation and demo versioning bugs and feature requests author copyright and license a name intro a yo min css yo min js yo yo a name getting started a http yo doyoe com doc getting started html yo yo a name supported browsers a ios6 0 android4 0 latest stable chrome safari opera ie10 a name attention a yo html5 doctype doctype html viewport yo mobile first meta name viewport content initial scale 1 minimum scale 1 maximum scale 1 user scalable no maximum scale 1 user scalable no minimum scale 1 yo 2 border px border rem border box before after before after webkit box sizing border box moz box sizing border box box sizing border box pc flex flex flex display flex flex a name documentation and demo a view demo http doyoe github io yo demo view documentation http doyoe github io yo doc yo ydoc npm install ydoc g registry https registry npm taobao org ydoc build doc a name versioning a yo semver http semver org lang zh cn releases tag https github com doyoe yo releases changelog changelog md a name bugs and feature requests a yo yo issues https github com doyoe yo issues new pull requests https github com doyoe yo pulls a name author a https github com doyoe http weibo com doyoe http www doyoe com ymfe team https github com ymfe a name copyright and license a yo the mit license http opensource org licenses mit creative commons http creativecommons org licenses by 4 0 | css scss es6 react frontend-framework scss-framework ui-components mobile-first mobile-web mobile-app | front_end |
VisualTransformers | pytorch visual transformers: token-based image representation and processing for computer vision. a pytorch implementation of the paper "visual transformers: token-based image representation and processing for computer vision"; find the original paper here: https arxiv org abs 2006 03677. p align center img src overview png width 600 title vision transformer p. this pytorch implementation is based on this repo https github com tahmid0007 visiontransformer. the default dataset used here is cifar10, which can easily be changed to imagenet or anything else. you might need to install einops. | ai
|
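the readme's mention of einops hints at how images become tokens; a generic patch-tokenization sketch in python (illustrative only: the paper's visual tokenizer is learned and filter-based, not a fixed patch grid):

```python
import torch
from einops.layers.torch import Rearrange

# generic sketch of turning an image into a sequence of tokens with einops:
# split into 16x16 patches, flatten each patch, project to a 128-d embedding
to_tokens = torch.nn.Sequential(
    Rearrange("b c (h p1) (w p2) -> b (h w) (p1 p2 c)", p1=16, p2=16),
    torch.nn.Linear(16 * 16 * 3, 128),
)
tokens = to_tokens(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 128]): 14x14 patches become 196 tokens
```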
Leftovers | leftovers the leftovers ios application coded in swift 4 2 on xcode 10 for csc 415 please go to the wiki for a lot of info on leftovers | os |
|
TACYTools | tacytools ios reverse engineering tools | os |
|
ANYCOIN | anycoin anycoin related technology license information | server |
|
ece155 | ece 155 android project in collaboration with anthony nguyen dong and ananya singh all iterations of the android lab project for ece155 at the university of waterloo the later labs are more interesting most relevant code can be found in this directory lab4 app src main java lab3 201 14 uwaterloo ca lab3 201 14 thanks to hassan one of our tas for excellent and passionate feedback on our code and our design | os |
|
SPEED | a name readme top a

div align center: img src images logo png (logo, width 80, height 80). speed app: a searchable database of evidence for claims about different software engineering (se) practices. view demo: https vercel com; report bug / request feature: https github com carl31 speed issues. badges: contributors, forks, issues (see the reference links at the bottom).

table of contents: about the project (built with); getting started (prerequisites); usage; workflows; contact; acknowledgments

about the project: product screenshot https example com (replace image with front end once made). insert summary of speed and why it is needed.

built with: major frameworks/libraries that we used to bootstrap our project: next.js, react, tailwindcss, typescript, nest.js, mongoose

getting started: to run the front end: `npm run dev`; to run the back end: `npm start`

prerequisites: npm (`npm install npm@latest -g`); have a .env file within speed-backend containing the username and password for mongodb.

usage: insert examples of how to use speed.

workflows: the workflows via github actions allow for ci/cd to vercel. try creating a new pull request to the github repository: github actions will recognize the change and use the vercel cli to build your application. the action uploads the build output to vercel and creates a preview deployment. when the pull request is merged, a production build is created and deployed. every pull request will now automatically have a preview deployment attached. if the pull request needs to be rolled back, you can revert and merge the pr, and vercel will start a new production build back to the old git state.

contact: carl santos, abdul syed, maxinne santico. project link: https github com carl31 speed

acknowledgments: helpful resources: font awesome https fontawesome com, hero icons https heroicons com

markdown links and images: contributors shield https img shields io github contributors carl31 speed svg style for the badge; contributors url https github com carl31 speed graphs contributors; forks shield https img shields io github forks carl31 speed svg style for the badge; forks url https github com carl31 speed network members; issues shield https img shields io github issues carl31 speed svg style for the badge; issues url https github com carl31 speed issues; product screenshot images screenshot png; next js https img shields io badge next js 000000 style for the badge logo nextdotjs logocolor white; next url https nextjs org; react js https img shields io badge react 20232a style for the badge logo react logocolor 61dafb; react url https reactjs org; tailwind url https tailwindcss com; nest url https nest js com; typescript https typescriptlang org | server
|
progetti_EISD | progetti eisd: projects for the embedded & iot system design course | os
|
challenge-one-portafolio-latam | challenge one front end: portfolio. p align center img width 600 height 600 src https user images githubusercontent com 101413385 169097543 d5ada41e 7db8 481d 9d89 cef4efdf7e05 png p

welcome to the portfolio base project. main steps: star this project; follow the lessons and the content instructions; visit the challenge page by clicking here: challenge link https www aluracursos com challenges oracle one front end portafolio

analysing the repository: this is the base repository of our project. here you will find:
- index html: a finished html document that you can use in your project. the fonts and the css and javascript files are already referenced in the document, and every section contains comments that help you understand the structure. if you want, you can venture out and build your own html.
- style css: a css document with style instructions and some development suggestions.
- validacion js: an empty document where you will develop your programming logic to validate forms using javascript.

when you clone or download the base project, you will see the following presentation: p align center img width 600 height 600 src https user images githubusercontent com 101413385 169064699 f268715c 822c 4335 b066 97a1bc1ea8e1 png p

how to include my project in this challenge:
1. publish the project on github
2. publish it on github pages (how to publish my project with github pages: https docs github com pt pages getting started with github pages creating a github pages site)
3. use the theme (topic): group 4: challengeoneportafolio4; group 5: challengeoneportafolio5. go to the "about" tab of your project, in the menu on the left inside your github repository, and include the tag challengeoneportafolio4 or challengeoneportafolio5, depending on which group you belong to.

how do i make the final delivery of my project?
4. enter your details in the delivery form, with the link of the project published with github pages. form link: https lp alura com br alura latam entrega challenge one esp front end image https user images githubusercontent com 92184087 208179417 7965c06e 21d6 4174 b76a 95ec648edc00 png
5. check your email to get your exclusive badge for this challenge
6. don't forget to post a link or a video of your project on linkedin: a href https www linkedin com company alura latam mycompany target blank img src https img shields io badge linkedin 230077b5 style for the badge logo linkedin logocolor white target blank a | challengeoneportafolio challengeoneportafolio2 challengeoneportafolio3 challengeoneportafolio4 challengeoneportafolio5 | front_end
FEWD_1.0 | fewd front end web development | front_end |
|
penryn-starter | penryn starter a starter kit for web development usage npm install features docker browsersync php 7 1 5 postcss css nano es2015 eslint rollup uglify js 2 skylake | front_end |
|
loopchain | this repository is archived; refer to icon 2.0, aka goloop https github com icon project goloop

loopchain: loopchain https img shields io badge icon consensus blue logocolor white logo icon labelcolor 31b8bb https shields io, loopchain https snapcraft io loopchain badge svg https snapcraft io loopchain, citizen sync https github com icon project loopchain workflows citizen 20sync badge svg

loopchain is a high-performance blockchain consensus and network engine of the icon project. in order to run a loopchain node, you need to install icon service, which runs smart contracts and interacts with the loopchain engine, and icon rpc server, which processes http requests from clients. for details, refer to the guide below.

table of contents: getting started (requirements, installation, teardown); see also (documentation, icon release); license

getting started. requirements: loopchain development and execution requires the following environments.

1. python 3.7.x. we recommend creating an isolated python 3 virtual environment with virtualenv:

```bash
virtualenv -p python3 venv
source venv/bin/activate
```

note: we support 3.7.x only for now; please upgrade your python version to 3.7.x.

2. rabbitmq 3.7+. loopchain requires rabbitmq; for reliable installation, please visit downloading and installing rabbitmq.

3. reward calculator. reward calculator is a daemon which calculates the i-score of iconists to support iiss. please visit the reward calculator github repository to install it.

4. other dependencies. macos:

```bash
brew install automake pkg-config libtool leveldb openssl
```

ubuntu:

```bash
sudo apt update
sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev automake libtool lsof
```

note: if you are using ubuntu 18.04, you need to install an additional library, libsecp256k1-dev.

centos:

```bash
sudo yum update
sudo yum install -y git zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel xz xz-devel libffi-devel gcc gcc-c++ automake libtool lsof
```

installation, via source code:

1. check all requirements are properly installed:

```bash
make requirements
```

if you don't see any error logs and you have started the rabbitmq server, you may move on to the next step.

2. proceed with installation:

```bash
make all
```

this command is for setting up packages: it installs all necessary python packages via setup.py; grpc-proto generates python grpc code from the protocol buffer, which is defined in loopchain.proto; keystore generates a keystore file. note: the password must be at least 8 characters long, including alphabet, number and special characters. please be careful not to forget the password, since you will need it to run the citizen node.

3. run citizen: run citizen node on icon testnet network; run citizen node on icon mainnet network

via snapcraft (linux only): follow this guide: install loopchain via snap

teardown: clear rabbitmq processes, pycache and build:

```bash
make clean
```

delete logs, delete db:

```bash
make clean-log clean-db
```

note: for more command options, run make help.

see also. documentation: please visit the icon developers portal. icon release: please visit icon release.

license: this project follows the apache 2.0 license.

dependencies: icon service https github com icon project icon service, icon rpc server https github com icon project icon rpc server, reward calculator https github com icon project rewardcalculator, virtualenv https virtualenv pypa io en stable, downloading and installing rabbitmq https www rabbitmq com download
html icon release https github com icon project icon release releases relative links run citizen node on icon testnet network docs 5 20run run citizen node md run citizen node on icon testnet network run citizen node on icon mainnet network docs 5 20run run citizen node md run citizen node on icon mainnet network install loopchain via snap citizen quick start snap md web pages icon developers portal https www icondev io apache 2 0 license https www apache org licenses license 2 0 | blockchain consensus-algorithm p2p-network citizen-node icon | blockchain |
learning-flutter | flutter firebase project flutter sample project to learn different things in flutter todos x firebase authentication x riverpod flutter hooks freezed firestore database usage x navigation file upload and caching infinite scrolling | front_end |
|
blockchainbib | blocks chains bibliography: a bibtex bibliography including eprints, pre-prints and peer-reviewed publications related to, or of relevance in, the field of cryptocurrencies and distributed ledger technologies, commonly referred to as blockchains. papers in blockchain bib are currently exported as html to https allquantor at blockchainbib.

the topics of the papers in this bibliography encompass various aspects that are directly or indirectly related to this interdisciplinary field, e.g.: bitcoin and other cryptocurrencies; credit networks and payment channels; smart contract platforms; smart contract analysis and applications; distributed systems aspects; applied cryptography applicable in this context; economics and game theory; privacy and transparency; general it security related issues; regulatory and legal issues; usability and usable security.

usage of this bibliography: just reference the blockchain bib in your bibtex bibliography and compile your latex files as usual. see the test folder for an example based on an ieee template.

advanced usage, download all papers: make sure python 2 is running on your system (`python2 --version`); make sure pybtex https pybtex org is installed (`pip install pybtex`). to download the papers into the papers folder, type:

```bash
python fetch_pdfs.py blockchain.bib papers
```

advanced usage, generate html files: to generate a html paper list you need to clone and install the bibliograpy https github com nullhypothesis bibliograpy tool by philipp winter:

```bash
git clone https://github.com/nullhypothesis/bibliograpy
cd blockchainbib
bash generate.sh
```

contribute: generally, contributions are welcome and might fall into one of the following categories.

contribute by adding/updating an entry in the blockchain bib file. the requirements for a paper to be added to the bib are: it must be open access and not locked behind a pay-wall; and it is either (1) a peer-reviewed paper which has been published at an academic venue (e.g. conference proceedings, journal, workshop), or (2) a pre-print/eprint of a paper, published for example on http arxiv org, that has not been published at a peer-reviewed venue yet. the criteria for a paper in this category are: based on facts; systematic structure; no marketing; not pure speculation; written in comprehensible english.

how to add/update an entry:
1. add or update the bib entry in blockchain bib
2. run the test makefile to see if everything builds as expected:

```shell
cd test
make
# => test compiled successfully
```

3. issue a pull request on github

contribute code: this project is a quick and dirty approach, and various evolutionary steps are possible, e.g.: migrate to another data format for entries (e.g. json-ld); add abstracts to entries; add tags to entries; migrate to a github.io page and use some lightweight javascript to filter entries; provide a simple local interface to add entries; generate the resulting bib files based on custom selection.

notes: note that only papers in blockchain bib are currently exported as html to https allquantor at blockchainbib. there are also references in blockchain online bib to online resources, like for example github projects, block explorer websites, developer references, wiki entries, etc. again, the requirements from above hold: not purely marketing (some banners on websites are acceptable); not pure speculation; written in comprehensible english.

related bibliographies: a list of other, possibly outdated, collections of resources on this topic:
- collection by christian decker https github com cdecker btcresearch
- collection by
jeremy clark http users encs concordia ca clark biblio php bitcoin collection by brett scott https docs google com spreadsheets d 1vawhbaj7hwndie73p w wrl5a0wngzjofmzxe0rh5sg htmlview usp sharing pli 1 sle true collection in the bitcoin wiki subset of the above https en bitcoin it wiki research reading list repository of blockstack https github com blockstack reading list publications section of ic3 http www initc3 org publications collection in the bitcoinbyte blog outdated https thebitcoinbyte wordpress com annotated bibliography license bibliography license cc by cc https licensebuttons net l by 4 0 88x31 png https creativecommons org licenses by 4 0 python code and tpl templates license gnu general public license credits all authors that created the publications listed in this bibliography philipp winter phw at nymity ch for the python fetch code and the tpl templates aljosha judmayer ajudmayer at sba research dot org initiator of this endeavor nicolas christin suggestion of publications daniel kraft suggestion of publications nicholas stifter suggestion of publications philipp schindler suggestion of publications alexei zamyatin suggestion of publications bernhard haslhofer suggestion of publications andreas kern suggestion of publications code | blockchain |
|
django-offline-at-GIET-10-feb-2021 | django offline at giet 10 feb 2021 web development with django | front_end |
|
esp-homekit | esp homekit apple homekit accessory server library for esp open rtos https github com superhouse esp open rtos see esp homekit demo https github com maximkulkin esp homekit demo for examples building for esp idf 4 0 in esp idf 4 0 there is a spi flash write protection that checks if the area written to is inside a writable partition haven t figured out yet how esp homekit can modify the partition table automatically so for the time being you need to disable that check in menuconfig go to component config spi flash driver write to dangerous flash regions and set it to allowed qr code pairing you can use a qr code to pair with accessories to enable that feature you need to configure the accessory to use a static password and set some setup id c homekit server config t config accessories accessories password 123 45 678 setupid 1qj8 the last piece of information you need is the accessory category code you can find it in the mdns announcement in the accessory logs mdns announcement name sample led 1692md myledpv 1 0id 16 92 ce d4 ee 7ac 1s 1ff 0sf 1ci 5 port 5556 ttl 4500 notice ci 5 this is the accessory category code or just find the value of the category enum you set in main accessory c homekit accessory id 1 category homekit accessory category lightbulb services homekit service t notice homekit accessory category lightbulb this is the accessory category code then you need to generate the qr code using the supplied script tools gen qrcode 5 123 45 678 1qj8 qrcode png qr code example qrcode example png | esp8266 homekit | os |
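the three values fed to tools gen qrcode line up one to one with what the readme derives above: the category code from the mdns ci field, the static password, and the setup id from the config struct. a small python wrapper can make that relationship explicit; the script path and argument order are taken verbatim from the readme's documented call, while the wrapper itself is only an illustrative convenience, not part of the repo.

```python
# illustrative wrapper around the documented tools/gen_qrcode call
# usage shown in the readme: tools/gen_qrcode 5 123-45-678 1QJ8 qrcode.png
import subprocess


def make_pairing_qr(category, password, setup_id, out_png):
    """Render a HomeKit pairing QR code via the repo's helper script.

    category - accessory category code, e.g. 5 for lightbulb (ci=5 in mDNS)
    password - the static setup code configured in homekit_server_config_t
    setup_id - the 4-character setupId from the same config struct
    """
    subprocess.run(
        ["tools/gen_qrcode", str(category), password, setup_id, out_png],
        check=True,  # raise if the helper script reports an error
    )


if __name__ == "__main__":
    make_pairing_qr(5, "123-45-678", "1QJ8", "qrcode.png")
```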
uswds-jekyll | this repository is in maintenance mode and only accepting fixes the tech portfolio and tts digital council are working on a strategy for tts microsites which will inform the future of this theme open an issue to provide any feedback https github com 18f uswds jekyll issues new jekyll u s web design system this is a jekyll theme https jekyllrb com docs themes for the u s web design system https designsystem digital gov table of contents 1 installation installation versioning versioning 1 configuration configuration site title site title site description site description navigation navigation page subnavigation page subnavigation hero hero tagline intro tagline intro graphics list graphics list color and font configuration color and font configuration search search analytics analytics last modified date last modified date edit page edit page anchor js anchor js private eye js private eye js 1 assets assets stylesheets stylesheets scripts scripts asset load order asset load order 1 customization customization overriding includes and layouts overriding includes and layouts 1 components components header header identifier identifier 1 layouts layouts default layout default page layout page home layout home post layout post project layout project team member layout team member 1 migration guide migration 1 development development installation 1 install the theme as a ruby gem by adding it to your gemfile like so ruby gem uswds jekyll 1 fetch and update your bundled gems by running sh bundle 1 set the theme in your site s jekyll configuration config yml yml theme uswds jekyll you will need to restart your jekyll server to see the effects install as a new jekyll site 1 create a new jekyll site jekyll new 1 replace the default gem minima 2 0 gem with the uswds jekyll gem in your gemfile ruby gem uswds jekyll 1 set the theme in your site s jekyll configuration config yml yml theme uswds jekyll 1 fetch and update your bundled gems by running sh bundle 1 run bundle exec jekyll serve to build your site locally at http localhost 4000 versioning to reference a specific version of this theme 1 visit the releases page https github com 18f uswds jekyll releases and decide which version you want to use 1 specify the version in your gemfile ruby gem uswds jekyll 5 0 configuration configuration of common elements header header identifier identifier navigation navigation etc happens in your project s data files https jekyllrb com docs datafiles see this project s data directory data for reference configurations of each component the default layout layout default also provides a mechanism for automatically including stylesheets stylesheets and scripts scripts on a site wide layout wide and per page basis see asset load order asset load order for more information site title you can change your site s title with the title field in config yml if you want to provide an alternate title for use only in the site header you can set the title field in data header yml site description you can change your site s description with the description field in config yml if you want to override it for a particular page you can set the description field in that page s frontmatter navigation this theme s navigation system is powerful and flexible named navigational lists live in your project s data navigation yml by default all links are assumed to be internal to the site you can add external true to links that are external you can also add class class name to add a class to a specific link yml data navigation yml 
primary text documentation href docs text support href help class highlight text 18f href https 18f gsa gov external true link objects with a links field will be presented as collapsible link lists the links field can either be a reference to another link list in this file or a literal list text section title links links this scheme allows you to define navigational elements that can be shared by different components such as the header header and sidenav page subnavigation see the documentation for those components for more info page title set each page s title in its frontmatter title about us page subnavigation if you re using the page layout layout page each page may declare its own side navigation and subnavigation in its front matter md sidenav documentation subnav text section one href section one text section two href section two section one section two as with the header header the sidenav field may either reference a common navigation list navigation from data navigation yml recommended or be a literal list of links the subnav field should be used to link to sections within the current page because links to other pages will cause the linking page s side navigation to collapse when visited sidenav is a key into data navigation yml see the navigation navigation docs for more info a page s current or active state in the sidenav is determined by whether a link s href matches page url or page permalink for each page being rendered subnav is a list of links to display on this page under its own link in the side navigation note that subnav link hrefs are not prefixed with site baseurl because this breaks hash links prefixed with pro tip unless your jekyll configuration specifies otherwise the default markdown formatter kramdown will automatically generate predictable id attributes for your page headings and convert markdown like this md section one into html h2 id section one section one h2 if you re using redcarpet you will need to configure it to enable the with toc data extension in your config yml like so yml markdown redcarpet redcarpet extensions with toc data pro tip if you re like us and prefer your navigation sticky you can add sticky sidenav true on page layout page project layout project and team member layout team member layouts to have the sidenav follow as you scroll hero yml hero optional image path to image jpg optional callout alt callout white text optional text the rest of the callout button optional text the button text href button href tagline intro yml optional but must be used in conjunction with intro below tagline a tagline for your page also optional but must be used with tagline above intro some introductory text content this will be processed as markdown graphics list yml an optional list of graphics to display before or after the content graphics image note the indentation here graphics n image src src path to image ext alt optional alt text title optional graphic title rendered as an h3 description graphic description text processed as markdown optional graphics position before after color configuration the default colors can be configured in the data theme yml file other settings can be configured using uswds theme settings see the customization customization section below search search gov https search gov is used for search and can be configured in config yml before configuring your search you will need to create a search gov account and set up your website with search gov after setting up your site on search gov you can then add your search site handle 
to the config yml analytics google analytics you can add google analytics to your site by uncommenting the google analytics ua line and replacing ua with your google analytics ua code configuration for google analytics add your ua code here google analytics ua ua digital analytics program dap you can add dap to your site by uncommenting the dap agency line and if need be replacing gsa with the appropriate agency key and optionally dap subagency may also be specified for more information visit https www digitalgov gov services dap configuration for dap add your agency id here dap agency gsa dap subagency tts feedback form to add a user feedback form create a new survey through touchpoints https touchpoints digital gov and add the id via the touchpoints form id key in config yml last modified date to show the last date a page was last modified by 1 add these lines to the edit page yml data file yml last modified display date true date format b d y 1 add the following to your gemfile ruby group jekyll plugins do gem jekyll last modified at end this will add the date right before the identifier component edit page to add a link which will allow visitors to submit edits to the current page via github add the following lines to to the edit page yml data file yml edit page display link true text edit this page this will add the edit link right before the identifier component anchor js you can show an anchor link next to header tags by uncommenting this section from the config yml data file this will add an anchor link after the header tag on the page and post layouts making each header linkable see https github com bryanbraun anchorjs for more information yml anchor js targets h1 h2 h3 h4 h5 h6 private eye js by default the uswds jekyll theme uses private eye https github com 18f private eye to denote private links you can turn this on by adding the setting below in your config yml if you would like to customize the default private eye configuration you can find it in assets js private eye conf js yml private eye true assets the stylesheet includes styles html and script includes scripts html includes each incorporate the uswds css and js files if the corresponding styles and scripts lists aren t defined in your config yml so unless you add one or both of those manually your html will include the following html in the head link rel stylesheet href assets uswds css uswds min css media screen before body script src assets uswds js uswds min js async read more about customizing stylesheets stylesheets and scripts scripts below stylesheets as a general rule all stylesheets are inserted in a layouts head which qualifies them as render blocking site stylesheets can be specified in config yml or a layout or page s front matter yaml in the following form yml styles path to sheet css href path to sheet css media screen print all optional stylesheets specified as objects in the latter item above must have an href property the media defaults to screen scripts as a general rule all scripts are inserted before a layouts body which prevents them from blocking the rendering of your page s content scripts can be specified in config yml or a layout or page s front matter yaml in the following form yml scripts path to script js src path to script js async true optional scripts specified as objects in the latter item above must have a src property scripts with async true will get an async attribute which tells the browser not to let this script s loading block the execution of subsequent scripts if the execution 
order of your scripts is not important setting async true may provide performance benefits to your users conversely if you don t know whether your scripts need to execute in a particular order then you should not set async true because it may prevent your scripts from running properly asset load order both stylesheets stylesheets and scripts scripts can be configured 1 assets configured at the site level in your config yml will be loaded in all pages that use the uswds layouts layouts 1 those configured at the layout level in that layout s front matter will be loaded on all pages that use that layout after site assets 1 those configured at the page level in the page s front matter will be loaded last customization customize the uswds jekyll theme with uswds theme settings files https designsystem digital gov documentation settings uswds design tokens https designsystem digital gov design tokens and custom sass or css you ll need to place uswds settings and custom sass into a couple specific locations for the theme to find them 1 settings add custom uswds settings to sass uswds theme settings scss settings control the uswds output see all available settings in the uswds settings documentation https designsystem digital gov documentation settings we recommend adding only your modified settings to the uswds theme settings scss file to see an example of all the settings available to uswds see the files in the uswds github repo https github com uswds uswds tree develop src stylesheets theme the repo splits settings into multiple files if you want to copy and mimic that structure download the repo files using a tool like downgit https minhaskamal github io downgit home url https github com uswds uswds tree develop src stylesheets theme then add them to the sass directory and import them from uswds theme settings scss whether you add only individual settings variables or import from multiple files uswds theme settings scss needs to be the entry point 1 custom sass and variables add any custom css or sass to sass uswds theme custom styles scss you can use this custom styles file to import any additional sass or css files your project needs as long as any additional files exist in the sass directory custom sass loads after the uswds and default sass so you can use it to override the defaults individual sites can also selectively override overriding includes and layouts individual includes and layouts overriding includes and layouts any include includes or layout layouts can be overridden by your site by placing a file with the same name into your site s includes or layouts directory for instance you can change how the side navigation is rendered but not which data it receives in the page layout layout page by creating your own includes sidenav html you can change how and whether the side navigation is displayed at all in the page layout layout page by overriding layouts page html components for some uswds components https designsystem digital gov components there are two different files that control how data is passed to the template 1 components component html is the low level template that assumes a similarly named global template variable for instance the header component operates on the header template variable 1 component html is the concrete implementation of the component that sets the appropriate global variable then includes the low level template this separation allows you to override either of the component includes in your own jekyll site without having to re implement either the high 
or low level logic for instance if you want your header data to come directly from the jekyll configuration file config yml rather than data header yml you can override includes header html to look like this html assign header site data header include components header basic html header the header html include includes header html sets the header template variable to site data header the value of which is set in your jekyll project s data header yml file then it includes components header html includes components header html to render the header s markup see this repo s header yml data header yml for more info identifier the components identifier html include includes components identifier html sets the identifier template variable to site data identifier the value of which is set in your jekyll project s data identifier yml file see this repo s identifier yml data identifier yml for more info layouts this theme provides the following layouts which you can use by setting the layout front matter on each page like so yml layout name supported optional front matter for page layouts page navigation page subnavigation hero hero tagline intro tagline intro graphics list graphics list layout default this is the bare bones uswds layout which does all of the basic page scaffolding then drops the page content into the main element all of the other layouts inherit this one and provide other features in the content block the default layout provides a layout front matter hook to add attributes to the main element you can see how this works in the page layout layouts page html l3 l4 layout home this layout implements the home page template https designsystem digital gov page templates landing which accommodates the following front matter check out the yaml front matter in the home demo page demo home html for an example of how to structure it layout page this layout implements the document page template https designsystem digital gov page templates docs see the page demo page demo page md for an example of how this works and see data navigation yml data navigation yml for how to structure named navigation data for your site layout post this layout is identical to the layout page and is included to allow for easier site creation using jekyll new layout project this layout is used to show details for an individual project and uses the following front matter yml layout project title title of project permalink projects link to project description project description large image path to image ext small image path to image ext image alt the image alt text to show a listing of projects on a page add include project list html to the page layout team member this layout is used to show details for an individual team member and uses the following front matter yml layout team member permalink team link to team member name team member name image path to image ext job title team member job title phone 123 456 7890 email email address gov to show a listing of team members on a page add include team list html to the page sass http sass lang com guide jekyll sass https jekyllrb com docs assets sassscss front matter https jekyllrb com docs frontmatter migration from guides style 18f https github com 18f guides style see this example pull request https github com 18f before you ship pull 458 from earlier versions note uswds jekyll 5 x is only compatible with jekyll 4 0 and higher 1 update your uswds jekyll gem in your project s gemfile replace the current gem uswds jekyll line with ruby gem uswds jekyll 5 0 then in the 
terminal run sh bundle update uswds jekyll 1 if you have an existing sass folder it needs to move to the root level and out of any directory like assets 1 add or move any custom styles or variables to sass uswds theme custom styles scss if you have multiple custom styles files add them to the sass directory and import them from uswds theme custom styles scss 1 convert manual values to tokenized values using the guidance on the uswds migration page https designsystem digital gov documentation migration spacing units 1 don t duplicate the h1 in the body content of page template pages this is automatically inserted at the top with the content of page title 1 check that certain data keys exist config yml styles nothing unless adding additional stylesheets header yml type basic basic mega extended extended mega theme yml examples colors usa banner usa banner bg base lightest usa banner text ink usa banner link primary dark usa banner link hover primary darker header header bg white header title ink header link base header link hover primary dark alt section bg color primary darker header color accent cool text color white link color base lightest link hover color white hero hero bg primary darker hero header accent cool hero header alt white hero text white hero link accent cool hero button bg primary hero button text white top navigation top nav bg white top nav link base dark top nav link hover primary top nav link hover bg white top nav link current base dark top nav dropdown bg primary dark top nav dropdown link white top nav dropdown link hover bg transparent side navigation side nav bg transparent side nav link ink side nav link hover primary dark side nav link hover bg base lightest side nav link current primary dark 1 check that css is referencing uswds theme css development this section explains how to develop this theme and or test it locally requirements ruby https www ruby lang org bundler https bundler io 2 x node js https nodejs org setup install the node js dependencies npm install install ruby dependencies npm run setup jekyll start the application this allows you to preview the effects of your changes jekyll will build the site watch the sass files and rebuild when there are changes npm start open your web browser to localhost 4000 http localhost 4000 to update uswds when new version of uswds is released you should pull in the latest assets rake update review and commit the assets working with a jekyll site if you want to test an existing jekyll site that uses uswds jekyll you can link the gem to your local uswds jekyll repo in your jekyll site change your gemfile to point at the local clone of this repo ruby gem uswds jekyll path path to uswds jekyll publish to rubygems 1 update spec version number here in the uswds jekyll gemspec file to the version you want to publish 1 run bundle install 1 add a pr for the update and get it merged 1 run bundle exec rake release 1 add a github release to the releases page with the same version number 1 you should see the latest version here https rubygems org gems uswds jekyll scripts start starts the jekyll site setup uswds copies assets from the uswds package to their theme locations by running the following scripts which can also be run separately sync assets copies assets to assets uswds sync sass copies sass source files to sass uswds src sync default settings copies default settings files to sass uswds settings sync theme settings copies only theme settings files to sass settings | jekyll-theme uswds government | os |
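since the navigation scheme described above is all convention (every link needs text, plus either an href for a plain link or a links value for a collapsible list, with optional external and class keys), a tiny validation script can catch malformed entries before jekyll silently renders them wrong. this checker is a hypothetical add-on rather than part of the theme, and it assumes the _data/navigation.yml layout documented above.

```python
# hypothetical sanity check for _data/navigation.yml (not part of the theme)
# rules inferred from the readme: every entry needs "text", plus either an
# "href" (a plain link) or a "links" value (a collapsible link list)
import sys

import yaml  # pip install pyyaml


def check_entry(entry, where):
    problems = []
    if "text" not in entry:
        problems.append(f"{where}: missing 'text'")
    if "href" not in entry and "links" not in entry:
        problems.append(f"{where}: needs 'href' or 'links'")
    # literal sub-lists are checked recursively; string values are
    # references to another named list and are resolved by the theme
    if isinstance(entry.get("links"), list):
        for i, sub in enumerate(entry["links"]):
            problems += check_entry(sub, f"{where}.links[{i}]")
    return problems


def main(path="_data/navigation.yml"):
    with open(path) as fh:
        data = yaml.safe_load(fh)
    problems = []
    for list_name, links in data.items():
        for i, entry in enumerate(links):
            problems += check_entry(entry, f"{list_name}[{i}]")
    for p in problems:
        print(p)
    return 1 if problems else 0


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```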
approx-vision | the approximate vision project this is the public release of code developed for the paper reconfiguring the imaging pipeline for computer vision https capra cs cornell edu research visionmode by mark buckler suren jayasuriya and adrian sampson it contains the configurable reversible imaging pipeline crip described in the paper documentation on how to run and edit the crip for your own use and both dockerfiles and instructions for how to run our supported computer vision benchmarks license and citation all code in this git repository is released under the mit license if you use this code in your research please cite our iccv 2017 paper inproceedings buckler iccv2017 author mark buckler and suren jayasuriya and adrian sampson title reconfiguring the imaging pipeline for computer vision booktitle the ieee international conference on computer vision iccv year 2017 compiling running and general usage all available documentation for this code can be found in this github repo s wiki https github com cucapra approx vision wiki those of you who just want to run a simple example or are just getting started will find our faq https github com cucapra approx vision wiki getting started faq particularly helpful learning and using other camera models our pipeline is an augmented version of this reversible imaging pipeline https github com mbuckler reversiblepipeline the page that used to host additional models as well as the code needed to learn new models is here https www comp nus edu sg brown radiometric calibration but unfortunately the site appears to be down this is possibly because the pi on the project michael s brown https www eecs yorku ca mbrown has moved universities if you contact them you may gain access to the original code but we don t have access contributors mark buckler mab598 cornell edu suren jayasuriya sjayasur andrew cmu edu adrian sampson asampson cs cornell edu taehoon lee tl353 cornell edu | ai |
|
compassuol | hi i m natália guimarães a data engineer in training i m taking part in compassuol s aws cloud data engineering program img alt uol height 20 width 20 src https user images githubusercontent com 104440384 214584734 789f5402 5283 40cc 8caa 63f29bb498c2 png br as a member of squad 3 clouds cloud city sunset i live in goiânia go in the center west of brazil br mortar board i hold a law degree from ufg 2016 and i m in the 3rd semester of internet systems at unicesumar br computer this is my first experience in the technology field and i m very happy about it relaxed br purple heart besides technology my hobbies are cooking and doing small diy makeovers of furniture and things around the house br all of it of course accompanied by an excellent soundtrack https www youtube com watch v kxyiu jcytu ab channel linkinpark headphones star2 br here you can follow a bit of my studies and projects in progress x sprint 1 agile methodology security git and linux 18 01 to 31 01 x sprint 2 sql and concepts used in the data field 01 02 to 14 02 x sprint 3 python 15 02 to 28 02 x sprint 4 functional python containers and ml 01 03 to 14 03 x sprint 5 cloud computing aws 15 03 to 28 03 x sprint 6 cloud computing aws 29 03 to 11 04 x sprint 7 lambda architecture apache hadoop apache spark 12 04 to 25 04 x sprint 8 apache spark batch 26 04 to 09 05 x sprint 9 apache spark streaming 10 05 to 23 05 x sprint 10 data visualization 24 05 to 06 06 x final presentation x aws cloud practitioner certified https www credly com badges 8cc870ce 05c6 4c9f 9cc1 36c26e22878e linked in profile br p align right em a human being must transform information into intelligence or knowledge br we tend to forget that no computer will ever ask a new question br grace hopper em p rocket languages frameworks and tools studied div img align center alt linux height 30 width 30 src https user images githubusercontent com 104440384 214585540 742f932b f868 4908 a65e 74094552ee53 png img align center alt github height 30 width 30 src https user images githubusercontent com 104440384 214586360 9770dad2 d14c 4927 b238 56cffa0409a9 png img align center alt git height 30 width 30 src https user images githubusercontent com 104440384 214585535 e73ee71d 804b 400c adb9 e67fdea944fd png img align center alt nati html height 30 width 30 src https user images githubusercontent com 104440384 214584934 2d91da15 143b 460f 82ce ee3566d63349 png img align center alt nati css height 30 width 30 src https user images githubusercontent com 104440384 214584940 7211657c 6d0a 45f5 855a 898647e7b6aa png img align center alt nati js height 30 width 30 src https user images githubusercontent com 104440384 214586350 8d97910e c432 4396 aa91 706c3c1a8810 png img align center alt sql height 30 width 30 src https user images githubusercontent com 104440384 218635686 f8b56c01 19dd 451e b787 4ab7d2e9fed2 png img align center alt python height 30 width 30 src https user images githubusercontent com 104440384 214360489 b5abd1ed 3612 448f 86d0 d934dff813ab png img align center alt apache spark height 30 width 30 src https user images githubusercontent com 104440384 214586357 b26325ee 5a40 4b24 96af 0b5cd5d8d4d5 png img align center alt docker height 30 width 30 src https user images githubusercontent com 104440384 214584923 d9f3ec66 9558 425a 8d24 c2b816f2f201 png img align center alt cloudaws height 25 width 30 src https user images githubusercontent com 104440384 214564965 7f15743f 147a 428a b84e 068578c5752c png personal
notes a href https natycodes notion site compass uol 1aedfaf2caff4a299063ee510956b566 pvs 4 notion a img align right alt nati pic height 150 style border radius 50px src https user images githubusercontent com 104440384 214576775 90842255 a57b 4ee2 b6fd 68d93831aece png div email https img shields io badge gmail 23333 style for the badge logo gmail logocolor white mailto guimaraessnatalia gmail com linkedin https img shields io badge linkedin 230077b5 style for the badge logo linkedin logocolor white https www linkedin com in natalia guimar c3 a3es 6a357721b snake animation https github com nataliasguimaraes nataliasguimaraes blob output github contribution grid snake svg compassuol https user images githubusercontent com 104440384 214567499 2dc24c5e d882 4825 b953 f5a69a6be44e jpg | cloud |
|
365-Days-Computer-Vision-Learning-Linkedin-Post | 365 days computer vision learning linkedin post follow me on linkedin https www linkedin com in ashishpatel2604 https github com ashishpatel26 365 days computer vision learning linkedin post blob main poster gif days topic post link 1 efficientdet https bit ly 362nwha 2 yolact https bit ly 3o5oau3 3 yolo series https bit ly 3650laj 4 detr https bit ly 39s5f57 5 vision transformer https bit ly 39umhld 6 dynamic rcnn https bit ly 3939gy5 7 deit data efficient image transformer https bit ly 363zabt 8 yolov5 https bit ly 39qhtxq 9 dropblock https bit ly 3sm4tig 10 fcn https bit ly 3ie9u8c 11 unet https bit ly 3izdbg2 12 retinanet https bit ly 3o5nrln 13 segnet https bit ly 3qiauvz 14 cam https bit ly 2y2i8zr 15 r fcn https bit ly 3icksql 16 repvgg https bit ly 2y2pgjv 17 graph convolution network https bit ly 2ls9rk8 18 deconvnet https bit ly 2mhwzes 19 enet https bit ly 2y2hgez 20 deeplabv1 https bit ly 3o7utqn 21 crf rnn https bit ly 2y5nsr4 22 deeplabv2 https bit ly 2y9dgsx 23 dpn https bit ly 363cye2 24 grad cam https bit ly 3if006q 25 parsenet https bit ly 3oesfk5 26 resnext https bit ly 2m2sxxe 27 amoebanet https bit ly 2ygribn 28 dilatednet https bit ly 2m9fuds 29 drn https bit ly 2kxvmuh 30 refinenet https bit ly 3cpcbvq 31 preactivation resnet https bit ly 2mjtgwq 32 squeezenet https bit ly 3cv3ca0 33 fractalnet https bit ly 3psv712 34 polynet https bit ly 3atcqfj 35 deepsim image quality assessment https bit ly 3okjgti 36 residual attention network https bit ly 3cijupl 37 igcnet igcv https bit ly 36lrfto 38 resnet38 https bit ly 2n7tpkl 39 squeezenext https bit ly 3csev5w 40 group normalization https bit ly 3rynxei 41 enas https bit ly 2lb6pdc 42 pnasnet https bit ly 3tix6mx 43 shufflenetv2 https bit ly 2zb3xam 44 bam https bit ly 3b67xb2 45 cbam https bit ly 3plxhvj 46 morphnet https bit ly 3rwzcsm 47 netadapt https bit ly 2ntlfme 48 espnetv2 https bit ly 3jwvojv 49 fbnet https bit ly 3k1pxzl 50 hideandseek https bit ly 3qelcp0 51 mr cnn s cnn https bit ly 2zw6qtf 52 acol adversarial complementary learning https bit ly 3qkfniu 53 cutmix https bit ly 2nt5shi 54 adl https bit ly 3qnefqm 55 saol https bit ly 2nvubbs 56 ssd https bit ly 37pwpyo 57 noc https bit ly 3ubrzjj 58 g rmi https bit ly 3kjdlap 59 tdm https bit ly 3dv5zgn 60 dssd https bit ly 3q6ehg8 61 fpn https bit ly 2oewzn0 62 dcn https bit ly 3e3g4kg 63 light head rcnn https bit ly 388rtct 64 cascade rcnn https bit ly 3uudlzz 65 megnet https bit ly 3bknvum 66 stairnet https bit ly 3blue2p 67 imagenet rethinking https bit ly 3bqbfzz 68 erfnet https bit ly 2oxgc5c 69 layercascade https bit ly 3qzwdd8 70 idw cnn https bit ly 3leteay 71 dis https bit ly 3vi3xh3 72 sdn https bit ly 3lftn0k 73 resnet duc hdc https bit ly 3lmdhln 74 deeplabv3 https bit ly 3lfsrur 75 autodeeplab https bit ly 2p14ksf 76 c3 https bit ly 3qx0yqk 77 drrn https bit ly 3ltkwp9 78 br net https bit ly 3f0jgli 79 sds https bit ly 3f0czlw 80 addernet https bit ly 3sfmdya 81 hypercolumn https bit ly 3vv7jn5 82 deepmask https bit ly 3cy2rvr 83 sharpmask https bit ly 3rg0h2r 84 multipathnet https bit ly 31fctmr 85 mnc https bit ly 39rrxqj 86 instancefcn https bit ly 3wbquy8 87 fcis https bit ly 3dhpz6b 88 masklab https bit ly 3wb3vya 89 panet https bit ly 2pmqtns 90 cudmedvision1 https bit ly 3retzd1 91 cudmedvision2 https bit ly 3mago0q 92 cfs fcn https bit ly 3cxp0zx 93 u net res net https bit ly 3mpkd3p 94 multi channel https bit ly 2q1wcbn 95 v net https bit ly 3syxgat 96 3d unet https bit ly 3uvnocs 97 m fcn 
https bit ly 3cxslpg 98 suggestive annotation https bit ly 3t1ubv8 99 3d unet resnet https bit ly 3wru3i9 100 cascade 3d unet https bit ly 3sinsex 101 densevoxnet https bit ly 2rgliyd 102 qsa qnt https bit ly 3wwtydf 103 attention unet https bit ly 3eamnak 104 runet r2unet https bit ly 2q4bixg 105 voxresnet https bit ly 32glbwn 106 unet https bit ly 3esshgv 107 h denseunet https bit ly 3dn53kn 108 dunet https bit ly 3spyrws 109 multiresunet https bit ly 32j7epr 110 unet3 https bit ly 3vj4lrx 111 vggnet for covid19 https bit ly 3ewquw6 112 https bit ly 3tr67cm 113 ki unet https bit ly 3gd4wdk 114 medical transformer https bit ly 3dlw9zf 115 deep snake instance segmentation https bit ly 3dqmdhm 116 blendmask https bit ly 32lvxyf 117 centernet https bit ly 3ajrjqd 118 srcnn https bit ly 3t82eie 119 swin transformer https bit ly 2qmwxct 120 polygon rnn https bit ly 3ujej7d 121 polytransform https bit ly 3gt11zz 122 d2det https bit ly 3b2edjl 123 polarmask https bit ly 3uklsso 124 fgn https bit ly 3uiyyal 125 meta sr https bit ly 3ekfyr9 126 iterative kernel correlation https bit ly 3xpgzp6 127 srfbn https bit ly 2qc1c7z 128 ode https bit ly 3w1k8k4 129 srntt https bit ly 2rnt9hs 130 parallax attention https bit ly 3tir74x 131 3d super resolution https bit ly 3blixja 132 fstrn https bit ly 3uwj8h7 133 pointgroup https bit ly 2qfekpp 134 3d mpa https bit ly 3bqz9j6 135 saliency propagation https bit ly 3txtvj4 136 libra r cnn https bit ly 3hdytnt 137 siamrpn https bit ly 33tnjyi 138 loftr https bit ly 3eutljs 139 mzsr https bit ly 3ul5gas 140 uctgan https bit ly 3fqg9ox 141 occuseg https bit ly 3bujtta 142 lapgan https bit ly 3unojw1 143 tpn https bit ly 3vvyiow 144 gtad https bit ly 3c09yqk 145 slowfast https bit ly 3fmri0d 146 idu https bit ly 2rocia5 147 atss https bit ly 3htiflc 148 attention rpn https bit ly 3oyescy 149 aug fpn https bit ly 3fubdzi 150 hit detector https bit ly 3ugclgb 151 mcn https bit ly 3yspjtq 152 centripetalnet https bit ly 2s1wnvb 153 roam https bit ly 34ft8ex 154 pf net 3d https bit ly 2tzqik9 155 pointaugment https bit ly 3umc8hr 156 c flow https bit ly 3xgdlun 157 randla net https bit ly 3fyajd9 158 total3dunderstanding https bit ly 3v3jy9c 159 if nets https bit ly 3v7xjpj 160 perfectshape https bit ly 3za20vk 161 acne https bit ly 3gajqsn 162 pq net https bit ly 35dvpsm 163 sg nn https bit ly 3iq4yca 164 cascade cost volume https bit ly 3gyzhtt 165 sketchgcn https bit ly 3pvoxi8 166 spektral graph neural network https bit ly 3q2t079 167 graph convolution neural network https bit ly 3gakinx 168 fast localized spectral filtering graph kernel https bit ly 3iruea0 169 graphsage https bit ly 3gcj9xx 170 arma convolution https bit ly 3qcubpc 171 graph attention networks https bit ly 3h1gfky 172 axial deeplab https bit ly 3qiif7l 173 tide https bit ly 3j5evmh 174 sipmask https bit ly 3gmboje 175 ufo https bit ly 2svs2xa 176 scan https bit ly 2thbv70 177 aabo adaptive anchor box optimization https bit ly 3qcsrap 178 simaug https bit ly 3dlv6tk 179 instant teaching https bit ly 3h0e2lu 180 refinement network for rgb d https bit ly 3dtrh5o 181 polka lines https bit ly 3hlnbhd 182 hotr https bit ly 3hsv44i 183 soft introvae https bit ly 3jfoztk 184 rexnet https bit ly 3r42wo9 185 dints https bit ly 3aqibii 186 pose2mesh https bit ly 3wftori 187 keep eyes on the lane https bit ly 3wxs4hl 188 assemblenet https bit ly 3xahhjf 189 sne roadseg https bit ly 3hyceal 190 advpc https bit ly 3i3dgrv 191 eagle eye https bit ly 3e5iqaz 192 deep hough transform https bit ly 2uefbam 193 
weightnet https bit ly 3rfdsul 194 stylemapgan https bit ly 2urgpto 195 pd gan https bit ly 3xqmcmm 196 non local sparse attention https bit ly 3xjzbad 197 tedigan https bit ly 3wh67mz 198 feddg https bit ly 3zfkige 199 auto exposure fusion https bit ly 3y3f2w1 200 involution https bit ly 36ksiaz 201 mutualnet https bit ly 3zhfd4n 202 teachers do more than teach image to image translation https bit ly 36rp28k 203 videomoco https bit ly 3f6pq7z 204 artgan https bit ly 3rvdcb9 205 vip deeplab https bit ly 3xmzmvx 206 psconvolution https bit ly 3reigmy 207 deep learning technique on semantic segmentation https bit ly 375hrid 208 synthetic to real https bit ly 3yfzsro 209 panoptic segmentation https bit ly 376tbda 210 histogan https bit ly 3zsyyvd 211 semantic image matting https bit ly 3s5zd9f 212 anchor free person search https bit ly 2vi0kad 213 spatial phase shallow learning https bit ly 3cdal82 214 liteflownet3 https bit ly 3ydilco 215 efficientnetv2 https bit ly 3xaqsie 216 cbnetv2 https bit ly 3s3ptvb 217 perpixel classification https bit ly 3loomyg 218 kaleido bert https bit ly 3ywh2lf 219 darkgan https bit ly 3ltw05j 220 ppdm https bit ly 3lpgjbt 221 sean https bit ly 3youj3l 222 closed loop matters https bit ly 3czbnlq 223 elastic graph neural network https bit ly 3jket9s 224 deep imbalance regression https bit ly 3yn0ue3 225 pipal image quality assessment https bit ly 3gclisx 226 mobile former https bit ly 3kxcsbm 227 rank and sort loss https bit ly 3spqt1s 228 room classification using graph neural network https bit ly 3gd8odv 229 pyramid vision transformer https bit ly 3zmod9h 230 eigengan https bit ly 3bfdivo 231 gnerf https bit ly 3md3ktr 232 detco https bit ly 3sqirk9 233 dert with special modulated co attention https bit ly 3spq5jw residual attention https bit ly 3yni4bj 235 mg gan https bit ly 3md30o7 236 adaptable gan encoders https bit ly 3yh4xj3 237 adaattn https bit ly 3bepkpa 238 conformer https bit ly 3gckj4n 239 yolop https bit ly 3bicysb 240 vmnet https bit ly 3k73jfz 241 airbert https bit ly 3nvcrgs 242 https bit ly 397zius 243 battle of network structure https bit ly 2xchbb0 244 insegan https bit ly 3z9wymf 245 efficient person search https bit ly 3cpbzor 246 deepgcns https bit ly 3aevshg 247 groupformer https bit ly 3lqzm2y 248 slide https bit ly 3hwpiep 249 super neuron https bit ly 3zkxe3d 250 sotr https bit ly 3hvqcyl 251 survey instance segmentation https bit ly 3k90xqb 252 so pose https bit ly 3c56kd8 253 canet https bit ly 2xldkz2 254 xvfi https bit ly 3lropcz 255 txt https bit ly 3tgfleh 256 convmlp https bit ly 2xle8xu 257 cross domain contrastive learning https bit ly 3tdb2id 258 os2d one stage object detection https bit ly 3ufnemd 259 pointmanifoldcut https bit ly 3ckvail 260 large scale facial expression dataset https bit ly 2zqtt4v 261 graph fpn https bit ly 2xh8t9f 262 3d shape reconstruction https bit ly 2xte9aq 263 open graph benchmark dataset https bit ly 3et2lfl 264 shiftaddnet https bit ly 3i6eb5c 265 watchout motion blurring the vision of your dnn https bit ly 3cktzrw 266 rethinking learnable tree filter https bit ly 3zhfpac 267 neuron merging https bit ly 39dwlns 268 distance iou loss https bit ly 3i7zj6z 269 deep imitation learning https bit ly 3azgvd6 270 pixel level cycle association https bit ly 3itzmk6 271 deep model fusion https bit ly 2yk45kl 272 object representation network https bit ly 3ba0mne 273 hoi analysis https bit ly 3fh2key 274 deep equilibrium models https bit ly 3fdh2ib 275 sampling from k dpp https bit ly 3bayruc 276 rotated 
binary neural network https bit ly 3miuyx3 277 pp lcnet lightcnn https bit ly 3v1zh5h 278 mc net https bit ly 3v5tyqk 279 fake it till you make it https bit ly 3aygtsq 280 enformer https bit ly 3aadcr9 281 videoclip https bit ly 3mouegu 282 moving fashion https bit ly 3jdvatn 283 convolution to transformer https bit ly 3v5yy8f 284 headgan https bit ly 3blzrvm 285 focal transformer https bit ly 3lvcysi 286 stylegan3 https bit ly 3kvfpkw 287 3detr 3d object detection https bit ly 3hfk6a8 288 do self supervised and supervised methods learn similar visual representations https bit ly 3kywm6h 289 back to the features https bit ly 3kvsxh3 290 anticipative video transformer https bit ly 30madl2 291 attention meets geometry https bit ly 3kwespz 292 deepmocap deep optical motion capture https bit ly 30mjtdt 293 trocr transformer based optical character recognition https bit ly 3dqenw5 294 moving fashion https bit ly 2ygtja1 295 stylenerf https bit ly 31w4mbz 296 eca net efficient channel attention https bit ly 3n92i1s 297 inferring high resolution traffic accident risk maps https bit ly 3hgovd6 298 bias loss for mobile neural network https bit ly 3qvbpno 299 bytetrack multi object tracking https bit ly 3c3l7wq 300 non deep network https bit ly 3qwzwov 301 temporal attentive covariance https bit ly 3ontcbp 302 plan then generate controlled data to text generation https bit ly 3dcbsa6 303 dynamic visual reasoning https bit ly 31q4bhp 304 medmnist medical mnist dataset https bit ly 3qxuqxq 305 colossal ai a pytorch based deep learning system for large scale parallel training https bit ly 3wg6xv8 306 recursively embedded atom neural network reann https bit ly 3f1jkqe 307 polytrack for fast multi object tracking and segmentation https bit ly 3debmms 308 can contrastive learning avoid shortcut solutions https bit ly 3whjik9 309 projectedgan to improve image quality https bit ly 30hw8zm 310 arch net a family of neural networks built with operators to bridge the gap https bit ly 3ofocef 311 pp shitu a practical lightweight image recognition system https bit ly 3naurfw 312 editgan https bit ly 30gyd2z 313 panoptic 3d scene segmentation https bit ly 3casvla 314 parp improve the efficiency of nn https bit ly 3daktjt 315 word organ segmentation dataset https bit ly 3qv5ow2 316 denseulearn https bit ly 3ohriyi 317 does thermal data make the detection systems more reliable https bit ly 3sqgtso 318 maddness approximate matrix multiplication amm https bit ly 3zgvil4 319 deceive d adaptive pseudo augmentation https bit ly 3sig6ya 320 oadtr https bit ly 3jsuhuf 321 onepassimagenet https bit ly 3skl6ti 322 image specific convolutional kernel modulation for single image super resolution https bit ly 3fupa20 323 transmix https bit ly 3eh93gh 324 pytorchvideo https bit ly 3jvgdp7 325 metnet 2 https bit ly 3smzb2m 326 unsupervised deep learning identifies semantic disentanglement https bit ly 3jyawvi 327 story visualization https bit ly 3qb554i 328 metaformer https bit ly 3slbebp 329 gaugan2 https bit ly 3pgrivh 330 scigap https bit ly 3eb7e4u 331 generative flow networks gflownets https bit ly 3jv9yez 332 ensemble inversion https bit ly 3ecwbg9 333 savi https bit ly 3ef6txe 334 digital optical neural network https bit ly 3ei07rh 335 image generation research with manifold matching via metric learning https bit ly 3fuomnq 336 ghn 2 graph hypernetworks https bit ly 3qzc5yb 337 neatnet https bit ly 3sly17r 338 neuralprophet https bit ly 3jruk38 339 background activation suppression for weakly supervised object detection 
https bit ly 3jvyzt2 340 learning to detect every thing in an open world https bit ly 3mkxotc 341 poolformer https bit ly 3qfhnts 342 glip https bit ly 3mk3bgx 343 phalp https bit ly 3ejjvev 344 pixmix https bit ly 3hqh77m 345 codenet https bit ly 32rpx3x 346 gangealing https bit ly 3eiko6k 347 semantic diffusion guidance https bit ly 3jsnzi3 348 tokenlearner https bit ly 3mlg4lm 349 temporal fusion transformer tft https bit ly 3juhcno 350 hiclass evaluation metrics for local hierarchical classification https bit ly 3jhmn8h 351 stable long term recurrent video super resolution https bit ly 3qflphl 352 adavit https bit ly 3edasmj 353 few shot learner fsl https bit ly 3elooym 354 exemplar transformers https bit ly 3qzje3c 355 styleswin https bit ly 3hqkce4 356 repmlnet https bit ly 32dxbuu 357 2 stage unet https bit ly 3jgjimq 358 untrained deep nn https bit ly 3jpll7r 359 semask https bit ly 3zfoum8 360 jojogan https bit ly 31gl9qi 361 elsa https bit ly 3mlwscb 362 prime https bit ly 3fi14rz 363 glide https bit ly 31ixb20 364 stylegan v https bit ly 3jvx91g 365 slip self supervision meets language image pre training https bit ly 3qajl3r 366 smoothnet a plug and play network for refining human poses in videos https bit ly 3tynxlp 367 multi view partial mvp point cloud challenge 2021 on completion and registration methods and results https bit ly 3tzfyeq 368 pcace a statistical approach to ranking neurons for cnn interpretability https bit ly 3lckenk 369 vision transformer with deformable attention https bit ly 3ty3s3k 370 a transformer based siamese network for change detection https bit ly 3dxpyp5 371 lawin transformer improving semantic segmentation transformer with multi scale representations via large window attention https bit ly 3qrstle 372 sasa semantics augmented set abstraction for point based 3d object detection https bit ly 3txduls 373 hyperionsolarnet solar panel detection from aerial images https bit ly 35v2rx6 374 realistic full body anonymization with surface guided gans https bit ly 3dwbnd4 375 generalized category discovery https bit ly 3iz1hac 376 kergnns interpretable graph neural networks with graph kernels https bit ly 3dtwtlu 377 optimization planning for 3d convnets https bit ly 3k38e5p 378 gdna towards generative detailed neural avatars https bit ly 3detfhc 379 seamlessgan self supervised synthesis of tileable texture maps https bit ly 3niieta 380 hydla domain adaptation in lidar semantic segmentation via alternating skip connections and hybrid learning https bit ly 379dy8v 381 hardboost boosting zero shot learning with hard classes https bit ly 379dix5 382 ddu net dual decoder u net for road extraction using high resolution remote sensing images https bit ly 3lu0uzu 383 q vit fully differentiable quantization for vision transformer https bit ly 3qxv9ym 384 spams structured implicit parametric models https bit ly 3iu95cl 385 geofill reference based image inpainting of scenes with complex geometry https bit ly 3quwcp6 386 improving language models by retrieving from trillions of tokens https bit ly 37aksg5 387 stylex finds and visualizes disentangled attributes that affect a classifier automatically https bit ly 3qywyef 388 relicv2 pushing the limits of self supervised resnet https bit ly 3jzxy7c 389 detic a method to detect twenty thousand classes using image level supervision https bit ly 3irtsqz 390 momentum capsule networks https bit ly 3nfdv0j 391 reltr relation transformer for scene graph generation https bit ly 3ivbwgb 392 transformer based sar images despecking 
https bit ly 3qweilh 393 residualgan resize residual dualgan for cross domain remote sensing images semantic segmentation https bit ly 3wwgy4t 394 vrt a video restoration transformer https bit ly 3k44yxw 395 you only cut once boosting data augmentation with a single cut https bit ly 36l8pdw 396 stylegan xl scaling stylegan to large diverse datasets https bit ly 3irlep8 397 the kfiou loss for rotated object detection https bit ly 3nhul5e 398 the met dataset instance level recognition https bit ly 3k7lpj2 399 alphacode a system that can compete at average human level https bit ly 3qxiih5 400 third time s the charm image and video editing with stylegan3 https bit ly 35vaoqx 401 neuralfusion online depth fusion in latent space https bit ly 3ufaysa 402 vos learning what you don t know by virtual outlier synthesis https bit ly 3upg9rg 403 self conditioned generative adversarial networks for image editing https bit ly 3tx8m0u 404 transformnet self supervised representation learning through predicting geometric transformations https bit ly 3uocfpm 405 yolov7 framework beyond detection https bit ly 3wxu81y 406 f8net fixed point 8 bit only multiplication for network quantization https bit ly 3dzhfxu 407 block nerf scalable large scene neural view synthesis https bit ly 3lyelk5 408 patch netvlad learned patch descriptor and weighted matching strategy for place recognition https bit ly 375c76y 409 cola coarse label pre training for 3d semantic segmentation of sparse lidar datasets https bit ly 3nck6bz 410 scorenet learning non uniform attention and augmentation for transformer based histopathological image classification https bit ly 3ujumbz 411 geometric deep learning grids groups graphs geodesics and gauges https bit ly 388imet 412 how do vision transformers work https bit ly 3ne1mo2 413 mirror yolo an attention based instance segmentation and detection model for mirrors https bit ly 3lbs96p 414 pencil deep learning with noisy labels https bit ly 3ixvhc4 415 vlp a survey on vision language pre training https bit ly 3j0v2rz 416 visual attention network https bit ly 3dt7rbv 417 groupvit semantic segmentation emerges from text supervision https bit ly 3nqv7eg 418 paying u attention to textures multi stage hourglass vision transformer for universal texture synthesis https bit ly 373xs4t 419 end to end cascaded image de raining and object detetion nn https bit ly 375plgw 420 level k to nash equilibrium https bit ly 3nfrx8t 421 machine learning for mechanical ventilation control https bit ly 3jzcmev 422 the effect of fatigue on the performance of online writer recognition https bit ly 3wxssls 423 state of the art in the architecture methods and applications of stylegan https bit ly 3irjl5s 424 long tailed classification with gradual balanced loss and adaptive feature generation https bit ly 3v5xzxr 425 self supervised transformer for deepfake detection https bit ly 3txtudk 426 centersnap single shot multi object 3d shape reconstruction and categorical 6d pose and size https bit ly 3lxkrqa 427 tctrack temporal contexts for aerial tracking https bit ly 3um5o4b 428 latentformer multi agent transformer based interaction modeling and trajectory prediction https bit ly 3uofke0 429 hypertransformer a textural and spectral feature fusion transformer for pansharpening https bit ly 35trv2j 430 zippypoint fast interest point detection description and matching through mixed precision discretization https bit ly 3lwommy 431 mlseg image and video segmentation https bit ly 38p9icn 432 image steganography based on style 
transfer https bit ly 3djhlan 433 grainspace a large scale dataset for fine grained and domain adaptive recognition of cereal grains https bit ly 3jyprig 434 agcn augmented graph convolutional network https bit ly 3dwzrwn 435 stylebabel artistic style tagging and captioning https bit ly 3j1klit 436 rood mri benchmarking the robustness of deep learning segmentation models to out of distribution and corrupted data in mri https bit ly 38man4z 437 insetgan for full body image generation https bit ly 3dsu9at 438 implicit feature decoupling with depthwise quantization https bit ly 3k1mxaa 439 bamboo building mega scale vision dataset https bit ly 3wvpald 440 tensorf tensorial radiance fields https bit ly 3iwafwi 441 ferv39k a large scale multi scene dataset for facial expression recognition https bit ly 3nchtxd 442 one shot adaptation of gan in just one clip https bit ly 36nopab 443 shrec 2021 classification in cryo electron tomograms https bit ly 3isxpqv 444 maskgit masked generative image transformer https bit ly 3qsqz8i 445 detection recognition and tracking a survey https bit ly 378g8qw 446 mixed differential privacy https bit ly 3iz0mgu 447 mixed dualstylegan https bit ly 3wtyamd 448 bigdetection https bit ly 3duzsrk 449 feature visualization for convolutional neural network https bit ly 3dwf6fj 450 autoavatar https bit ly 38m9clf 451 a long short term memory based recurrent neural network for interventional mri reconstruction https bit ly 3dz1idf 452 stylet2i https bit ly 35u5wx0 453 l 3u net https bit ly 3itoq8r 454 balanced mse https bit ly 3rxt7yo 455 bevformer learning bird s eye view representation from multi camera images via spatiotemporal transformers https bit ly 36m3hfc 456 transeditor transformer based dual space gan for highly controllable facial editing https bit ly 3jqkzks 457 on the importance of asymmetry for siamese representation learning https bit ly 3jngcyt 458 on one class graph neural networks for anomaly detection in attributed networks https bit ly 3uqtc3p 459 pyramid frequency network with spatial attention residual refinement module for monocular depth https bit ly 3kwt6a4 460 unleashing vanilla vision transformer with masked image modeling for object detection https bit ly 3l8a59h 461 davit dual attention vision transformers https bit ly 3engc7e 462 spact self supervised privacy preservation for action recognition https bit ly 3ktnvrw 463 class incremental learning with strong pre trained models https bit ly 3mdlcoq 464 rbgnet ray based grouping for 3d object detection by center for data science https bit ly 3eqkydh 465 event transformer https bit ly 3kusmxc 466 reclip a strong zero shot baseline for referring expression comprehension https bit ly 3m6rgde 467 a9 dataset multi sensor infrastructure based dataset for mobility research https bit ly 3xayqrj 468 simple baselines for image restoration https bit ly 3vt4tjb 469 masked siamese networks for label efficient learning https bit ly 3vies6s 470 neighborhood attention transformer https bit ly 3jnexk3 471 topformer token pyramid transformer for mobile semantic segmentation https bit ly 3m3ea0k 472 mvster epipolar transformer for efficient multi view stereo https bit ly 3madtcr 473 temporally efficient vision transformer for video instance segmentation https bit ly 3w6xkf3 474 editgan high precision semantic image editing https bit ly 3yx2jj2 475 centernet for object detection https bit ly 3woxrbg 476 a case for using rotation invariant features in state of the art feature matchers https bit ly 3kz1x9a 477 
webface260m a benchmark for million scale deep face recognition https bit ly 3w2t3vd 478 jiff jointly aligned implicit face function for high quality single view clothed human reconstruction https bit ly 3n9me9u 479 image data augmentation for deep learning a survey https bit ly 3pfc1ua 480 stylegan human a data centric odyssey of human generation https bit ly 3pqv710 481 few shot head swapping in the wild secrets revealed by department of computer vision technology vis https bit ly 3w7xm6c 482 clip gen language free training of a text to image generator with clip https bit ly 3n3ceku 483 humman multi modal 4d human dataset for versatile sensing and modeling https bit ly 3nqnevx 484 generative adversarial networks for image super resolution a survey https bit ly 39jyl0u 485 clip art contrastive pre training for fine grained art classification https bit ly 3n7qd6v 486 c3 stisr scene text image super resolution with triple clues https bit ly 3l1352c 487 barbershop gan based image compositing using segmentation masks https bit ly 39hus6d 488 danbo disentangled articulated neural body representations https bit ly 3lkqwp3 489 blobgan spatially disentangled scene representations https bit ly 3sufeyz 490 text to artistic image generation https bit ly 3w6wzmd 491 sequencer deep lstm for image classification https bit ly 3sulpvt 492 ivy an open source tool to make deep learning code compatible across frameworks https bit ly 3m6mbvj 493 introspective deep metric learning https bit ly 3w2pz02 494 keypointnerf generalizing image based volumetric avatars using relative spatial encoding of keypoints https bit ly 3wnrhwf 495 graphworld a methodology for analyzing the performance of gnn architectures on millions of synthetic benchmark datasets https bit ly 3puqexk 496 group r cnn for weakly semi supervised object detection with points https bit ly 3zfvu3w 497 few shot head swapping in the wild https bit ly 3xapgkn 498 stylandgan a stylegan based landscape image synthesis using depth map https bit ly 3gkx4bi 499 spiking approximations of the maxpooling operation in deep snns https bit ly 3glp7ag 500 deep spectral methods a surprisingly strong baseline for unsupervised semantic segmentation and localization https bit ly 3ntgsjq thanks for reading | linkedin computer-vision deep-learning iclr iclr2020 iclr2019 iclr2021 eccv eccv2020 eccv-2018 eccv2019 cvpr cvpr2020 cvpr2019 cvpr2018 jmlr iclr2018 | ai |
emvpn | emvpn a vpn client server application designed for embedded systems c 2014 daniele lacamera root danielinux net work in progress see license for copying | os |
|
Computer-Vision-Tutorial | computer vision tutorial resources to learn more about these resources you can refer to some of these articles written by me medium https medium com geeky bawa geeky traveller https sites google com view geeky traveller blogs https github com vaibhavhariaramani blogs youtube https www youtube com channel ucy7amuplnsrlemiajggbyog youtube badge https img shields io badge geeky bawa 1ca0f1 style flat circle labelcolor d54b3d logo youtube logocolor white link https www youtube com channel ucy7amuplnsrlemiajggbyog https www youtube com channel ucy7amuplnsrlemiajggbyog don t forget to tag us if you use this repo in your project and don t forget to mention us as a contributor in it linkedin https www linkedin com in vaibhav hariramani 087488186 instagram https www instagram com geeky baba hl en facebook https www facebook com jayesh hariramani 3 twitter https www linkedin com in vaibhav hariramani 087488186 github https github com vaibhavhariaramani made with by vaibhav hariramani about me i am a machine learning enthusiast an actions on google internet of things alexa skills and image processing developer i have a keen interest in image processing and android development i am currently studying at chandigarh university punjab my portfolio https vaibhavhariaramani github io you can find me at linkedin https www linkedin com in vaibhav hariramani 087488186 or github https github com vaibhavhariaramani email vaibhav hariramani01 gmail com mailto vaibhav hariramani01 gmail com download the vaibhav hariramani app https github com vaibhavhariaramani the vaibhav hariramani app raw master vaibhav 20hariramani 20app apk img src https github com vaibhavhariaramani vaibhavhariaramani blob master icon gh bannner light png https github com vaibhavhariaramani the vaibhav hariramani app raw master vaibhav 20hariramani 20app apk p align center a href https www linkedin com in vaibhav hariramani 087488186 img height 30 src https github com vaibhavhariaramani vaibhavhariaramani blob master icon linkedin png a nbsp nbsp a href https twitter com vaibhavhariram2 img height 30 src https github com vaibhavhariaramani vaibhavhariaramani blob master icon twitter png a nbsp nbsp a href https www instagram com vaibhav hariramani hl en img height 30 src https github com vaibhavhariaramani vaibhavhariaramani blob master icon instagram jpg a nbsp nbsp a href https www buymeacoffee com vaibhavjii img height 30 src https github com vaibhavhariaramani vaibhavhariaramani blob master icon by me a coffee png a a href https wa me 917790991077 img height 30 src https github com vaibhavhariaramani vaibhavhariaramani blob master icon whatsapp png a nbsp nbsp a href mailto vaibhav hariramani01 gmail com img height 30 src https github com vaibhavhariaramani vaibhavhariaramani blob master icon email png a nbsp nbsp p img width 150 align center src https archive org download download button png download button png png the vaibhav hariramani app latest version https github com vaibhavhariaramani the vaibhav hariramani app raw master vaibhav 20hariramani 20app apk download the vaibhav hariramani app https github com vaibhavhariaramani the vaibhav hariramani app raw master vaibhav 20hariramani 20app apk consists of tutorials projects blogs and vlogs of our site developed using android studio with web view try installing it on your android device happy coding follow me linkedin badge https img shields io badge vaibhavhariramani blue style flat circle logo linkedin logocolor white link https www
linkedin com in vaibhav hariramani 087488186 https www linkedin com in vaibhav hariramani 087488186 instagram badge https img shields io badge vaibhavhariramani e02c73 style flat circle labelcolor e02c73 logo instagram logocolor white link https www instagram com vaibhav hariramani hl en https www instagram com vaibhav hariramani hl en twitter badge https img shields io badge vaibhavhariramani 1ca0f1 style flat circle labelcolor 1ca0f1 logo twitter logocolor white link https twitter com vaibhavhariram2 https twitter com vaibhavhariram2 github badge https img shields io badge vaibhavhariaramani 24292e style flat circle labelcolor 24292e logo github logocolor white link https github com vaibhavhariaramani https github com vaibhavhariaramani gmail badge https img shields io badge vaibhavhariramani d54b3d style flat circle labelcolor d54b3d logo gmail logocolor white link mailto vaibhav hariramani01 gmail com mailto vaibhav hariramani01 gmail com medium badge https img shields io badge vaibhavhariramani d54b3d style flat circle labelcolor d54b3d logo medium logocolor white link https medium com geeky bawa https medium com geeky bawa | ai |
|
codecrux.github.io | codecrux introduction codecrux is a worldwide provider of a full fledged spectrum of creative design web mobile app development with api services we have successfully delivered many projects for clients across the globe usa uk australia germany india etc our technology stack ruby ruby on rails golang python angularjs backbone js reactjs html5 javascript css3 scss postgres mongodb mysql redis heroku amazon web services mobile development native as well as hybrid using react native etc our services include web mobile app design and development e commerce application development healthcare application development real estate application development rest api development it infrastructure management devops backend server side programming and cloud services like amazon heroku ibm google microsoft azure etc setup please visit https codecrux com for more details about the company and https codecrux com portfolio for our work contact details india codecrux web technologies p ltd 5th floor catalyst building t hub iiit hyderabad campus gachibowli hyderabad telangana 500032 email contact codecrux com phone 91 868 846 8400 usa apt 6735 b 2800 waterview pkwy richardson tx 75080 email contact codecrux com phone 1 682 593 2808 | front_end |
|
LLM-API | llm api a repository to demonstrate some of the concepts behind large language models transformer foundation models in context learning and prompt engineering using open source large language models like bloom and cohere illustration large language models and prompt engineering plots language model input prompt png table of contents overview overview objective objective requirement requirement install install data data features features how to use the api examples pipelines pipelines notebooks notebooks scripts scripts test test author author overview as transformer foundation models were introduced around 2017 they quickly became the leading go to models in the race of nlp and computer vision these models can practically be used for any language related task such as but not limited to sentiment analysis text extraction text generation chatbot applications language translation and many more the main idea behind this week s project is to have a clear if not basic understanding of these large language models prompt engineering in context learning fine tuning and building an api to connect all of them and build a product that can even be used commercially objective the objective is to build an end to end pipeline that can use some if not all of the large language models free apis such as bloom gpt 3 or cohere in order to use their capabilities and 1 extract the relationships and entities of an input job description 2 score whether or not specific news is going to cause riots in the real world based on a scoring value feature 3 api creation to receive inputs and display outputs for the above services 4 make the whole project into an installable product available for commercial use requirement python 3 5 and above pip cohere and transformers the visualizations are made using plotly seaborn and matplotlib install

```bash
git clone https://github.com/fisseha-estifanos/llm-api.git
cd llm-api
pip install -r requirements.txt
```

data dataset i news and scores the news score data can be found here at google drive https docs google com spreadsheets d 19n k6snim0fyld2tbs 5y3wesgdveb3j edit usp sharing ouid 108085860825615283789 rtpof true sd true dataset ii job descriptions the development and training data set can be found here on github https github com walidamamou relation extraction transformer blob main relations dev txt the testing and final reporting data set can be found here on github https github com walidamamou relation extraction transformer blob main relations test txt features features i news and scores domain the base url or a reference to the source these items come from title the title of the item description the content of the item body the content of the item link url to the item source it may not be functional anymore timestamp timestamp that this item was collected at analyst average score target variable the score to be estimated analyst rank score as rank reference final score not relevant for now it is a transformed quantity features ii job descriptions feature 1 what it is feature 2 what it is feature 3 what it is examples pipelines several pipelines are going to be used in this project the main end to end pipeline is going to be formed using dvc notebooks all the preparation preprocessing analysis eda and examples of several prompt engineering in context learning and large language model api interaction notebooks will be found here in the form of ipynb files in the notebooks folder scripts all the modules for the eda notebooks analyses helpers and any other scripts will be found here tests all the unit and integration tests are found here in the tests folder author fisseha estifanos github fisseha estifanos https github com fisseha estifanos linkedin fisseha estifanos https www linkedin com in fisseha estifanos 109ba6199 twitter fisseha estifanos https twitter com f0x tr0t show us your support give us a star if you like this project and also feel free to contact us at any moment | api cohere in-context-learning llm prompt-engineering transformer bloom text-extraction news-score | ai |
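To make the in-context learning idea from the llm api entry above concrete, here is a minimal sketch of few-shot prompting with an open checkpoint of BLOOM via Hugging Face transformers. The checkpoint name, prompt wording, and task framing are illustrative assumptions, not code taken from that repository.

```python
# Minimal few-shot (in-context learning) sketch with a small open LLM.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

# Few-shot prompt: two worked examples, then the input we want completed.
prompt = (
    "Extract the job title from the description.\n"
    "Description: We need a senior Python developer. Title: senior Python developer\n"
    "Description: Hiring a data analyst for our team. Title: data analyst\n"
    "Description: Looking for an ML engineer with NLP experience. Title:"
)

# Greedy decoding keeps the completion deterministic for a quick test.
out = generator(prompt, max_new_tokens=8, do_sample=False)
print(out[0]["generated_text"])
```

The same prompt pattern transfers to hosted APIs such as Cohere or GPT-3; only the client call changes, not the few-shot structure.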
Logic-LLM | logic lm data and codes for logic lm empowering large language models with symbolic solvers for faithful logical reasoning https arxiv org abs 2305 12295 findings of emnlp 2023 authors liangming pan alon albalak xinyi wang william yang wang nlp group http nlp cs ucsb edu university of california santa barbara introduction large language models llms have shown human like reasoning abilities but still struggle with complex logical problems this paper introduces a novel framework logic lm which integrates llms with symbolic solvers to improve logical problem solving our method first utilizes llms to translate a natural language problem into a symbolic formulation afterward a deterministic symbolic solver performs inference on the formulated problem we also introduce a self refinement module which utilizes the symbolic solver s error messages to revise symbolic formalizations we demonstrate logic lm s effectiveness on five logical reasoning datasets proofwriter prontoqa folio logicaldeduction and ar lsat on average logic lm achieves a significant performance boost of 39 2 over using llm alone with standard prompting and 18 4 over llm with chain of thought prompting our findings suggest that logic lm by combining llms with symbolic logic offers a promising avenue for faithful logical reasoning the general framework of logic lm framework png first install all the required packages bash pip install r requirements txt datasets the datasets we used are preprocessed and stored in the data folder we evaluate on the following datasets prontoqa https github com asaparov prontoqa deductive resoning dataset we use the 5 hop subset of the fictional characters version consisting of 500 testing examples proofwriter https allenai org data proofwriter deductive resoning dataset we use the depth 5 subset of the owa version to reduce overall experimentation costs we randomly sample 600 examples in the test set and ensure a balanced label distribution folio https github com yale lily folio first order logic reasoning dataset we use the entire folio test set for evaluation consisting of 204 examples logicaldeduction https github com google big bench tree main bigbench benchmark tasks logical deduction constraint satisfaction problems csps we use the full test set consisting of 300 examples ar lsat https github com zhongwanjun ar lsat analytical reasoning ar problems containing all analytical logic reasoning questions from the law school admission test from 1991 to 2016 we use the test set which has 230 multiple choice questions baselines to replicate the standard lm direct and the chain of thought cot baselines please run the following commands bash cd baselines python gpt3 baseline py api key your openai api key model name model name text davinci 003 gpt 4 dataset name dataset name prontoqa proofwriter folio logicaldeduction ar lsat split dev mode baseline direct cot max new tokens 16 for direct 1024 for cot the results will be saved in baselines results to evaluate the results please run the following commands bash python evaluate py dataset name dataset name prontoqa proofwriter folio logicaldeduction ar lsat model name model name text davinci 003 gpt 4 split dev mode baseline direct cot logic program generation to generate logic programs for logical reasoning problems in each dataset at the root directory run the following commands bash python models logic program py api key your openai api key dataset name dataset name prontoqa proofwriter folio logicaldeduction ar lsat split dev model name model 
name text davinci 003 gpt 4 max new tokens 1024 the generated logic programs will be saved in outputs logic programs you can also reuse the logic programs we generated in outputs logic programs logic inference with symbolic solver after generating logic programs we can perform inference with symbolic solvers at the root directory run the following commands bash dataset dataset name prontoqa proofwriter folio logicaldeduction ar lsat split dataset split dev test model the logic programs are generated by which model text davinci 003 gpt 4 backup the random backup answer random or cot logic collabration mode llm python models logic inference py model name model dataset name dataset split split backup strategy backup backup llm result path baselines results cot dataset split model json the logic reasoning results will be saved in outputs logic inferences backup strategies random if the generated logic program cannot be executed by the symbolic solver we will use random guess as the prediction llm if the generated logic program cannot be executed by the symbolic solver we will back up to using cot to generate the prediction to run this mode you need to have the corresponding baseline llm results stored in baselines results to make the inference more efficient the model will just load the baseline llm results and use them as the prediction if the symbolic solver fails evaluation to evaluate the logic reasoning results please run the following commands bash python models evaluation py dataset name dataset name prontoqa proofwriter folio logicaldeduction model name the logic programs are generated by which model text davinci 003 gpt 4 split dev backup the basic mode random or cot logic collabration mode llm self refinement after generating the logic programs without self refinement run the following commands for self refinement bash dataset dataset name prontoqa proofwriter folio logicaldeduction ar lsat split dataset split dev test model the logic programs are generated by which model text davinci 003 gpt 4 backup the random backup answer random or cot logic collabration mode llm python models self refinement py model name model dataset name dataset split split backup strategy backup backup llm result path baselines results cot dataset split model json api key your openai api key maximum rounds 3 the self refinement results will be saved in outputs logic inferences reference please cite the paper in the following format if you use this dataset during your research inproceedings panlogiclm23 author liangming pan and alon albalak and xinyi wang and william yang wang title logic lm empowering large language models with symbolic solvers for faithful logical reasoning booktitle findings of the 2023 conference on empirical methods in natural language processing findings of emnlp address singapore year 2023 month dec url https arxiv org abs 2305 12295 q a if you encounter any problem please either directly contact the liangming pan liangmingpan ucsb edu or leave an issue in the github repo | ai |
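The Logic-LM entry above describes a three-part loop: an LLM translates the problem into a logic program, a symbolic solver executes it, and the solver's error messages drive self-refinement, with a backup answer if execution still fails. Here is a compact sketch of that control flow; `call_llm` and `run_solver` are hypothetical placeholders, not functions from the repository.

```python
# Sketch of the Logic-LM pipeline: translate -> solve -> refine -> back up.
import random

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for an LLM API call (e.g. GPT-4)

def run_solver(program: str) -> tuple:
    raise NotImplementedError  # stand-in for a symbolic solver: (ok, answer, error)

def solve(problem: str, options: list, backup: str = "random", max_rounds: int = 3) -> str:
    program = call_llm("Translate this problem into a logic program:\n" + problem)
    for _ in range(max_rounds):
        ok, answer, error = run_solver(program)
        if ok:
            return answer
        # Self-refinement: let the LLM revise the program using the solver error.
        program = call_llm("Fix this logic program:\n" + program + "\nError: " + error)
    # Backup strategy when the program never executes: random guess or plain LLM.
    return random.choice(options) if backup == "random" else call_llm(problem)
```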
|
blockchain_learn | mind map mind png article index https shuwoom com p 403 https shuwoom com p 430 https shuwoom com p 643 https shuwoom com p 672 merkle and spv https shuwoom com p 692 p2p https shuwoom com p 721 kademlia https shuwoom com p 813 https shuwoom com p 798 raft https shuwoom com p 826 bloom filter and spv https shuwoom com p 857 https shuwoom com p 449 https shuwoom com p 947 mini blockchain demo http miniblockchain shuwoom com screenshots miniblockchain1 jpg miniblockchain2 jpg miniblockchain3 jpg todo book links https union click jd com jdc d n7o7st mastering bitcoin 2nd edition chinese translation http book 8btc com masterbitcoin2cn google shuwoom com github https github com guanchao blockchain learn email shuwoom wgc gmail com wechat wechat jpg | blockchain python tutorial | blockchain |
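The article list above includes a piece on Bloom filters for SPV nodes; as a companion, here is a minimal Bloom filter sketch. The bit-array size and hash count are arbitrary example choices, not values from the articles.

```python
# Minimal Bloom filter: k positions per item derived from salted SHA-256.
import hashlib

class BloomFilter:
    def __init__(self, m: int = 1024, k: int = 3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item: str):
        # Derive k deterministic bit positions for the item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p] = True

    def might_contain(self, item: str) -> bool:
        # False means definitely absent; True means possibly present.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("tx-abc123")
print(bf.might_contain("tx-abc123"), bf.might_contain("tx-zzz"))  # True False
```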
Engineering-Regulations-App | engineeringregulationsapp js standard style https img shields io badge code 20style standard brightgreen svg style flat http standardjs com standard compliant react native app utilizing ignite https github com infinitered ignite arrow up how to setup step 1 git clone this repo step 2 cd to the cloned repo step 3 install the application with yarn or npm i arrow forward how to run app 1 cd to the repo 2 run build for either os for ios run npx react native run ios for android run genymotion run npx react native run android no entry sign standard compliant js standard style https cdn rawgit com feross standard master badge svg https github com feross standard this project adheres to standard our ci enforces this so we suggest you enable linting to keep your project compliant during development to lint on commit this is implemented using husky https github com typicode husky there is no additional setup needed bypass lint if you have to bypass lint for a special commit that you will come back and clean pushing something to a branch etc then you can bypass git hooks with adding no verify to your commit command understanding linting errors the linting rules are from js standard and react standard regular js errors can be found with descriptions here http eslint org docs rules while react errors and descriptions can be found here https github com yannickcr eslint plugin react closed lock with key secrets this project uses react native config https github com luggit react native config to expose config variables to your javascript code in react native you can store api keys and other sensitive information in a env file api url https myapi com google maps api key abcdefgh and access them from react native like so import secrets from react native config secrets api url https myapi com secrets google maps api key abcdefgh the env file is ignored by git keeping those secrets out of your repo get started 1 copy env example to env 2 add your config variables 3 follow instructions at https github com luggit react native config setup https github com luggit react native config setup 4 done | os |
|
Hermes-Defi-Frontend | hermes defi getting started this project was built using nextjs chakra ui web3 react and react query to setup the project you need the following pre setup cp env example env local redis url from upstash com for caching homepage and tvl historical data if you have a better idea how to get the tvl history feel free to implement add the url to the env local in the hermes redis url key setup run yarn install to install all packages run yarn dev to start the development server project structure this is the way the project is structured src this is the main folder containing all source files public this is the folder containing all images fonts and static files github workflows this sets to cron job on github actions to run calculate the tvl data on intervals currently set to 60 mins src folder src pages this contains all pages for almost all the pages the components and are all stored in the same file except for the pools farms and balancers which uses the poolcard component stored in the components folder src wallet this contains all wallet related files mostly for web3 react this included the connectors and the setup hooks src web3 functions this contains all functions that sends request to the blockchain this is mostly for actions prices and apr calculations which are used outside of the frontend src hooks this contains all major hooks used in the application from data fetching in the home page to queries and actions for pools and also prices and contracts src config this contains all abis contract addresses pool farms balancers configurations and also constants used in the app src component this container all components for layout modals wallet interations pool card and web3 manager extra notes on architecture this application doesn t use redux for state management therefore all data are fetched when the page loads for the farm related pages pool farms balancers the data is fetched in the page and are passed down using reducers and context this is to make action side effects easier to do all 3 farm related pages all share the same component poolcard and all use the same queries and methods so becareful of changes made in that file p s you re smarter if you have any suggestions about how to make it better please don t hestitate to tear it all down and make it better 3 | front_end |
|
BabyRTOS | babyrtos icon doxygen icon png a pre emptive priority based real time operating system rtos based on the arm cortex m4 processor major features of the implementation includes up to 63 task priorities time sharing i e round robin scheduling between tasks of equal priority support for multiple os resources including os mutex os semaphore and os queue priority inheritance by mutex locking tasks by ceiling all waiting tasks notes under src is a sample application based on the stm32f401cc micro controller the implementation resides under src os os c and src os os h other implementation dependencies are src mcal util std types h src util linkedlist and src util queue the implementation uses the systick isr for tracking os ticks its clock configuration is the responsibility of the application the interrupt is automatically enabled by os init the timer is automatically started by os start zero interrupt priority is reserved and should not be assigned to any other interrupt interrupt pre emption in the system is automatically disabled by os init all system resources i e tasks mutexes semaphores queues must be zero initialized i e defined as global variables use make r s build or make r s clean while in the src directory links codebase https github com hazemanwer2000 babyrtos documentation https hazemanwer2000 github io babyrtos implementation guide guide rtos from scratch pdf | os |
|
Dianzisuoa | dianzisuoa embedded systems design project | os |
|
hashsecret | reversing secret s phone numbers question how could you reverse engineer secret https www secret ly s phone number database if you had a copy answer you can try every possible phone number in about two and a half hours thanks to this twitter thread https twitter com coda status 436267472639897600 for inspiring me to actually write this details here is what is known about how secret stores phone numbers https medium com secret den 12ab82fda29f we locally hash the contact details first with shared salt therefore 15552786005 becomes a22d75c92a630725f4 and the original number never leaves your phone i wrote a quick and dirty go program to try and answer this question i m assuming secret uses a standard cryptographic hash i ll give them the benefit of the doubt and say sha256 since it is one of the slower standard choices since the shared salt must be in the client somewhere i ve added an 8 character constant string to the hash to simulate it time to hash 8 billion numbers 2 hours 22 minutes and 5 seconds about 142 minutes or 8525 seconds cpu intel core i5 2500k at 3 30 ghz conclusion your phone number isn t secret to the secret developers or anyone that hacks them make it faster ordered approximately from least to most effort use a faster computer generate numbers with a stateful iterator http ewencp org blog golang iterators instead of a goroutine increment phone number bytes in place instead of creating a string for each number divide the number space across multiple cores only generate valid phone numbers see the north american dialing plan http www nanpa com i generate all numbers from 200 000 0000 through 999 999 9999 many of which are not actually valid numbers use optimized c c++ particularly for sha256 use gpus i used a goroutine and a channel because it is a natural way to write a generator in go this program runs slightly slower if you run it with gomaxprocs 2 10782 seconds with 2 procs versus 9255 with 1 this is a good example of how not to parallelize this problem run it yourself

```bash
go build hashsecret.go
time ./hashsecret
```

| server |
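For readers who don't write Go, here is the same brute-force idea from the hashsecret entry above as a Python sketch: hash every candidate number with the shared salt and compare against a leaked hash. The salt value is a made-up stand-in for whatever is embedded in the client, and pure Python is far slower than the Go version timed above.

```python
# Exhaustive salted-hash search over the 10-digit phone number space.
import hashlib

SALT = "abcdefgh"  # hypothetical 8-character shared salt

def hash_number(number: str) -> str:
    return hashlib.sha256((SALT + number).encode()).hexdigest()

def crack(target: str):
    # Same space as the post: 200-000-0000 through 999-999-9999,
    # with the leading 1 country code as in the 15552786005 example.
    for n in range(2_000_000_000, 10_000_000_000):
        candidate = "1" + str(n)
        if hash_number(candidate) == target:
            return candidate
    return None

print(crack(hash_number("12000000042")))  # found after a few dozen iterations
```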
|
modular | div align center h1 img height 38px width 44px style height 38px max width 44px src https raw githubusercontent com jpmorganchase modular main docs img modular hero svg nbsp modular h1 p strong scaled web engineering strong where libraries and micro frontends coexist together and tooling is a first class citizen p div prs welcome https img shields io badge prs welcome brightgreen svg https github com jpmorganchase modular blob main contributing md npm version https img shields io npm v modular scripts svg https www npmjs com package modular scripts static https github com jpmorganchase modular actions workflows static yml badge svg tests https github com jpmorganchase modular actions workflows test yml badge svg coverage https coveralls io repos github jpmorganchase modular badge svg branch main https coveralls io github jpmorganchase modular branch main modular is a collection of tools and guidance to enable micro frontend development at scale it is derived from work at jp morgan to enable development in large monorepositories owned by many teams it provides a cli to scaffold new micro frontends and libraries from scratch provide ready to use opinionated test lint and build configurations for micro frontends and libraries provide tooling to incrementally and selectively run operations on monorepositories at scale pre requisites see the compatibility page https modular js org compatibility getting started bash yarn create modular react app my new modular project verbose prefer offline repo bootstraps a new project configured to use yarn workspaces https classic yarnpkg com en docs workspaces this also creates a workspace named app which is a new modular app package types written in typescript https www typescriptlang org it supports three flags verbose enables verbose yarn and modular logging prefer offline will prefer locally cached node modules versions over those from your remote registry repo value will toggle whether a new git repo is created and the initial files committed commands more documentation about modular commands is here https modular js org commands configuration modular is based around the idea of minimal configuration however documentation for the options available is here https modular js org configuration | modular react esbuild jpmorgan | front_end |
Natural-Language-Processing-Python-and-NLTK | natural language processing python and nltk code repository for natural language processing python and nltk what you will learn get a glimpse of the complexity of natural languages and how they are processed by machines clean and wrangle text using tokenization and chunking to help you better process data tokenize text into sentences and sentences into words classify text and perform sentiment analysis implement string matching algorithms and normalization techniques understand and implement the concepts of information retrieval and text summarization find out how to implement various nlp tasks in python software and hardware module 1 chapter number software required with version download links to the software hardware specifications os required 1 5 python anaconda nltk https www python org http continuum io downloads http www nltk org common unix printing system any 6 scikit learn and gensim http scikit learn org stable https radimrehurek com gensim common unix printing system any 7 scrappy http scrapy org common unix printing system any 8 numpy scipy pandas and matplotlib http www numpy org http www scipy org http pandas pydata org http matplotlib org common unix printing system any 9 twitter python apis and facebook python apis https dev twitter com overview api twitter libraries https developers facebook com common unix printing system any software and hardware module 2 chapter number software required with version free proprietary download links to the software 1 nltk 3 0a4 nltk data free http www nltk org http www nltk org data html 2 pyenchant 1 6 5 free http pythonhosted org pyenchant 3 lockfile 0 9 1 mongodb 2 6 pymongo 2 6 3 free https pypi python org pypi lockfile http www mongodb org https pypi python org pypi pymongo 4 nltk trainer 0 9 free https github com japerk nltk trainer 7 scikit learn 0 14 1 free http scikit learn org stable 8 redis 2 8 redis 2 8 0 execnet 1 1 free http redis io https pypi python org pypi redis https codespeak net execnet 9 python dateutil 2 0 beautifulsoup4 4 3 2 lxml 3 2 3 charade 1 0 3 free http labix org python dateutil http www crummy com software beautifulsoup http lxml de https pypi python org pypi charade software and hardware module 3 chapter number software required with version hardware specifications os required all chapters python 2 7 or 3 2 install nltk 3 0 either on 32 bit or 64 bit machine windows or mac unix note modules 1 2 and 3 have code arranged by chapter for the chapters that have code click here https docs google com forms d e 1faipqlse5qwunkgf6puvzpirpdtuy1du5rlzew23ubp2s p3wb gcwq viewform if you have any feedback or suggestions download a free pdf i if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost br simply click on the link to claim your free pdf i p align center a href https packt link free ebook 9781787285101 https packt link free ebook 9781787285101 a p | ai |
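As a taste of the tokenization tasks the book above covers, here is a small present-day NLTK example; the exact output depends on your NLTK version, and the punkt models must be downloaded once before tokenizing.

```python
# Sentence and word tokenization with NLTK.
import nltk

nltk.download("punkt")  # one-time download of the tokenizer models

from nltk.tokenize import sent_tokenize, word_tokenize

text = "Natural language processing is fun. NLTK makes it approachable."
print(sent_tokenize(text))  # splits the text into two sentences
print(word_tokenize(text))  # splits the text into word and punctuation tokens
```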
|
dialogue-system | dialogue system a project to design a dialogue system robot | os |
|
synthetic-computer-vision | synthetic for computer vision this is a repo for tracking the progress of using synthetic images for computer vision research if you found any important work is missing or information is not up to date please edit this file directly and make a pull request each publication is tagged with a keyword to make it easier to search if you find anything missing from this page please edit this readme md file to add it when adding a new item you can simply follow the format of existing items how this document is structured is documented in contribute md contribute md how to use click publication to jump to the paper title detailed information such as code and project page will be provided together with pdf file div id dataset div synthetic image dataset suncg princeton https sscnet cs princeton edu minos https minosworld github io house3d facebook https github com facebookresearch house3d procedural human action videos phav de2016procedural surreal varol2017learning virtual kitti gaidon2016virtual synthia ros2016synthia sintel butler2012naturalistic a synthetic dataset for optical flow sceneflow mayer2015large 4d light fields honauer2016dataset icl nuim dataset handa2014benchmark driving in the matrix drivingthematrix playing for benchmarks http playing for benchmarks org overview div id models div 3d model repository realistic 3d models are critical for creating realistic and diverse virtual worlds here are research efforts for creating 3d model repositories shapenet chang2015shapenet 3dscan choi2016large seeing3dchairs aubry2014seeing div id tool div tools aiplayground ue4 based data ablation tool mousavi2020ai see project page https github com mmehdimousavi aip airsim microsoft https github com microsoft airsim carla intel https github com carla simulator carla unity ml agents https blogs unity3d com 2017 09 19 introducing unity machine learning agents render smpl human bodies on blender see cvpr2017 varol2017learning render for cnn based on blender see iccv2015 su2015render uetorch https github com facebook uetorch based on ue4 see icml2016 lerer2016learning unrealcv https github com unrealcv unrealcv based on ue4 see arxiv qiu2016unrealcv vizdoom based on doom see arxiv kempka2016vizdoom openai universe see project page https universe openai com blender addon for 4d light field rendering see project page https github com lightfield analysis blender addon event camera dataset and simulator see project page https github com uzh rpg rpg davis simulator nvidia deep learning dataset synthesizer ndds https github com nvidia dataset synthesizer div id resource div resources eccv 2016 workshop virtual augmented reality for visual artificial intelligence varvai workshop http adas cvc uab es varvai2016 iccv 2017 workshop role of simulation in computer vision https www microsoft com en us research event iccv 2017 role of simulation in computer vision virtual reality meets physical reality modelling and simulating virtual humans and environments siggraph asia 2016 workshop http sigvr org cvpr 2017 workshop thor challenge http vuchallenge org thor html see also http riemenschneider hayko at vision dataset index php filter synthetic misc realismcnn github https github com junyanz realismcnn abnormality detection in images http paul rutgers edu babaks abnormality detection html div id reference div reference the div id is bib citekey from google scholar use div id makes it easier to reference a work in this document 2020 div id mousavi2020ai div mousavi mehdi and khanal aashis and 
estrada rolando ai playground unreal engine based data ablation tool for deep learning international symposium on visual computing isvc 2020 pdf https arxiv org abs 2007 06153 project https github com mmehdimousavi aip 2017 total 12 adversarially tuned scene generation pdf https arxiv org pdf 1701 00405 pdf ue4sim a photo realistic simulator for computer vision applications pdf https arxiv org abs 1708 05869 project https ue4sim org div id richterplaying div playing for benchmarks pdf http vladlen info papers playing for benchmarks pdf div id mitash2017self div a self supervised learning system for object detection using physics simulation and multi view pose estimation span class octicon octicon mark github span octocat code https github com cmitash physim 6dpose pdf https arxiv org pdf 1703 03347 pdf project http paul rutgers edu cm1074 physim html div id de2016procedural div procedural generation of videos to train deep action recognition networks pdf http openaccess thecvf com content cvpr 2017 papers de souza procedural generation of cvpr 2017 paper pdf project http adas cvc uab es phav citation 8 https scholar google com scholar cites 12002008688864745159 as sdt 2005 sciodt 0 5 hl en div id varol2017learning div learning from synthetic humans span class octicon octicon mark github span octocat code https github com gulvarol surreal pdf https arxiv org abs 1701 01370 project http www di ens fr willow research surreal tag synthetic human nvidia issac http www marketwired com press release nvidia ushers new era robotics with breakthroughs making it easier build train intelligent 2215481 htm configurable photorealistic image rendering and ground truth synthesis by sampling stochastic grammars representing indoor scenes div id airsim div aerial informatics and robotics platform span class octicon octicon mark github span octocat code https github com microsoft airsim pdf https www microsoft com en us research wp content uploads 2017 02 aerial informatics robotics tr pdf project https www microsoft com en us research project aerial informatics robotics platform tag tool div id tobin2017domain div tobin josh et al domain randomization for transferring deep neural networks from simulation to the real world arxiv preprint arxiv 1703 06907 2017 tag domain pdf https arxiv org pdf 1703 06907 pdf div id drivingthematrix div m johnson roberson c barto r mehta s n sridhar karl rosaen and r vasudevan driving in the matrix can virtual worlds replace human generated annotations for real world tasks in ieee international conference on robotics and automation pp 1 8 2017 span class octicon octicon mark github span octocat code https github com umautobots driving in the matrix pdf https arxiv org pdf 1610 01983 project https fcav engin umich edu sim dataset citation 3 https scholar google com scholar um 1 ie utf 8 lr cites 2191650018344815319 div id person re id div zheng z zheng l yang y unlabeled samples generated by gan improve the person re identification baseline in vitro in proceedings of ieee international conference on computer vision 2017 span class octicon octicon mark github span octocat code https github com layumi person reid gan pdf https arxiv org abs 1701 07717 citation 48 https scholar google com scholar oi bibs hl zh cn cites 270746001988088124 tag generated images by gan 2016 total 17 div id sadeghi2016rl div sadeghi fereshteh and sergey levine rl real single image flight without a single real image arxiv preprint arxiv preprint arxiv 1611 04201 12 2016 tag rl johnson justin et al 
clevr a diagnostic dataset for compositional language and elementary visual reasoning arxiv preprint arxiv 1612 06890 2016 pdf https arxiv org abs 1612 06890 mccormac john et al scenenet rgb d 5m photorealistic images of synthetic indoor trajectories with ground truth arxiv preprint arxiv 1612 05079 2016 de souza c sar roberto et al procedural generation of videos to train deep action recognition networks arxiv preprint arxiv 1612 00881 2016 pdf https arxiv org abs 1612 00881 project http adas cvc uab es phav tag synthetic human synnaeve gabriel et al torchcraft a library for machine learning research on real time strategy games arxiv preprint arxiv 1611 00625 2016 pdf https arxiv org abs 1611 00625 code https github com torchcraft torchcraft lin jenny et al a virtual reality platform for dynamic human scene interaction siggraph asia 2016 virtual reality meets physical reality modelling and simulating virtual humans and environments acm 2016 pdf https xiaozhuchacha github io projects siggraphasia16 vrplatform vrplatform2016siggraphasia pdf project https xiaozhuchacha github io projects siggraphasia16 vrplatform index html mahendran a et al researchdoom and cocodoom learning computer vision with games arxiv preprint arxiv 1610 02431 2016 pdf https arxiv org pdf 1610 02431 pdf project www robots ox ac uk vgg research researchdoom div id ros2016synthia div the synthia dataset a large collection of synthetic images for semantic segmentation of urban scenes 2016 pdf http www cv foundation org openaccess content cvpr 2016 html ros the synthia dataset cvpr 2016 paper html project http synthia dataset net citation 4 http scholar google com scholar cites 9178628328030932213 as sdt 2005 sciodt 0 5 hl en div id gaidon2016virtual div virtual worlds as proxy for multi object tracking analysis 2016 pdf http arxiv org abs 1605 06457 project http www xrce xerox com research development computer vision proxy virtual worlds citation 5 http scholar google com scholar cites 11727455440906017188 as sdt 2005 sciodt 0 5 hl en playing for data ground truth from computer games 2016 pdf http link springer com chapter 10 1007 978 3 319 46475 6 7 citation 1 http scholar google com scholar cites 12822958035144353200 as sdt 2005 sciodt 0 5 hl en play and learn using video games to train computer vision models 2016 pdf http arxiv org abs 1608 01745 citation 1 http scholar google com scholar cites 16081073673799361643 as sdt 2005 sciodt 0 5 hl en vizdoom a doom based ai research platform for visual reinforcement learning 2016 octocat code https github com marqt vizdoom pdf http arxiv org abs 1605 02097 project http vizdoom cs put edu pl citation 4 http scholar google com scholar cites 4101579648300742816 as sdt 2005 sciodt 0 5 hl en div id choi2016large div a large dataset of object scans 2016 pdf http arxiv org abs 1602 02481 project http redwood data org 3dscan citation 6 http scholar google com scholar cites 5989950372336055491 as sdt 2005 sciodt 0 5 hl en div id qiu2016unrealcv div unrealcv connecting computer vision to unreal engine 2016 span class octicon octicon mark github span octocat code https github com unrealcv unrealcv project http unrealcv github io pdf http arxiv org abs 1609 01326 div id lerer2016learning div learning physical intuition of block towers by example 2016 octocat code https github com facebook uetorch pdf http arxiv org abs 1603 01312 citation 12 http scholar google com scholar cites 12846348306706460250 as sdt 2005 sciodt 0 5 hl en target driven visual navigation in indoor scenes using deep 
reinforcement learning 2016 pdf http arxiv org abs 1609 05143 div id honauer2016dataset div a dataset and evaluation methodology for depth estimation on 4d light fields accv 2016 octocat code https github com lightfield analysis pdf http lightfield analysis net benchmark paper lightfield benchmark accv 2016 pdf project http lightfield analysis net citation https scholar google de scholar cluster 3369030498099069181 hl en as sdt 0 5 2015 total 3 a large dataset to train convolutional networks for disparity optical flow and scene flow estimation 2015 pdf http arxiv org abs 1512 02134 citation 9 http scholar google com scholar cites 16431759299155441580 as sdt 2005 sciodt 0 5 hl en div id su2015render div render for cnn viewpoint estimation in images using cnns trained with rendered 3d model views 2015 octocat code https github com shapenet renderforcnn pdf http www cv foundation org openaccess content iccv 2015 html su render for cnn iccv 2015 paper html citation 33 http scholar google com scholar cites 1209553997502402606 as sdt 2005 sciodt 0 5 hl en div id chang2015shapenet div shapenet an information rich 3d model repository 2015 pdf http arxiv org abs 1512 03012 project http shapenet cs stanford edu citation 27 http scholar google com scholar cites 1341601736562194564 as sdt 2005 sciodt 0 5 hl en 2014 total 2 virtual and real world adaptation for pedestrian detection 2014 pdf http ieeexplore ieee org xpls abs all jsp arnumber 6587038 citation 46 http scholar google com scholar cites 2637402509859183337 as sdt 2005 sciodt 0 5 hl en div id aubry2014seeing div seeing 3d chairs exemplar part based 2d 3d alignment using a large dataset of cad models 2014 octocat code https github com dimatura seeing3d pdf http www cv foundation org openaccess content cvpr 2014 html aubry seeing 3d chairs 2014 cvpr paper html project http www di ens fr willow research seeing3dchairs citation 110 http scholar google com scholar cites 18030645502969108287 as sdt 2005 sciodt 0 5 hl en div id handa2014benchmark div handa ankur thomas whelan john mcdonald and andrew j davison a benchmark for rgb d visual odometry 3d reconstruction and slam in robotics and automation icra 2014 ieee international conference on pp 1524 1531 ieee 2014 project https www doc ic ac uk ahanda vafric iclnuim html 2013 total 1 detailed 3d representations for object recognition and modeling 2013 pdf http ieeexplore ieee org xpls abs all jsp arnumber 6516504 citation 67 http scholar google com scholar cites 6595507135181144034 as sdt 2005 sciodt 0 5 hl en 2012 total 1 div id butler2012naturalistic div a naturalistic open source movie for optical flow evaluation 2012 pdf http link springer com chapter 10 1007 978 3 642 33783 3 44 project http sintel is tue mpg de citation 227 http scholar google com scholar cites 15124407213489971559 as sdt 20000005 sciodt 0 21 hl en 2010 total 1 learning appearance in virtual scenarios for pedestrian detection 2010 pdf http ieeexplore ieee org xpls abs all jsp arnumber 5540218 citation 79 http scholar google com scholar cites 17243485674852907889 as sdt 2005 sciodt 0 5 hl en 2007 total 1 ovvv using virtual worlds to design and evaluate surveillance systems 2007 pdf http ieeexplore ieee org xpls abs all jsp arnumber 4270516 citation 58 http scholar google com scholar cites 3459961090644684583 as sdt 2005 sciodt 0 5 hl en | computer-vision virtual-worlds synthetic-images dataset | ai |
CabsOnline-Taxi-Booking-System | cabsonline taxi booking system web application development using embedded php and mysql only the assignment is to develop a web based taxi booking system called cabsonline cabsonline allows passengers to book taxi services from any of their internet connected computing devices three major components are customer registration login online booking and administration dbconnect php configures the username and password for connecting to the mysql database mysql sql import file for creating tables register php verifies whether the user s input data matches the requirements email is the primary key used to check whether the user already exists in the database after registration the client is directed to the login page login php verifies whether the user s input data matches the requirements the data is then checked against the database after that the client is directed to the booking page resetpwd php once the input email is matched in the database the system creates a unique code and updates the database an activation email with a unique code link is sent to the required email activate php verifies whether the email and unique code match in the database once the data is matched the unique code is erased from the database and the new password input page is loaded newpwd php the new password is double confirmed and then stored in the database the login page is then loaded booking php verifies whether the user s input data matches the requirements the data is then recorded in the database unless the pick up time is less than 1 hour away a sketch of this check follows this entry in the meantime a confirmation email with a system generated booking number is sent to the user s registered email bootstrap datepicker css and javascript are suggested for use in the booking form but the files are not included here admin php the administrator logs into the system through the login page the role is recognised through the database backend configuration the administration page is loaded once the administrator logs in successfully two functions are available 1 a button to search for all unassigned booking requests with a pick up time within 2 hours 2 an update button with a booking reference number input to update the system that the booking is being assigned logout php clears the user s email in the session and goes back to the login page | front_end |
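The booking rule above rejects requests made less than one hour before the pick-up time; here it is as a small sketch. The actual project implements this check in PHP, so this Python version is only illustrative of the logic.

```python
# Reject bookings whose pick-up time is less than one hour away.
from datetime import datetime, timedelta

def booking_allowed(pickup, now=None):
    now = now or datetime.now()
    return pickup - now >= timedelta(hours=1)

print(booking_allowed(datetime(2024, 1, 1, 12, 0), now=datetime(2024, 1, 1, 10, 30)))  # True
print(booking_allowed(datetime(2024, 1, 1, 12, 0), now=datetime(2024, 1, 1, 11, 30)))  # False
```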
|
AS-One | as one a modular library for yolo object detection and object tracking img src https kajabi storefronts production kajabi cdn com kajabi storefronts production file uploads themes 2151476941 settings images 65d82 0d84 6171 a7e0 5aa180b657d5 black with logo jpg width 100 https www youtube com watch v k vcppwcm8k table of contents 1 introduction 2 prerequisites 3 clone the repo 4 installation linux 4 installation windows 10 11 4 installation macos 4 installation 5 running as one 6 sample code snippets 6 sample code snippets 7 model zoo asone linux instructions benchmarking md 1 introduction update yolo nas is out as one is a python wrapper for multiple detection and tracking algorithms all at one place different trackers such as bytetrack deepsort or norfair can be integrated with different versions of yolo with minimum lines of code this python wrapper provides yolo models in onnx pytorch coreml flavors we plan to offer support for future versions of yolo when they get released this is one library for most of your computer vision needs if you would like to dive deeper into yolo object detection and tracking then check out our courses https www augmentedstartups com store and projects https store augmentedstartups com img src https s3 amazonaws com kajabi storefronts production blogs 22606 images 0fdx83vxsyoy0nao2kmc asone windows play jpg width 50 https www youtube com watch v k vcppwcm8k watch the step by step tutorial 2 prerequisites make sure to install gpu drivers in your system if you want to use gpu follow driver installation asone linux instructions driver installations md for further instructions make sure you have ms build tools https aka ms vs 17 release vs buildtools exe installed in system if using windows download git for windows https git scm com download win if not installed 3 clone the repo navigate to an empty folder of your choice git clone https github com augmentedstartups as one git change directory to as one cd as one 4 installation details open summary for linux summary shell python3 m venv env source env bin activate pip install numpy cython pip install cython bbox asone onnxruntime gpu 1 12 1 pip install super gradients 3 1 3 for cpu pip install torch torchvision for gpu pip install torch torchvision extra index url https download pytorch org whl cu113 details details summary for windows 10 11 summary shell python m venv env env scripts activate pip install numpy cython pip install lap pip install e git https github com samson wang cython bbox git egg cython bbox pip install asone onnxruntime gpu 1 12 1 pip install super gradients 3 1 3 for cpu pip install torch torchvision for gpu pip install torch torchvision extra index url https download pytorch org whl cu113 or pip install torch 1 10 1 cu113 torchvision 0 11 2 cu113 torchaudio 0 10 1 cu113 f https download pytorch org whl cu113 torch stable html details details summary for macos summary shell python3 m venv env source env bin activate pip install numpy cython pip install cython bbox asone pip install super gradients 3 1 3 for cpu pip install torch torchvision details 5 running as one run main py to test tracker on data sample videos test mp4 video python main py data sample videos test mp4 run in google colab a href https drive google com file d 1xy5p9wgi19 pzrh3ceomocgp63k6j ls view usp sharing img src https colab research google com assets colab badge svg alt open in colab a 6 sample code snippets details summary 6 1 object detection summary python import asone from asone import utils from asone 
import asone import cv2 video path data sample videos test mp4 detector asone detector asone yolov7 pytorch use cuda true set use cuda to false for cpu filter classes car set to none to detect all classes cap cv2 videocapture video path while true frame cap read if not break dets img info detector detect frame filter classes filter classes bbox xyxy dets 4 scores dets 4 class ids dets 5 frame utils draw boxes frame bbox xyxy class ids class ids cv2 imshow result frame if cv2 waitkey 25 0xff ord q break run the asone demo detector py to test detector shell run on gpu python m asone demo detector data sample videos test mp4 run on cpu python m asone demo detector data sample videos test mp4 cpu details summary 6 1 1 use custom trained weights for detector summary 6 1 2 use custom trained weights use your custom weights of a detector model trained on custom data by simply providing path of the weights file python import asone from asone import utils from asone import asone import cv2 video path data sample videos license video webm detector asone detector asone yolov7 pytorch weights data custom weights yolov7 custom pt use cuda true set use cuda to false for cpu class names license plate your custom classes list cap cv2 videocapture video path while true frame cap read if not break dets img info detector detect frame bbox xyxy dets 4 scores dets 4 class ids dets 5 frame utils draw boxes frame bbox xyxy class ids class ids class names class names simply pass custom classes list to write your classes on result video cv2 imshow result frame if cv2 waitkey 25 0xff ord q break details details summary 6 1 2 changing detector models summary change detector by simply changing detector flag the flags are provided in benchmark asone linux instructions benchmarking md tables our library now supports yolov5 yolov7 and yolov8 on macos python change detector detector asone detector asone yolox s pytorch use cuda true for macos yolo5 detector asone detector asone yolov5x mlmodel yolo7 detector asone detector asone yolov7 mlmodel yolo8 detector asone detector asone yolov8l mlmodel details details details summary 6 2 object tracking summary use tracker on sample video python import asone from asone import asone instantiate asone object detect asone tracker asone bytetrack detector asone yolov7 pytorch use cuda true set use cuda false to use cpu filter classes person set to none to track all classes to track using video file get tracking function track detect track video data sample videos test mp4 output dir data results save result true display true filter classes filter classes loop over track to retrieve outputs of each frame for bbox details frame details in track bbox xyxy ids scores class ids bbox details frame frame num fps frame details do anything with bboxes here to track using webcam get tracking function track detect track webcam cam id 0 output dir data results save result true display true filter classes filter classes loop over track to retrieve outputs of each frame for bbox details frame details in track bbox xyxy ids scores class ids bbox details frame frame num fps frame details do anything with bboxes here to track using web stream get tracking function stream url rtsp wowzaec2demo streamlock net vod mp4 bigbuckbunny 115k mp4 track detect track stream stream url output dir data results save result true display true filter classes filter classes loop over track to retrieve outputs of each frame for bbox details frame details in track bbox xyxy ids scores class ids bbox details frame frame 
num fps frame details do anything with bboxes here note use can use custom weights for a detector model by simply providing path of the weights file in asone class details summary 6 2 1 changing detector and tracking models summary changing detector and tracking models change tracker by simply changing the tracker flag the flags are provided in benchmark asone linux instructions benchmarking md tables python detect asone tracker asone bytetrack detector asone yolov7 pytorch use cuda true change tracker detect asone tracker asone deepsort detector asone yolov7 pytorch use cuda true python change detector detect asone tracker asone deepsort detector asone yolox s pytorch use cuda true details run the asone demo detector py to test detector shell run on gpu python m asone demo detector data sample videos test mp4 run on cpu python m asone demo detector data sample videos test mp4 cpu details details summary 6 3 text detection summary sample code to detect text on an image python detect and recognize text import asone from asone import utils from asone import asone import cv2 img path data sample imgs sample text jpeg ocr asone detector asone craft recognizer asone easyocr use cuda true set use cuda to false for cpu img cv2 imread img path results ocr detect text img img utils draw text img results cv2 imwrite data results results jpg img use tracker on text python import asone from asone import asone instantiate asone object detect asone tracker asone deepsort detector asone craft recognizer asone easyocr use cuda true set use cuda false to use cpu to track using video file get tracking function track detect track video data sample videos gta 5 unique license plate mp4 output dir data results save result true display true loop over track to retrieve outputs of each frame for bbox details frame details in track bbox xyxy ids scores class ids bbox details frame frame num fps frame details do anything with bboxes here run the asone demo ocr py to test ocr shell run on gpu python m asone demo ocr data sample videos gta 5 unique license plate mp4 run on cpu python m asone demo ocr data sample videos gta 5 unique license plate mp4 cpu details details summary 6 4 pose estimation summary sample code to estimate pose on an image python pose estimation import asone from asone import utils from asone import poseestimator import cv2 img path data sample imgs test2 jpg pose estimator poseestimator estimator flag asone yolov8m pose use cuda true set use cuda false to use cpu img cv2 imread img path kpts pose estimator estimate image img img utils draw kpts img kpts cv2 imwrite data results results jpg img now you can use yolov8 and yolov7 w6 for pose estimation the flags are provided in benchmark asone linux instructions benchmarking md tables python pose estimation on video import asone from asone import poseestimator video path data sample videos football1 mp4 pose estimator poseestimator estimator flag asone yolov7 w6 pose use cuda true set use cuda false to use cpu estimator pose estimator estimate video video path save true display true for kpts frame details in estimator frame frame num fps frame details print frame num do anything with kpts here run the asone demo pose estimator py to test pose estimation shell run on gpu python m asone demo pose estimator data sample videos football1 mp4 run on cpu python m asone demo pose estimator data sample videos football1 mp4 cpu details to setup asone using docker follow instructions given in docker setup asone linux instructions docker setup md todo x first 
release x import trained models x simplify code even further x updated for yolov8 x ocr and counting x ocsort strongsort motpy x m1 2 apple silicon compatibility x pose estimation yolov7 v8 x yolo nas sam integration offered by maintained by augmentedstarups https user images githubusercontent com 107035454 195115263 d3271ef3 973b 40a4 83c8 0ade8727dd40 png https augmentedstartups com axcelerateai https user images githubusercontent com 107035454 195114870 691c8a52 fcf0 462e 9e02 a720fc83b93f png https axcelerate ai | computer-vision opencv yolor yolov5 yolov7 yolox deep-learning object-detection pytorch tracking ultralytics yolov8 yolo-nas | ai |
Research-Computer-Vision | research computer vision a running catalogue of my reviews of the literature of computer vision i will be focusing on reproducing deep learning architectures for the next few months my main interests are in convolutional architectures applied to mobile devices i spend a significant amount of my time researching cnn applications as well as research advances on the compression of neural networks for computer vision | ai |
|
nuttx.rs | nuttx rs rust ci badge https github com no1wudi nuttx rs workflows rust badge svg overview a rust std library like wrapper for nuttx it is built on nuttx which has mature hardware support and a posix compatible api so we can use it just like the rust std library

```rust
#![no_std]
#![no_main]

#[macro_use]
extern crate nuttx_rs;

#[no_mangle]
pub fn main() {
    println!("Hello from Rust");
}
```

requirement on ubuntu install nuttx build dependencies with this command

```bash
sudo apt install gcc-arm-none-eabi kconfig-frontends
```

and then clone nuttx and apps into your workspace like work nx apps nuttx build first you should set up nuttx s development environment and set the task entry to main or another entry you prefer and then set the environment variables nuttx src dir nuttx board dir nuttx board ld

```bash
export NUTTX_SRC_DIR=<path to nuttx>      # e.g. work/nx/nuttx
export NUTTX_BOARD_DIR=<nuttx boards/xxx> # stm32f4discovery by default
export NUTTX_BOARD_LD=<ld script>         # by default in boards/.../scripts
```

add dependencies to your cargo toml

```toml
[dependencies]
nuttx-rs = { git = "https://github.com/no1wudi/nuttx.rs.git" }
```

and in your application project add build flags in cargo config toml

```toml
[build]
rustflags = ["-C", "link-arg=-Tlink.ld"]
```

you can get the link script link ld from nuttx s board config dir | rust nuttx mcu rtos | os |
doccano | div align center img src https raw githubusercontent com doccano doccano master docs images logo doccano png div doccano codacy badge https app codacy com project badge grade 35ac8625a2bc4eddbff23dbc61bc6abb https www codacy com gh doccano doccano dashboard utm source github com amp utm medium referral amp utm content doccano doccano amp utm campaign badge grade doccano ci https github com doccano doccano actions workflows ci yml badge svg https github com doccano doccano actions workflows ci yml doccano is an open source text annotation tool for humans it provides annotation features for text classification sequence labeling and sequence to sequence tasks you can create labeled data for sentiment analysis named entity recognition text summarization and so on just create a project upload data and start annotating you can build a dataset in hours demo try the annotation demo http doccano herokuapp com demo image https raw githubusercontent com doccano doccano master docs images demo demo gif documentation read the documentation at https doccano github io doccano features collaborative annotation multi language support mobile support emoji smile support dark theme restful api usage there are three options to run doccano pip python 3 8 docker docker compose pip to install doccano run bash pip install doccano by default sqlite 3 is used for the default database if you want to use postgresql install the additional dependencies bash pip install doccano postgresql and set the database url environment variable according to your postgresql credentials bash database url postgres postgres user postgres password postgres host postgres port postgres db sslmode disable after installation run the following commands bash initialize database doccano init create a super user doccano createuser username admin password pass start a web server doccano webserver port 8000 in another terminal run the command bash start the task queue to handle file upload download doccano task go to http 127 0 0 1 8000 docker as a one time setup create a docker container as follows bash docker pull doccano doccano docker container create name doccano e admin username admin e admin email admin example com e admin password password v doccano db data p 8000 8000 doccano doccano next start doccano by running the container bash docker container start doccano go to http 127 0 0 1 8000 to stop the container run docker container stop doccano t 5 all data created in the container will persist across restarts if you want to use the latest features specify the nightly tag bash docker pull doccano doccano nightly docker compose you need to install git and clone the repository bash git clone https github com doccano doccano git cd doccano note for windows developers be sure to configure git to correctly handle line endings or you may encounter status code 127 errors while running the services in future steps running with the git config options below will ensure your git directory correctly handles line endings bash git clone https github com doccano doccano git config core autocrlf input then create an env file with variables in the following format see docker env example https github com doccano doccano blob master docker env example plain platform settings admin username admin admin password password admin email admin example com rabbit mq settings rabbitmq default user doccano rabbitmq default pass doccano database settings postgres user doccano postgres password doccano postgres db doccano after running the following command 
access http 127 0 0 1 bash docker compose f docker docker compose prod yml env file env up one click deployment service button aws 1 aws cloudformation launch stack svg button https cdn rawgit com buildkite cloudformation launch stack button svg master launch stack svg https console aws amazon com cloudformation home stacks new stackname doccano templateurl https doccano s3 amazonaws com public cloudformation template aws yaml heroku deploy https www herokucdn com deploy button svg https dashboard heroku com new template https 3a 2f 2fgithub com 2fdoccano 2fdoccano gcp 2 gcp cloud run png button https storage googleapis com gweb cloudblog publish images run on google cloud max 300x300 png https console cloud google com cloudshell editor shellonly true cloudshell image gcr io cloudrun button cloudshell git repo https github com doccano doccano git cloudshell git branch cloudrunbutton 1 1 ec2 keypair cannot be created automatically so make sure you have an existing ec2 keypair in one region or create one yourself https docs aws amazon com awsec2 latest userguide ec2 key pairs html having ec2 create your key pair 2 if you want to access doccano via https in aws here are instructions https github com doccano doccano wiki https setting for doccano in aws 2 although this is a very cheap option it is only suitable for very small teams up to 80 concurrent requests read more on cloud run docs https cloud google com run docs concepts faq how to create a user https doccano github io doccano faq how to create a user how to add a user to your project https doccano github io doccano faq how to add a user to your project how to change the password https doccano github io doccano faq how to change the password see the documentation https doccano github io doccano for details contribution as with any software doccano is under continuous development if you have requests for features please file an issue describing your request also if you want to see work towards a specific feature feel free to contribute by working towards it the standard procedure is to fork the repository add a feature fix a bug then file a pull request so that your changes can be merged into the main repository and included in the next release here are some tips that might be helpful how to contribute to doccano project https github com doccano doccano wiki how to contribute to doccano project citation tex misc doccano title doccano text annotation tool for human url https github com doccano doccano note software available from https github com doccano doccano author hiroki nakayama and takahiro kubo and junya kamura and yasufumi taniguchi and xu liang year 2018 contact for help and feedback feel free to contact the author https github com hironsan | natural-language-processing machine-learning annotation-tool python datasets dataset data-labeling text-annotation nuxtjs vue vuejs nuxt | ai |
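beyond the web ui, doccano also exposes a rest api; a minimal connection sketch using the separately distributed doccano-client package is shown below. the package name, class, and method calls are assumptions drawn from doccano's companion client project, not from this readme.

```python
# minimal sketch using the separately distributed doccano-client package;
# the package/class names and calls are assumptions, not part of this readme
from doccano_client import DoccanoClient

client = DoccanoClient('http://127.0.0.1:8000')
# credentials below are the defaults from the env example above
client.login(username='admin', password='password')
print(client.get_profile())  # confirms the session is authenticated
```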
NENU-Courses | author lili liang date 2021 03 12 12 12 18 lastedittime 2021 03 13 00 16 52 lasteditors please set lasteditors description in user settings edit filepath nenu courses readme md guidance for courses in school of information science and technology nenu github last commit https img shields io github last commit leungll nenu courses color red style flat square github repo size https img shields io github repo size leungll nenu courses style flat square github language count https img shields io github languages count leungll nenu courses color 9cf style flat square github contributors https img shields io github contributors leungll nenu courses color green style flat square github https img shields io github license leungll nenu courses color orange style flat square github repo stars https img shields io github stars leungll nenu courses style social github forks https img shields io github forks leungll nenu courses style social github watchers https img shields io github watchers leungll nenu courses style social https github com qsctech zju icicles https github com salensoft thu cst cracker google a4 readme md ebook hw exam e g java e g 17 ebook readme md exam readme md hw readme md readme md github qq github readme github issue pull request readme leungll0316 gmail com 17 leungll 17 readme md readme md pdf watch star fork star watch fork pull request t octotree https addons mozilla org zh cn firefox addon octotree giteetree https addons mozilla org zh cn firefox addon giteetree downgit https minhaskamal github io downgit home downgit http zhoudaxiaa gitee io downgit home gitzip https kinolien github io gitzip github https shrill pond 3e81 hunsh workers dev download chromium gitzip chrome https chrome google com webstore detail gitzip for github ffabmkklhbepgcgfonabamgnfafbdlkn https addons mozilla org en us firefox addon gitzip zip code download zip git clone https github com leungll nenu courses git gitee https gitee com ll leung nenu courses gitee https gitee com ll leung nenu courses git lfs 100m commit github web fork https github com leungll nenu courses fork code add file upload files code add file upload files pr fork github upload file pr issue email leungll0316 gmail com nenu courses issue pr issues leungll0316 gmail com commit leungll https github com leungll 17 kongyy https github com 2420795001 17 lyfer233 https github com lyfer233 19 cc by nc sa https creativecommons org licenses by nc sa 4 0 deed zh reference https github com qsctech zju icicles https github com salensoft thu cst cracker | server |
|
usellm | notice this package is no longer under active development please use the vercel ai sdk https sdk vercel ai docs instead usellm use large language models in your react app usellm is a react hook for integrating large language models like openai s chatgpt with just a few lines of code see it in action here https usellm org installation install the package from npm bash npm install usellm latest usage please refer to the documentation site https usellm org docs source code https github com usellm usellm | chatgpt react artificial-intelligence javascript typescript | ai |
EmbeddedUMLStateMachines | embeddedumlstatemachines repository for the udemy course embedded systems design using uml state machines | os |
|
blockchain-documentation-project | blockchain in python from scratch understanding blockchain isn t easy at least it wasn t for me i had to go through a number of frustrations due to too few functional examples of how this technology works and i like learning by doing so if you do the same allow me to guide you and by the end you will have a functioning blockchain with a solid idea of how they work before you get started remember that a blockchain is an immutable sequential chain of records called blocks they can contain transactions files or any data you like really but the important thing is that they re chained together using hashes what is needed make sure that you have python 3 6 installed along with pip and you will also need the flask and requests libraries sh pip3 install r requirements you will also need an http client like postman or curl but anything will do step 1 building a blockchain so what does a block look like each block has an index timestamp transactions proof more on that later and a hash of the previous block here is an example of what a single block looks like python block index 1 timestamp 1506092455 transactions sender 852714982as982341a4b27ee00 recipient a77f5cdfa2934hv25c7c7da5df1f amount 5 proof 323454734020 previous hash 2cf24dba5fb0a3202h2025c25e7304249898 representing a blockchain we ll create a blockchain class whose constructor creates a list to store our blockchain and another to store transactions here is how the class will look python class blockchain object def init self self chain self current transactions staticmethod def hash block pass def new block self pass property def last block self return self chain 1 this blockchain class is responsible for managing the chain it will store transactions and have helper functions the new block method will create a new block add it to the chain and return the last block in the chain the last block method will return the last block in the chain each block contains the hash and the hash of the previous block this is what gives blockchains their immutability i e if anyone attacks a block all subsequent blocks will be corrupt it s the core idea of blockchains adding transactions to the block we will need some way of adding transactions to the block python class blockchain object def new transaction self sender recipient amount self current transactions append sender sender recipient recipient amount amount return self last block index 1 the new transaction method returns the index of the block which will be added to current transactions and is the next one to be mined creating new blocks in addition to creating the genesis block in our constructor we will also need to flesh out the methods for new block new transaction and hash python import hashlib import json import time class blockchain object def init self self chain self current transactions create the genesis block self new block previous hash 1 proof 100 staticmethod def hash block hashes a block also make sure that the transactions are ordered otherwise we will have inconsistent hashes block string json dumps block sort keys true encode return hashlib sha256 block string hexdigest def new block self proof previous hash none creates a new block in the blockchain block index len self chain 1 timestamp time time transactions self current transactions proof proof previous hash previous hash or self hash self chain 1 reset the current list of transactions self current transactions self chain append block return block property def last block self returns last block in the
chain return self chain 1 def new transaction self sender recipient amount adds a new transaction into the list of transactions these transactions go into the next mined block self current transactions append sender sender recipient recipient amount amount return int self last block index 1 once our blockchain is initiated we need to feed it with the genesis block a block with no predecessors we will also need to add a proof of work to our genesis block which is the result of mining at this point we re nearly done representing our blockchain so lets talk about how the new blocks are created forged and mined understanding proof of work a proof of work algorithm is how new blocks are created or mined on the blockchain the goal is to discover a number that solves a problem the number must be difficult and resource consuming to find but super quick and easy to verify this is the core idea of proof of work so lets work out some stupid shit math problem that we are going to require to be solved in order for a block to be mined lets say that the hash of some integer x multiplied by another y must always end in 0 so as an example the hash x y 4b4f4b4f54 0 python from hashlib import sha256 x 5 y 0 we do not know what y should be yet while sha256 f x y encode hexdigest 1 0 y 1 print f the solution is y y in this example we fixed x 5 the solution in this case is x 5 and y 21 since it produced a hash ending in 0 python hash 5 21 1253e9373e781b7500266caa55150e08e210bc8cd8cc70d89985e3600155e860 in the bitcoin world the proof of work algorithm is called hashcash and it s not any different from the example above it s the very algorithm that miners race to solve in order to create a new block the difficulty is of course determined by the number of characters searched for in the string in our example we simplified it by defining that the resultant hash must end in 0 to make the whole thing in our case quicker and less resource intensive but this is how it works really the miners are rewarded for finding a solution by receiving a coin in a transaction there are many opinions on the effectiveness of this but this is how it works and it really is that simple and this way the network is able to easily verify their solution implementing proof of work let s implement a similar algorithm for our blockchain our rule will be similar to the example above find a number p that when hashed with the previous block s solution produces a hash with 4 leading zeros python import hashlib import json from time import time from uuid import uuid4 class blockchain object def proof of work self last proof simple proof of work algorithm find a number p such that hash pp contains 4 leading zeros where p is the previous p p is the previous proof and p is the new proof proof 0 while self valid proof last proof proof is false proof 1 return proof staticmethod def valid proof last proof proof validates the proof does hash last proof proof contain 4 leading zeroes guess f last proof proof encode guess hash hashlib sha256 guess hexdigest return guess hash 4 0000 to adjust the difficulty of the algorithm we could modify the number of leading zeros but strictly speaking 4 is sufficient also you may find out that adding an extra 0 makes a mammoth difference to the time required to find a solution now our blockchain class is pretty much complete let s begin to interact with the ledger using http requests step 2 blockchain as an api we ll use the python flask framework it s a micro framework and it s really easy to use so for our example it ll do
nicely we ll create three simple api endpoints transactions new to create a new transaction block mine to tell our service to mine a new block chain to return the full blockchain setting up flask our server will form a single node in our blockchain so let s create some code python import hashlib import json from time import time from uuid import uuid4 from flask import flask jsonify request initiate the node app flask name generate a globally unique address for this node node identifier str uuid4 replace initiate the blockchain blockchain blockchain app route mine methods get def mine return we will mine a new block app route transactions new methods post def new transaction return we will add a new transaction app route chain methods get def chain response chain blockchain chain length len blockchain chain return jsonify response 200 if name main app run host 0 0 0 0 5000 the transaction endpoint this is what the request for the transaction will look like it s what the user will send to the server json sender sender address recipient recipient address amount 100 since we already have the method for adding transactions to a block the rest is easy and pretty straight forward python import hashlib import json from time import time from uuid import uuid4 from flask import flask jsonify request app route transactions new methods post def new transaction values request get json required sender recipient amount if not all k in values for k in required return missing values 400 create a new transaction index blockchain new transaction values sender values recipient values amount response message f transaction will be added to the block index return jsonify response 200 the mining endpoint our mining endpoint is where the mining happens and it s actually very easy as all it has to do are three things 1 calculate the proof of work 2 reward the miner by adding a transaction granting the miner 1 coin 3 forge the new block by adding it to the chain so let s add the mining function to our api python import hashlib import json from time import time from uuid import uuid4 from flask import flask jsonify request app route mine methods get def mine first we have to run the proof of work algorithm to calculate the new proof last block blockchain last block last proof last block proof proof blockchain proof of work last proof we must receive a reward for finding the proof blockchain new transaction sender 0 recipient node identifier amount 1 forge the new block by adding it to the chain previous hash blockchain hash last block block blockchain new block proof previous hash response message forged new block index block index transactions block transactions proof block proof previous hash block previous hash return jsonify response 200 at this point we are done and we can start interacting with our blockchain step 3 interacting with our blockchain you can use a plain old curl or postman to interact with our blockchain api over the network fire up the server python3 blockchain py running on http 127 0 0 1 5000 press ctrl c to quit so first off let s try mining a block by making a get request to the mine http localhost 5000 mine json index 1 message forged new block previous hash 7cd122100c9ded644768ccdec2d9433043968352e37d23526f63eefc65cd89e6 proof 35293 transactions data 1 recipient 6a01861c7b3f483eab90727e621b2b96 sender 0 200 motherfucker very good now lets create a new transaction by making a post request to http localhost 5000 transactions new with a body containing our transaction structure let s make this call
using curl curl x post h content type application json d sender d4ee26eee15148ee92c6cd394edd974e recipient recipient address amount 5 http localhost 5000 transactions new i have restarted the server and mined two blocks to give 3 in total so let s inspect the full chain by requesting http localhost 5000 chain json chain index 1 previous hash 1 proof 100 timestamp 1506280650 770839 transactions index 2 previous hash c099bc bfb7 proof 35293 timestamp 1506280664 717925 transactions amount 1 recipient 8bbcb347e0631231 e152b sender 0 index 3 previous hash eff91a 10f2 proof 35089 timestamp 1506280666 1086972 transactions amount 1 recipient 9e2e234e12e0631231 e152b sender 0 length 3 step 4 transaction verification for this we will be using python nacl to generate a public private signing key pair private key public key which need to be generated before runtime we will employ cryptography using the public key signature standards x 509 for public key certificates step 5 smart wallet this is very cool a wallet is a gateway to decentralized applications on the blockchain it allows you to hold and secure tokens and other crypto assets this blockchain example is built on erc 20 standards and therefore should be compatible and working out of the box with your regular wallet step 6 consensus this is very cool actually we ve got a fully valid basic blockchain that accepts transactions and allows us to mine a new block and get rewarded for it but the whole point of blockchains is to be decentralized and how on earth do we ensure that all the data reflect the same chain well it s actually a well known problem of consensus and we are going to have to implement a consensus algorithm if we want more than a single node in our network so better buckle up we re moving onto registering the new nodes registering new nodes ok first off before you start adding new nodes you need to let your node know about its neighbouring nodes this needs to be done before you even start implementing the consensus algorithm each node on our network needs to keep a registry of other nodes on the network and therefore we will need to add more endpoints to orchestrate our miner nodes miner register to register a new miner node into the operation miner nodes resolve to implement our consensus algorithm to resolve any potential conflicts making sure all nodes have the correct and up to date chain first we re going to modify the blockchain class constructor and add in the method for registering nodes python from urllib parse import urlparse class blockchain object def init self self nodes set def register miner node self address add the new miner node onto the list of nodes parsed url urlparse address self nodes add parsed url netloc return implementing the consensus algorithm as mentioned a conflict is when one node has a different chain to another node to resolve this we ll make the rule that the longest valid chain is authoritative in other words the longest valid chain is the de facto one using this simple rule we reach consensus among the nodes in our network python import requests class blockchain object def valid chain self chain determine if a given blockchain is valid last block chain 0 current index 1 while current index len chain block chain current index check that the hash of the block is correct if block previous hash self hash last block return false check that the proof of work is correct if not self valid proof last block proof block proof return false last block block current index 1 return true def resolve conflicts self
this is our consensus algorithm it resolves conflicts by replacing our chain with the longest one in the network neighbours self nodes new chain none we are only looking for the chains longer than ours max length len self chain grab and verify chains from all the nodes in our network for node in neighbours we utilize our own api to construct the list of chains response requests get f http node chain if response status code 200 length response json length chain response json chain check if the chain is longer and whether the chain is valid if length max length and self valid chain chain max length length new chain chain replace our chain if we discover a new longer valid chain if new chain self chain new chain return true return false ok so the first method valid chain loops through each block and checks that the chain is valid by verifying both the hash and the proof the resolve conflicts method loops through all the neighbouring nodes downloads their chains and verifies them using the above valid chain method if a valid chain is found and it is longer than ours we replace our chain with this new one so what is left are the very last two api endpoints specifically one for adding a neighbouring node and another for resolving the conflicts and it s quite straight forward python app route miner register methods post def register new miner values request get json get the list of miner nodes nodes values get nodes if nodes is none return error please supply a list of valid nodes 400 register nodes for node in nodes blockchain register miner node node response message new nodes have been added total nodes list blockchain nodes return jsonify response 200 app route miner nodes resolve methods post def consensus an attempt to resolve conflicts to reach the consensus conflicts blockchain resolve conflicts if conflicts response message our chain was replaced new chain blockchain chain return jsonify response 200 response message our chain is authoritative chain blockchain chain return jsonify response 200 and here comes the big one the one you have been waiting for as at this point you can grab a different machine or a computer if you like and spin up different miners on our network or you can run multiple miners on your single machine by running the same process but using a different port number as an example i can run another miner node on my machine by running it on a different port and register it with the current miner therefore i have two miners http localhost 5000 and http localhost 5001 registering a new node curl x post h content type application json d nodes http 127 0 0 1 5001 http localhost 5000 miner register request ok returned json message new nodes have been added all nodes 127 0 0 1 5001 consensus algorithm at work curl x post http localhost 5000 miner nodes resolve request ok returns json message our chain was replaced new chain index 1 previous hash 1 proof 100 timestamp 1525160363 12144 transactions index 2 previous hash 7cd122100c9ded644768ccdec2d9433043968352e37d23526f63eefc65cd89e6 proof 35293 timestamp 1525160706 82745 transactions amount 1 recipient a77f5cdfa2934hv25c7c7da5df1f sender 0 and that s a wrap now go get some friends to mine your blockchain license bsd 2 clause can i contribute sure open an issue point out errors and what not wanna fix something yourselves you re welcome to open a pr and i appreciate it donation address eth 0xbcfab06e0cc4fe694bdf780f1fcb1bb143bd93ad have fun | blockchain python | blockchain |
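step 4 above names python nacl for transaction verification but shows no code; below is a minimal sketch of generating a signing key pair and signing a transaction payload with pynacl. the payload shape and the hex encoding are illustrative assumptions rather than code from the project, and the x 509 certificate part is not covered here.

```python
# minimal signing sketch with pynacl (step 4 mentions python-nacl but shows
# no code); the payload shape and hex encoding are illustrative assumptions
import json
from nacl.encoding import HexEncoder
from nacl.signing import SigningKey

# generate the key pair before runtime, as the readme suggests
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

# sign a transaction payload (shape borrowed from the readme's transaction body)
payload = json.dumps({'sender': 'a', 'recipient': 'b', 'amount': 5},
                     sort_keys=True).encode()
signed = signing_key.sign(payload, encoder=HexEncoder)

# any node holding the public key can verify the signature;
# verify() raises BadSignatureError if the payload was tampered with
verify_key.verify(signed, encoder=HexEncoder)
```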
SpeechGPT | speechgpt empowering large language models with intrinsic cross modal conversational abilities a href https 0nutation github io speechgpt github io img src https img shields io badge project page green a a href https arxiv org abs 2305 11000 img src https img shields io badge paper arxiv red a https img shields io badge datasets speechinstruct yellow https huggingface co datasets fnlp speechinstruct p align center img src pictures logo png width 20 br p introduction speechgpt is a large language model with intrinsic cross modal conversational abilities capable of perceiving and generating multi modal content following human instructions with discrete speech representations we first construct speechinstruct a large scale cross modal speech instruction dataset additionally we employ a three stage training strategy that includes modality adaptation pre training cross modal instruction fine tuning and chain of modality instruction fine tuning the experimental results demonstrate that speechgpt has an impressive capacity to follow multi modal human instructions and highlight the potential of handling multiple modalities with one model br speechgpt demos are shown on our project page https 0nutation github io speechgpt github io as shown in the demos speechgpt has strong cross modal instruction following ability and spoken dialogue ability speechgpt can be a talking encyclopedia your personal assistant your chat partner a poet a psychologist and your educational assistant br br p align center img src pictures speechgpt intro png width 95 br speechgpt s capabilities to tackle multiple cross modal tasks p br br p align center img src pictures speechgpt main png width 95 br left speechinstruct construction process right speechgpt model structure p release 2023 9 15 we released speechgpt code and checkpoints and the speechinstruct dataset 2023 9 1 we proposed speechtokenizer unified speech tokenizer for speech language models we released the code and checkpoints of speechtokenizer check out the paper https arxiv org abs 2308 16692 demo https 0nutation github io speechtokenizer github io and github https github com zhangxinfd speechtokenizer 2023 5 18 we released speechgpt empowering large language models with intrinsic cross modal conversational abilities we propose speechgpt the first multi modal llm capable of perceiving and generating multi modal contents following multi modal human instructions check out the paper https arxiv org abs 2305 11000 and demo https 0nutation github io speechgpt github io table of contents open source list open source list talk with speechgpt talk with speechgpt train speechgpt train speechgpt finetune speechgpt finetune speechgpt open source list models speechgpt 7b ma https huggingface co fnlp speechgpt 7b ma the model obtained after the first stage modality adaptation pre training which was initialized with llama 7b and further pre trained on librilight speech units speechgpt 7b cm https huggingface co fnlp speechgpt 7b cm the model obtained after the second stage cross modal instruction finetuning which was initialized with speechgpt 7b ma and further finetuned on the speechinstruct cross modal instruction set this is a powerful foundational model that aligns speech and text speechgpt 7b com https huggingface co fnlp speechgpt 7b com the model obtained after the third stage chain of modality instruction lora finetuning which was initialized with speechgpt 7b cm and further lora finetuned on the speechinstruct chain of modality instruction set this is an adapter model
of speechgpt 7b cm for spoken dialogue datasets speechinstruct cross modal https huggingface co datasets fnlp speechinstruct the cross modal instruction set about 9 million unit text data pairs tokenized by mhubert from large scale english asr datasets data format speechinstruct chain of modality https huggingface co datasets fnlp speechinstruct the chain of thought style instructions for four input output formats namely speech instruction speech response speech instruction text response text instruction speech response and text instruction text response speechinstruct cross modal data format prefix you are an ai assistant whose name is speechgpt n speechgpt is a intrinsic cross modal conversational language model that is developed by fudan university speechgpt can understand and communicate fluently with human through speech or text chosen by the user n it can perceive cross modal inputs and generate cross modal outputs n plain text human try to speak out this sentence please this is input the alchemist rode in front with the falcon on his shoulder eoh speechgpt sosp 661 588 604 157 596 499 596 106 596 189 63 189 665 991 162 202 393 946 327 905 907 597 660 351 557 794 788 59 754 12 977 877 333 873 835 67 940 118 686 613 169 72 644 553 535 935 101 741 384 173 894 787 380 787 196 555 721 944 250 56 812 222 915 143 390 479 330 435 647 246 650 816 325 506 686 208 613 417 755 193 411 452 111 735 6 735 63 665 644 991 535 271 333 196 918 29 202 393 946 734 390 479 330 776 167 761 907 597 660 351 557 794 75 788 15 366 896 627 168 654 659 177 183 609 710 187 493 361 470 821 59 56 198 912 742 840 431 531 76 668 576 803 791 380 660 325 801 549 366 377 164 309 584 605 193 71 39 eosp eoa speechinstruct chain of modality data format prefix you are an ai assistant whose name is speechgpt n speechgpt is a intrinsic cross modal conversational language model that is developed by fudan university speechgpt can understand and communicate fluently with human through speech or text chosen by the user n it can perceive cross modal inputs and generate cross modal outputs n plain text human sosp 661 987 511 732 951 997 111 982 189 63 665 991 535 101 741 173 945 944 503 641 124 565 734 870 290 978 833 238 761 907 430 901 185 403 557 244 583 788 663 969 896 627 143 515 663 969 660 691 251 412 260 41 740 677 253 380 382 268 506 876 417 755 16 819 80 651 80 651 80 987 588 eosp eoh speechgpt what is a bad term for poop ta a bad term for poop is excrement it is usually used as a polite way to refer to fecal waste ua sosp 497 63 264 644 710 823 565 577 154 331 384 173 945 29 244 326 583 728 576 663 969 896 627 143 38 515 663 24 382 251 676 412 260 41 740 677 253 382 268 876 233 878 609 389 771 865 641 124 878 609 423 384 879 487 219 522 589 337 126 119 663 748 12 671 877 377 385 902 819 619 842 419 997 829 111 666 42 277 63 665 644 389 771 685 437 641 124 258 436 139 340 11 59 518 56 948 86 258 436 139 340 347 376 940 118 944 878 173 641 124 362 734 179 961 931 878 609 423 384 879 219 522 866 337 243 935 101 741 822 89 194 630 86 555 105 79 868 220 156 824 998 870 390 422 330 776 663 969 523 105 79 799 220 357 390 479 422 330 776 485 165 86 501 119 716 205 521 787 935 101 741 89 194 664 835 67 940 118 613 417 755 902 415 772 497 eosp eoa talk with speechgpt due to limited training data and resources the performance of the open source speechgpt is currently not optimal problems such as task recognition errors and inaccuracies in speech recognition may occur as this project is primarily an exploration in research we have 
not increased the amount of pretraining and sft data or training steps to enhance performance our hope is that speechgpt can serve as a foundational model to encourage research and exploration in the field of speech language models installation bash git clone https github com 0nutation speechgpt cd speechgpt conda create name speechgpt python 3 8 conda activate speechgpt pip install r requirements txt download to talk with speechgpt you should download speechgpt 7b cm https huggingface co fnlp speechgpt 7b cm and speechgpt 7b com https huggingface co fnlp speechgpt 7b com locally you should download the mhubert model to utils speech2unit please see speech2unit https github com 0nutation speechgpt blob main speechgpt utils speech2unit readme md for details bash s2u dir utils speech2unit cd s2u dir wget https dl fbaipublicfiles com hubert mhubert base vp en es fr it3 pt wget https dl fbaipublicfiles com hubert mhubert base vp en es fr it3 l11 km1000 bin you should download the unit vocoder to utils vocoder please see vocoder https github com 0nutation speechgpt blob main speechgpt utils vocoder readme md for details bash vocoder dir utils vocoder cd vocoder dir wget https dl fbaipublicfiles com fairseq speech to speech vocoder code hifigan mhubert vp en es fr it3 400k layer11 km1000 lj config json o config json wget https dl fbaipublicfiles com fairseq speech to speech vocoder code hifigan mhubert vp en es fr it3 400k layer11 km1000 lj g 00500000 o vocoder pt cli inference bash python3 speechgpt src infer cli infer py model name or path path to speechgpt 7b cm lora weights path to speechgpt 7b com s2u dir s2u dir vocoder dir vocoder dir output dir output notes for speech input you can provide the path to the audio file for asr or tts tasks you must prefix the speech or text with this is input otherwise it may be recognized incorrectly the speech response will be saved to a wav file and detailed responses will be saved in a json file the paths to these files will be indicated in the response here are some examples of talking with speechgpt textual dialogue example please talk with speechgpt who is lebron james response lebron james is an american professional basketball player for the los angeles lakers of the national basketball association nba he is considered one of the greatest basketball players of all time and is known for his athleticism scoring ability and leadership skills he is a four time nba mvp a 14 time nba all star a 13 time all nba selection and a two time olympic gold medalist response json is saved in output responses json spoken dialogue example please talk with speechgpt prompts 0 wav transcript what are the main causes of climate change text response the main causes of climate change are human activities such as burning fossil fuels deforestation and agricultural practices these activities release greenhouse gases like carbon dioxide and methane into the atmosphere which trap heat and cause the earth s temperature to rise speech response is saved in output wav answer 0 wav response json is saved in output responses json asr example please talk with speechgpt recognize this speech this is input prompts 1 wav response today is a sunny day response json is saved in output responses json tts example please talk with speechgpt read this sentence aloud this is input today is a sunny day response sosp 661 987 520 982 681 982 681 982 681 982 681 982 189 63 662 79 868 220 196 166 549 822 89 194 633 14 855 183 609 389 771 865 641 124 362 734 742 98 519 26 204 280 668 167 104 650 179
961 428 950 82 165 196 166 549 822 89 194 458 726 603 819 651 133 651 133 186 133 186 133 186 511 186 511 eosp speech repsonse is saved in output wav answer 1 wav response json is saved in output responses json gradio web ui bash python3 speechgpt src infer web infer py model name or path path to speechgpt 7b cm lora weights path to speechgpt 7b com s2u dir s2u dir vocoder dir vocoder dir output dir output train speechgpt stage1 modality adaptation pre training first utilize mhubert for discretizing the librilight dataset to obtain discrete unit sequences for stage1 training you can refer to the data processing methods in speech2unit https github com 0nutation speechgpt blob main speechgpt utils speech2unit readme md second divide the discrete units into a training set and a development set and save them in the following format in the files data stage1 train txt and data stage1 dev txt sosp 189 247 922 991 821 258 485 974 284 466 969 523 196 202 881 331 822 853 432 32 742 98 519 26 204 280 576 384 879 901 555 944 366 641 124 362 734 156 824 462 761 907 430 81 597 716 205 521 470 821 677 355 483 641 124 243 290 978 82 620 915 470 821 576 384 466 398 212 455 931 579 969 778 45 914 445 469 576 803 6 803 791 377 506 835 67 940 613 417 755 237 224 452 121 736 eosp sosp 300 189 63 6 665 991 881 331 6 384 879 945 29 244 583 874 655 837 81 627 545 124 337 850 412 213 260 41 740 797 211 488 961 428 6 196 555 944 873 32 683 700 955 812 328 915 166 250 56 903 86 233 479 330 776 167 104 764 259 921 366 663 432 431 531 976 314 822 89 664 377 611 479 417 eosp sosp 189 735 991 39 565 734 32 742 98 519 26 204 280 668 576 803 791 660 555 233 787 101 741 466 969 219 107 459 491 556 384 733 219 501 445 137 910 523 793 50 981 230 534 321 948 86 116 281 62 462 104 70 918 743 15 212 455 143 836 173 944 958 390 422 66 776 258 436 139 663 432 742 98 519 589 243 126 260 41 444 6 655 764 969 219 727 85 297 700 362 493 6 493 361 393 946 6 470 821 246 655 837 81 969 916 584 819 544 452 158 452 736 eosp third you should download llama 7b huggingface to llama hf 7b now you can start stage1 training to perform distributed training you must specify the correct values for nnode node rank master addr and master port bash bash scripts ma pretrain sh nnode node rank master addr master port stage 2 cross modal instruction finetuning you should download speechinstruct cross modal instruction set https huggingface co datasets fnlp speechinstruct resolve main cross modal instruction jsonl to data stage2 if you want to skip stage1 training you can download speechgpt 7b ma to output stage1 now you can start stage2 training to perform distributed training you must specify the correct values for nnode node rank master addr and master port bash bash scripts cm sft sh nnode node rank master addr master port stage 3 chain of modality instruction finetuning you should download speechinstruct chain of modality instruction set https huggingface co datasets fnlp speechinstruct resolve main chain of modality instruction jsonl to data stage3 if you want to skip stage1 and stage2 you can download speechgpt 7b cm to output stage2 now you can start stage3 training to perform distributed training you must specify the correct values for nnode node rank master addr and master port bash bash scripts com sft sh nnode node rank master addr master port finetune speechgpt speech 7b cm is a foundational model with strong alignment between speech and text we encourage fine tuning speechgpt based on this model step1 prepare your data following the format 
in speechinstruct cross modal instruction set https huggingface co datasets fnlp speechinstruct resolve main cross modal instruction jsonl step2 download speechgpt 7b cm https huggingface co fnlp speechgpt 7b cm locally step3 modify the metaroot dataroot and outroot parameters in the scripts cm sft sh script to yours and then run it for lora fine tuning update the metaroot dataroot and outroot parameters in the scripts com sft sh script and run it acknowledgements we express our appreciation to fuliang weng and rong ye for their valuable suggestions and guidance moss https github com openlmlab moss we use moss sft 002 data stanford alpaca https github com tatsu lab stanford alpaca the codebase we built upon citation if you find speechgpt useful for your research and applications please cite using the bibtex misc zhang2023speechgpt title speechgpt empowering large language models with intrinsic cross modal conversational abilities author dong zhang and shimin li and xin zhang and jun zhan and pengyu wang and yaqian zhou and xipeng qiu year 2023 eprint 2305 11000 archiveprefix arxiv primaryclass cs cl | ai |
|
s2s | s2s deepsource https deepsource io gh auula s2s svg label active issues show trend true https deepsource io gh auula s2s ref repository badge deepsource https deepsource io gh auula s2s svg label resolved issues show trend true https deepsource io gh auula s2s ref repository badge license https img shields io badge license mit db5149 svg https github com higker sessionx blob master license go reference https pkg go dev badge github com auula s2s svg https pkg go dev github com auula s2s s2s s2s sql to structure java go rust class s2s s2s bash s2s export s2s host 127 0 0 1 3306 export s2s user root export s2s pwd you db password export s2s charset utf8 windows mac linux profile zshrc 1 go go build 2 s2s 3 rust user info snake case bug windows x64 s2s windows x64 zip https github com higker s2s releases download v0 0 1 s2s windows x64 zip mac x64 s2s darwin x64 zip https github com higker s2s releases download v0 0 1 s2s darwin x64 zip linux 64 s2s linux x64 zip https github com higker s2s releases download v0 0 1 s2s linux x64 zip ps tab databases use tables gen gen info exit clear bash s2s java you have entered the command line mode press the tab key to get a prompt enter exit to exit the program s2s databases database 1 information schema 2 emp db 3 mysql 4 performance schema 5 sys 6 test db s2s use emp db selected as database emp db s2s tables tables 1 user info s2s gen user info package model import java sql timestamp import java math bigdecimal import java math biginteger public class userinfo private string account private timestamp createtime private timestamp updateddate private short age private bigdecimal money id private biginteger uid public string getaccount return account public void setaccount string account this account account public timestamp getcreatetime return createtime public void setcreatetime timestamp createtime this createtime createtime public timestamp getupdateddate return updateddate public void setupdateddate timestamp updateddate this updateddate updateddate public short getage return age public void setage short age this age age public bigdecimal getmoney return money public void setmoney bigdecimal money this money money public biginteger getuid return uid public void setuid biginteger uid this uid uid override public string tostring return user info account account createtime createtime updateddata updateddate age age money money uid uid s2s exit bye go 1 bash go get u github com higker s2s 2 go package main import github com higker s2s core lang java github com higker s2s core db func main java structure java new if err structure opendb db info hostipandport os getenv s2s host ip username os getenv s2s user password os getenv s2s pwd type db mysql postgresql oracle charset os getenv s2s charset err nil failed to establish a connection to the database logic code defer structure close structure setschema structure parse os stdout mysql issues 1 linux | database java rust command reverse-engineering go | server |
Udacity-Data-Engineering-Projects | data engineering nanodegree learn to design data models build data warehouses and data lakes automate data pipelines and work with massive datasets at the end of the program you ll combine your new skills by completing a capstone project data modeling learn to create relational and nosql data models to fit the diverse needs of data consumers use etl to build databases in postgresql and apache cassandra data modeling with postgres data modeling with apache cassandra cloud data warehouses sharpen your data warehousing skills and deepen your understanding of data infrastructure create cloud based data warehouses on amazon web services aws build a cloud data warehouse spark and data lakes understand the big data ecosystem and how to use spark to work with massive datasets store big data in a data lake and query it with spark build a data lake data pipelines with airflow schedule automate and monitor data pipelines using apache airflow run data quality checks track data lineage and work with data pipelines in production data pipelines with airflow capstone project combine what you ve learned throughout the program to build your own data engineering portfolio project this is currently still in progress data engineering capstone | cloud |
|
InterruptButton | interruptbutton for esp32 arduino esp idf this is a highly responsive and simple to use interrupt based button event library for the esp32 suitable for the arduino framework as well as the esp idf framework it uses the onchange interrupt rising or falling for a given pin and the esp high precision timer to carry out the necessary pin polling to facilitate simple but reliable debouncing and timing routines once de bounced actions bound to certain button events including key down key up key press long key press auto repeat press and double clicks are available to bind your own functions to how the bound functions are executed depends on the mode you have set the library to mode asynchronous actioned immediately mode synchronous actioned in main loop or mode hybrid where key up and key down are asynchronous and the remainder of events are executed synchronously this makes employing extended button functions very easy to do only the first 2 lines below are required to get it going global variable interruptbutton button1 32 low monitor pin 35 low when pressed setup function button1 bind event keypress menu0button1keypress bind a predefined function to the event alternatively you may bind a lambda function directly which saves having to define it elsewhere button1 bind event doubleclick do stuff here if you have selected mode synchronous then you ll need to action the event in the main loop as below main loop function button1 processsyncevents only required if using sync events with a built in user defined menu paging system each button can be bound to multiple layers of different functionality depending on what the user interface is displaying at a given moment this means it can work as a simple button all the way up to a full user interface using a single button with different combinations of special key actions for navigation the use of interrupts instead of laborious button polling means that actions bound to the button are not limited to the main loop frequency which significantly reduces the chance of missed presses with long main loop durations there are 6 events and a menu paging structure which is only limited by your memory on chip which is huge for the esp32 this means you could attach 6 different user defined action functions to a single button per menu level per page ie if you have a 4 page gui menu one button associated with that interface could have up to 24 actions bound to it it allows for the following event types events are actioned by calling user defined functions attached to specific button events asynchronously via rtos or synchronously via main loop hook event keydown happens anytime the key is depressed even if held be it a keypress longkeypress or a double click event keyup happens anytime the key is released be it a keypress longkeypress end of an autorepeatpress or a double click event keypress occurs upon keyup only if it is not a longkeypress autorepeatpress or double click event longkeypress required press time is user configurable event autorepeatpress rapid fire if enabled but not defined then the standard keypress action is used event doubleclick max time between clicks is user configurable multi page level events this is handy if you have several different gui pages where all the buttons mean something different on a different page you can change the menu level of all buttons at once using the static member function setmenulevel level note that you must set the desired number of menus before initialising your first button as this cannot be changed later
this may be improved later subject to user requests other features each event or all events can be enabled or disabled on a per button basis the timing for debounce longpress autorepeatpress and doubleclick can be set on a per button basis asynchronous events are called immediately after debouncing synchronous events are invoked by calling the processsyncevents member function in the main loop and are subject to the main loop timing example usage this is an output of the serial port from the example file here just the serial println function is called but you can replace that with your own code to do what you need menu 0 button 1 key down 5371114 ms menu 0 button 1 key up 5371340 ms menu 0 button 1 key press 5371540 ms menu 0 button 1 key down 5373335 ms menu 0 button 1 long key press 5374085 ms menu 0 button 1 auto repeat key press 5374335 ms menu 0 button 1 key up 5374472 ms menu 0 button 1 key down 5376200 ms menu 0 button 1 key up 5376380 ms menu 0 button 1 key press 5376580 ms menu 0 button 1 key down 5377305 ms menu 0 button 1 long key press 5378055 ms menu 0 button 1 auto repeat key press 5378305 ms menu 0 button 1 auto repeat key press 5378555 ms menu 0 button 1 auto repeat key press 5378805 ms menu 0 button 1 auto repeat key press 5379055 ms menu 0 button 1 key up 5379145 ms menu 0 button 2 key down 5380088 ms menu 0 button 2 key up 5380378 ms menu 0 button 2 key press 5380578 ms menu 0 button 2 key down 5381342 ms menu 0 button 2 long press 5382092 ms disabling doubleclicks and switching to menu level 1 menu 1 button 2 auto repeat key press 5382343 ms menu 1 button 2 key up 5382440 ms menu 1 button 1 key down 5389981 ms menu 1 button 1 key up 5390070 ms menu 1 button 1 key down 5390149 ms menu 1 button 1 key up 5390264 ms menu 1 button 1 double click 5390265 ms changing to asynchronous mode and to menu level 0 functional flow diagram the flow diagram below shows the basic function of the library it is pending an update to include some recent updates and additions such as autorepeatpress flow diagram images flowdiagram png roadmap forward consider adding button modes such as momentary latching etc known limitations the synchronous routines can be much more robust than asynchronous routines depending on the value set for m rtosservicerstackdepth but are limited to the main loop frequency see the example file as it covers most interesting things but i believe it is fairly self explanatory this library should not be used for mission critical or mass deployments the developer should satisfy themselves that this library is stable for their purpose i feel the code works great and i generated this library because i couldn t find anything similar and the ones based on loop polling didn t work at all with long loop times interrupts can be a bit cantankerous but this seems to work nearly flawlessly in my experience but i imagine maybe not so for everyone and welcome any suggestions for improvements special thanks to vortigont for all his input and feedback particularly with respect to methodology implementing the esp idf functions to allow this library to work on both platforms and the suggestion of rtos queues | button interrupts events arduino-library esp32 esp32-arduino debounce synchronous asynchronous rtos | os |
js-blockchain | javascript blockchain a simple implementation of a blockchain using express and web sockets that allows you to add transactions and view the chain details frameworks used 1 node js https nodejs org en 2 socket io https socket io 3 express https expressjs com 4 pm2 runtime https pm2 io runtime start up 1 run yarn install or npm install 2 run yarn start or npm start endpoints post nodes request body json host localhost port 5000 post transaction request body json sender foo receiver bar amount 4 get chain response json index 0 proof 0 timestamp 1539970017557 previousblockhash 1 transactions index 1 proof 6109837942 514816 timestamp 1539970045394 previousblockhash 4794a4da7764850e31a2974ea1983ee048e5d8db9d882c16e9d4b55c1ed4fd3e transactions sender a receiver b amount 1 timestamp 1539970035916 sender c receiver d amount 2 timestamp 1539970045393 license mit license md | blockchain |
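the endpoint list above is complete enough to drive the server from any http client; here is a small sketch using python s requests library against the documented routes. the host, port, and request bodies are the readme s own, while the surrounding code is illustrative.

```python
# exercising the documented endpoints with python requests; host/port and
# request bodies come from the readme above, the rest is an illustrative sketch
import requests

BASE = 'http://localhost:5000'

# register a peer node (POST /nodes)
requests.post(f'{BASE}/nodes', json={'host': 'localhost', 'port': 5000})

# add a transaction (POST /transaction)
requests.post(f'{BASE}/transaction',
              json={'sender': 'foo', 'receiver': 'bar', 'amount': 4})

# view the chain (GET /chain); key casing is flattened in the readme,
# so we just print each block dict as returned
chain = requests.get(f'{BASE}/chain').json()
for block in chain:
    print(block)
```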
|
cleartk | cleartk example workflow https github com cleartk cleartk actions workflows build snapshot yml badge svg https github com cleartk cleartk actions workflows build snapshot yml maven central https maven badges herokuapp com maven central org dkpro core dkpro core badge svg style plastic https maven badges herokuapp com maven central org cleartk cleartk introduction cleartk provides a framework for developing statistical natural language processing nlp components in java and is built on top of apache uima it is developed by the center for computational language and education research clear at the university of colorado at boulder cleartk is built with maven and we recommend that you build your project that depends on cleartk with maven this will allow you to add dependencies for only the parts of cleartk that you are interested in and automatically pull in only those dependencies that those parts depend on the zip file you have downloaded is provided as a convenience to those who are unable to build with maven it provides jar files for each of the sub projects of cleartk as well as all the dependencies that each of those sub projects uses to use cleartk in your java project simply add all of these jar files to your classpath if you are only interested in one or a few sub projects of cleartk then you may not want to add every jar file provided here please consult the maven build files to determine which jar files are required for the parts of cleartk you want to use please see the section titled dependencies below for important licensing information license copyright c 2007 2014 regents of the university of colorado all rights reserved redistribution and use in source and binary forms with or without modification are permitted provided that the following conditions are met redistributions of source code must retain the above copyright notice this list of conditions and the following disclaimer redistributions in binary form must reproduce the above copyright notice this list of conditions and the following disclaimer in the documentation and or other materials provided with the distribution neither the name of the university of colorado at boulder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission this software is provided by the copyright holders and contributors as is and any express or implied warranties including but not limited to the implied warranties of merchantability and fitness for a particular purpose are disclaimed in no event shall the copyright owner or contributors be liable for any direct indirect incidental special exemplary or consequential damages including but not limited to procurement of substitute goods or services loss of use data or profits or business interruption however caused and on any theory of liability whether in contract strict liability or tort including negligence or otherwise arising in any way out of the use of this software even if advised of the possibility of such damage dependencies cleartk depends on a variety of different open source libraries that are redistributed here subject to the respective licensing terms provided by each library we have been careful to use only libraries that are commercially friendly please see the notes below for exceptions for a complete listing of the dependencies and their respective licenses please see the file licenses index html gpl dependencies cleartk has two sub projects that depend on gpl licensed libraries cleartk syntax berkeley cleartk stanford corenlp neither of these projects
nor their dependencies are provided in this release to obtain these projects please manually download them from our googlecode hosted maven repository http cleartk googlecode com svn repo org cleartk cleartk syntax berkeley http cleartk googlecode com svn repo org cleartk cleartk stanford corenlp svmlight cleartk also has two projects called cleartk ml svmlight and cleartk ml tksvmlight which have special licensing considerations the cleartk project does not redistribute svmlight cleartk does however facilitate the building of svmlight models via the classifierbuilder interface in order to use the implementations of this interface to good effect you will need to have svmlight installed on your machine the classifierbuilders for svmlight simply call the executable svm learn provided by the svmlight distribution cleartk does not use svmlight at classification time it only uses the models that are build by svmlight instead cleartk provides its own code for classification that makes use of an svmlight generated model this code is provided with cleartk and is available with the above bsd license as is all of the other code written for cleartk therefore be advised that while cleartk is not required or compelled to redistribute the code or license of svmlight or to comply with it i e the noncommercial license provided by svmlight is not compatible with our bsd license it would be very difficult to use the svmlight wrappers we provide in a commercial setting without obtaining a license for svmlight directly from its authors lgpl the cleartk ml mallet project depends on mallet http mallet cs umass edu which depends on trove4j http trove starlight systems com which is released under the lgpl license if you do not need mallet classifiers and would like to avoid the lgpl license you can omit the cleartk ml mallet dependency | ai |
|
LLM | large language models

A language model is a probability distribution over words or sequences of words [46, jurafsky 2023 book]. Given a sequence of words of length $m$, a language model assigns a probability $P(w_1, \ldots, w_m)$ to the whole sequence, based on training data from text. In large language models (LLMs), deep neural networks with a huge number of parameters (billions or trillions) are trained on massive datasets of unlabelled text to assign these probabilities.

language

Every community in human history possesses language, a feature regarded as innate and defining of humanity. Non-human species can exchange information, but none of them has a communication system with a complexity that is in any way comparable to human language [37, scott-phillips 2013 jrsi]. Although humans have always developed means and media to preserve and reproduce expressions of their culture, history shows that different cultures developed different technologies to create these means and media, to accommodate their socioeconomic needs and to extend knowledge [14, gabrial 2007 book]. Furthermore, history also shows that the development of writing systems has been sporadic, uneven and slow [4, berwick 2015 book].

Due to the different definitions of language reported in the literature, when one refers to language here, one is referring to a strict sense of linguistic communication [15, glasersfeld 1977 book]. Additionally, one refers to natural languages, not to artificial and restricted languages such as mathematics or formal logic; moreover, one focuses on written language. Particularly, one defines language as a set of symbols structured according to a specific set of rules to express ideas, and writing as a communication system that represents language with inscriptions to make it understandable, creating a durable version of speech that can be stored for future reference or transmitted across distance.

Writing allows societies to transmit information and to share knowledge by using a key piece called text: a sequence of symbols forming words, sentences and larger units (such as paragraphs and documents) that can be understood in context. Over the centuries writing has been evolving due to the development of new technologies: from finger, bone, clay, papyrus, pen, pencil, paper and the printing press to digital computing devices, technology has altered what is written, the medium through which texts are produced, and how fast they are produced [14, gabrial 2007 book]. Recently, the growing volume of information available in digital media, especially the web [43, gulli 2005 www; 44, vandenbosch 2016 scientometrics], is demanding ever more technologies for efficient information consumption. In particular, the development of technologies for text representation, to be used in different computational tasks such as natural language processing, information retrieval and data mining, is paramount to ensure the effective consumption of the increasing amount of digital information [25, manning 2008 book].

text representation

Text representation aims to numerically represent written symbols (letters, words, sentences, documents) to make them mathematically computable [42, yan 2009 book]. In the computer science research field, several text representation strategies have been proposed in the past decades for different application purposes. Commonly, text is represented by a numerical vector, such that the similarity between the vectors of two different pieces of text can be computed by different approaches, for instance by the vector normalized inner product, also known as the cosine similarity [36, salton 1975 cacm].
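To make the normalized inner product concrete, here is a minimal Python sketch of the cosine similarity described above; the two example vectors are illustrative, not taken from the text:

```python
from math import sqrt

def cosine_similarity(u, v):
    # normalized inner product between two text vectors of equal length
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# two hypothetical bag-of-words count vectors over the same vocabulary
print(cosine_similarity([1.0, 1.0, 2.0, 1.0], [0.0, 1.0, 1.0, 1.0]))  # ~0.87
```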
Besides enabling mathematical computation, representing text as vectors provides support for language modeling.

language modeling

Language modeling is the process of developing models that are able to predict the next word in a sequence given the words that precede it [35, ponte 1998 sigir]. Language modeling is a root problem for a large range of natural language processing tasks. Practically, language models are used as a basis for more sophisticated models in tasks that require language understanding; therefore they are crucial components in real-world applications such as machine translation and automatic speech recognition, playing a central role in information retrieval, natural language processing, artificial intelligence and machine learning research [16, goldberg 2017 book; 25, manning 2008 book]. Considering the word prediction strategy, one can categorize language models into three distinct groups: probabilistic, topic and neural models.

probabilistic models

Probabilistic models, also known as probabilistic language models or statistical language models, are probability distributions over sequences of words that are able to predict the next word in a sequence given the words that precede it [35, ponte 1998 sigir]. A probabilistic model learns the probability of word occurrence based on examples of text. Simpler models may look at the context of a short sequence of words, whereas complex models may work at the level of large sequences. Formally, a probabilistic model can be represented by a conditional probability of the next word given all the previous ones [3, bengio 2003 jmlr], since

$$P(w_1^T) = \prod_{t=1}^{T} P(w_t \mid w_1^{t-1})$$

where $w_t$ is the $t$-th word and $w_i^j = (w_i, w_{i+1}, \ldots, w_{j-1}, w_j)$ denotes a subsequence. Such models have already been used to address many natural language problems, such as speech recognition, language translation and information retrieval. The most popular models used for this purpose are the bag-of-words (BOW) and n-gram models.
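As a toy illustration of this chain-rule factorization, the sketch below estimates bigram (first-order Markov) probabilities by maximum likelihood from a tiny corpus and scores a sequence. The corpus reuses the example sentences from the BOW section that follows; the model is unsmoothed, so unseen bigrams receive probability zero:

```python
from collections import Counter

corpus = "the who is the band , who is the band , the band who plays the who".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_next(prev, word):
    # maximum-likelihood estimate of P(w_t | w_{t-1})
    return bigrams[(prev, word)] / unigrams[prev]

def p_sequence(words):
    # chain rule under a bigram (first-order Markov) assumption
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= p_next(prev, word)
    return p

print(p_sequence("the who is the band".split()))  # 2/5 * 2/4 * 2/2 * 3/5 = 0.12
```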
bow

BOW is an acronym for bag-of-words, a probabilistic language model where all words in a text are mutually independent. It is called a bag of words since text structure (the order and distance of words) is discarded: the model is only concerned with whether known words occur, not where, or next or close to what. The intuition is that similar texts share similar words, regardless of their structure. In this model, the probability of each word only depends on that word's own probability in the text. Formally, the probabilities of the words in the text vocabulary sum to one:

$$\sum_{i=1}^{T} P(w_i) = 1$$

BOW is a very simple and easy-to-understand model, hence it has been used on several information retrieval and natural language problems. Nevertheless, it has some limitations. First, the size of the vocabulary impacts the sparsity of the text representation; sparse representations are harder to compute (in space and time complexity), and it is harder to leverage the available information in such a large representational space. Second, by discarding word correlations such as the order and distance of words, the model ignores context and therefore the meaning of the words in the text. Semantics can help to address several information retrieval and natural language problems by incorporating important correlations between words, such as polysemy, synonymy and hyponymy.

A usual text representation with BOW represents a text as a $T$-dimensional vector of real numbers, where $T$ is the size of the text vocabulary (i.e., the set of unique words in the text) and the real numbers are word metrics extracted from the text. For instance, considering the text "the who is the band", its vocabulary is composed of four words (band, is, the, who); therefore one represents the text using BOW as (1.0, 1.0, 2.0, 1.0), where each dimension corresponds to a word in the vocabulary and the real numbers are the frequencies of the words in the text. Analogously, for a collection of texts one has a larger vocabulary and therefore larger vectors representing each text of the collection.

The representation of large text collections commonly culminates in the curse of dimensionality [45, chen 2009 book]: the phenomenon in which, as the dimensionality increases, the volume of the space increases so fast that the data vectors become sparse. The curse of dimensionality is a fundamental problem, making language modeling and learning tasks too difficult [3, bengio 2003 jmlr]. A widely used approach to address the high dimensionality that leads to the sparsity problem is managing the vocabulary: sparse vectors require more memory and computational resources, so decreasing the size of the vocabulary improves text processing efficiency. There are simple but effective techniques that can be used as a preliminary strategy, such as ignoring case, punctuation and stop words, fixing misspelled words, and stemming [25, manning 2008 book]. Although these techniques do not definitively solve the sparsity problem, they offer a way to lower the computational cost at the price of a semantically more restrictive representation of the original text, which may be useful in some learning, natural language processing and information retrieval tasks.

In addition to word frequency, one can also use different metrics for the real-valued vectors. TF-IDF is a very popular word weighting schema in information retrieval, providing a numerical statistic that is intended to reflect how important a word is to a text in a collection. Particularly, the TF-IDF value of a word increases proportionally to the word frequency in the text (term frequency) and decreases proportionally to the number of distinct texts it occurs in within the collection (inverted document frequency). There are different variants proposed in the literature for computing the TF and IDF metrics [25, manning 2008 book]; one of the most used variants defines

$$tf = \log(1 + f_{t,d}) \qquad idf = \log\left(\frac{N}{n_t}\right)$$

where $f_{t,d}$ is the frequency of word $t$ in text $d$, $N$ is the number of texts in the collection, and $n_t$ is the number of distinct texts in which word $t$ occurs. For instance, for a collection composed of three texts t1 = "the who is the band", t2 = "who is the band" and t3 = "the band who plays the who", one has a vocabulary composed of five words (band, is, plays, the, who); therefore, using BOW with TF-IDF weights (natural logarithm), one represents the texts as t1 = (0.0, 0.28, 0.0, 0.0, 0.0), t2 = (0.0, 0.28, 0.0, 0.0, 0.0) and t3 = (0.0, 0.0, 0.76, 0.0, 0.0): only "is" and "plays" receive non-zero weights, because every other word occurs in all three texts and thus has $idf = \log(3/3) = 0$.
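The TF-IDF values above can be reproduced with a short Python sketch that applies exactly the two formulas just defined to the three-text toy collection (natural logarithm):

```python
from math import log

docs = {
    "t1": "the who is the band".split(),
    "t2": "who is the band".split(),
    "t3": "the band who plays the who".split(),
}
vocab = sorted({w for d in docs.values() for w in d})  # band, is, plays, the, who
N = len(docs)
n_t = {t: sum(t in d for d in docs.values()) for t in vocab}

def tf_idf(term, doc):
    tf = log(1 + doc.count(term))   # tf = log(1 + f_{t,d})
    idf = log(N / n_t[term])        # idf = log(N / n_t)
    return tf * idf

for name, doc in docs.items():
    print(name, [round(tf_idf(t, doc), 2) for t in vocab])
# t1 [0.0, 0.28, 0.0, 0.0, 0.0]
# t2 [0.0, 0.28, 0.0, 0.0, 0.0]
# t3 [0.0, 0.0, 0.76, 0.0, 0.0]
```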
n-grams

To solve the limitations of the BOW model, various advanced text representation strategies have been proposed. The n-gram statistical language models [2] were proposed to capture the term correlations within a document; however, the exponential increase of the data dimension with the increase of n limits the application of n-gram models. In fact, the BOW model is also known as the unigram model (an n-gram model with n = 1).

topic models

Topic models are statistical models for discovering latent topics from text.

lsa

Latent semantic indexing (LSI) [3] was proposed to reduce the polysemy and synonym problems; at the same time, LSI can also represent the semantics of text documents through linear combinations of terms, computed by the singular value decomposition (SVD). However, the high complexity of SVD [1] means LSI is seldom used in real IR tasks. In addition, some external resources such as WordNet and Wikipedia have recently been used for addressing the polysemy and synonym problems in text representation; since the effectiveness of these external resources for text representation can only be learned from research papers, and there is still no evidence of their power in real IR applications, this article will not introduce their details.

plsa

Motivated by LSI, probabilistic latent semantic indexing (PLSI) [6] was also proposed for representing the semantics of text documents; however, it is still limited by its computational complexity given the increasing scale of web data.

lda

Latent dirichlet allocation (LDA) [5, blei 2003 jmlr] follows the same motivation, modeling each text as a mixture over latent topics, where each topic is a probability distribution over words.

As a summary: though various approaches have been proposed for solving the limitations of BOW, BOW with the TF-IDF term weighting schema is still one of the most commonly used text representation strategies in real IR applications.

neural models

Neural models use neural networks to provide continuous representations, or embeddings, of words.

references a name arora 2017 iclr a 1 1 sanjeev arora yingyu liang and tengyu ma an efficient framework for learning sentence representations in proceedings of the 5th international conference on learning representations iclr 17 april 2017 sif a name baker 1962 jacm a 2 2 frank b baker information retrieval based upon latent class analysis journal of the acm 9 4 512 521 october 1962 lsa a name bengio 2003 jmlr a 3 3 yoshua bengio r jean ducharme pascal vincent and christian janvin a neural probabilistic language model the journal of machine learning research 3 1137 1155 march 2003 nplm a name berwick 2015 book a 4 4 robert c berwick and noam chomsky why only us language and evolution mit press 2015 a name blei 2003 jmlr a 5 5 david m blei andrew y ng and michael i jordan latent dirichlet allocation the journal of machine learning research 3 993 1022 march 2003 lda a name bojanowski 2017 tacl a 6 6 piotr bojanowski edouard grave armand joulin and tomas mikolov enriching word vectors with subword information transactions of the association for computational linguistics 5 135 146 june 2017 fasttext a name brown 1992 cl a 7 7 peter f brown peter v desouza robert l mercer vincent j della pietra and jenifer c lai class based n gram models of natural language computational linguistics 18 4 467 479 december 1992 n gram a name cer 2018 emnlp a 8 8 daniel cer yinfei yang sheng yi kong nan hua nicole limtiaco rhomni st john noah constant mario guajardo cespedes steve yuan chris tar brian strope and ray kurzweil universal sentence encoder for english in proceedings of the 2018 conference on empirical methods in natural language processing emnlp 18 pages 169 174 october 2018 use a name conneau 2018 corr a 9 9 alexis conneau and douwe kiela senteval an evaluation toolkit for universal sentence representations in proceedings of the 19th international conference on language resources and evaluation lrec 18 pages 1699 1704 may 2018 evaluation a name conneau 2017 emnlp a 10 10 alexis conneau douwe kiela holger schwenk loïc barrault and antoine bordes supervised learning of universal sentence representations from
natural language inference data in proceedings of the 2017 conference on empirical methods in natural language processing emnlp 17 pages 1532 1543 september 2017 infersent a name deerwester 1990 jasis a 11 11 scott deerwester susan t dumais george w furnas thomas k landauer and richard harshman indexing by latent semantic analysis journal of the american society for information science 41 6 391 407 1990 lsi a name firth 1935 tps a 12 12 john rupert firth the technique of semantics transactions of the philological society 34 1 36 73 1935 foundations a name firth 1957 book a 13 13 john rupert firth a synopsis of linguistic theory 1930 1955 in studies in linguistic analysist pages 1 32 1957 foundations a name gabrial 2007 book a 14 14 brian gabrial history of writing technologies in handbook of research on writing history society school individual text pages 27 39 july 2007 a name glasersfeld 1977 book a 15 15 ernst von glasersfeld linguistic communication theory and definition in language learning by a chimpanzee pages 55 71 academic press 1977 a name goldberg 2017 book a 16 16 yoav goldberg and graeme hirst neural network methods in natural language processing morgan claypool publishers 2017 a name harris 1954 word a 17 17 zellig s harris distributional structure word 10 146 162 1954 bow a name hill 2016 naacl a 18 18 felix hill kyunghyun cho and anna korhonen learning distributed representations of sentences from unlabelled data in proceedings of the 2016 conference of the north american chapter of the association for computational linguistics human language technologies naacl hlt 16 pages 1367 1377 june 2016 fastsent a name iyyer 2015 acl a 19 19 mohit iyyer varun manjunatha jordan boyd graber and hal daum iii deep unordered composition rivals syntactic methods for text classification in proceedings of the 53rd annual meeting of the association for computational linguistics acl 15 pages 1681 1691 july 2015 dan a name jernite 2017 corr a 20 20 yacine jernite samuel r bowman and david sontag discourse based objectives for fast unsupervised sentence representation learning corr 1705 00557 2017 discsent a name kiros 2018 emnlp a 21 21 jamie ryan kiros and william chan inferlite simple universal sentence representations from natural language inference data in proceedings of the 2018 conference on empirical methods in natural language processing emnlp 18 pages 4868 4874 october 2018 inferlite a name kiros 2015 nips a 22 22 ryan kiros yukun zhu ruslan salakhutdinov richard s zemel antonio torralba raquel urtasun and sanja fidler skip thought vectors in proceedings of the 29th conference on neural information processing systems nips 15 pages 1532 1543 december 2015 skip thought a name levy 2015 tacl a 23 23 omer levy yoav goldberg and ido dagan improving distributional similarity with lessons learned from word embeddings transactions of the association for computational linguistics 2015 a name logeswaran 2018 iclr a 24 24 lajanugen logeswaran and honglak lee an efficient framework for learning sentence representations in proceedings of the 6th international conference on learning representations iclr 18 april 2018 quick thought a name manning 2008 book a 25 25 christopher d manning prabhakar raghavan and hinrich sch tze introduction to information retrieval cambridge university press 1 edition july 2008 a name manning 1999 book a 26 26 christopher d manning and hinrich sch tze foundations of statistical natural language processing mit press 1 edition may 1999 n gram a name mccann 2017 corr a 27 
27 bryan mccann james bradbury caiming xiong and richard socher learned in translation contextualized word vectors corr 1708 00107 2017 cove a name mikolov 2013 corr a 28 28 tomas mikolov kai chen gregory s corrado and jeffrey dean efficient estimation of word representations in vector space corr abs 1301 3781 2013 skip gram a name mikolov 2013 nips a 29 29 tomas mikolov ilya sutskever kai chen gregory s corrado and jeffrey dean distributed representations of words and phrases and their compositionality in proceedings of the 26th international conference on neural information processing systems nips 13 december 2013 a name nie 2017 corr a 30 30 allen nie erin d bennett and noah d goodman dissent sentence representation learning from explicit discourse relations corr 1710 04334 2017 dissent a name pagliardini 2018 naacl a 31 31 matteo pagliardini prakhar gupta and martin jaggi unsupervised learning of sentence embeddings using compositional n gram features in proceedings of the 2018 conference of the north american chapter of the association for computational linguistics human language technologies naacl hlp 18 pages 528 540 june 2018 sent2vec a name pennington 2014 emnlp a 32 32 jeffrey pennington richard socher and christopher d manning glove global vectors for word representation in proceedings of the 2014 conference on empirical methods in natural language processing emnlp 14 pages 1532 1543 october 2014 glove a name perone 2018 corr a 33 33 christian s perone roberto silveira and thomas s paula evaluation of sentence embeddings in downstream and linguistic probing tasks corr 1806 06259 2018 evaluation a name peters 2018 naacl a 34 34 matthew peters mark neumann mohit iyyer matt gardner christopher clark kenton lee and luke zettlemoyer deep contextualized word representations in proceedings of the 2018 conference of the north american chapter of the association for computational linguistics human language technologies naacl hlp 18 pages 2227 2237 june 2018 elmo a name ponte 1998 sigir a 35 35 jay m ponte and w bruce croft a language modeling approach to information retrieval in proceedingsof the 21st annual international acm sigir conference on research and development in information retrieval sigir 98 pages 275 281 1998 a name salton 1975 cacm a 36 36 gerald salton andrew wong and yang chung shu a vector space model for automatic indexing communications of the acm 18 11 613 620 november 1975 bow a name scott phillips 2013 jrsi a 37 37 thomas c scott phillips and richard a blythe why is combinatorial communication rare in the natural world and why is language an exception to this trend journal of the royal society interface 10 88 2013 a name subramanian 2018 iclr a 38 38 sandeep subramanian adam trischler yoshua bengio and christopher j pal learning general purpose distributed sentence representations via large scale multi task learning in proceedings of the 6th international conference on learning representations iclr 18 april 2018 gensen a name vaswani 2017 nips a 39 39 ashish vaswani noam shazeer niki parmar jakob uszkoreit llion jones aidan n gomez lukasz kaiser and illia polosukhin attention is all you need in proceedings of the 31st conference on neural information processing systems nips 17 december 2017 transformer a name weaver 1955 book a 40 40 warren weaver translation in machine translation of languages fourteen essays pages 15 23 1955 foundations a name ledell 2018 aaai a 41 41 ledell wu adam fisch sumit chopra keith adams antoine bordes and jason weston starspace embed all 
the things in proceedings of the 32nd aaai conference on artificial intelligence aaai 18 pages 5569 5577 february 2018 starspace a name yan 2009 book a 42 42 jun yan text representation in encyclopedia of database systems pages 3069 3072 2009 a name gulli 2005 www a 43 43 antonio gulli and alessio signorini the indexable web is more than 11 5 billion pages in proceedings of the 14th international conference on world wide web www 15 pages 902 903 may 2015 a name vandenbosch 2016 scientometrics a 44 44 antal van den bosch toine bogers and maurice de kunder estimating search engine index size variability a 9 year longitudinal study scientometrics 107 2 839 856 may 2016 a name chen 2009 book a 45 45 lei chen curse of dimensionality in encyclopedia of database systems pages 545 546 springer 2009 a name jurafsky 2023 book a 46 46 dan jurafsky and james h martin speech and language processing 3 edition 2023 1 https arxiv org abs 1803 02893 2 http doi org 10 1145 321138 321148 3 http dl acm org citation cfm id 944919 944966 4 https doi org 10 7551 mitpress 9780262034241 001 0001 5 http dl acm org citation cfm id 944919 944937 6 http dx doi org 10 1162 tacl a 00051 7 http dl acm org citation cfm id 176313 176316 8 http dx doi org 10 18653 v1 d18 2029 9 https www aclweb org anthology l18 1269 10 http dx doi org 10 18653 v1 d17 1070 11 http dx doi org 10 1002 28sici 291097 4571 28199009 2941 3a6 3c391 3a 3aaid asi1 3e3 0 co 3b2 9 12 https doi org 10 1111 j 1467 968x 1935 tb01254 x 13 http cs brown edu courses csci2952d readings lecture1 firth pdf 14 https doi org 10 4324 9781410616470 ch2 15 https doi org 10 1016 b978 0 12 601850 9 50009 9 16 https doi org 10 2200 s00762ed1v01y201703hlt037 17 http dx doi org 10 1080 00437956 1954 11659520 18 http dx doi org 10 18653 v1 n16 1162 19 https www aclweb org anthology p15 1162 20 https arxiv org abs 1705 00557 21 http dx doi org 10 18653 v1 d18 1524 22 https arxiv org abs 1506 06726 23 http dx doi org 10 1162 tacl a 00134 24 https arxiv org abs 1803 02893 25 https nlp stanford edu ir book pdf irbookonlinereading pdf 26 https nlp stanford edu fsnlp 27 https arxiv org abs 1708 00107 28 https arxiv org abs 1301 3781 29 https arxiv org abs 1310 4546 30 https arxiv org abs 1710 04334 31 http dx doi org 10 18653 v1 n18 1049 32 http www aclweb org anthology d14 1162 33 https arxiv org abs 1806 06259 34 http dx doi org 10 18653 v1 n18 1202 35 http doi org 10 1145 290941 291008 36 http doi acm org 10 1145 361219 361220 37 http doi org 10 1098 rsif 2013 0520 38 https arxiv org abs 1804 00079 39 https arxiv org abs 1706 03762 40 http www mt archive info weaver 1949 pdf 41 https aaai org ocs index php aaai aaai18 paper download 16998 16114 42 https doi org 10 1007 978 0 387 39940 9 420 43 http doi org 10 1145 1062745 1062789 44 https doi org 10 1007 s11192 016 1863 z 45 https doi org 10 1007 978 0 387 39940 9 133 46 https web stanford edu jurafsky slp3 3 pdf | ai |
|
AC3.3 | js fundamentals intro to data types strings and numbers lessons javascript fundamentals types strings and numbers intro to data types objects and arrays lessons javascript fundamentals objects and arrays objects and arrays continued lessons javascript fundamentals objects and arrays deep dive functions and scope lessons javascript fundamentals functions deep dive advanced array methods lessons javascript fundamentals advanced array methods js advanced fundamentals closures lessons javascript advanced closures oop lessons javascript advanced oop call apply bind lessons javascript advanced call apply bind callbacks lessons javascript advanced callbacks dom intro to the dom part 1 basics lessons dom intro to the dom intro to the dom part 2 events lessons dom dom deep dive html and css intro to html and css part 1 lessons html and css intro to html and css part 1 intro to html and css part 2 lessons html and css intro to html and css part 2 flexbox lessons html and css flexbox es6 let const and arrow functions lessons es6 let const arrow funcs es6 classes lessons es6 es6 classes spread operator lessons es6 spread operator object oriented es6 lessons es6 object oriented es6 ds a computer science intro to ds a lessons computer science intro to ds a big o lessons computer science big o stacks and queues lessons computer science stacks queues hash tables lessons computer science hash tables linked lists lessons computer science linked lists recursion lessons computer science recursion functional programming lessons computer science functional programming binary search trees lessons computer science binary search trees how the internet works lessons computer science how the internet works intro to databases lessons computer science intro to databases whiteboarding tips lessons computer science whiteboarding practice problems lessons computer science dsa practcice problems authentication intro to authentication with cookies and sessions lessons authentication debugging debugging lessons debugging databases intro to databases lessons intro databases md intro to sequelize lessons advancedsql advanced sql lessons sequelize intro md music database project lessons music database project md deployment deploying with heroky lessons deployment heroku deployment express intro to express lessons express intro to restful apis lessons express restfulapis md express routing lessons express backend routing md express blog api project lessons express express blog api project express music api project lessons express express music api project git intro to git lessons git intro git md intermediate git lessons git intermediate git md advanced git lessons git advanced git md | front_end |
|
IndicNLP | indicnlp

Workshop material for natural language processing on Indian languages. This workshop was delivered at SciPy India 2020, BelPy 2021 and FOSSASIA 2021.

Before the workshop:

1. create a github account (https://www.github.com)
2. create a repl.it account (https://www.repl.it)
3. fork the indicnlp repository to your github account
4. install the required python packages on repl.it, or set up the environment locally

Easy setup:

1. open the url https://www.kaggle.com/kernels?language=Python
2. create an account and sign in
3. create a new notebook with python
4. go to settings and enable accelerator and internet
5. install packages using the commands in the setup instructions, appending ! before pip (since they run inside a notebook cell)

During the workshop:

1. import the repository to repl.it
2. execute the python scripts
3. modify the python scripts, save and run
4. push the modified scripts to the github repository

My setup: operating system ubuntu 20.04 LTS, python version 3.8.

Setup instructions (a minimal usage sketch follows after this row):

1. install inltk (https://pypi.org/project/inltk):

```console
pip install torch==1.7.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install inltk
```

2. install indic nlp library (https://pypi.org/project/indic-nlp-library):

```console
pip install indic-nlp-library
```

3. install stanza, previously stanfordnlp (https://github.com/stanfordnlp/stanza):

```console
pip install stanza
```

| python3 | ai
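As a companion to the IndicNLP setup above, here is a possible first session with inltk once the packages are installed. The Hindi sample sentence is an arbitrary illustration, and the exact helper signatures should be checked against the inltk documentation:

```python
# a hypothetical first inltk session (Hindi)
from inltk.inltk import setup, tokenize

setup('hi')  # one-time download of the Hindi language model
print(tokenize('मुझे भारतीय भाषाएँ पसंद हैं', 'hi'))
```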
ESD-ALL | esd all embedded system design & projects | os
|
nifi-fds | apache nifi flow design system the apache nifi flow design system is an atomic reusable platform for providing a consistent set of ui ux components for open source friendly web applications to consume users can interact with this design system by running the demo application locally or by visiting https apache github io nifi fds https apache github io nifi fds the demo application serves 2 main purposes as a way for code reviewers to validate code changes and nifi fds core releases provides a working example of how an angular application should leverage nifi fds core quick start for developers not interested in building the fds ngmodule you can use npm to install the distribution files bash npm install nifi fds core save es6 javascript import ngmodule from angular core import fdscoremodule from nifi fds core function appmodule appmodule prototype constructor appmodule appmodule annotations new ngmodule imports fdscoremodule style and theming nifi fds is a themeable ui ux component platform to customize the core fds components create a simple sass file that defines your primary accent and warn palettes and passes them to mixins that output the corresponding styles a typical theme file will look something like this sass include the base styles and mixins for nifi fds core import platform core common styles flow design system change these primary color rose1 primary color hover rose2 accent color blue grey1 accent color hover blue4 include the base styles for angular material core we include this here so that you only have to load a single css file for angular material in your app include mat core define the palettes fds base palette 50 89df79 100 primary color hover 200 65d550 300 53d03b 400 46c32f 500 primary color 600 primary color 700 89df79 800 29701b 900 215c16 a100 9be48d a200 ade9a2 a400 bfedb6 a700 1a4711 contrast 50 black 87 opacity 100 black 87 opacity 200 black 87 opacity 300 white 400 white 500 white 87 opacity 600 white 87 opacity 700 white 87 opacity 800 white 87 opacity 900 white 87 opacity a100 black 87 opacity a200 white a400 white a700 white 87 opacity fds accent palette 50 89df79 100 accent color hover 200 65d550 300 53d03b 400 46c32f 500 accent color 600 accent color 700 89df79 800 29701b 900 215c16 a100 9be48d a200 ade9a2 a400 bfedb6 a700 1a4711 contrast 50 black 87 opacity 100 black 87 opacity 200 black 87 opacity 300 white 400 white 500 white 87 opacity 600 white 87 opacity 700 white 87 opacity 800 white 87 opacity 900 white 87 opacity a100 black 87 opacity a200 white a400 white a700 white 87 opacity fds warn palette 50 81410f 100 d14a50 200 af5814 300 c66317 400 dd6f19 500 warn color 600 warn color 700 eea66e 800 f1b485 900 f4c29b a100 ec9857 a200 89df79 a400 89df79 a700 f6d0b2 contrast 50 black 87 opacity 100 black 87 opacity 200 black 87 opacity 300 white 400 white 500 white 87 opacity 600 white 87 opacity 700 white 87 opacity 800 white 87 opacity 900 white 87 opacity a100 black 87 opacity a200 white a400 white a700 white 87 opacity fds primary mat palette fds base palette 500 100 500 fds accent mat palette fds accent palette 500 100 500 fds warn mat palette fds warn palette 500 100 500 define the theme optionally specify a default lighter and darker hue fds theme mat light theme fds primary fds accent fds warn fds theme mixin include fds theme fds theme you don t have to use sass to style the rest of your application but you will need to compile this file and include the corresponding style sheet in the head of the html document html link rel stylesheet 
href node modules nifi fds core common styles css flow design system min css note the theme file may be concatenated and minified with the rest of the application s css overriding font files path optionally you can override the font file paths if you want your font files to be served from a different location sass fds font path path to font files developing developers can perform code changes and automatically build this project using npm and webpack from the root directory via bash npm run watch building note building depends on bash scripts found in the scripts folder therefore building on windows is not supported at this time full builds are also available using npm from the root directory via bash npm run clean install or to build without running unit tests run bash npm run clean install skiptests note full builds for this project assume a 2 stage build but it only completes the first stage for you in the first stage all of the assets for the project are copied into the target frontend working directory tested and bundled minified obfuscated it is up to the consumer of this project to integrate the second stage to include the produced index html and optimized assets files into any deployable archive of their choosing running full builds locally once built you can start the application from the target frontend working directory directory via bash cd target frontend working directory npm start the demo application should now be available at http 127 0 0 1 28080 http 127 0 0 1 28080 the port may differ if there is a conflict on 28080 see the output of the start command for the available urls contact us the developer mailing list dev nifi apache org is monitored pretty closely and we tend to respond quickly if you have a question don t hesitate to shoot us an e mail we re here to help unfortunately though e mails can get lost in the shuffle so if you do send an e mail and don t get a response within a day or two it s our fault don t worry about bothering us just ping the mailing list again documentation contributing guidelines docs contributing md | nifi ui ux hacktoberfest | os |
ptp4FreeRTOS | ptp4freertos freertos port of linuxptp ptp4l ideally suited to run ptp4l on a dedicated core say cortex r5 features 1 supports linuxptp v1 6 2 supports multi instance currently tested with two ptp4l instances 3 uses lwip 1 4 1 4 changes are needed in lwip to support timestamping included in this repo as a lwip patch 5 tested with gptp profile only 6 hw tested with xilinx tsn ip reference design on zcu102 board freertos running on r5 building see the template makefile you need to cross compile for your target arch link lwip with the patches and freertos libraries to generate freertos elf there is a template emac driver which registers two mac netif porting guide 1 clock adjtime clock adjtime is implemented in missing h clock adjtime calls functions from timer 1588 c ptp adjfreq ptp adjtime these are currenly implemented as direct adjustment of rtc clock of xilnx tsn hw this must be changed for the target hw rtc 2 hw timestamping support two new apis lwip send with ts and lwip recv with ts to support timestamping see lwip 1 4 1 src api sockets c lwip 1 4 1 src include lwip pbuf h struct pbuf has two new elements u32 t ts sec u32 t ts nsec these values need to be set in your lwip emac driver implementation while doing rx via netif input while doing tx via low level output 3 polling mechanism with lwip ptp4l uses poll system call to wait for timer events as well as network events lwip has polling only for socket fd the timers in freertos are implemented as psuedo socket so same select call can be used for network timer events see linuxptp 1 6 freertos poll c freertos poll see lwip 1 4 1 src api sockets c lwip post timer event and linuxptp 1 6 freertos timers c timer callback | os |
|
image-filtering-lab | udagram image filtering microservice udagram is a simple cloud application developed alongside the udacity cloud engineering nanodegree it allows users to register and log into a web client post photos to the feed and process photos using an image filtering microservice the project is split into three parts 1 the simple frontend https github com udacity cloud developer tree master course 02 exercises udacity c2 frontend a basic ionic client web application which consumes the restapi backend covered in the course 2 the restapi backend https github com udacity cloud developer tree master course 02 exercises udacity c2 restapi a node express server which can be deployed to a cloud service covered in the course 3 the image filtering microservice https github com udacity cloud developer tree master course 02 project image filter starter code the final project for the course it is a node express application which runs a simple script to process images your assignment tasks setup node environment you ll need to create a new node server open a new terminal within the project directory and run 1 initialize a new project npm i 2 run the development server with npm run dev create a new endpoint in the server ts file the starter code has a task for you to complete an endpoint in src server ts which uses query parameter to download an image from a public url filter the image and return the result we ve included a few helper functions to handle some of these concepts and we re importing it for you at the top of the src server ts file typescript import filterimagefromurl deletelocalfiles from util util deploying your system follow the process described in the course to eb init a new application and eb create a new environment to deploy your image filter service don t forget you can use eb deploy to push changes stand out optional refactor the course restapi if you re feeling up to it refactor the course restapi to make a request to your newly provisioned image server authentication prevent requests without valid authentication headers note if you choose to submit this make sure to add the token to the postman collection and export the postman collection file to your submission so we can review custom domain name add your own domain name and have it point to the running services try adding a subdomain name to point to the processing server note domain names are not included in aws free tier and will incur a cost | cloud |
|
PromptifyJs | div align center img width 110px src https raw githubusercontent com promptslab promptify main assets logo png h1 promptifyjs h1 div div align center img width 110px src https upload wikimedia org wikipedia commons thumb d d9 node js logo svg 2560px node js logo svg png div h2 align center promptifyjs h2 p align center p align center prompt engineering solve nlp problems with llm s easily generate different nlp task prompts for popular generative models like gpt palm and more with promptify in javascript p p h4 align center a href https github com promptslab promptify blob main license img src https img shields io badge license apache 2 0 blue svg alt promptify is released under the apache 2 0 license a a href https pypi org project promptify img src https badge fury io js promptify svg alt npm version a a href http makeapullrequest com img src https img shields io badge prs welcome brightgreen svg style flat square alt http makeapullrequest com a a href https discord gg m88xfymbk6 img src https img shields io badge discord community orange alt community a a href img src https colab research google com assets colab badge svg alt colab a h4 quick tour to immediately use a llm model for your nlp task we provide the prompter api js import ner from config ner js import openai from models openai js import prompter from promptify index js import nerdata from examples data optimized ner js const model openai api key const examples nerdata samples 0 data const firstexample examples slice 0 3 const prompt ner text input i have alzheimers diease i need medicine for it description medicine ner expert domain medicine labels examples firstexample const result await prompter model prompt text davinci 003 console log result output e 93 year old t age e chronic right hip pain t medical condition e osteoporosis t medical condition e hypertension t medical condition e depression t medical condition e chronic atrial fibrillation t medical condition e severe nausea and vomiting t symptom e urinary tract infection t medical condition branch internal medicine group geriatrics | ai |
|
database-engineering | database engineering notes code and projects related to database engineering table of contents acid docs acid md understanding database internals database indexing b tree vs b tree in production database systems database partitioning database sharding concurrency control database replication database system design database engines database cursors sql vs nosql architecture database security homomorphic encryption performing database queries on encrpyted data | server |
|
tutorials.IoT-over-MQTT | fiware banner https fiware github io tutorials iot over mqtt img fiware png https www fiware org developers ngsi v2 https img shields io badge ngsi v2 5dc0cf svg https fiware ges github io orion api v2 stable fiware iot agents https nexus lab fiware org repository raw public badges chapters iot agents svg https github com fiware catalogue blob master iot agents readme md license mit https img shields io github license fiware tutorials iot over mqtt svg https opensource org licenses mit support badge https img shields io badge tag fiware orange svg logo stackoverflow https stackoverflow com questions tagged fiware ultralight 2 0 https img shields io badge payload ultralight 27ae60 svg https fiware iotagent ul readthedocs io en latest usermanual index html user programmers manual br documentation https img shields io readthedocs fiware tutorials svg https fiware tutorials rtfd io this tutorial uses introduces the use of the mqtt protocol across iot devices connecting to fiware the ultralight 2 0 https fiware iotagent ul readthedocs io en latest usermanual index html user programmers manual iot agent created in the previous tutorial https github com fiware tutorials iot agent is reconfigured to communicate with a set of dummy iot devices using mqtt via a mosquitto https mosquitto org message broker the tutorial uses curl https ec haxx se commands throughout but is also available as postman documentation https fiware github io tutorials iot agent run in postman https run pstmn io button svg https app getpostman com run collection acfd27a941ed57a0cae5 open in gitpod https gitpod io button open in gitpod svg https gitpod io https github com fiware tutorials iot agent over mqtt tree ngsi v2 readme ja md contents details summary strong details strong summary what is mqtt what is mqtt architecture architecture mosquitto configuration mosquitto configuration dummy iot devices configuration dummy iot devices configuration iot agent for ultralight 2 0 configuration iot agent for ultralight 20 configuration prerequisites prerequisites docker and docker compose docker and docker compose cygwin for windows cygwin for windows start up start up provisioning an iot agent ultralight over mqtt provisioning an iot agent ultralight over mqtt checking mosquitto health checking mosquitto health start an mqtt subscriber one st terminal start an mqtt subscriber onest terminal start an mqtt publisher two nd terminal start an mqtt publisher twond terminal stop an mqtt subscriber one st terminal stop an mqtt subscriber onest terminal show mosquitto log show mosquitto log checking the iot agent service health checking the iot agent service health connecting iot devices connecting iot devices provisioning a service group for mqtt provisioning a service group for mqtt provisioning a sensor provisioning a sensor provisioning an actuator provisioning an actuator provisioning a smart door provisioning a smart door provisioning a smart lamp provisioning a smart lamp enabling context broker commands enabling context broker commands ringing the bell ringing the bell opening the smart door opening the smart door switching on the smart lamp switching on the smart lamp next steps next steps details what is mqtt with the technology at our disposal the possibilities are unbounded all we need to do is make sure we keep talking stephen hawking mqtt is a publish subscribe based messaging protocol used in the internet of things it works on top of the tcp ip protocol and is designed for connections with remote 
locations where a small code footprint is required or the network bandwidth is limited. The goal is to provide a protocol which is bandwidth-efficient and uses little battery power [1].

The previous tutorial (https://github.com/FIWARE/tutorials.IoT-Agent) used HTTP as its transport mechanism between the devices and the IoT Agent. HTTP uses a request-response paradigm where each device connects directly to the IoT Agent. MQTT is different in that publish-subscribe is event-driven and pushes messages to clients. It requires an additional central communication point, known as the MQTT broker, which is in charge of dispatching all messages between the senders and the rightful receivers. Each client that publishes a message to the broker includes a topic in the message; the topic is the routing information for the broker. Each client that wants to receive messages subscribes to a certain topic, and the broker delivers all messages with the matching topic to the client. Therefore the clients don't have to know each other: they only communicate over the topic. This architecture enables highly scalable solutions without dependencies between the data producers and the data consumers. A summary of the differences between the two transport protocols can be seen below:

| HTTP transport | MQTT transport |
| --- | --- |
| ![HTTP](https://fiware.github.io/tutorials.IoT-over-MQTT/img/http.png) | ![MQTT](https://fiware.github.io/tutorials.IoT-over-MQTT/img/mqtt.png) |
| IoT Agent communicates with IoT devices directly | IoT Agent communicates with IoT devices indirectly via an MQTT broker |
| request-response paradigm | publish-subscribe paradigm |
| IoT devices must always be ready to receive communication | IoT devices choose when to receive communication |
| higher power requirement | low power requirement |

The UltraLight 2.0 IoT Agent will only send or interpret messages using the UltraLight 2.0 syntax; however, it can be used to send and receive messages over multiple transport mechanisms. Therefore we are able to use the same FIWARE generic enabler to connect to a wider range of IoT devices.

mosquitto mqtt broker

Mosquitto (https://mosquitto.org) is a readily available, open-source MQTT broker which will be used during this tutorial. It is licensed under EPL/EDL; more information can be found at https://mosquitto.org.

device monitor

For the purpose of this tutorial, a series of dummy IoT devices have been created which will be attached to the context broker. Details of the architecture and protocol used can be found in the IoT Sensors tutorial (https://github.com/FIWARE/tutorials.IoT-Sensors/tree/NGSI-v2). The state of each device can be seen on the UltraLight device monitor web page found at http://localhost:3000/device/monitor

![FIWARE monitor](https://fiware.github.io/tutorials.IoT-over-MQTT/img/device-monitor.png)

architecture

This application builds on the components created in previous tutorials (https://github.com/FIWARE/tutorials.IoT-Agent). It will make use of two FIWARE components: the Orion context broker (https://fiware-orion.readthedocs.io/en/latest) and the IoT Agent for UltraLight 2.0 (https://fiware-iotagent-ul.readthedocs.io/en/latest). Usage of the Orion context broker is sufficient for an application to qualify as "powered by FIWARE". Both the Orion context broker and the IoT Agent rely on open-source MongoDB technology to keep persistence of the information they hold.
We will also be using the dummy IoT devices created in the previous tutorial (https://github.com/FIWARE/tutorials.IoT-Agent). Additionally, we will add an instance of the Mosquitto MQTT broker, which is open source and available under the EPL/EDL.

Therefore the overall architecture will consist of the following elements:

- the FIWARE Orion context broker, which will receive requests using NGSI-v2
- the FIWARE IoT Agent for UltraLight 2.0, which will:
  - receive southbound requests using NGSI-v2 and convert them to UltraLight 2.0 MQTT topics for the MQTT broker
  - listen to the MQTT broker on registered topics to send measurements northbound
- the Mosquitto MQTT broker, which acts as a central communication point, passing MQTT topics between the IoT Agent and IoT devices as necessary
- the underlying MongoDB database:
  - used by the Orion context broker to hold context data information such as data entities, subscriptions and registrations
  - used by the IoT Agent to hold device information such as device URLs and keys
- a webserver acting as a set of dummy IoT devices using the UltraLight 2.0 protocol running over MQTT

The context provider NGSI proxy is not used in this tutorial. It does the following: receive requests using NGSI-v2; make requests to publicly available data sources using their own APIs in a proprietary format; return context data back to the Orion context broker in NGSI-v2 format.

The stock management frontend is not used in this tutorial. It would do the following: display store information; show which products can be bought at each store; allow users to buy products and reduce the stock count.

Since all interactions between the elements are initiated by HTTP or MQTT requests over TCP, the entities can be containerized and run from exposed ports.

![architecture](https://fiware.github.io/tutorials.IoT-over-MQTT/img/architecture.png)

The necessary configuration information for wiring up the Mosquitto MQTT broker, the IoT devices and the IoT Agent can be seen in the services section of the associated docker-compose.yml file.

mosquitto configuration

```yaml
mosquitto:
  image: eclipse-mosquitto
  hostname: mosquitto
  container_name: mosquitto
  networks:
    - default
  expose:
    - "1883"
    - "9001"
  ports:
    - "1883:1883"
    - "9001:9001"
  volumes:
    - ./mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
```

The mosquitto container is listening on two ports:

- port 1883 is exposed so we can post MQTT topics
- port 9001 is the standard port for HTTP/websocket communications

The attached volume is a configuration file used to increase the debug level of the MQTT message broker.

dummy iot devices configuration

```yaml
tutorial:
  image: quay.io/fiware/tutorials.context-provider
  hostname: iot-sensors
  container_name: fiware-tutorial
  networks:
    - default
  expose:
    - "3000"
    - "3001"
  ports:
    - "3000:3000"
    - "3001:3001"
  environment:
    - "DEBUG=tutorial:*"
    - "WEB_APP_PORT=3000"
    - "DUMMY_DEVICES_PORT=3001"
    - "DUMMY_DEVICES_API_KEY=4jggokgpepnvsb2uv4s40d59ov"
    - "DUMMY_DEVICES_TRANSPORT=MQTT"
```
The tutorial container is listening on two ports:

- port 3000 is exposed so we can see the web page displaying the dummy IoT devices
- port 3001 is exposed purely for tutorial access, so that cUrl or Postman can make UltraLight commands without being part of the same network

The tutorial container is driven by environment variables as shown:

| key | value | description |
| --- | --- | --- |
| DEBUG | tutorial:* | debug flag used for logging |
| WEB_APP_PORT | 3000 | port used by the web app which displays the dummy device data |
| DUMMY_DEVICES_PORT | 3001 | port used by the dummy IoT devices to receive commands |
| DUMMY_DEVICES_API_KEY | 4jggokgpepnvsb2uv4s40d59ov | random security key used for UltraLight interactions, used to ensure the integrity of interactions between the devices and the IoT Agent |
| DUMMY_DEVICES_TRANSPORT | MQTT | the transport protocol used by the dummy IoT devices |

The other tutorial container configuration values described in the YAML file are not used in this tutorial.

iot agent for ultralight 2.0 configuration

The IoT Agent for UltraLight 2.0 (https://fiware-iotagent-ul.readthedocs.io/en/latest) can be instantiated within a Docker container. An official Docker image is available from Docker Hub (https://hub.docker.com/r/fiware/iotagent-ul) tagged fiware/iotagent-ul. The necessary configuration can be seen below:

```yaml
iot-agent:
  image: quay.io/fiware/iotagent-ul:latest
  hostname: iot-agent
  container_name: fiware-iot-agent
  depends_on:
    - mongo-db
  networks:
    - default
  expose:
    - "4041"
  ports:
    - "4041:4041"
  environment:
    - IOTA_CB_HOST=orion
    - IOTA_CB_PORT=1026
    - IOTA_NORTH_PORT=4041
    - IOTA_REGISTRY_TYPE=mongodb
    - IOTA_LOG_LEVEL=DEBUG
    - IOTA_TIMESTAMP=true
    - IOTA_CB_NGSI_VERSION=v2
    - IOTA_AUTOCAST=true
    - IOTA_MONGO_HOST=mongo-db
    - IOTA_MONGO_PORT=27017
    - IOTA_MONGO_DB=iotagentul
    - IOTA_PROVIDER_URL=http://iot-agent:4041
    - IOTA_MQTT_HOST=mosquitto
    - IOTA_MQTT_PORT=1883
```

The IoT Agent container relies on the presence of the Orion context broker and uses a MongoDB database to hold device information such as device URLs and keys. The container is listening on a single port: port 4041 is exposed purely for tutorial access, so that cUrl or Postman can make provisioning commands without being part of the same network.

The IoT Agent container is driven by environment variables as shown:

| key | value | description |
| --- | --- | --- |
| IOTA_CB_HOST | orion | hostname of the context broker to update context |
| IOTA_CB_PORT | 1026 | port that the context broker listens on to update context |
| IOTA_NORTH_PORT | 4041 | port used for configuring the IoT Agent and receiving context updates from the context broker |
| IOTA_REGISTRY_TYPE | mongodb | whether to hold IoT device info in memory or in a database |
| IOTA_LOG_LEVEL | DEBUG | the log level of the IoT Agent |
| IOTA_TIMESTAMP | true | whether to supply timestamp information with each measurement received from attached devices |
| IOTA_CB_NGSI_VERSION | v2 | whether to use NGSI-v2 when sending updates for active attributes |
| IOTA_AUTOCAST | true | ensure UltraLight number values are read as numbers, not strings |
| IOTA_MONGO_HOST | context-db | the hostname of MongoDB, used for holding device information |
| IOTA_MONGO_PORT | 27017 | the port MongoDB is listening on |
| IOTA_MONGO_DB | iotagentul | the name of the database used in MongoDB |
| IOTA_PROVIDER_URL | http://iot-agent:4041 | URL passed to the context broker when commands are registered, used as a forwarding URL location when the context broker issues a command to a device |
| IOTA_MQTT_HOST | mosquitto | the hostname of the MQTT broker |
| IOTA_MQTT_PORT | 1883 | the port the MQTT broker is listening on to receive topics |

As you can see, use of the MQTT transport is driven by only two environment variables: IOTA_MQTT_HOST and IOTA_MQTT_PORT.
# Prerequisites

## Docker and Docker Compose

To keep things simple, all components will be run using [Docker](https://www.docker.com). Docker is a container technology which allows different components to be isolated into their respective environments.

- To install Docker on Windows, follow the instructions [here](https://docs.docker.com/docker-for-windows/).
- To install Docker on Mac, follow the instructions [here](https://docs.docker.com/docker-for-mac/).
- To install Docker on Linux, follow the instructions [here](https://docs.docker.com/install/).

Docker Compose is a tool for defining and running multi-container Docker applications. A [YAML file](https://raw.githubusercontent.com/FIWARE/tutorials.IoT-over-MQTT/master/docker-compose.yml) is used to configure the required services for the application. This means all container services can be brought up in a single command. Docker Compose is installed by default as part of Docker for Windows and Docker for Mac; however, Linux users will need to follow the instructions found [here](https://docs.docker.com/compose/install/).

You can check your current Docker and Docker Compose versions using the following commands:

```console
docker-compose -v
docker version
```

Please ensure that you are using Docker version 20.10 or higher and Docker Compose 1.29 or higher, and upgrade if necessary.

## Cygwin for Windows

We will start up our services using a simple bash script. Windows users should download [Cygwin](http://www.cygwin.com/) to provide command-line functionality similar to a Linux distribution on Windows.

# Start Up

Before you start, you should ensure that you have obtained or built the necessary Docker images locally. Please clone the repository and create the necessary images by running the commands as shown:

```console
git clone https://github.com/FIWARE/tutorials.IoT-over-MQTT.git
cd tutorials.IoT-over-MQTT
git checkout NGSI-v2

./services create
```

Thereafter, all services can be initialized from the command line by running the [services](https://github.com/FIWARE/tutorials.IoT-over-MQTT/blob/NGSI-v2/services) bash script provided within the repository:

```console
./services start
```

> **Note:** if you want to clean up and start over again, you can do so with the following command:
>
> ```console
> ./services stop
> ```
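Before proceeding, it is worth confirming that every container survived start-up. A minimal sketch using only standard docker commands (the container names are the ones defined in the tutorial's `docker-compose.yml`):

```console
# List the running containers with their state; all services should report "Up"
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```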
# Provisioning an IoT Agent (UltraLight over MQTT)

To follow the tutorial correctly, please ensure you have the device monitor page available in your browser, and click on the page to enable audio before you enter any cUrl commands. The device monitor displays the current state of an array of dummy devices using UltraLight 2.0 syntax.

**Device Monitor:** the device monitor can be found at `http://localhost:3000/device/monitor`.

## Checking Mosquitto Health

We will start by mimicking the roles of both the IoT Agent and a dummy IoT device, and send and receive some messages using MQTT. This section of the tutorial requires several open terminals.

### Start an MQTT Subscriber (1st terminal)

Eventually, once we have wired up the system correctly, the IoT Agent will subscribe to all relevant events to listen for northbound traffic in the form of sensor measurements. It will therefore need to make a subscription across all topics. Similarly, an actuator must subscribe to a single topic to receive events which affect itself when commands are sent southbound. To check that the lines of communication are open, we can subscribe to a given topic and see that we are able to receive something when a message is published.

Open a **new terminal** and create a new running `mqtt-subscriber` Docker container as follows:

```console
docker run -it --rm --name mqtt-subscriber --network fiware_default \
  efrecon/mqtt-client sub -h mosquitto -t "/#"
```

The terminal will then be ready to receive events.

> **Note:** there is no change on-screen whilst running this command — the on-screen output will only respond once you have completed the next step.

### Start an MQTT Publisher (2nd terminal)

A sensor sending northbound measurements will publish those measurements to the MQTT broker, to be passed on to any subscriber that wants them. The sensor will not need to make a connection to the subscriber directly.

Open a **new terminal** and run an `mqtt-publisher` Docker container to send a message as follows:

```console
docker run -it --rm --name mqtt-publisher --network fiware_default \
  efrecon/mqtt-client pub -h mosquitto -m "Hello world" -t "/test"
```

#### Result (1st terminal)

If the MQTT broker is functioning correctly, the message should be received in the other terminal:

```
Hello world
```

### Stop an MQTT Subscriber (2nd terminal)

To terminate the MQTT subscriber, run the following Docker command:

```console
docker stop mqtt-subscriber
```

### Show Mosquitto Log

To show that the communication occurred via the MQTT broker, we can inspect the log of the `mosquitto` Docker container as shown:

```console
docker logs --tail 10 mosquitto
```

#### Result

```
1529661883: New client connected from 172.18.0.5 as mqttjs_8761e518 (c1, k0).
1529662472: New connection from 172.18.0.7 on port 1883.
1529662472: New client connected from 172.18.0.7 as mosqpub|1-5637527c63c1 (c1, k60).
1529662472: Client mosqpub|1-5637527c63c1 disconnected.
1529662614: New connection from 172.18.0.7 on port 1883.
1529662614: New client connected from 172.18.0.7 as mosqsub|1-64b27d675f58 (c1, k60).
1529662623: New connection from 172.18.0.8 on port 1883.
1529662623: New client connected from 172.18.0.8 as mosqpub|1-ef03e74b0270 (c1, k60).
1529662623: Client mosqpub|1-ef03e74b0270 disconnected.
1529667841: Socket error on client mosqsub|1-64b27d675f58, disconnecting.
```

## Checking the IoT Agent Service Health

You can check if the IoT Agent is running by making an HTTP request to the exposed port:

#### Request 1:

```console
curl -X GET 'http://localhost:4041/iot/about'
```

The response will look similar to the following:

```json
{
  "libVersion": "2.6.0-next",
  "port": "4041",
  "baseRoot": "/",
  "version": "1.6.0-next"
}
```

**What if I get a `Failed to connect to localhost port 4041: Connection refused` response?**

If you get a `Connection refused` response, the IoT Agent cannot be found where expected for this tutorial — you will need to substitute the URL and port in each cUrl command with the corrected IP address. All the cUrl commands in this tutorial assume that the IoT Agent is available on `localhost:4041`. Try the following remedies:

- To check that the Docker containers are running, try the following:

```console
docker ps
```

You should see four containers running. If the IoT Agent is not running, you can restart the containers as necessary. This command will also display open port information.

- If you have installed [`docker-machine`](https://docs.docker.com/machine/) and [Virtual Box](https://www.virtualbox.org/), the context broker, IoT Agent and dummy device Docker containers may be running from another IP address — you will need to retrieve the virtual host IP as shown:

```console
curl -X GET "http://$(docker-machine ip default):4041/version"
```

- Alternatively, run all your cUrl commands from within the container network:

```console
docker run --network fiware_default --rm appropriate/curl -s \
  -X GET 'http://iot-agent:4041/iot/about'
```
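Once the agent responds correctly, the health check is easy to script rather than eyeball — a sketch assuming the `jq` utility is installed on the host:

```console
# Exit non-zero if the IoT Agent is unreachable; otherwise print its version
curl -s -f 'http://localhost:4041/iot/about' | jq -r .version
```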
## Connecting IoT Devices

The IoT Agent acts as a middleware between the IoT devices and the context broker. It therefore needs to be able to create context data entities with unique IDs. Once a service has been provisioned and an unknown device makes a measurement, the IoT Agent adds this to the context using the supplied `<device_id>` (unless the device is recognized and can be mapped to a known ID).

There is no guarantee that every supplied IoT device `<device_id>` will always be unique. Therefore all provisioning requests to the IoT Agent require two mandatory headers:

- The `fiware-service` header is defined so that entities for a given service can be held in a separate MongoDB database.
- The `fiware-servicepath` can be used to differentiate between arrays of devices.

For example, within a smart city application you would expect different `fiware-service` headers for different departments (e.g. parks, transport, refuse collection, etc.), and each `fiware-servicepath` would refer to a specific park, and so on. This would mean that data and devices for each service can be identified and separated as needed, but the data would not be siloed — for example, data from a **Smart Bin** within a park can be combined with the **GPS Unit** of a refuse truck to alter the route of the truck in an efficient manner.

The Smart Bin and GPS Unit are likely to come from different manufacturers, and it cannot be guaranteed that there is no overlap within the `<device_id>`s used. The use of the `fiware-service` and `fiware-servicepath` headers can ensure that this is always the case, and allows the context broker to identify the original source of the context data.

### Provisioning a Service Group for MQTT

Invoking group provision is always the first step in connecting devices. For MQTT communication, provisioning supplies the authentication key so the IoT Agent will know which **topic** it must subscribe to. It is possible to set up default commands and attributes for all devices as well, but this is not done within this tutorial, as we will be provisioning each device separately.

This example provisions an anonymous group of devices. It tells the IoT Agent that a series of devices will be communicating by sending device measures over the `/ul/4jggokgpepnvsb2uv4s40d59ov` topic.

> **Note:** measures and commands are sent over different MQTT topics:
>
> - Measures are sent on the `/<protocol>/<api-key>/<device-id>/attrs` topic.
> - Commands are sent on the `/<api-key>/<device-id>/cmd` topic.
>
> The reasoning behind this is that when sending measures northbound (from device to IoT Agent), it is necessary to explicitly identify which IoT Agent is needed to parse the data. This is done by prefixing the relevant MQTT topic with a protocol; otherwise there is no way to define which agent is processing the measure. This mechanism allows smart systems to connect different devices to different IoT Agents according to need.
>
> For southbound commands, this distinction is unnecessary, since the correct IoT Agent has already registered itself for the command during the device provisioning step, and the device will always receive commands in an appropriate format.

The `resource` attribute is left blank, since HTTP communication is not being used. The URL location of `cbroker` is an optional attribute — if it is not provided, the IoT Agent uses the default context broker URL as defined in the configuration file; however, it has been added here for completeness.

#### Request 2:

```console
curl -iX POST 'http://localhost:4041/iot/services' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "services": [
    {
      "apikey": "4jggokgpepnvsb2uv4s40d59ov",
      "cbroker": "http://orion:1026",
      "entity_type": "Thing",
      "resource": ""
    }
  ]
}'
```
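You can verify that the group was stored by reading it back — a sketch that assumes the IoT Agent's standard provisioning API also exposes GET on the same `/iot/services` endpoint (the same two FIWARE headers are mandatory):

```console
# Read back the provisioned service group
curl -X GET 'http://localhost:4041/iot/services' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /'
```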
### Provisioning a Sensor

It is common good practice to use URNs following the [NGSI-LD specification](https://www.etsi.org/deliver/etsi_gs/CIM/001_099/009/01.07.01_60/gs_CIM009v010701p.pdf) when creating entities. Furthermore, it is easier to understand meaningful names when defining data attributes. These mappings can be defined by provisioning a device individually.

Three types of measurement attributes can be provisioned:

- `attributes` are active readings from the device.
- `lazy` attributes are only sent on request — the IoT Agent will inform the device to return the measurement.
- `static_attributes` are, as the name suggests, static data about the device (such as relationships) passed on to the context broker.

> **Note:** in the case where individual IDs are not required, or aggregated data is sufficient, the `attributes` can be defined within the provisioning service rather than individually.

#### Request 3:

```console
curl -iX POST 'http://localhost:4041/iot/devices' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "devices": [
    {
      "device_id": "motion001",
      "entity_name": "urn:ngsi-ld:Motion:001",
      "entity_type": "Motion",
      "protocol": "PDI-IoTA-UltraLight",
      "transport": "MQTT",
      "timezone": "Europe/Berlin",
      "attributes": [
        { "object_id": "c", "name": "count", "type": "Integer" }
      ],
      "static_attributes": [
        { "name": "refStore", "type": "Relationship", "value": "urn:ngsi-ld:Store:001" }
      ]
    }
  ]
}'
```

In the request we are associating the device `motion001` with the URN `urn:ngsi-ld:Motion:001` and mapping the device reading `c` with the context attribute `count` (which is defined as an `Integer`). A `refStore` is defined as a `static_attribute`, placing the device within **Store** `urn:ngsi-ld:Store:001`.

The addition of the `transport=MQTT` attribute in the body of the request is sufficient to tell the IoT Agent that it should subscribe to the `/<api-key>/<device-id>` topic to receive measurements.

You can simulate a dummy IoT device measurement coming from the **Motion Sensor** device `motion001` by posting an MQTT message to the following **topic**.

#### Request 4 (MQTT):

```console
docker run -it --rm --name mqtt-publisher --network fiware_default \
  efrecon/mqtt-client pub -h mosquitto -m "c|1" \
  -t "/ul/4jggokgpepnvsb2uv4s40d59ov/motion001/attrs"
```

- The value of the `-m` parameter defines the message. This is in UltraLight syntax.
- The value of the `-t` parameter defines the topic.

The topic must be in the following form:

```
/<protocol>/<api-key>/<device-id>/attrs
```

> **Note:** in the [previous tutorial](https://github.com/FIWARE/tutorials.IoT-Agent), when testing HTTP connectivity between the Motion Sensor and an IoT Agent, a similar dummy HTTP request was sent to update the `count` value. This time the IoT Agent is configured to listen to MQTT topics, and we need to post a dummy message to an MQTT topic.

When using the MQTT transport protocol, the IoT Agent is subscribing to the MQTT topics, and the device monitor will be configured to display all MQTT messages sent to each topic — effectively it is showing the list of messages received and sent by Mosquitto.

With the IoT Agent connected via MQTT, the service group has defined the topic which the agent is subscribed to. Since the `api-key` matches the root of the topic, the MQTT message from the **Motion Sensor** is passed to the IoT Agent, which has previously subscribed. Because we have specifically provisioned the device (`motion001`), the IoT Agent is able to map attributes before raising a request with the Orion Context Broker.
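To generate a small stream of readings rather than a single one — a sketch reusing the same `efrecon/mqtt-client` image, with a hypothetical incrementing count as the payload:

```console
# Publish five UltraLight measurements, one per second, with an increasing count
for i in 1 2 3 4 5; do
  docker run --rm --network fiware_default efrecon/mqtt-client \
    pub -h mosquitto -m "c|$i" -t "/ul/4jggokgpepnvsb2uv4s40d59ov/motion001/attrs"
  sleep 1
done
```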
You can see that a measurement has been recorded by retrieving the entity data from the context broker. Don't forget to add the `fiware-service` and `fiware-servicepath` headers.

#### Request 5:

```console
curl -X GET 'http://localhost:1026/v2/entities/urn:ngsi-ld:Motion:001?type=Motion' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /'
```

#### Response:

```json
{
  "id": "urn:ngsi-ld:Motion:001",
  "type": "Motion",
  "TimeInstant": {
    "type": "ISO8601",
    "value": "2018-05-25T10:51:32.00Z",
    "metadata": {}
  },
  "count": {
    "type": "Integer",
    "value": "1",
    "metadata": {
      "TimeInstant": {
        "type": "ISO8601",
        "value": "2018-05-25T10:51:32.646Z"
      }
    }
  }
}
```

The response shows that the **Motion Sensor** device with `id=motion001` has been successfully identified by the IoT Agent and mapped to the entity `id=urn:ngsi-ld:Motion:001`. This new entity has been created within the context data. The `c` attribute from the dummy device measurement request has been mapped to the more meaningful `count` attribute within the context. As you will notice, a `TimeInstant` attribute has been added to both the entity and the metadata of the attribute — this represents the last time the entity and attribute have been updated, and is automatically added to each new entity because the `IOTA_TIMESTAMP` environment variable was set when the IoT Agent was started up.

### Provisioning an Actuator

Provisioning an actuator is similar to provisioning a sensor. The `transport=MQTT` attribute defines the communications protocol to be used. For MQTT communications, the `endpoint` attribute is not required, as there is no HTTP URL where the device is listening for commands — the array of commands is mapped directly to messages sent to the `/<api-key>/<device-id>/cmd` topic. The `commands` array includes a list of each command that can be invoked.

The example below provisions a bell with the `deviceId=bell001`.

#### Request 6:

```console
curl -iX POST 'http://localhost:4041/iot/devices' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "devices": [
    {
      "device_id": "bell001",
      "entity_name": "urn:ngsi-ld:Bell:001",
      "entity_type": "Bell",
      "protocol": "PDI-IoTA-UltraLight",
      "transport": "MQTT",
      "apikey": "4jggokgpepnvsb2uv4s40d59ov",
      "commands": [
        { "name": "ring", "type": "command" }
      ],
      "static_attributes": [
        { "name": "refStore", "type": "Relationship", "value": "urn:ngsi-ld:Store:001" }
      ]
    }
  ]
}'
```

Before we wire up the context broker, we can test that a command can be sent from the IoT Agent to a device by making a REST request directly to the IoT Agent's North Port, using the `/v2/op/update` endpoint. It is this endpoint that will eventually be invoked by the context broker once we have connected it up. To test the configuration, you can run the command directly as shown:

#### Request 7:

```console
curl -iX POST 'http://localhost:4041/v2/op/update' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "actionType": "update",
  "entities": [
    {
      "type": "Bell",
      "id": "urn:ngsi-ld:Bell:001",
      "ring": {
        "type": "command",
        "value": ""
      }
    }
  ]
}'
```

If you are viewing the device monitor page, you can also see the state of the bell change:

![Bell ringing](https://fiware.github.io/tutorials.IoT-over-MQTT/img/bell-ring.gif)

The result of the command to ring the bell can be read by querying the entity within the Orion Context Broker.

#### Request 8:

```console
curl -X GET 'http://localhost:1026/v2/entities/urn:ngsi-ld:Bell:001?type=Bell&options=keyValues' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /'
```

#### Response:

```json
{
  "id": "urn:ngsi-ld:Bell:001",
  "type": "Bell",
  "TimeInstant": "2018-05-25T20:06:28.00Z",
  "refStore": "urn:ngsi-ld:Store:001",
  "ring_info": " ring OK",
  "ring_status": "OK",
  "ring": ""
}
```

The `TimeInstant` shows the last time any command associated with the entity has been invoked. The result of the `ring` command can be seen in the value of the `ring_info` attribute.
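To see the raw UltraLight command arriving on the device side, you can eavesdrop on the command topic — a sketch using the same MQTT client image; per the topic convention described above, southbound commands for this device travel on `/<api-key>/<device-id>/cmd`:

```console
# Watch the bell's command topic while invoking ring from another terminal
docker run -it --rm --name mqtt-cmd-watcher --network fiware_default \
  efrecon/mqtt-client sub -h mosquitto -t "/4jggokgpepnvsb2uv4s40d59ov/bell001/cmd" -v
```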
### Provisioning a Smart Door

Provisioning a device which offers both commands and measurements is merely a matter of making an HTTP POST request with both `attributes` and `command` attributes in the body of the request. Once again, the `transport=MQTT` attribute defines the communications protocol to be used, and no `endpoint` attribute is required, as there is no HTTP URL where the device is listening for commands.

#### Request 9:

```console
curl -iX POST 'http://localhost:4041/iot/devices' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "devices": [
    {
      "device_id": "door001",
      "entity_name": "urn:ngsi-ld:Door:001",
      "entity_type": "Door",
      "protocol": "PDI-IoTA-UltraLight",
      "transport": "MQTT",
      "apikey": "4jggokgpepnvsb2uv4s40d59ov",
      "commands": [
        { "name": "unlock", "type": "command" },
        { "name": "open", "type": "command" },
        { "name": "close", "type": "command" },
        { "name": "lock", "type": "command" }
      ],
      "attributes": [
        { "object_id": "s", "name": "state", "type": "Text" }
      ],
      "static_attributes": [
        { "name": "refStore", "type": "Relationship", "value": "urn:ngsi-ld:Store:001" }
      ]
    }
  ]
}'
```

### Provisioning a Smart Lamp

Similarly, a **Smart Lamp** with two commands (`on` and `off`) and two attributes can be provisioned as follows:

#### Request 10:

```console
curl -iX POST 'http://localhost:4041/iot/devices' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "devices": [
    {
      "device_id": "lamp001",
      "entity_name": "urn:ngsi-ld:Lamp:001",
      "entity_type": "Lamp",
      "protocol": "PDI-IoTA-UltraLight",
      "transport": "MQTT",
      "apikey": "4jggokgpepnvsb2uv4s40d59ov",
      "commands": [
        { "name": "on", "type": "command" },
        { "name": "off", "type": "command" }
      ],
      "attributes": [
        { "object_id": "s", "name": "state", "type": "Text" },
        { "object_id": "l", "name": "luminosity", "type": "Integer" }
      ],
      "static_attributes": [
        { "name": "refStore", "type": "Relationship", "value": "urn:ngsi-ld:Store:001" }
      ]
    }
  ]
}'
```

The full list of provisioned devices can be obtained by making a GET request to the `/iot/devices` endpoint.

#### Request 11:

```console
curl -X GET 'http://localhost:4041/iot/devices' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /'
```

## Enabling Context Broker Commands

Having connected up the IoT Agent to the IoT devices, the Orion Context Broker was informed that the commands are now available. In other words, the IoT Agent registered itself as a [Context Provider](https://github.com/FIWARE/tutorials.Context-Providers) for the command attributes.

Once the commands have been registered, it will be possible to ring the **Bell**, open and close the **Smart Door**, and switch the **Smart Lamp** on and off by sending requests to the Orion Context Broker, rather than sending UltraLight 2.0 requests directly to the IoT devices as we did in the [previous tutorial](https://github.com/FIWARE/tutorials.IoT-Sensors/tree/NGSI-v2).

All the communications leaving and arriving at the North Port of the IoT Agent use the standard NGSI syntax. The transport protocol used between the IoT devices and the IoT Agent is irrelevant to this layer of communication. Effectively, the IoT Agent is offering a simplified facade pattern of well-known endpoints to actuate any device. Therefore, this section on registering and invoking commands duplicates the instructions found in the [previous tutorial](https://github.com/FIWARE/tutorials.IoT-Agent).

### Ringing the Bell

To invoke the `ring` command, the `ring` attribute must be updated in the context.

#### Request 12:

```console
curl -iX PATCH 'http://localhost:1026/v2/entities/urn:ngsi-ld:Bell:001/attrs' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "ring": {
    "type": "command",
    "value": ""
  }
}'
```

If you are viewing the device monitor page, you can also see the state of the bell change:

![Bell ringing](https://fiware.github.io/tutorials.IoT-over-MQTT/img/bell-ring.gif)

### Opening the Smart Door

To invoke the `open` command, the `open` attribute must be updated in the context.

#### Request 13:

```console
curl -iX PATCH 'http://localhost:1026/v2/entities/urn:ngsi-ld:Door:001/attrs' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "open": {
    "type": "command",
    "value": ""
  }
}'
```

### Switching on the Smart Lamp

To switch on the **Smart Lamp**, the `on` attribute must be updated in the context.

#### Request 14:

```console
curl -iX PATCH 'http://localhost:1026/v2/entities/urn:ngsi-ld:Lamp:001/attrs' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "on": {
    "type": "command",
    "value": ""
  }
}'
```
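Switching the lamp off again is directly analogous — a sketch mirroring the request above, updating the `off` attribute instead:

```console
curl -iX PATCH 'http://localhost:1026/v2/entities/urn:ngsi-ld:Lamp:001/attrs' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: openiot' \
  -H 'fiware-servicepath: /' \
  -d '{
  "off": {
    "type": "command",
    "value": ""
  }
}'
```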
# Next Steps

Want to learn how to add more complexity to your application by adding advanced features? You can find out by reading the other [tutorials in this series](https://fiware-tutorials.rtfd.io).

## License

[MIT](LICENSE) © 2018-2023 FIWARE Foundation e.V.

### Footnotes

<a name="footnote1"></a> [Wikipedia: MQTT](https://en.wikipedia.org/wiki/MQTT) — a central communication point, known as the MQTT broker, is in charge of dispatching all messages between services. | tutorial fiware fiware-iot-agents ultralight mqtt mqtt-broker iot iot-agent | server |
torchcv | torchcv a pytorch based framework for deep learning in computer vision misc you2019torchcv author ansheng you and xiangtai li and zhen zhu and yunhai tong title torchcv a pytorch based framework for deep learning in computer vision howpublished url https github com donnyyou torchcv year 2019 this repository provides source code for most deep learning based cv problems we ll do our best to keep this repository up to date if you do find a problem about this repository please raise an issue or submit a pull request diff semantic flow for fast and accurate scene parsing code and models https github com lxtgh sfsegnets implemented papers image classification https github com youansheng torchcv tree master runner cls vgg very deep convolutional networks for large scale image recognition resnet deep residual learning for image recognition densenet densely connected convolutional networks shufflenet an extremely efficient convolutional neural network for mobile devices shufflenet v2 practical guidelines for ecient cnn architecture design partial order pruning for best speed accuracy trade off in neural architecture search semantic segmentation https github com youansheng torchcv tree master runner seg deeplabv3 rethinking atrous convolution for semantic image segmentation pspnet pyramid scene parsing network denseaspp denseaspp for semantic segmentation in street scenes asymmetric non local neural networks for semantic segmentation semantic flow for fast and accurate scene parsing object detection https github com youansheng torchcv tree master runner det ssd single shot multibox detector faster r cnn towards real time object detection with region proposal networks yolov3 an incremental improvement fpn feature pyramid networks for object detection pose estimation https github com youansheng torchcv tree master runner pose cpm convolutional pose machines openpose realtime multi person 2d pose estimation using part affinity fields instance segmentation https github com youansheng torchcv tree master runner seg mask r cnn generative adversarial networks https github com youansheng torchcv tree master runner gan pix2pix image to image translation with conditional adversarial nets cyclegan unpaired image to image translation using cycle consistent adversarial networks quickstart with torchcv now only support python3 x pytorch 1 3 bash pip3 install r requirements txt cd lib exts sh make sh performances with torchcv all the performances showed below fully reimplemented the papers results image classification imagenet center crop test 224x224 model train test top 1 top 5 bs iters scripts resnet50 train val 77 54 93 59 512 30w resnet50 https github com youansheng torchcv blob master scripts cls imagenet run ic res50 imagenet cls sh resnet101 train val 78 94 94 56 512 30w resnet101 https github com youansheng torchcv blob master scripts cls imagenet run ic res101 imagenet cls sh shufflenetv2x0 5 train val 60 90 82 54 1024 40w shufflenetv2x0 5 https github com youansheng torchcv blob master scripts cls imagenet run ic shufflenetv2x0 5 imagenet cls sh shufflenetv2x1 0 train val 69 71 88 91 1024 40w shufflenetv2x1 0 https github com youansheng torchcv blob master scripts cls imagenet run ic shufflenetv2x1 0 imagenet cls sh dfnetv1 train val 70 99 89 68 1024 40w dfnetv1 https github com youansheng torchcv blob master scripts cls imagenet run ic dfnetv1 imagenet cls sh dfnetv2 train val 74 22 91 61 1024 40w dfnetv2 https github com youansheng torchcv blob master scripts cls imagenet run ic dfnetv2 
### Semantic Segmentation

- Cityscapes (single scale, whole image test; base LR 0.01, crop size 769):

| Model | Backbone | Train | Test | mIoU | BS | Iters | Scripts |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PSPNet | [3x3-Res101](https://drive.google.com/open?id=1buzckazlh8elgvywlabbab0b0uiqfgtr) | train | val | 78.20 | 8 | 4W | [PSPNet](https://github.com/youansheng/torchcv/blob/master/scripts/seg/cityscapes/run_fs_pspnet_cityscapes_seg.sh) |
| DeepLabV3 | [3x3-Res101](https://drive.google.com/open?id=1buzckazlh8elgvywlabbab0b0uiqfgtr) | train | val | 79.13 | 8 | 4W | [DeepLabV3](https://github.com/youansheng/torchcv/blob/master/scripts/seg/cityscapes/run_fs_deeplabv3_cityscapes_seg.sh) |

- ADE20K (single scale, whole image test; base LR 0.02, crop size 520):

| Model | Backbone | Train | Test | mIoU | PixelACC | BS | Iters | Scripts |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSPNet | [3x3-Res50](https://drive.google.com/open?id=1zpqlfd9c1yhfkqn5cwbccekmjeeqxswx) | train | val | 41.52 | 80.09 | 16 | 15W | [PSPNet](https://github.com/youansheng/torchcv/blob/master/scripts/seg/ade20k/run_fs_res50_pspnet_ade20k_seg.sh) |
| DeepLabV3 | [3x3-Res50](https://drive.google.com/open?id=1zpqlfd9c1yhfkqn5cwbccekmjeeqxswx) | train | val | 42.16 | 80.36 | 16 | 15W | [DeepLabV3](https://github.com/youansheng/torchcv/blob/master/scripts/seg/ade20k/run_fs_res50_deeplabv3_ade20k_seg.sh) |
| PSPNet | [3x3-Res101](https://drive.google.com/open?id=1buzckazlh8elgvywlabbab0b0uiqfgtr) | train | val | 43.60 | 81.30 | 16 | 15W | [PSPNet](https://github.com/youansheng/torchcv/blob/master/scripts/seg/ade20k/run_fs_res101_pspnet_ade20k_seg.sh) |
| DeepLabV3 | [3x3-Res101](https://drive.google.com/open?id=1buzckazlh8elgvywlabbab0b0uiqfgtr) | train | val | 44.13 | 81.42 | 16 | 15W | [DeepLabV3](https://github.com/youansheng/torchcv/blob/master/scripts/seg/ade20k/run_fs_res101_deeplabv3_ade20k_seg.sh) |

### Object Detection

- Pascal VOC2007/2012 (single-scale test, 20 classes):

| Model | Backbone | Train | Test | mAP | BS | Epochs | Scripts |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [SSD300](https://drive.google.com/open?id=15j5blvyzq7lqceph_q8s2pxim3_f_8lp) | [VGG16](https://drive.google.com/open?id=1nm0uwmqr4lihzmrwvs71jfp_gaekjuky) | 07+12 trainval | 07 test | 0.786 | 32 | 235 | [SSD300](https://github.com/youansheng/torchcv/blob/master/scripts/det/voc/run_ssd300_vgg16_voc_det.sh) |
| [SSD512](https://drive.google.com/open?id=1rf5gnqfiyz_ecsfu1osk7tnux_vrobvw) | [VGG16](https://drive.google.com/open?id=1nm0uwmqr4lihzmrwvs71jfp_gaekjuky) | 07+12 trainval | 07 test | 0.808 | 32 | 235 | [SSD512](https://github.com/youansheng/torchcv/blob/master/scripts/det/voc/run_ssd512_vgg16_voc_det.sh) |
| [Faster R-CNN](https://drive.google.com/open?id=15sfklrii1mcvweq9eaceznk_9sxxsqr4) | [VGG16](https://drive.google.com/open?id=1zl9ss9krzsdqhme8kypq1lha60wx_vcj) | 07 trainval | 07 test | 0.706 | 1 | 15 | [Faster R-CNN](https://github.com/youansheng/torchcv/blob/master/scripts/det/voc/run_fr_vgg16_voc_det.sh) |

### Pose Estimation

- OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields

### Instance Segmentation

- Mask R-CNN

### Generative Adversarial Networks

- Pix2pix
- CycleGAN

## DataSets with TorchCV

TorchCV has defined the dataset format for all the tasks, which you can check in the subdirectories of [`data`](https://github.com/youansheng/torchcv/tree/master/data). Below is an example dataset directory tree for training semantic segmentation. You can preprocess the open datasets with the scripts in the folder [`data/seg/preprocess`](https://github.com/youansheng/torchcv/tree/master/data/seg/preprocess).

```
Dataset
    train
        image
            00001.jpg/png
            00002.jpg/png
            ...
        label
            00001.png
            00002.png
            ...
    val
        image
            00001.jpg/png
            00002.jpg/png
            ...
        label
            00001.png
            00002.png
            ...
```
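A one-liner to scaffold this layout before copying your images in — a minimal sketch using only `mkdir` (the `Dataset` directory name is a placeholder):

```bash
# Create the expected train/val image and label folders under ./Dataset
mkdir -p Dataset/{train,val}/{image,label}
```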
## Commands with TorchCV

Take PSPNet as an example (`TAG` can be any string, including an empty one).

- Training:

```bash
cd scripts/seg/cityscapes
bash run_fs_pspnet_cityscapes_seg.sh train TAG
```

- Resume training:

```bash
cd scripts/seg/cityscapes
bash run_fs_pspnet_cityscapes_seg.sh train TAG
```

- Validating:

```bash
cd scripts/seg/cityscapes
bash run_fs_pspnet_cityscapes_seg.sh val TAG
```

- Testing:

```bash
cd scripts/seg/cityscapes
bash run_fs_pspnet_cityscapes_seg.sh test TAG
```

## Demos with TorchCV

![Example output of VGG19-OpenPose](demo/openpose/samples/000000319721_vis.png)

![Example output of VGG19-OpenPose](demo/openpose/samples/000000475191_vis.png) | ai
|
Book | book issues 475164744 | pdf | server |
TwitterClone | # Project 3 — TwitterClone

**TwitterClone** is a basic Twitter app to read and compose tweets using [the Twitter API](https://apps.twitter.com/).

Time spent: **18** hours spent in total.

## User Stories

The following **required** functionality is completed:

- [x] User sees app icon in home screen and styled launch screen.
- [x] User can sign in using OAuth login flow.
- [x] User can log out.
- [x] User can view the last 20 tweets from their home timeline.
- [x] In the home timeline, user can view a tweet with the user profile picture, username, tweet text, and timestamp.
- [x] User can pull to refresh.
- [x] User can tap the retweet and favorite buttons in a tweet cell to retweet and/or favorite a tweet.
- [x] User can compose a new tweet by tapping on a compose button.
- [x] Using AutoLayout, the tweet cell should adjust its layout for iPhone 11 Pro and SE device sizes, as well as accommodate device rotation.
- [x] User should display the relative timestamp for each tweet ("8m", "7h").
- [x] Tweet details page: user can tap on a tweet to view it, with controls to retweet and favorite.

The following **optional** features are implemented:

- [ ] User can view their profile in a profile tab:
  - Contains the user header view: picture and tagline.
  - Contains a section with the user's basic stats: # tweets, # following, # followers.
  - Profile view should include that user's timeline.
- [x] User should be able to unretweet and unfavorite, and should decrement the retweet and favorite count (refer to the unretweeting guide for help on implementing unretweeting).
- [ ] Links in tweets are clickable.
- [ ] User can tap the profile image in any tweet to see another user's profile:
  - Contains the user header view: picture and tagline.
  - Contains a section with the user's basic stats: # tweets, # following, # followers.
- [x] User can load more tweets once they reach the bottom of the feed, using infinite loading similar to the actual Twitter client.
- [x] When composing, you should have a countdown for the number of characters remaining for the tweet, out of 280 (1 point).
- [x] After creating a new tweet, a user should be able to view it in the timeline immediately, without refetching the timeline from the network.
- [x] User can reply to any tweet; replies should be prefixed with the username, and the reply ID should be set when posting the tweet (2 points).
- [ ] User sees embedded images in a tweet, if available.
- [ ] User can switch between timeline, mentions, or profile view through a tab bar (3 points).
- [ ] Profile page: pulling down the profile page should blur and resize the header image (4 points).

The following **additional** features are implemented:

- [ ] List anything else that you can get done to improve the app functionality!

Please list two areas of the assignment you'd like to discuss further with your peers during the next class (examples include better ways to implement something, how to extend your app in certain ways, etc.):

1. How to implement the clickable links — I didn't quite understand how to translate the video tutorial from Swift to Objective-C.
2. How would one go about making a table view occupy only a percentage of the phone screen?

## Video Walkthrough

Here's a walkthrough of implemented user stories:

![gif1](gif1.gif) ![gif2](gif2.gif) ![gif3](gif3.gif)

## Notes

Getting to connect to the different Twitter APIs was one of the more fun and challenging tasks I had to do for this project. Working with AutoLayout and getting everything to work properly with different phone sizes was definitely the most challenging part of this project.

## Credits

List of 3rd-party libraries, icons, graphics, or other assets used in the app:

- [AFNetworking](https://github.com/AFNetworking/AFNetworking) — networking task library.
- [DateTools](https://github.com/MatthewYork/DateTools) — date formatting library for formatted date strings.
- BOBOAuthManager — library to get OAuth with the Twitter API working.

## License

    Copyright 2021 Sebastián Saldaña

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. | os
whitecat-console | what s the whitecat console the whitecat console is a command line tool that allows the programmer to send and receive files to from lua rtos compatible boards without using an ide how to build 1 go to your go s workspace location for example lua cd gows 1 download and install lua go get github com whitecatboard whitecat console 1 go to the project source root lua cd src github com whitecatboard whitecat console 1 build project lua go build for execute linux osx lua wcc windows lua wcc exe prerequisites please note you need probably to download and install drivers for your board s usb to serial adapter for windows and mac osx versions the gnu linux version usually doesn t need any drivers this drivers are required for connect to your board through a serial port connection board whitecat esp32 n1 https www silabs com products development tools software usb to uart bridge vcp drivers esp32 core https www silabs com products development tools software usb to uart bridge vcp drivers esp32 thing http www ftdichip com drivers vcp htm for linux the currently logged user should have read and write access the susb to serial device on most linux distributions this is done by adding the user to dialout group with the following command sudo usermod a g dialout user usage lua wcc p port ports ls path down source destination up source destination f ffs erase d ports list all available serial ports on your computer p port serial port device for example dev tty slab usbtouart ls path list files present in path down src dst transfer the source file board to destination file computer up src dst transfer the source file computer to destination file board f flash board with last firmware ffs flash board with last filesystem erase erase flash board d show debug messages examples list files in examples directory lua wcc p dev tty slab usbtouart ls examples download system lua file and store it as s lua in your computer lua wcc p dev tty slab usbtouart down system lua s lua upload s lua file and store it as system lua in your board lua wcc p dev tty slab usbtouart up s lua system lua upgrade the board with last available firmware lua wcc p dev tty slab usbtouart f upgrade the board with last available firmware and last available filesystem lua wcc p dev tty slab usbtouart f fs upgrade the board with available filesystem lua wcc p dev tty slab usbtouart fs erase the flash memory lua wcc p dev tty slab usbtouart erase | os |
Subsets and Splits