names: stringlengths, 1 to 98
readmes: stringlengths, 8 to 608k
topics: stringlengths, 0 to 442
labels: stringclasses, 6 values
InteRecAgent
# InteRecAgent

Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations.

This repo is under policy review for now. Please contact <a href="mailto:jialia@microsoft.com">jialia@microsoft.com</a> or <a href="mailto:xuhuangcs@mail.ustc.edu.cn">xuhuangcs@mail.ustc.edu.cn</a> for a preview version of the code.
ai
IOTCar
# How to deploy a Node.js program to Bluemix

1. First of all, you should have a Bluemix account and a Git account.
2. Log in to your account and create a workspace. Click the button on the top right to open your profile; make sure your region is US South. ![step 1](https://raw.githubusercontent.com/christial/iotcar/master/public/images/1.png)
3. Click the "Create Application" button. ![step 2](https://raw.githubusercontent.com/christial/iotcar/master/public/images/2.png)
4. Select "Cloud Foundry App" in the left sidebar and click "SDK for Node.js". ![step 3](https://raw.githubusercontent.com/christial/iotcar/master/public/images/3.png)
5. Name your app; your app name will generate the host name automatically. ![step 4](https://raw.githubusercontent.com/christial/iotcar/master/public/images/4.png)
6. Click the "Create" button at the bottom right. The page will jump to the Getting Started page. ![step 5](https://raw.githubusercontent.com/christial/iotcar/master/public/images/5.png)
7. Select the "Overview" item from the menu on the left, then click the "Enable" button at the bottom right of the page. ![step 6](https://raw.githubusercontent.com/christial/iotcar/master/public/images/6.png)
8. If you are a new user, you will see a welcome page. Click "Get Started" to jump to the next page, select the agreement checkbox, and click "Enable". ![step 8a](https://raw.githubusercontent.com/christial/iotcar/master/public/images/added1.png) ![step 8b](https://raw.githubusercontent.com/christial/iotcar/master/public/images/added2.png)
9. Click "Create toolchain from template" and select "Simple Cloud Foundry toolchain" on the next page. ![step 9](https://raw.githubusercontent.com/christial/iotcar/master/public/images/added3.png)
10. Click the GitHub button to authorize access to GitHub; you'll be navigated to the GitHub website. ![step 10](https://raw.githubusercontent.com/christial/iotcar/master/public/images/added5.png)
11. On GitHub, click the "Authorize application" button to grant access from the Bluemix DevOps service. ![step 11](https://raw.githubusercontent.com/christial/iotcar/master/public/images/added6.png)
12. Type your GitHub password to confirm the operation. ![step 12](https://raw.githubusercontent.com/christial/iotcar/master/public/images/added7.png)
13. After authorization, select "Fork" as the repository type, change the source repository URL to https://github.com/christial/iotcar, and click "Create". ![step 13](https://raw.githubusercontent.com/christial/iotcar/master/public/images/added9.png)
14. Select the second item, "Code", to explore the code in your Git repository. ![step 14](https://raw.githubusercontent.com/christial/iotcar/master/public/images/8.png)
15. The code is missing some key pieces for sending data with Socket.IO, which you need to fill in. If you are not comfortable coding, open the `document` folder to view and copy the answers into the code. ![step 15](https://raw.githubusercontent.com/christial/iotcar/master/public/images/10.png)
    1. Open `index.html` under the `public` folder. Lines 39, 50, 61 and 72 need a common method, `socket.emit`: the first parameter declares the data type, "string", and the second parameter defines the data, "w", "s", "a" or "d" (up, down, left, right). ![step 15.1](https://raw.githubusercontent.com/christial/iotcar/master/public/images/11.png)
    2. Open `app.js` in the root directory and fill in a method at line 35: the first parameter declares the data type, "message", and the second parameter defines the data, the parameter `key` of the callback function. ![step 15.2](https://raw.githubusercontent.com/christial/iotcar/master/public/images/12.png)
    3. Open `manifest.yml` in the root directory and change the name and host name to yours. ![step 15.3](https://raw.githubusercontent.com/christial/iotcar/master/public/images/13.png)
16. Go back to the DevOps page (step 9) and click the third item, "Deliver". Click the "Start" button at the top right of the Build Stage field; after the build succeeds, click the "Start" button on the right of the Deploy Stage. ![step 16](https://raw.githubusercontent.com/christial/iotcar/master/public/images/14.png)
17. The last step: after the deploy succeeds, open your Bluemix link (yourname.mybluemix.net) to view your new program. ![step 17](https://raw.githubusercontent.com/christial/iotcar/master/public/images/15.png) Congratulations, you have successfully deployed a Node.js program to Bluemix!
server
RTuinOS
# What is RTuinOS?

RTuinOS is a small-scale real-time operating system for Arduino. The main documentation is found as [doc/manual/rtuinos-1.0-userguide.pdf](https://github.com/sudar/rtuinos/tree/master/doc/manual/rtuinos-1.0-userguide.pdf). The hints given here are just a brief summary of what's stated there.

This distribution has been compiled for Arduino 1.0.5 under Windows, Linux, or Mac OS. Only the Mega 2560 board is supported just like that; the user guide, the source code, and compiler error directives give hints on how to modify the code for other Arduino boards.

The Arduino environment is available at www.arduino.cc. It needs to be installed and operational before using RTuinOS. Ensure that the original Arduino IDE and the sample sketches work well.

To install RTuinOS, extract the files in the archive to a directory of your choice. The target location must grant read access to all files and write access to all directories, and the path to the target location must not contain any blanks.

The link between the RTuinOS build environment (GNU make based) and the Arduino installation is made by means of a new environment variable: you need to create the variable ARDUINO_HOME prior to opening any shell window. ARDUINO_HOME holds the path to the Arduino installation, like C:\ProgramFiles\arduino-1.0.5 under Windows. Caution: no blanks in paths are permitted, and this holds true also for the Arduino installation files. An Arduino installation at e.g. C:\Program Files would make the makefile fail to compile the sources.

The GNU make tool, which is shipped with Arduino, needs to be on the operating system's search path. Extending the search path accordingly is not part of the Arduino standard installation procedure; you will probably still have to do this. Type `make --version` to find out: GNU Make 3.81 should start up. Be aware that revision 3.80 or older of make is not sufficient.

# What's new in release 1.0

- The makefile has been revised:
  - Different operating systems (Windows, Linux, and Mac OS) are now supported.
  - Path conventions are now obeyed: usage of forward slashes and a trailing slash in path names.
  - Tools are addressed by absolute paths to avoid conflicts with an improperly set PATH variable.
  - The build of different test cases has been decoupled; each one now has its own build folder. A clean is no longer necessary when switching the application.
  - The creation of required working directories has been integrated into the build; directory creation is no longer a build rule which has to be called explicitly.
  - The makefile has been split into parts. The configuration part is now separated and clearly recognizable and readable to the user. The invariant parts of the makefile have been hidden in a sub-directory.
  - A kind of callback is made into the application folder: an optional makefile fragment located in the application folder will be included into the build and permits overriding general settings in an application-related fashion.
- Support of Arduino 1.0.5, the current release as of today (31.7.2013). All test cases can be built and run with Arduino 1.0.5.
- Support of mutexes and semaphores. The existing concept of events has been extended: an event can now be of kind "ordinary, broadcasted event" (as before), mutex, or semaphore. Task resume conditions can continue to combine any events, regardless of kind. An extension of the RTuinOS API was not necessary to introduce mutexes and semaphores.
- The API function rtos_setEvent has been renamed to the more adequate rtos_sendEvent. The old name is deprecated but still operational, as a macro maps it onto the new name.
- More assertions have been placed in the kernel for the debug compilation, which anticipate and notify many typical application errors (like an idle task which tries to suspend).
- A new test case (see tc14) proves the compatibility of the Arduino LiquidCrystal library with RTuinOS.
- The CPU load estimation is unchanged, but has been moved from a test case folder to the common folder RTOS and is now available to any application just like that.
- The doxygen documentation now includes those test cases which contain instructive sample code of general interest.
os
eventrouter
# Event Router

A C library for event-based inter-task communication in [FreeRTOS](https://www.freertos.org).

## Quickstart

The example applications demonstrate how to define events, initialize the event router, publish events, and subscribe to them. Developers who wish to use the event router must compile `eventrouter.c` with a C11-compatible compiler, add `repo/include` to their list of include paths, and provide an `eventrouter_config.h`.

## Overview

This document models event-based RTOS applications as collections of tasks, each of which contains modules, each of which may generate events and post them to queues. Each task reads events from its queue and delivers them to the modules it contains, based on the event's type.

```mermaid
graph LR
    subgraph task_a[Task A]
        module_a1[Module A1]
        module_a2[Module A2]
        queue_a[Queue]
    end
    queue_a --> module_a1
    queue_a --> module_a2
    module_a1 -- event --> queue_b
    subgraph task_b[Task B]
        module_b1[Module B1]
        queue_b[Queue]
    end
    queue_b --> module_b1
```

Ideally, modules only know about the event types they publish and the event types they want to receive. They should know nothing about the modules that consume the data they produce, and nothing about the modules that produce what they consume. This results in a loosely coupled application.

```mermaid
graph LR
    event_a[Event Type A] -- requires --> module[Module]
    event_b[Event Type B] -- requires --> module
    event_c[Event Type C] -- requires --> module
    module -- provides --> event_d[Event Type D]
```

The event router achieves this by sitting between queues and modules, and between modules and queues, to provide a uniform publisher/subscriber framework. When a module sends an event, the event router delivers it to all modules which subscribe to events of that type and then returns it to the sending module.

```mermaid
graph LR
    subgraph task_a[Task A]
        queue_a[Queue]
        module_a1[Module A1]
        module_a2[Module A2]
        eventrouter_a[Event Router]
        style eventrouter_a color:green
    end
    eventrouter_mid[Event Router]
    style eventrouter_mid color:green
    subgraph task_b[Task B]
        queue_b[Queue]
        eventrouter_b[Event Router]
        style eventrouter_b color:green
        module_b1[Module B1]
    end
    subgraph task_c[Task C]
        queue_c[Queue]
        eventrouter_c[Event Router]
        style eventrouter_c color:green
        module_c1[Module C1]
        module_c2[Module C2]
    end
    queue_a --> eventrouter_a
    eventrouter_a --> module_a1
    eventrouter_a --> module_a2
    module_a1 --> eventrouter_mid
    eventrouter_mid --> queue_b
    queue_b --> eventrouter_b
    eventrouter_b --> module_b1
    eventrouter_mid --> queue_c
    queue_c --> eventrouter_c
    eventrouter_c --> module_c1
    eventrouter_c -- not interested --> module_c2
```

The event router passes events by reference. This means modules may not modify events after sending them, until the event router returns them. In this way, the send/deliver/return flow lets modules perform a primitive form of ownership tracking.
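The send/deliver/return flow described above can be sketched in a few lines. This is a conceptual sketch in JavaScript, not the library's C API; the class and method names here are illustrative assumptions, chosen only to show the ownership-tracking idea.

```javascript
// Conceptual sketch of the send -> deliver -> return flow (NOT the C API).
class EventRouter {
  constructor() {
    this.subscribers = new Map(); // event type -> array of modules
  }

  subscribe(type, module) {
    if (!this.subscribers.has(type)) this.subscribers.set(type, []);
    this.subscribers.get(type).push(module);
  }

  // Deliver the event to every subscriber of its type, then return it to
  // the sender; only after that may the sender modify or reuse the event.
  send(event, sender) {
    for (const m of this.subscribers.get(event.type) ?? []) {
      m.onEvent(event);
    }
    sender.onEventReturned(event);
  }
}

const router = new EventRouter();
const log = [];
router.subscribe('A', { onEvent: (e) => log.push(`B1 got ${e.type}`) });
const moduleA1 = { onEventReturned: (e) => log.push(`A1 owns ${e.type} again`) };
router.send({ type: 'A' }, moduleA1);
console.log(log); // ['B1 got A', 'A1 owns A again']
```

The key design point the sketch mirrors is that `onEventReturned` fires only after all deliveries, which is what makes the by-reference event safe to reuse.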
os
Demand-Skills-for-Data-Scientists
# Demand Skills for Data Scientists

This project is a part of the subject Problem Solving in Information Technology (PSIT), Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang (KMITL). This project is about doing data analysis; we've chosen the title "Demand Skills for Data Scientists". Our main goal is to find out how data scientists who want to be in demand by employers should spend their learning budget.

## Information

- Project site: https://bit.ly/2cgk9ao
- Project presentation video: https://youtu.be/qcdugkqtylu

## Project detail

1. What is a data scientist?
2. Employment opportunities, from web job searches.
3. Salary.
4. General skills in data scientist job listings.
5. What did most data scientists graduate in?
6. Popular languages for data scientists.
7. What does a data scientist do?

## Overall operation

If you are interested in becoming a data scientist, you should study business, computing, analysis, statistics, computer science, machine learning, and more, and you should study the Python, R, and SQL languages, among others, to be a good data scientist and get a good job.

<img src="img/screenshot-8.png">

## Statistics

- Project started: 2 November 2018
- Completed: 17 December 2018
- Project status: completed
- Main language: Python
- Python modules: pygal, pandas, numpy

## Group members

<img src="img/member-1.png" width="120px" height="120px"> <img src="img/member-4.jpg" width="120px" height="120px"> <img src="img/member-3.png" width="120px" height="120px"> <img src="img/member-2.png" width="120px" height="120px">

- [kuroishi](https://github.com/kuroishi1221): Ratchanon Chumbunyeanyong, 61070182
- [toplordsaito](https://github.com/toplordsaito): Waruwat Chaidit, 61070214
- [chokcolate](https://github.com/chokcolate): Terawat Kanjanapanwong, 61070093
- [tanjry](https://github.com/tanjry): Jharinya Jaipakdee, 61070021

## Credits

- https://www.emolument.com/salary-reports/jobs/data-science
- EIC Analysis, Economic Intelligence Center (EIC)
- https://www.kaggle.com/kaggle/kaggle-survey-2018
- https://www.kaggle.com/discdiver
server
TODO-App
# Kanban (Web Engineering I project)

## Authors

- Manuel Franz (https://github.com/manuel-f-04)
- Janik Piehler (https://github.com/janikpiehler)
- Lukas Schulz (https://github.com/lukas-ms)
- Marius Würfel (https://github.com/raboro)

![Overview](assets/overview%20main%20page.png)

## Instruction guide

1. Install Node.js (https://nodejs.org/en/download).
2. Clone the code.
3. Add a `.env` file which contains all secret data about the server connection etc.
4. After getting the source code, load all dependencies with:
   ```bash
   npm install
   ```
5. To run the code (server):
   ```bash
   npm run dev
   ```

## Project structure

- .github/workflows
- node_modules
- src
  - controllers
  - middleware
  - public
  - routes
  - services
  - sql
  - app.js
- .env
- .eslintrc.json
- .gitignore
- package.json
- package-lock.json
- README.md

## Technical report

### Major contribution of each team member

- Manuel Franz: database setup commands, backend database command integration, frontend sign-in and sign-up pages
- Janik Piehler: frontend sign-in and sign-up pages, main page, draggable add-task window, loading tasks when logging in
- Lukas Schulz: frontend add/edit task window, drag-and-drop function for all tasks, design, frontend-backend connection for edit and delete buttons, database hosting
- Marius Würfel: backend routing structure, dependency setup, frontend add/delete/edit functionality

### Introduction of the website: what is it for?

Kanban is a website designed to help individuals and teams organize their work using the Kanban methodology. Kanban is a lean approach to project management that originated in the Toyota Production System and is now widely adopted across various industries. The Kanban website is specifically designed to simplify and streamline task management, making it easy for users to visualize their workflow, prioritize tasks, and track progress.

The Kanban project offers a user-friendly interface that allows users to create and manage task cards, which represent individual work items. The application's intuitive drag-and-drop functionality makes it easy to move cards between columns that represent different stages of the workflow, from "to do" to "in progress" to "done". Users can also add due dates and work simultaneously with different devices on one Kanban board.

### Structure of the website (navigation, routing structure)

The application contains three pages: sign-in, sign-up, and the main page. All the routes can be assigned to one of them:

- signin (GET): gives access to the signin HTML page
- signup (GET): gives access to the signup HTML page
- mainpage (GET): gives access to the todo HTML page
- user/signin (POST): the user sends their login data and, if it is valid, gets a JWT token for authentication
- user/signup (POST): the user sends their user data, which is added to the database after validation; they also get a JWT token for authentication
- user/logout (GET): the token gets removed
- task/getalltasks (GET): fetches and sends all tasks, grouped by category
- task/addtask (POST): receives all task data and adds it to the database, also returning the task's id to the frontend
- task/deletetask (POST): deletes a task, using the sent id as a reference
- changecategoryoftask (POST): changes the category of a task by id; used for the drag-and-drop feature

Everything related to tasks and user logout is located on the main page; the other routes are related to user sign-in or sign-up. To authenticate the user, a JWT token is generated which is unique per user. This token gets destroyed if the user logs out or closes the browser. If there is no valid token, the user gets redirected to sign-in.

### Abstract layout of the web pages

The sign-in and sign-up pages are built with a simple div that contains the heading, labels for user input, and a submit button. The main page is built with a flexbox containing three items, each representing a category. Each category contains a heading, a button to add tasks, and a grid which contains the tasks. Every task has a heading, date, content, and two buttons for edit and delete.

### Key functions implemented by the back-end program

The application uses a database. This makes it possible to access the entered data despite a server restart. The database stores the different users and their tasks, which are needed for the website. For example, during login and registration the user data is accessed, uniquely identified by the e-mail address. When the main page is loaded, the corresponding tasks are loaded, which are unique by an individual id. In case of additional interaction on the main page, the task data is directly adjusted in the database, in order to have it available again for the next login.
express html-css-javascript mysql nodejs fullstack web
server
maxmertkit
# Maxmertkit

http://maxmert.com (not supported anymore)

[![Join the chat at https://gitter.im/maxmert/maxmertkit](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/maxmert/maxmertkit)
[![Build Status](https://travis-ci.org/maxmert/maxmertkit.svg?branch=master)](https://travis-ci.org/maxmert/maxmertkit)

Maxmertkit is a powerful, highly customizable, and easy-to-use mobile-first front-end framework for web development, created by [Vetrenko Maxim](http://twitter.com/vmaxmert) and maintained by the [core team](https://github.com/maxmert) with the support and involvement of the community. To get started, check out http://maxmert.com.

## Table of contents

- [Quick start](#quick-start)
- [Bugs and feature requests](#bugs-and-feature-requests)
- [Documentation](#documentation)
- [Compiling CSS and JavaScript](#compiling-css-and-javascript)
- [Contributing](#contributing)
- [Community](#community)
- [Versioning](#versioning)
- [Author](#author)
- [Copyright and license](#copyright-and-license)

## Quick start

Three quick start options are available:

- [Download the latest release](https://github.com/maxmert/maxmertkit/releases/latest).
- Clone the repo: `git clone https://github.com/maxmert/maxmertkit.git`.
- Install with [Bower](http://bower.io): `bower install maxmertkit`.

Read the [start page](http://maxmert.com/start) for information on the framework contents, how-to videos, examples, and more.

### What's included

Within the download you'll find the following directories and files, logically grouping common assets and providing both compiled and minified variations.

## Bugs, errors, and feature requests

Have a bug, a text error, or a feature request? Please first read the [issue guidelines](https://github.com/maxmert/maxmertkit/blob/master/CONTRIBUTING.md) and search for existing and closed issues in the issue tracker. If your problem or idea is not addressed yet, please [open a new issue](https://github.com/maxmert/maxmertkit/issues/new).

## Documentation

Maxmertkit's documentation, included in this repo in the root directory, runs on [Node.js](http://nodejs.org). The docs may be run locally.

### Running documentation locally

Go to [maxmert.com/start](http://maxmert.com/start) and watch the how-to video.

1. If necessary, install [Node.js](http://nodejs.org), [npm](http://npmjs.org), and [Bower](http://bower.io).
2. From the root /maxmertkit directory, run `npm install` in the command line.
3. Run `bower install` in the command line.
4. From the /docs directory, run `npm install` in the command line.
5. From the /docs directory, run `bower install` in the command line.
6. From the root /maxmertkit directory, run `gulp` in the command line.
7. Open http://localhost:3333 in your browser.

### Documentation for previous releases

Documentation for v0.0.2 has been made available for the time being at http://old.maxmert.com while folks transition to Maxmertkit 1.0.0.

## Compiling CSS and JavaScript

Maxmertkit uses [gulp](http://gulpjs.com) with convenient methods for working with the framework. It's how we compile our code, run tests, and more. To use it, install the required dependencies as directed and then run some gulp commands.

### Install gulp

From the command line:

1. Install gulp globally with `npm install -g gulp` (you may need to run it with sudo: `sudo npm install -g gulp`).
2. Read about running documentation locally. When completed, you'll be able to run the various gulp commands provided from the command line.

Unfamiliar with npm? Don't have Node installed? That's okay: npm stands for [Node Packaged Modules](http://npmjs.org) and is a way to manage development dependencies through Node.js. [Download and install Node.js](http://nodejs.org/download) before proceeding.

### Available gulp commands

- Build and watch (development): `gulp`. Run `gulp` to build and run the documentation locally. It will compile [CoffeeScript](http://coffeescript.org) and [Sass](http://sass-lang.com) into /docs and run a [nodemon](https://github.com/remy/nodemon) server on port 3333.
- Only compile CSS and JavaScript (production): `gulp build`. Run `gulp build` to clear the build directory and recompile all CoffeeScript and Sass files, with gzipped and standard versions.
- Tests: `gulp test`.

### Troubleshooting dependencies

Should you encounter problems with installing dependencies or running gulp commands, uninstall all previous dependency versions (global and local), then rerun `npm install` and `bower install` in the root and /docs directories.

## Contributing

Please read through our [contributing guidelines](https://github.com/maxmert/maxmertkit/blob/master/CONTRIBUTING.md). Included are directions for opening issues, coding standards, and notes on development. Moreover, if your pull request contains JavaScript patches or features, please include relevant unit tests. All HTML and CSS should conform to the [Code Guide](http://github.com/mdo/code-guide), maintained by [Mark Otto](http://github.com/mdo).

## Community

Keep track of development and community news:

- Follow [@maxmertkit on Twitter](http://twitter.com/maxmertkit).
- Follow [@vmaxmert on Twitter](http://twitter.com/vmaxmert).
- Implementation help may be found at Stack Overflow ([tagged maxmertkit](http://stackoverflow.com/questions/tagged/maxmertkit)).

## Versioning

For transparency into our release cycle and in striving to maintain backward compatibility, Maxmertkit is maintained under the Semantic Versioning guidelines. Sometimes I screw up, but I'll adhere to these rules whenever possible. Releases will be numbered with the format `major.minor.patch` and constructed with the following guidelines:

- Breaking backward compatibility bumps the major (while resetting minor and patch).
- New additions without breaking backward compatibility bump the minor (while resetting the patch).
- Bug fixes and misc. changes bump only the patch.

For more information on SemVer, please visit http://semver.org.

## Author

Vetrenko Maxim: http://twitter.com/vmaxmert, http://github.com/maxmert, http://facebook.com/vetrenko.maxim

## Copyright and license

Code and documentation copyright 2012-2014 Maxmert. Code released under the MIT license; docs released under Creative Commons.
front_end
altschool-cloud-exercises
# AltSchool Cloud Exercises

Exercises for cloud engineering, second semester.
cloud
Chat-Server
# Chat app development

Front end and back end, using Flutter, Socket.IO, and Node.js.

<a href="https://www.buymeacoffee.com/devstack06" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="41" width="174"></a>

<img src="https://github.com/devstack06/images/blob/master/chatimages/chatapp.png">

## Playlists for the chat app development series

| Playlist name | YouTube playlist link |
|---|---|
| Chat app development (main playlist) | https://youtube.com/playlist?list=pltiu0bh0pkkovueansrge-xd5tz3m1zec |
| Chat app development, front end only (Flutter) | https://youtube.com/playlist?list=pltiu0bh0pkkrgqat-9jsrrryetkvekdn6 |
| Chat server development, back end only (Node + Socket.IO) | https://youtube.com/playlist?list=pltiu0bh0pkkqkm88pusrwiksz50ztsitv |

If this tutorial helped you, please give a star and also fork the repo. Thank you, and happy coding!

## To use this app, follow the instructions below

1. Clone this app:
   `git clone https://github.com/devstack06/Chat-Server.git`
2. After cloning, install packages (this will install all the necessary packages):
   `npm install`
3. Run the app on your mobile:
   `npm run dev`
nodejs socket-io whatsapp-chat chatserver flutterwithnodejs
server
MuDBa
# MuDBa

Database to store information about the music world, developed as a project for the Bases de Dados course in Computer Software Engineering at University of Minho.

## Contributors

- Vítor Peixoto (https://github.com/vitorpeixoto97)
- Francisco Oliveira (https://github.com/tibblue)
- Raul Vilas-Boas (https://github.com/mrboas)
- João Carvalho

Classification: 15.0
server
wandb
p align center img src docs readme images logo dark svg gh dark mode only width 600 alt weights biases img src docs readme images logo light svg gh light mode only width 600 alt weights biases p p align center a href https pypi python org pypi wandb img src https img shields io pypi v wandb a a href https anaconda org conda forge wandb img src https img shields io conda vn conda forge wandb a a href https circleci com gh wandb wandb img src https img shields io circleci build github wandb wandb main a a href https codecov io gh wandb wandb img src https img shields io codecov c gh wandb wandb a p p align center a href https colab research google com github wandb examples blob master colabs intro intro to weights 26 biases ipynb img src https colab research google com assets colab badge svg a p use w b to build better models faster track and visualize all the pieces of your machine learning pipeline from datasets to production machine learning models get started with w b today sign up for a free account https wandb com utm source github utm medium code utm campaign wandb utm content readme w b is free for students educators and academic researchers for more information visit https wandb ai site research https wandb ai site research utm source github utm medium code utm campaign wandb utm content readme want to use weights biases for seamless collaboration between your ml or data science team looking for production grade mlops at scale sign up to one of our plans https wandb ai site pricing or contact the sales team https wandb ai site contact nbsp documentation p align center a target blank href https docs wandb ai guides track utm source github utm medium code utm campaign wandb utm content readme picture source media prefers color scheme dark srcset docs readme images product icons dark background experiments dark svg width 14 0 source media prefers color scheme light srcset docs readme images product icons light experiments light svg width 14 0 img alt weights 
and biases experiments src picture a a target blank href https docs wandb ai guides reports utm source github utm medium code utm campaign wandb utm content readme picture source media prefers color scheme dark srcset docs readme images product icons dark background report dark svg width 14 0 source media prefers color scheme light srcset docs readme images product icons light report light svg width 14 0 img alt weights and biases reports src picture a a target blank href https docs wandb ai guides artifacts utm source github utm medium code utm campaign wandb utm content readme picture source media prefers color scheme dark srcset docs readme images product icons dark background artifacts dark svg width 14 0 source media prefers color scheme light srcset docs readme images product icons light artifacts light svg width 14 0 img alt weights and biases artifacts src picture a a target blank href https docs wandb ai guides data vis utm source github utm medium code utm campaign wandb utm content readme picture source media prefers color scheme dark srcset docs readme images product icons dark background tables dark svg width 14 0 source media prefers color scheme light srcset docs readme images product icons light tables light svg width 14 0 img alt weights and biases tables src picture a a target blank href https docs wandb ai guides sweeps utm source github utm medium code utm campaign wandb utm content readme picture source media prefers color scheme dark srcset docs readme images product icons dark background sweeps dark svg width 14 0 source media prefers color scheme light srcset docs readme images product icons light sweeps light svg width 14 0 img alt weights and biases sweeps src picture a a target blank href https docs wandb ai guides launch utm source github utm medium code utm campaign wandb utm content readme picture source media prefers color scheme dark srcset docs readme images product icons dark background launch dark svg width 14 0 source media 
prefers color scheme light srcset docs readme images product icons light launch light svg width 14 0 img alt weights and biases launch src picture a a target blank href https docs wandb ai guides models utm source github utm medium code utm campaign wandb utm content readme picture source media prefers color scheme dark srcset docs readme images product icons dark background models dark svg width 14 0 source media prefers color scheme light srcset docs readme images product icons light models light svg width 14 0 img alt weights and biases model management src picture a a a target blank href https docs wandb ai guides prompts utm source github utm medium code utm campaign wandb utm content readme picture source media prefers color scheme dark srcset docs readme images product icons dark background prompts dark svg width 14 0 source media prefers color scheme light srcset docs readme images product icons light prompts light svg width 14 0 img alt weights and biases prompts src picture a target blank href https github com wandb weave picture source media prefers color scheme dark srcset docs readme images product icons dark background weave dark svg width 14 0 source media prefers color scheme light srcset docs readme images product icons light weave light svg width 14 0 img alt weights and biases prompts src picture p see the w b developer guide https docs wandb ai utm source github utm medium code utm campaign wandb utm content documentation and api reference guide https docs wandb ai ref utm source github utm medium code utm campaign wandb utm content documentation for a full technical description of the w b platform quickstart get started with w b in four steps 1 first sign up for a free w b account https wandb ai login utm source github utm medium code utm campaign wandb utm content quickstart 2 second install the w b sdk with pip https pip pypa io en stable navigate to your terminal and type the following command bash pip install wandb 3 third log into w b 
```python
wandb.login()
```

4. Use the example code snippet below as a template to integrate W&B into your Python script:

```python
import wandb

# Start a W&B Run with wandb.init
run = wandb.init(project="my_first_project")

# Save model inputs and hyperparameters in a wandb.config object
config = run.config
config.learning_rate = 0.01

# Model training code here ...

# Log metrics over time to visualize performance with wandb.log
for i in range(10):
    run.log({"loss": loss})
```

That's it! Navigate to the W&B App to view a dashboard of your first W&B Experiment. Use the W&B App to compare multiple experiments in a unified place, dive into the results of a single run, and much more.

<p align="center">
<img src="docs/README_images/wandb_demo_experiments.gif" width="100%">
</p>
<p align="center">Example W&B Dashboard that shows Runs from an Experiment.</p>

&nbsp;

## Integrations

Use your favorite framework with W&B. W&B integrations make it fast and easy to set up experiment tracking and data versioning inside existing projects. For more information on how to integrate W&B with the framework of your choice, see the [Integrations chapter](https://docs.wandb.ai/guides/integrations) in the W&B Developer Guide.

<p align="center">
<img src="docs/README_images/integrations.png" width="100%">
</p>

<details>
<summary>PyTorch</summary>

Call `.watch` and pass in your PyTorch model to automatically log gradients and store the network topology. Next, use `.log` to track other metrics. The following example demonstrates how to do this:

```python
import wandb

# 1. Start a new run
run = wandb.init(project="gpt4")

# 2. Save model inputs and hyperparameters
config = run.config
config.dropout = 0.01

# 3. Log gradients and model parameters
run.watch(model)
for batch_idx, (data, target) in enumerate(train_loader):
    ...
    if batch_idx % args.log_interval == 0:
        # 4. Log metrics to visualize performance
        run.log({"loss": loss})
```

- Run an example [Google Colab Notebook](http://wandb.me/pytorch-colab).
- Read the [Developer Guide](https://docs.wandb.com/guides/integrations/pytorch?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for
technical details on how to integrate PyTorch with W&B.
- Explore [W&B Reports](https://app.wandb.ai/wandb/getting-started/reports/pytorch--vmlldzoymtewnzm?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).

</details>

<details>
<summary>TensorFlow/Keras</summary>

Use W&B Callbacks to automatically save metrics to W&B when you call `model.fit` during training. The following code example demonstrates how your script might look when you integrate W&B with Keras:

```python
# This script needs these libraries to be installed:
#   tensorflow, numpy

import wandb
from wandb.keras import WandbMetricsLogger, WandbModelCheckpoint

import random
import numpy as np
import tensorflow as tf

# Start a run, tracking hyperparameters
run = wandb.init(
    # set the wandb project where this run will be logged
    project="my-awesome-project",
    # track hyperparameters and run metadata with wandb.config
    config={
        "layer_1": 512,
        "activation_1": "relu",
        "dropout": random.uniform(0.01, 0.80),
        "layer_2": 10,
        "activation_2": "softmax",
        "optimizer": "sgd",
        "loss": "sparse_categorical_crossentropy",
        "metric": "accuracy",
        "epoch": 8,
        "batch_size": 256,
    },
)

# [optional] use wandb.config as your config
config = run.config

# get the data
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train, y_train = x_train[::5], y_train[::5]
x_test, y_test = x_test[::20], y_test[::20]
labels = [str(digit) for digit in range(np.max(y_train) + 1)]

# build a model
model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(config.layer_1, activation=config.activation_1),
        tf.keras.layers.Dropout(config.dropout),
        tf.keras.layers.Dense(config.layer_2, activation=config.activation_2),
    ]
)

# compile the model
model.compile(optimizer=config.optimizer, loss=config.loss, metrics=[config.metric])

# WandbMetricsLogger will log train and validation metrics to wandb
# WandbModelCheckpoint will upload model checkpoints to wandb
history = model.fit(
    x=x_train,
    y=y_train,
    epochs=config.epoch,
    batch_size=config.batch_size,
    validation_data=(x_test, y_test),
    callbacks=[
        WandbMetricsLogger(log_freq=5),
        WandbModelCheckpoint("models"),
    ],
)

# [optional] finish the wandb run, necessary in notebooks
run.finish()
```

Get started integrating your Keras model with W&B today:

- Run an example [Google Colab Notebook](https://wandb.me/intro-keras?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations)
- Read the [Developer Guide](https://docs.wandb.com/guides/integrations/keras?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate Keras with W&B.
- Explore [W&B Reports](https://app.wandb.ai/wandb/getting-started/reports/keras--vmlldzoymtewnjq?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).

</details>

<details>
<summary>Hugging Face Transformers</summary>

Pass `wandb` to the `report_to` argument when you run a script using a Hugging Face Trainer. W&B will automatically log losses, evaluation metrics, model topology, and gradients.

Note: The environment you run your script in must have `wandb` installed.

The following example demonstrates how to integrate W&B with Hugging Face:

```python
# This script needs these libraries to be installed:
#   numpy, transformers, datasets

import wandb
import os
import numpy as np
from datasets import load_dataset
from transformers import TrainingArguments, Trainer
from transformers import AutoTokenizer, AutoModelForSequenceClassification


def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": np.mean(predictions == labels)}


# download, prepare the data
dataset = load_dataset("yelp_review_full")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

small_train_dataset = dataset["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = dataset["test"].shuffle(seed=42).select(range(300))

small_train_dataset = small_train_dataset.map(tokenize_function, batched=True)
small_eval_dataset = small_eval_dataset.map(tokenize_function, batched=True)

# download the model
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=5)

# set the wandb project where this run will be logged
os.environ["WANDB_PROJECT"] = "my-awesome-project"

# save your trained model checkpoint to wandb
os.environ["WANDB_LOG_MODEL"] = "true"

# turn off watch to log faster
os.environ["WANDB_WATCH"] = "false"

# pass "wandb" to the `report_to` parameter to turn on wandb logging
training_args = TrainingArguments(
    output_dir="models",
    report_to="wandb",
    logging_steps=5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    evaluation_strategy="steps",
    eval_steps=20,
    max_steps=100,
    save_steps=100,
)

# define the trainer and start training
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()

# [optional] finish the wandb run, necessary in notebooks
wandb.finish()
```

- Run an example [Google Colab Notebook](http://wandb.me/hf?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).
- Read the [Developer Guide](https://docs.wandb.com/guides/integrations/huggingface?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate Hugging Face with W&B.

</details>

<details>
<summary>PyTorch Lightning</summary>

Build scalable, structured, high-performance PyTorch models with Lightning and log them with W&B.

```python
# This script needs these libraries to be installed:
#   torch, torchvision, pytorch_lightning

import wandb
import os
from torch import optim, nn, utils
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger


class LitAutoEncoder(pl.LightningModule):
    def __init__(self, lr=1e-3, inp_size=28, optimizer="Adam"):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(inp_size * inp_size, 64), nn.ReLU(), nn.Linear(64, 3)
        )
        self.decoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, inp_size * inp_size)
        )
        self.lr = lr

        # save hyperparameters to self.hparams (auto-logged by wandb)
        self.save_hyperparameters()

    def training_step(self, batch, batch_idx):
        x, y = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = nn.functional.mse_loss(x_hat, x)

        # log metrics to wandb
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        optimizer = optim.Adam(self.parameters(), lr=self.lr)
        return optimizer


# init the autoencoder
autoencoder = LitAutoEncoder(lr=1e-3, inp_size=28)

# setup data
batch_size = 32
dataset = MNIST(os.getcwd(), download=True, transform=ToTensor())
train_loader = utils.data.DataLoader(dataset, shuffle=True)

# initialise the wandb logger and name your wandb project
wandb_logger = WandbLogger(project="my-awesome-project")

# add your batch size to the wandb config
wandb_logger.experiment.config["batch_size"] = batch_size

# pass wandb_logger to the Trainer
trainer = pl.Trainer(limit_train_batches=750, max_epochs=5, logger=wandb_logger)

# train the model
trainer.fit(model=autoencoder, train_dataloaders=train_loader)

# [optional] finish the wandb run, necessary in notebooks
wandb.finish()
```

- Run an example [Google Colab Notebook](http://wandb.me/lightning).
- Read the [Developer Guide](https://docs.wandb.ai/guides/integrations/lightning?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate PyTorch Lightning with W&B.

</details>

<details>
<summary>XGBoost</summary>

Use W&B Callbacks to automatically save metrics to W&B when you call `model.fit` during training. The following code example demonstrates how your script might look when you integrate W&B with XGBoost:

```python
# This script needs these libraries to be installed:
#   numpy, xgboost

import wandb
from wandb.xgboost import WandbCallback

import numpy as np
import xgboost as xgb


# setup parameters for xgboost
param = {
    "objective": "multi:softmax",
    "eta": 0.1,
    "max_depth": 6,
    "nthread": 4,
    "num_class": 6,
}

# start a new wandb run to track this script
run = wandb.init(
    # set the wandb project where this run will be logged
    project="my-awesome-project",
    # track hyperparameters and run metadata
    config=param,
)

# download data from wandb Artifacts and prep data
run.use_artifact("wandb/intro/dermatology_data:v0", type="dataset").download()
data = np.loadtxt(
    "./dermatology.data",
    delimiter=",",
    converters={33: lambda x: int(x == "?"), 34: lambda x: int(x) - 1},
)
sz = data.shape

train = data[: int(sz[0] * 0.7), :]
test = data[int(sz[0] * 0.7) :, :]

train_X = train[:, :33]
train_Y = train[:, 34]

test_X = test[:, :33]
test_Y = test[:, 34]

xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)
watchlist = [(xg_train, "train"), (xg_test, "test")]

# add another config to the wandb run
num_round = 5
run.config["num_round"] = 5
run.config["data_shape"] = sz

# pass WandbCallback to the booster to log its configs and metrics
bst = xgb.train(param, xg_train, num_round, evals=watchlist, callbacks=[WandbCallback()])

# get prediction
pred = bst.predict(xg_test)
error_rate = np.sum(pred != test_Y) / test_Y.shape[0]

# log your test metric to wandb
run.summary["Error Rate"] = error_rate

# [optional] finish the wandb run, necessary in notebooks
run.finish()
```

- Run an example [Google Colab Notebook](https://wandb.me/xgboost?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).
- Read the [Developer Guide](https://docs.wandb.ai/guides/integrations/xgboost?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate XGBoost with W&B.

</details>

<details>
<summary>Sci-Kit Learn</summary>

Use wandb to visualize and compare your scikit-learn models' performance:

```python
# This script needs these libraries to be installed:
#   numpy, sklearn

import wandb
from wandb.sklearn import plot_precision_recall, plot_feature_importances
from wandb.sklearn import plot_class_proportions, plot_learning_curve, plot_roc

import numpy as np
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


# load and process data
wbcd = datasets.load_breast_cancer()
feature_names = wbcd.feature_names
labels = wbcd.target_names

test_size = 0.2
X_train, X_test, y_train, y_test = train_test_split(wbcd.data, wbcd.target, test_size=test_size)

# train model
model = RandomForestClassifier()
model.fit(X_train, y_train)
model_params = model.get_params()

# get predictions
y_pred = model.predict(X_test)
y_probas = model.predict_proba(X_test)
importances = model.feature_importances_
indices = np.argsort(importances)[::-1]

# start a new wandb run and add your model hyperparameters
run = wandb.init(project="my-awesome-project", config=model_params)

# add additional configs to wandb
run.config.update(
    {
        "test_size": test_size,
        "train_len": len(X_train),
        "test_len": len(X_test),
    }
)

# log additional visualisations to wandb
plot_class_proportions(y_train, y_test, labels)
plot_learning_curve(model, X_train, y_train)
plot_roc(y_test, y_probas, labels)
plot_precision_recall(y_test, y_probas, labels)
plot_feature_importances(model)

# [optional] finish the wandb run, necessary in notebooks
run.finish()
```

- Run an example [Google Colab Notebook](https://wandb.me/scikit-colab?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations).
- Read the [Developer Guide](https://docs.wandb.ai/guides/integrations/scikit?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=integrations) for technical details on how to integrate Scikit-Learn with W&B.

</details>

&nbsp;

## W&B Hosting Options

Weights & Biases is available in the cloud or installed on your private infrastructure. Set up a W&B Server in a production environment in one of three ways:

1. [Production Cloud](https://docs.wandb.ai/guides/hosting/hosting-options/self-managed?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=hosting#on-prem-private-cloud): Set up a production deployment on a private cloud in just a few steps using Terraform scripts provided by W&B.
2. [Dedicated Cloud](https://docs.wandb.ai/guides/hosting/hosting-options/wb-managed?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=hosting#dedicated-cloud): A managed, dedicated deployment on W&B's single-tenant infrastructure in your choice of cloud region.
3. [On-Prem/Bare Metal](https://docs.wandb.ai/guides/hosting/how-to-guides/bare-metal?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=hosting): W&B supports setting up a production server on most bare metal servers in your on-premise data centers. Quickly get started by running `wandb server` to easily start hosting W&B on your local infrastructure.

See the [Hosting documentation](https://docs.wandb.ai/guides/hosting?utm_source=github&utm_medium=code&utm_campaign=wandb&utm_content=hosting) in the W&B Developer Guide for more information.

&nbsp;

## Tutorials

Explore example Colab Notebooks at the [wandb/examples GitHub repository](https://github.com/wandb/examples/tree/master/colabs).

&nbsp;

## Contribution guidelines

Weights & Biases is open source, and we welcome contributions from the community! See the [Contribution guide](https://github.com/wandb/wandb/blob/main/CONTRIBUTING.md) for more information on the development workflow and the internals of the wandb library. For wandb bugs and feature requests, visit [GitHub Issues](https://github.com/wandb/wandb/issues) or contact support@wandb.com.

&nbsp;

## W&B Community

Be a part of the growing W&B Community and interact with the W&B team in our [Discord](https://wandb.me/discord). Stay connected with the latest ML updates and tutorials with [W&B Fully Connected](https://wandb.ai/fully-connected).

&nbsp;

## License

[MIT License](https://github.com/wandb/wandb/blob/main/LICENSE)
machine-learning experiment-track deep-learning keras tensorflow pytorch hyperparameter-search reinforcement-learning mlops data-science collaboration hyperparameter-optimization reproducibility hyperparameter-tuning data-versioning model-versioning ml-platform
ai
proteinnet
# ProteinNet

ProteinNet is a standardized data set for machine learning of protein structure. It provides protein sequences, structures ([secondary](https://en.wikipedia.org/wiki/Protein_secondary_structure) and [tertiary](https://en.wikipedia.org/wiki/Protein_tertiary_structure)), multiple sequence alignments ([MSAs](https://en.wikipedia.org/wiki/Multiple_sequence_alignment)), position-specific scoring matrices ([PSSMs](https://en.wikipedia.org/wiki/Position_weight_matrix)), and standardized [training / validation / test](https://en.wikipedia.org/wiki/Training,_test,_and_validation_sets) splits. ProteinNet builds on the biennial [CASP](http://predictioncenter.org) assessments, which carry out blind predictions of recently solved but publicly unavailable protein structures, to provide test sets that push the frontiers of computational methodology. It is organized as a series of data sets, spanning CASP 7 through 12 (covering a ten-year period), to provide a range of data set sizes that enable assessment of new methods in relatively data-poor and data-rich regimes.

**Note that this is a preliminary release.** The raw data used for construction of the data sets, as well as the MSAs, are not yet generally available. However, the raw MSA data (4TB for ProteinNet 12) is available upon request; transfer requires downloading of a Globus client. See the [raw data](https://github.com/aqlaboratory/proteinnet/blob/master/docs/raw_data.md) section for more information.

## Motivation

Protein structure prediction is one of the central problems of biochemistry. While the problem is well studied within the biological and chemical sciences, it is less well represented within the machine learning community. We suspect this is due to two reasons: 1) a high barrier to entry for non-domain experts, and 2) lack of standardization in terms of training / validation / test splits that make fair and consistent comparisons across methods possible. If these two issues are addressed, protein structure prediction can become a major source of innovation in ML research, alongside the canonical tasks of computer vision, NLP, and speech recognition. Much like [ImageNet](http://www.image-net.org) helped spur the [development](https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/) of new computer vision techniques, ProteinNet aims to facilitate ML research on protein structure by providing a standardized data set, and standardized training / validation / test splits, that any group can use with minimal effort to get started.

## Approach

Once every two years the [CASP](http://predictioncenter.org) assessment is held. During this competition, structure predictors from across the globe are presented with protein sequences whose structures have been recently solved but which have not yet been made publicly available. The predictors make blind predictions of these structures, which are then assessed for their accuracy. The CASP structures thus provide a standardized benchmark for how well prediction methods perform at a given moment in time. The basic idea behind ProteinNet is to piggyback on CASP, by using CASP structures as test sets. ProteinNet augments these test sets with training / validation sets that reset the historical record to the conditions preceding each CASP experiment. In particular, ProteinNet restricts the set of sequences (used for building PSSMs and MSAs) and structures to those available prior to the commencement of each CASP. This is critical, as standard databases such as [BLAST](https://blast.ncbi.nlm.nih.gov/Blast.cgi) do not maintain historical versions. We use time-reset versions of the [UniParc](http://www.uniprot.org/uniparc) dataset, as well as metagenomic sequences from the [JGI](https://img.jgi.doe.gov), to build sequence databases for deriving MSAs. ProteinNet further provides carefully split validation sets that range in difficulty from easy (>90% seq. id.), useful for assessing a model's ability to predict minor changes in protein structure such as mutations, to extremely difficult (<10% seq. id.), useful for assessing a model's ability to predict entirely new protein folds, as in the CASP Free Modeling (FM) category. In a sense, our validation sets provide a series of transferability challenges to test how well a model can withstand distributional shifts in the data set. We have found that our most difficult validation subsets exceed the difficulty of CASP FM targets.

## Download

ProteinNet records are provided in two forms: human- and machine-readable text files that can be used programmatically by any tool, and TensorFlow-specific TFRecord files. More information on the file format can be found in the documentation [here](https://github.com/aqlaboratory/proteinnet/blob/master/docs/proteinnet_records.md).

| File format | CASP7 | CASP8 | CASP9 | CASP10 | CASP11 | CASP12* |
| --- | --- | --- | --- | --- | --- | --- |
| Text-based | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/human_readable/casp7.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/human_readable/casp8.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/human_readable/casp9.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/human_readable/casp10.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/human_readable/casp11.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/human_readable/casp12.tar.gz) |
| TF Records | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/tfrecords/casp7.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/tfrecords/casp8.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/tfrecords/casp9.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/tfrecords/casp10.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/tfrecords/casp11.tar.gz) | [link](https://sharehost.hms.harvard.edu/sysbio/alquraishi/proteinnet/tfrecords/casp12.tar.gz) |

Secondary structure data (DSSP annotations):

* [ASTRAL entries](https://www.dropbox.com/s/59y3nud4rixombf/single_domain_dssp_annotations.json.gz?dl=0)
* [PDB entries](https://www.dropbox.com/s/sne2ak1woy1lrqr/full_protein_dssp_annotations.json.gz?dl=0)

\* CASP12 test set is incomplete due to embargoed structures. Once the embargo is lifted, we will release all structures.

## Documentation

* [ProteinNet records](docs/proteinnet_records.md)
* [Splitting methodology](docs/splitting_methodology.md)
* [Raw data](docs/raw_data.md)
* [FAQ](docs/FAQ.md)

## PyTorch parser

ProteinNet includes an official TensorFlow-based parser. [Jeppe Hallgren](https://github.com/jeppehallgren) has kindly created a PyTorch-based parser that is available [here](https://github.com/openprotein/openprotein/blob/master/preprocessing.py).

## Extensions

[SidechainNet](https://github.com/jonathanking/sidechainnet) extends ProteinNet by adding angle and atomic coordinate information for side-chain atoms.

## Citation

Please cite the [ProteinNet paper](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2932-0) in BMC Bioinformatics.

## Acknowledgements

Construction of this data set consumed millions of compute hours and was possible thanks to the generous support of the [HMS Laboratory of Systems Pharmacology](http://hits.harvard.edu/the-program/laboratory-of-systems-pharmacology/about), the [Harvard Program in Therapeutic Science](http://hits.harvard.edu/the-program/program-in-regulatory-science/about), and the [Research Computing](https://rc.hms.harvard.edu) group at [Harvard Medical School](https://hms.harvard.edu). We also thank [Martin Steinegger](https://github.com/martin-steinegger) and [Milot Mirdita](https://github.com/milot-mirdita) for their extensive help with the MMseqs2 and HHblits software packages, [Sergey Ovchinnikov](http://site.solab.org) for providing metagenomic sequences, [Andriy Kryshtafovych](http://predictioncenter.org/people/kryshtafovych/index.cgi) for his assistance with CASP data, and [Sean Eddy](https://github.com/cryptogenomicon) for his help with the HMMER software package. This data set is hosted by the [HMS Research Information Technology Solutions](https://rits.hms.harvard.edu) group at Harvard University.

## Funding

This work was supported by NIGMS grant P50GM107618 and NCI grant U54-CA225088.
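For quick experimentation without TensorFlow, the text-based files can also be read directly. The sketch below is a minimal, unofficial reader, not one of the parsers shipped with ProteinNet: it assumes the bracketed-section record layout (`[ID]`, `[PRIMARY]`, `[EVOLUTIONARY]`, and so on) described in the ProteinNet records documentation, and the function name `parse_proteinnet_text` is ours.

```python
def parse_proteinnet_text(text):
    """Minimal, unofficial reader for ProteinNet text-based records.

    Assumes each section starts with a bracketed header line such as
    [ID] or [PRIMARY], per docs/proteinnet_records.md. Returns a list
    of dicts mapping section name -> list of raw lines.
    """
    records, current, section = [], None, None
    for raw in text.splitlines():
        line = raw.strip()
        if line == "[ID]":
            # a new record begins at every [ID] header
            if current is not None:
                records.append(current)
            current, section = {}, "ID"
            current[section] = []
        elif line.startswith("[") and line.endswith("]"):
            section = line[1:-1]
            current[section] = []
        elif line and current is not None:
            current[section].append(line)
    if current is not None:
        records.append(current)
    return records


# toy usage with a synthetic record in the assumed layout
sample = "[ID]\nTEST_1\n[PRIMARY]\nMKVLAA\n[MASK]\n++++++\n"
recs = parse_proteinnet_text(sample)
```

After parsing, `recs[0]["PRIMARY"]` holds the sequence lines of the first record; downstream code would still need to convert the `[EVOLUTIONARY]` and `[TERTIARY]` sections to numeric arrays.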
machine-learning deep-learning protein-structure dataset protein-sequence proteins
ai
sushi-ui-android
<div align="center">

# Sushi Design System

Android UI Kit. The application is available here:

<a href="https://play.google.com/store/apps/details?id=com.zomato.sushiapp"><img alt="Get it on Google Play" src="https://play.google.com/intl/en_us/badges/images/generic/en_badge_web_generic.png" height="80"></a>

[![Downloads](https://img.shields.io/endpoint?logo=google-play&url=https://api.playstore.rajkumaar.co.in/downloads?id=com.zomato.sushiapp&color=success)](https://play.google.com/store/apps/details?id=com.zomato.sushiapp)
[![Rating](https://img.shields.io/endpoint?logo=google-play&url=https://api.playstore.rajkumaar.co.in/rating?id=com.zomato.sushiapp&color=success)](https://play.google.com/store/apps/details?id=com.zomato.sushiapp)
[![Latest release version](https://img.shields.io/endpoint?color=blue&url=https://api.playstore.rajkumaar.co.in/version?id=com.zomato.sushiapp)](https://play.google.com/store/apps/details?id=com.zomato.sushiapp)

</div>

## Usage

The `master` branch is being used for release, and `dev` is the default branch.

### Installation

This package is available via the GitHub Package Registry. To use it, add the GitHub Maven repository and the dependency in your app's `build.gradle`:

```groovy
repositories {
    google()
    jcenter()
    // etc.
    maven {
        url "https://maven.pkg.github.com/zomato/sushi-ui-android"
        credentials(HttpHeaderCredentials) {
            name "Authorization"
            value "token ${System.getenv("GITHUB_TOKEN")}"
        }
        authentication {
            header(HttpHeaderAuthentication)
        }
    }
}

dependencies {
    // other dependencies
    implementation 'com.zomato.sushilib:sushilib-android:<latest-version>'
}
```

> **Note:** Make sure you have the `GITHUB_TOKEN` environment variable set. This token should have `read:packages` enabled.

## Documentation

A delicious UI kit to build Android apps. Made with ❤️ by Zomato: https://zomato.github.io/sushi-ui-android

## Testing coverage

Run all tests and get a coverage report:

```shell
./gradlew jacocoTestReport
```

## Publishing

To publish this package, go to the Actions tab of the repo and select the "Version Bump CI" workflow. This workflow has to be manually triggered by clicking the "Run workflow" button. It will create a PR on the `master` branch which, after merge, will generate and publish a new package.

## License

    Copyright 2022 Zomato Limited

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
android
os
machine-learning-asset-management
machine learning in asset management if you like this type of content then visit ml quant https www ml quant com site below https www ml quant com https user images githubusercontent com 26666267 145840123 bc077dd0 0980 439a 824b 9c09a5f779de png part one follow this link https papers ssrn com sol3 papers cfm abstract id 3420952 for ssrn paper if you feel like citing something you can use snow d 2020 https jfds pm research com content 2 1 10 machine learning in asset management part 1 portfolio construction trading strategies the journal of financial data science winter 2020 2 1 10 23 this is the first in a series of articles dealing with machine learning in asset management asset management can be broken into the following tasks 1 portfolio construction 2 risk management 3 capital management 4 infrastructure and deployment and 5 sales and marketing this article focuses on portfolio construction using machine learning historically algorithmic trading could be more narrowly defined as the automation of sell side trade execution but since the introduction of more advanced algorithms the definition has grown to include idea generation alpha factor design asset allocation position sizing and the testing of strategies machine learning from the vantage of a decision making tool can help in all these areas editors frank j fabozzi marcos lop z de prado joseph simonian https docs google com drawings d e 2pacx 1vs02qa7xuhjmj2w42dbxodvjg5alike6 ylnqp uaw 7xwxtgwp2jikqexajcu e9hgz50hcpac1wk8 pub w 3197 h 2191 this paper investigates various machine learning trading and portfolio optimisation models and techniques the notebooks to this paper are python based by last count there are about 15 distinct trading varieties and around 100 trading strategies code and data are made available where appropriate the hope is that this paper will organically grow with future developments in machine learning and data processing techniques all feedback contributions and criticisms are highly 
encouraged you can find my contact details on the website firmai https www firmai org trading strategies br 1 tiny cta br resources br see this paper https papers ssrn com sol3 papers cfm abstract id 2695101 and blog https www linkedin com pulse implement cta less than 10 lines code thomas schmelzer for further explanation br data http drive google com open id 12bb8kpfyjsx41yvhhtolye zzohnamp8 code https drive google com open id 1ewbhhbzl prtphr25ebmqa9dv7jc4cjt br br 2 tiny rl br resources br see this paper http cs229 stanford edu proj2006 molina stocktradingwithrecurrentreinforcementlearning pdf and or blog https teddykoker com for further explanation br data https drive google com open id 1k7j5y1xcssina45d xw78d2frgzd94li code https drive google com open id 1irrr6kwjunerzzqrszj9 q c1yj5l0qj br br 3 tiny vix cmf br resources br data https drive google com open id 1yv2 mtjzmanol9fm0ajosofec9mjzamu code https drive google com open id 186j gtkxcgzj06wcwdau9yhyxp9sfglu br br 4 quantamental br resources br web scrapers https drive google com drive folders 12az7vg 3hidpyz4gavyy7bjptlapgftc usp sharing data https drive google com open id 1b0oxisknacedftykgov619scfxwpcuwt code https drive google com open id 1pqtffcr1ejregr6xiozcs8jsd7accul7 interactive report https github com firmai interactive corporate report paper https papers ssrn com sol3 papers cfm abstract id 3420490 br br 5 earnings surprise br resources br code https drive google com open id 1ktgaukizs8qisudcw0swixbypebwtqxf paper https papers ssrn com sol3 papers cfm abstract id 3420722 br br 6 bankruptcy prediction br resources br data https drive google com open id 1uaizbnhag adwz4z7nd y5thq89d iqh code https drive google com open id 1z2zyveowsrfhsa1f7g0m1o jixedudb paper https papers ssrn com sol3 papers cfm abstract id 3420889 br br 7 filing outcomes br resources br data https drive google com open id 1cdhrrap07e 2tgrpqginxunqpdbtpq u br br 8 credit rating arbitrage br resources br code https drive google 
com open id 1i yerl4i6qp57c0ldswev8iyv rtazlf br br 9 factor investing br resources br paper https docplayer net 120877135 industry return predictability a machine learning approach html code https drive google com open id 1o0lq khtfsbfg5an3 aqv6deirwq6uup data https drive google com open id 1cc43729ryopcsdj3r46sdhcjjp1aumaa br br 10 systematic global macro br resources br data https drive google com open id 1epkftfjbrfg3xdtg dbssykesd8zma1z code https drive google com open id 10bn3knjl9emdb5tt1arxo8iaxliph zd br br 11 mixture models br resources br data https drive google com open id 1jmr2jlk6hy7j7c2jzfek1oxptohbdylk code https drive google com open id 1trit7lijerwkwohiubs6rzbzo2eybntn br br 12 evolutionary br resources br code https drive google com open id 116aj9kbzcrcyr5mdu58hkwe53lacae52 repo https github com huseinzol05 stock prediction models tree master free agent br br 13 agent strategy br resources br code https drive google com open id 1qcvieui5djkmxnjum9 wipf65vvhdwwz repo https github com huseinzol05 stock prediction models tree master agent br br 14 stacked trading br resources br code https drive google com open id 11sg9kiwuxv9fgrrpas0qifggrcdzk2dh blog https www kdnuggets com 2017 02 stacking models imropved predictions html br br 15 deep trading br resources br code https drive google com open id 1nosoi29gic3zoewnmgqcuuqcrxemd9ix repo https github com huseinzol05 stock prediction models tree master deep learning br br part two snow d 2020 https jfds pm research com content early 2020 03 12 jfds 2020 1 029 machine learning in asset management part 2 portfolio construction weight optimization the journal of financial data science spring 2020 2 1 10 23 this is the second in a series of articles dealing with machine learning in asset management this article focuses on portfolio weighting using machine learning following from the previous article snow 2020 which looked at trading strategies this article identifies different weight optimization methods 
for supervised unsupervised and reinforcement learning frameworks in total seven submethods are summarized with the code made available for further exploration weight optimisation jfds br 1 deep portfolio br resources br data https drive google com open id 1bjcuzbrz8hfxs cd0vghemop16vf3n23 code https drive google com open id 1 hoeaijqantuyiyamj26zvhjnzq9xv09 paper https arxiv org abs 1605 07230 br br 2 linear regression br resources br code https drive google com open id 1ydzqvz6pn2afdx2uprfaq9jogvk7rpjy paper https onlinelibrary wiley com doi abs 10 1111 0022 1082 00120 br br 3 bayesian sentiment br resources br code https colab research google com drive 1smaojzuunirnrivazxhv5fulmowo17mb br br 4 pca and hierarchical br resource br code https colab research google com drive 1mm9r6ezoerhykycdbc74gy7s2u6h1otc br br 5 hrp br resources br data https drive google com open id 198fphhd973i3rka9d7oz srmbwpykqec code https drive google com open id 1z3fe7qxz6c566kog3htqefcc84uagwff br br 6 network graph br resources br code https colab research google com drive 10wnivuicvfajw2utdrwi6w7asukjinpl br br 7 rl deep deterministic br resources br code https colab research google com drive 1l3 d2zmgzkprsb9gb5bvigkskmtlti7 br weight optimisation ssrn br 1 online portfolio selection olps br resources br code https drive google com open id 1tpije6klq7d1zzwokhztpa6wzwd1txhd br other ssrn br 1 ganvar br resources br code https drive google com open id 1c0qlvv2ic8qvvcg7f4bhp8dp3wugkj8e br all data and code https drive google com open id 1utwe xx1n93btdkofiwpbhjcfh w8 ak top 1 ssrn paper downloads all time top 10 paper https papers ssrn com sol3 papers cfm abstract id 3420952 applied computing ejournal https papers ssrn com sol3 topten toptenresults cfm groupingid 3191581 netorjrnl jrnl compscirn algorithms https papers ssrn com sol3 topten toptenresults cfm groupingid 3176752 netorjrnl jrnl compscirn clustering https papers ssrn com sol3 topten toptenresults cfm groupingid 3176752 
netorjrnl jrnl banking financial institutions ejournals https papers ssrn com sol3 topten toptenresults cfm groupingid 320840 netorjrnl ntwk compscirn artificial intelligence https papers ssrn com sol3 topten toptenresults cfm groupingid 3178496 netorjrnl jrnl econometric modeling capital markets portfolio theory ejournal https papers ssrn com sol3 topten toptenresults cfm groupingid 2167133 netorjrnl jrnl machine learning ejournal https papers ssrn com sol3 topten toptenresults cfm groupingid 3178495 netorjrnl jrnl other projects other firmai projects include atspy https github com firmai atspy automating python s best time series models pandapy https github com firmai pandapy a data structure solutions that has the speed of numpy and the usability of pandas 10x to 50x faster fairput https github com firmai fairput a holistic approach to implement fair machine learning outputs at the individual and group level pandasvault https github com firmai pandasvault a package for advanced pandas functions and code snippets and icr https github com firmai interactive corporate report an interactive and fully automated corporate report built with python
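The weight-optimisation methods surveyed above (deep portfolio, HRP, and so on) share a simple risk-based building block. As a rough illustration — this is not code from the linked notebooks — inverse-variance weighting, which HRP refines with hierarchical clustering, can be sketched in a few lines:

```python
from statistics import pvariance

def inverse_variance_weights(return_series):
    """Weight each asset inversely to its return variance and normalise
    so the weights sum to one -- a basic risk-based building block that
    methods like HRP refine with clustering."""
    inv = {asset: 1.0 / pvariance(rets) for asset, rets in return_series.items()}
    total = sum(inv.values())
    return {asset: v / total for asset, v in inv.items()}

# toy daily returns: "bonds" is far less volatile than "equities"
weights = inverse_variance_weights({
    "bonds":    [0.001, -0.001, 0.002, -0.002],
    "equities": [0.010, -0.010, 0.020, -0.020],
})
```

As expected, the low-volatility asset receives almost all of the weight; HRP's contribution is to apply this kind of allocation recursively within and across clusters of correlated assets.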
machine-learning trading-strategies assets-management algorithmic-trading portfolio-optimization jupyter-notebook google-colab quantitative-finance quant
ai
insight-api
This project has been replaced by bitcore-node: https://github.com/bitpay/bitcore
blockchain
attackgen
attackgen attackgen is a cybersecurity incident response testing tool that leverages the power of large language models and the comprehensive mitre att ck framework the tool generates tailored incident response scenarios based on user selected threat actor groups and your organisation s details table of contents star the repo star the repo features features releases releases requirements requirements installation installation data setup data setup running attackgen running attackgen usage usage contributing contributing licence licence star the repo if you find attackgen useful please consider starring the repository on github this helps more people discover the tool your support is greatly appreciated features generates unique incident response scenarios based on chosen threat actor groups allows you to specify your organisation s size and industry for a tailored scenario displays a detailed list of techniques used by the selected threat actor group as per the mitre att ck framework create custom scenarios based on a selection of att ck techniques capture user feedback on the quality of the generated scenarios downloadable scenarios in markdown format integrated with langsmith https docs smith langchain com for powerful debugging testing and monitoring of model performance attackgen screenshot images screenshot jpg releases v0 2 current what s new why is it useful custom scenarios based on att ck techniques for mature organisations this feature is particularly beneficial if your organisation has advanced threat intelligence capabilities for instance if you re monitoring a newly identified or lesser known threat actor group you can tailor incident response testing scenarios specific to the techniques used by that group br br focused testing alternatively use this feature to focus your incident response testing on specific parts of the cyber kill chain or certain mitre att ck tactics like lateral movement or exfiltration this is useful for organisations looking to 
evaluate and improve specific areas of their defence posture user feedback on generated scenarios collecting feedback is essential to track model performance over time and helps to highlight strengths and weaknesses in scenario generation tasks improved error handling for missing api keys improved user experience replaced streamlit st spinner widgets with new st status widget provides better visibility into long running processes i e scenario generation v0 1 initial release requirements recent version of python python packages pandas streamlit and any other packages necessary for the custom libraries langchain and mitreattack openai api key data files enterprise attack json mitre att ck dataset in stix format and groups json installation 1 clone the repository git clone https github com mrwadams attackgen git 2 change directory into the cloned repository cd attackgen 3 install the required python packages pip install r requirements txt langsmith setup if you would like to use langsmith for debugging testing and monitoring of model performance you will need to set up a langsmith account and create a streamlit secrets toml file that contains your langchain api key please follow the instructions here https docs smith langchain com to set up your account and obtain your api key if you do not wish to use langsmith you can delete the langsmith related environment variables from the top of the following files pages 1 threat group scenarios py pages 2 custom scenarios py data setup download the latest version of the mitre att ck dataset in stix format from here https github com mitre attack attack stix data blob master enterprise attack enterprise attack json ensure to place this file in the data directory within the repository running attackgen after the data setup you can run attackgen with the following command streamlit run welcome py you can also try the app on streamlit community cloud https attackgen streamlit app usage standard scenario generation 1 enter your 
openai api key 2 select your organisation s industry and size from the dropdown menus 3 navigate to the threat group scenarios page 4 select the threat actor group that you want to simulate 5 click on generate scenario to create the incident response scenario 6 use the or buttons to provide feedback on the quality of the generated scenario custom scenario generation 1 enter your openai api key 2 select your organisation s industry and size from the dropdown menus 3 navigate to the custom scenario page 4 use the multi select box to search for and select the att ck techniques relevant to your scenario 5 click generate scenario to create your custom incident response testing scenario based on the selected techniques 6 use the or buttons to provide feedback on the quality of the generated scenario please note that generating scenarios may take a minute or so once the scenario is generated you can view it on the app and also download it as a markdown file contributing i m very happy to accept contributions to this project please feel free to submit an issue or pull request licence this project is licensed under gnu gplv3 https choosealicense com licenses gpl 3 0
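For readers curious how the threat actor groups might be pulled out of the downloaded dataset, here is a minimal sketch — not AttackGen's actual implementation — that filters "intrusion-set" objects (the STIX 2 type used for ATT&CK groups) from a STIX bundle; the tiny inline bundle stands in for data/enterprise-attack.json:

```python
import json

def list_threat_groups(stix_bundle):
    """Return names of threat actor groups -- "intrusion-set" objects in
    STIX 2 -- skipping any marked as revoked."""
    return sorted(
        obj["name"]
        for obj in stix_bundle.get("objects", [])
        if obj.get("type") == "intrusion-set" and not obj.get("revoked", False)
    )

# tiny inline bundle standing in for data/enterprise-attack.json
bundle = json.loads("""
{
  "type": "bundle",
  "objects": [
    {"type": "intrusion-set", "name": "APT29"},
    {"type": "attack-pattern", "name": "Phishing"},
    {"type": "intrusion-set", "name": "FIN7"}
  ]
}
""")
groups = list_threat_groups(bundle)
```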
ai
FDU-Natural-Language-Processing
introduction to natural language processing fdu this is a repo including all projects in my introduction to natural language processing course data130006 http www sdspeople fudan edu cn zywei data130006 index html in school of data science http www sds fudan edu cn wp fudan university http www fudan edu cn 2016 index html notice requirements and code listed may be outdated please refer to course website http www sdspeople fudan edu cn zywei data130006 index html to see latest news quick review of projects 1 spell correction 1 2 stock market prediction 2 3 chinese event extraction 3 4 word2vec and sentiment analysis 4 h3 id 1 project 1 spell correction h3 this project is aimed at using doing spell correction using language model and channel model selection mechanism we choose the candidate with the highest probability language model p c is the probability that c appears as a word of english text we use chain rule and markov assumption to compute it candidate model we use edit distance to find which candidate corrections c to consider channel model p w c is the probability that w would be typed in a text when the author meant c you can find detailed requirements of this project here https github com rshcaroline fdu nlp stock market prediction blob master project 201 20spell 20correction files 20and 20report requirements pdf and my report is here https github com rshcaroline fdu nlp stock market prediction blob master project 201 20spell 20correction files 20and 20report requirements pdf my score of this project is 13 4 15 h3 id 2 project 2 stock market prediction h3 this project is aimed at using text classification and sentiment analysis to process financial news and predict whether the price of a stock will go up or down for reading and saving data i use libraries like xlrd pickle and codecs in terms of tokenization i choose jieba to achieve higher accuracy rate i ve added some financial dictionary to jieba and removed stop word from the already tokenized word list 
as for extracting features both positive and negative word dictionary are used and only considering the most common words in news for the purpose of reducing features dimension talking about training and testing models i divided the development set into training set and dev test set and have used cross validation to find the best classifier among naive bayes decision tree maximum entropy from nltk and bernoulli nb logistic regression svc linear svc nusvc from sklearn finally the best accuracy was achieved at 69 5 with svm you can find my report here https github com rshcaroline fdu nlp stock market prediction blob master project 202 20stock 20market 20prediction stock 20market 20prediction pdf and my score of this project is 15 15 h3 id 3 project 3 chinese event extraction h3 this project is aimed at doing sequence labeling to extract chinese event using hidden markov models and conditional random field which can be separated as two subtasks trigger labeling for 8 types and argument labeling for 35 types during this project for reading and saving data i use libraries like pickle and codecs in terms of tokenization and tagging part of speech for preparation for the crf toolkit i choose jieba to achieve higher accuracy rate for hmm i ve used several smoothing methods and implemented both bigram and trigram models talking about training and testing models i divided the development set into training set and dev test set finally the best accuracy was achieved at 71 65 for argument 94 68 for trigger with crf 96 15 for argument 71 88 for trigger with hmm you can find my report here https github com rshcaroline fdu natural language processing blob master project 203 20chinese 20event 20extraction chinese 20event 20extraction pdf and my score of this project is 14 4 15 h3 id 4 project 4 word2vec and sentiment analysis h3 this project is aimed at using word2vec models for sentiment analysis which can be separated as two subtasks implementing word2vec model skip gram in this 
task to train my own word vectors and use the average of all the word vectors in each sentence as its feature to train a classifier e g softmax regression with gradient descent method during this project alone with implementing the already well framed code block i ve spent much time improving my code s efficiency and comparing different implementation meth ods talking about the sentiment analysis to achieve higher accuracy i ve tried different combinations with context size c word vector s dimension dimvectors and regularization in terms of training and testing models the development set has been divided into training set and dev test set finally the best accuracy for dev set was achieved at 29 79 you can find detailed assignment here https github com rshcaroline fdu natural language processing blob master project 204 20word2vec 20and 20sentiment 20analysis assignment04 pdf and my solution to it here https github com rshcaroline fdu natural language processing blob master project 204 20word2vec 20and 20sentiment 20analysis assignment04 20solution pdf my report is here https github com rshcaroline fdu natural language processing blob master project 204 20word2vec 20and 20sentiment 20analysis word2vec 20sentiment 20analysis pdf and my score of this project is 14 6 15
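The selection mechanism from Project 1 (candidate generation by edit distance, scored by a language model) can be sketched in a few lines of Python. This is a simplified illustration rather than the course solution: it uses a toy corpus for the unigram language model P(c) and assumes a uniform channel model P(w|c):

```python
from collections import Counter

# toy corpus standing in for the language-model training data
CORPUS = "the quick brown fox jumps over the lazy dog the dog sat".split()
COUNTS = Counter(CORPUS)

def edits1(word):
    """All strings one edit (delete/transpose/replace/insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Pick the known candidate with the highest unigram probability P(c);
    a uniform channel model P(w|c) is assumed for brevity."""
    candidates = [w for w in edits1(word) | {word} if w in COUNTS]
    return max(candidates, key=COUNTS.get) if candidates else word
```

With a real corpus, the channel model would weight each candidate by its typo likelihood instead of treating all single edits as equally probable.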
ai
isetbio
the image system engineering toolbox for biology isetbio is a matlab toolbox for calculating the properties of the front end of the visual system this includes a description of the scene the optics and retinal image the capture of light by the photopigment the photocurrent responses in the receptors bipolar responses and retinal ganglion cell responses this repository includes a wiki https github com isetbio isetbio wiki that describes the software as well as many examples of how to perform computations to calculate the visual encoding of light in the eye the wiki https github com isetbio isetbio wiki also describes tools to view and and analyze the information contained the information at different neural stages finally we describe the methods we use to validate and test our code history the isetbio code includes a portion of image systems engineering toolbox iset that is sold by imageval consulting llc http www imageval com that code is designed to help industrial partners design novel image sensors the isetbio portion of the iset code is freely distributed for use in modeling image formation in biological systems isetbio also includes the wavefrontoptics code developed by david brainard heidi hofer and brian wandell that code implements methods for taking adaptive optics data from wavefront sensors and calculating the optical blur as a function of wavelength for model human eyes the toolbox relies on data collected by thibos and colleagues we also gratefully acknowledge important contributions from jon winawer
front_end
Ionic2Parks
ionic2parks this is a starter template for the ionic2parks app from mobile app development with ionic 2 http www ionic2book com published by o reilly press how to use this template this template does not work on its own bash sudo npm install g ionic cordova ionic start ionic2parks tabs cd ionic2parks with ionic cli version 3 the use of third party templates was removed once you have created the starter ionic application using the steps above download this repo and copy over the src and www directories into the newly created folder replacing the initial src and www directory you will still need to follow any additional steps in the book
front_end
react-lightning-design-system
react lightning design system https mashmatrix github io react lightning design system build status https travis ci org mashmatrix react lightning design system svg branch master https travis ci org mashmatrix react lightning design system salesforce lightning design system http www lightningdesignsystem com components built with react see the demo https mashmatrix github io react lightning design system install npm install react lightning design system example javascript import react from react import reactdom from react dom import button from react lightning design system function click alert clicked reactdom render div button onclick click simple button button type neutral onclick click neutral button button type brand onclick click brand button button type neutral icon download iconalign left onclick click icon 1 button button type neutral disabled disabled neutral button button type brand disabled disabled brand button div document body see more examples in examples https github com mashmatrix react lightning design system tree master stories directory running example stories locally this repo ships with a react storybook based story scripts to run stories and get component examples follow these steps 1 run npm install 2 run npm run storybook 3 find the stories running on localhost 9001 http localhost 9001 snapshot testing in react storybook this repo ships with story snapshots to examine differences in rendering as a result of changes to source code to identify render differences run npm run test storyshots if all changes are intentional run npm run test storyshots u to learn about other run options including interactive mode read snapshot testing in react storybook https voice kadira io snapshot testing in react storybook 43b3b71cec4f
react salesforce salesforce-lightning
os
Embedded-System-Design
Embedded System Design (MCTE 4342): weekly examples, assignments, and the course project. Webinar link: https://youtu.be/vltfedre1ai
os
flight-pricing-project
this is the repo for the flight pricing project which is thoroughly explained in the following link https medium com victor regism why is my portfolio a medium article 96005c3128cb
cloud
Data_Engineer_Flights
data engineer flights slide1 https user images githubusercontent com 57310653 110224252 be69e980 7e8e 11eb 8f43 1d462ecc5288 png slide2 https user images githubusercontent com 57310653 110224254 c164da00 7e8e 11eb 9b57 867a26e20669 png slide3 https user images githubusercontent com 57310653 110224255 c1fd7080 7e8e 11eb 97bc 3a34dec183a4 png slide4 https user images githubusercontent com 57310653 110224256 c2960700 7e8e 11eb 8402 27572a7a75f0 png slide5 https user images githubusercontent com 57310653 110224258 c2960700 7e8e 11eb 9da2 123ef705c208 png slide6 https user images githubusercontent com 57310653 110224259 c32e9d80 7e8e 11eb 9bd7 7659c5370a74 png slide7 https user images githubusercontent com 57310653 110224262 c3c73400 7e8e 11eb 9de5 3d5c2618e467 png
server
database-crime-reports
building a database for crime reports in this guided project by dataquest data engineering track i built a database using postgresql for storing data related to crimes that occurred in boston dataquest provided the dataset boston csv for input my goal in this project is to create the database crime db with the table boston crimes create the table with the appropriate data types for storing the information from boston csv store the table inside the schema crimes create the user groups readonly and readwrite with appropriate privileges create the users data analyst and data scientist and assign to readonly and readwrite groups respectively verify if the privileges of user groups are set accordingly to accomplish my goals in this project i performed the following created the required database and schema after installing postgresql and psycopg2 module explored the column headings and content of boston csv to determine the appropriate data types created the required table using the appropriate data types loaded the data from boston csv into the table created the user group readonly which has the following privileges database connection schema usage and data selection from all tables in the schema created the user group readwrite which has similar privileges with readonly and capable of inserting deleting and updating the data in all tables in the schema created the requested users and assigned them to their respective user groups tested the database if correct objects were created and users groups have the right privileges at the end of the project i built my postgre database for the boston crime reports as illustrated below goal jpg for more information please see the project5 ipynb notebook and the boston csv file above
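The column-exploration step above — inspecting boston.csv to pick appropriate Postgres data types — can be sketched in plain Python. This is a simplified illustration (the real dataset also holds dates and coordinates, which this toy version would fall back to varchar for):

```python
def infer_pg_type(samples):
    """Suggest a Postgres column type from sample CSV values (simplified:
    all integers -> integer, all numeric -> decimal, otherwise a varchar
    sized to the longest sample seen)."""
    def is_int(s):
        try:
            int(s)
            return True
        except ValueError:
            return False

    def is_float(s):
        try:
            float(s)
            return True
        except ValueError:
            return False

    if all(is_int(s) for s in samples):
        return "integer"
    if all(is_float(s) for s in samples):
        return "decimal"
    return "varchar({})".format(max(len(s) for s in samples))
```

In practice the inferred varchar length would be padded with headroom, and low-cardinality text columns (such as offense codes) are candidates for an enumerated type instead.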
server
sbs-iot-data-generator
sample data generator for aws iot simple beer service this is the code repository for sample code to locally generate iot device data similar to what is generated by the aws simple beer service https github com awslabs simplebeerservice devices and feed it to aws iot service pre requisites amazon web services account aws command line interface cli https aws amazon com cli python boto3 script details the script generates random values within a reasonable range for each of the four parameters flow temperature humidity and sound you can tweak the values by changing the random randint min max values corresponding to each parameter the script is set to generate messages of each of the four types in a fixed percentage if you want more or less messages of a particular parameter you can change the values of rnd in the if else part of the code running example python sbs py run the script on amazon ec2 instance if for some reason you are unable to run this script on your local machine or prefer to host it externally you can run it from an amazon ec2 instance follow these steps 1 create an iam role with a policy that gives access to iot example awsiotfullaccess 2 launch a new ec2 instance and assign it the iot iam role at launch 3 login to the ec2 instance and change to root user sudo su 4 set your default region and output format in aws configure 5 upload sbs py file to ec2 or nano sbs py copy the entire script save and exit 6 make sure you have boto3 installed if not type pip install boto3 7 run python sbs py
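The message-generation logic described above can be sketched as follows. The sensor ranges and the percentage split are illustrative placeholders, not the exact values in sbs.py:

```python
import random

# illustrative sensor ranges -- not the exact bounds used in sbs.py
SENSOR_RANGES = {
    "flow": (0, 30),
    "temperature": (35, 45),
    "humidity": (20, 60),
    "sound": (100, 500),
}

def make_message(rnd=None):
    """Generate one random reading. rnd selects the message type in fixed
    percentages (40/20/20/20 here), mimicking the script's if/else split."""
    if rnd is None:
        rnd = random.randint(1, 100)
    if rnd <= 40:
        kind = "flow"
    elif rnd <= 60:
        kind = "temperature"
    elif rnd <= 80:
        kind = "humidity"
    else:
        kind = "sound"
    lo, hi = SENSOR_RANGES[kind]
    return {"type": kind, "value": random.randint(lo, hi)}
```

Adjusting the thresholds in the if/else ladder changes the mix of message types, just as the README describes for the real script.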
server
Introduction-to-database-engineering-course
introduction to database engineering course personal notes for the course introduction to database engineering
server
Sublime-Super-Snippets
a collection of front end developer related sublime text 2 snippets this is a very young repository and will be shaped into a well oiled collection in no time go nuts lots of credit goes to joshua hibbert for the base of the css snippets https github com joshnh css snippets
front_end
js-course-2018
js course 2018 masters academy front end nodejs courses project for season 2018 2019 terms and conditions front end nodejs masters academy 2018 2019 masters academy front end nodejs javascript javascript asap 99 utf 8 js 4 html 4 css scss 2 unix lf telegram project structure b homeworks b pull request b name surname githubusername b hometasks https github com mastersacademy js course 2018 issues useful links markdown cheatsheet https github com adam p markdown here wiki markdown cheatsheet markdown cheatsheet try github https try github io authors mentors of mastersacademy
javascript course mastersacademy nodejs angular css html masters-academy learning self-learning front-end
front_end
Linux-exercises
AltSchool exercises: my exercises on cloud engineering. Just grateful to be able to document my exercises as I transition into becoming a cloud engineer.
cloud
clickhouse-stocks-analytics
clickhouse stocks analytics the project for advanced databases class on system and software engineering master program at hse university project goal to demonstrate the possibilities of olap dbms clickhouse for storing and processing stock data of companies from s p 500 rating using grafana and android application group members andrey volkov asgar zagitov alina kolchanova liana batalova project description the project allows users to see stocks data from s p 500 rating in two formats grafana dashboard and android app the data is stored in dbms clickhouse components the project consists of several applications servers clickhouse server main server of clickhouse dbms clickhouse metrics exporter prometheus storage for clickhouse metrics grafana tool for visualization stocks time series loader back end application that saves data to clickhouse in real time reader back end application that executes queries in clickhouse by requests from android android app application that shows stats charts based on the result from reader data data s p 500 stock data kaggle link https www kaggle com kp4920 s p 500 stock data time series analysis data select all stocks 5yr csv applications functionalities loader 1 loads dataset of s p 500 stock data 2 inserts the data to clickhouse server reader 1 receives requests from android app 2 selects data from clickhouse 3 map data to special format 4 returns data to android app ui functionalities the user will be provided with 2 interfaces to access stocks data stored in clickhouse grafana android app the functionality of these interfaces will be the following 1 user selects the date range 2 user selects the companies one or more in the list of 500 companies 3 user discover the charts on open high low close prices for selected companies project planning 1 week prepare project infrastructure 2 weeks design clickhouse tables check different options and make the most effective design decision 1 week develop loader application 1 week develop reader 
application 1 week develop android application 1 week test all project components
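The reader's query-building step (open/high/low/close prices for selected companies over a date range) can be sketched as below. Table and column names are assumptions for illustration, and production code should prefer parameterized queries (e.g. via a ClickHouse client library) over string interpolation:

```python
def build_ohlc_query(symbols, start, end, table="stocks"):
    """Build the SELECT the reader service might run against ClickHouse
    for OHLC prices of the chosen companies over a date range.
    Table and column names are assumed for illustration only."""
    symbol_list = ", ".join("'{}'".format(s) for s in symbols)
    return (
        "SELECT date, symbol, open, high, low, close "
        "FROM {table} "
        "WHERE symbol IN ({symbols}) "
        "AND date BETWEEN '{start}' AND '{end}' "
        "ORDER BY date, symbol"
    ).format(table=table, symbols=symbol_list, start=start, end=end)

query = build_ohlc_query(["AAPL", "MSFT"], "2016-01-01", "2016-12-31")
```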
server
FreeRTOS-LTS
Overview

FreeRTOS offers feature stability with long term support (LTS) releases. FreeRTOS LTS libraries come with security updates and critical bug fixes to the FreeRTOS kernel and IoT libraries listed below for two years, and are maintained by AWS for the benefit of the FreeRTOS community. With FreeRTOS LTS, you get a complete set of libraries needed to build secure connected IoT and embedded products. Long term support helps reduce maintenance and testing costs associated with updating libraries on your devices already in production.

AWS also offers FreeRTOS Extended Maintenance Plan (EMP), which provides you with security patches and critical bug fixes on your chosen FreeRTOS LTS version for up to an additional 10 years. With FreeRTOS EMP, your FreeRTOS-based long-lived devices can rely on a version that has feature stability and receives security updates for years. You receive timely notification of upcoming patches on FreeRTOS libraries, so you can plan the deployment of security patches on your IoT devices. To learn more about FreeRTOS EMP, see the FreeRTOS features page: https://aws.amazon.com/freertos/features/

FreeRTOS LTS libraries

The FreeRTOS long term support libraries in this GitHub branch, also listed below, are part of the FreeRTOS 202210 LTS release (https://github.com/freertos/freertos-lts/tree/202210-lts). Learn more at https://freertos.org/lts-libraries.html

| Library | Version | LTS Until | LTS Repo URL |
| --- | --- | --- | --- |
| FreeRTOS Kernel | 10.5.1 | 10/31/2024 | https://github.com/freertos/freertos-kernel/tree/v10.5.1 |
| FreeRTOS-Plus-TCP | 3.1.0 | 10/31/2024 | https://github.com/freertos/freertos-plus-tcp/tree/v3.1.0 |
| coreMQTT | 2.1.1 | 10/31/2024 | https://github.com/freertos/coremqtt/tree/v2.1.1 |
| coreHTTP | 3.0.0 | 10/31/2024 | https://github.com/freertos/corehttp/tree/v3.0.0 |
| corePKCS11 | 3.5.0 | 10/31/2024 | https://github.com/freertos/corepkcs11/tree/v3.5.0 |
| coreJSON | 3.2.0 | 10/31/2024 | https://github.com/freertos/corejson/tree/v3.2.0 |
| coreSNTP | 1.2.0 | 10/31/2024 | https://github.com/freertos/coresntp/tree/v1.2.0 |
| Cellular Interface | 1.3.0 | 10/31/2024 | https://github.com/freertos/freertos-cellular-interface/tree/v1.3.0 |
| backoffAlgorithm | 1.3.0 | 10/31/2024 | https://github.com/freertos/backoffalgorithm/tree/v1.3.0 |
| SigV4 | 1.2.0 | 10/31/2024 | https://github.com/aws/sigv4-for-aws-iot-embedded-sdk/tree/v1.2.0 |
| AWS IoT Device Shadow | 1.3.0 | 10/31/2024 | https://github.com/aws/device-shadow-for-aws-iot-embedded-sdk/tree/v1.3.0 |
| AWS IoT Device Defender | 1.3.0 | 10/31/2024 | https://github.com/aws/device-defender-for-aws-iot-embedded-sdk/tree/v1.3.0 |
| AWS IoT Jobs | 1.3.0 | 10/31/2024 | https://github.com/aws/jobs-for-aws-iot-embedded-sdk/tree/v1.3.0 |
| AWS IoT Fleet Provisioning | 1.1.0 | 10/31/2024 | https://github.com/aws/fleet-provisioning-for-aws-iot-embedded-sdk/tree/v1.1.0 |
| AWS IoT Over-the-air Update | 3.4.0 | 10/31/2024 | https://github.com/aws/ota-for-aws-iot-embedded-sdk/tree/v3.4.0 |

Upgrading to FreeRTOS 202210 LTS

To upgrade to FreeRTOS 202210 LTS from a previous version of FreeRTOS LTS, refer to https://freertos.org/lts-libraries.html

FreeRTOS LTS versioning and patches

FreeRTOS LTS releases use a date-based versioning scheme (YYYYMM) followed by a sequential patch number (XX). For example, FreeRTOS 202210.01 LTS means the first patch to the October 2022 FreeRTOS LTS release. You can review the changelog (CHANGELOG.md) and subscribe to GitHub notifications (https://docs.github.com/en/free-pro-team@latest/github/managing-subscriptions-and-notifications-on-github/about-notifications) to receive information on patches or other updates to this repository.

Security

See CONTRIBUTING.md (security issue notifications) for more information.

License

This library is licensed under the MIT License. See the LICENSE.md file.
os
eznlp
eznlp easy access to text summarization sentiment analysis subject classification semantic search and question answering includes a get text method that extracts raw text from any url or file pdf docx html etc these capabilities are built using the excellent machine learning library ktrain https github com amaiya ktrain which provides clean interfaces to a number of pretrained models bert nbsvm fasttext lda etc https github com amaiya ktrain overview text extraction is performed via textract https textract readthedocs io en stable installation pip install git https github com dpinney eznlp please note that this module requires multiple large machine learning libraries and pre trained models full installation size is multiple gigabytes usage examples python import eznlp get the text of a test document demo doc eznlp get text https drive google com uc export download id 13dd5nwdvdzrsf01d8g tzzhh32ewz rc is url true demo doc guilty pleas victim impact statements could have slew of implications for pg e summarizing the document eznlp summarize demo doc judge imposes roughly 3 5 million fine on pacific gas electric pg e has pleaded guilty to 84 counts of involuntary manslaughter caused by the camp fire victim impact statements criticized pg e s maintenance of its power system the guilty pleas could have a slew of implications for the company and state stakeholders say sentiment analyis is this a positive or negative article eznlp sentiment demo doc negative 0 6047303676605225 positive 0 09912186115980148 test whether the document is about the given subjects eznlp subjects demo doc wildfires energy bacon pge wildfires 0 9944069385528564 energy 0 8900003433227539 bacon 0 012442146427929401 pge 0 9794674515724182 extract the named entities in the document along with their type eznlp named entities demo doc butte county misc palermo org pg e corp org gather a folder full of documents and make a semantic search index eznlp get sample data qae eznlp search make index en docs 
en docs index semantic answering via the indexed documents search qae energy storage deployment answer global energy storage deployment to increase 122x over the next two decades confidence 0 3261045284748744 context reference 134 answer the revisions to the iso market rule will become effective april 1 and will allow storage to be dispatched into real time energy markets confidence 0 30367338538738653 context reference 131 answer the los angeles department of water and power ladwp is preparing a potentially world record setting power purchase agreement ppa for solar storage confidence 0 011269367223707761 context reference 202 future work improved ci testing better requirement version control raw text synthesis tried huggingface gpt2 and xlnet both have mediocre results will have to rely on separate gpt3 library https news ycombinator com item id 25819803 for this semantic answering via google google has a very good question answering model trained on the entire internet however api access to google is very tricky there are some experiments in the code using serpapi https stackoverflow com questions 54162249 is there a google api for people also ask which is expensive at 50 month the google custom search api https stackoverflow com a 49122258 7447778 which is free but doesn t have access to the semantic answer material and selenium which works great and is free but will require a lot of careful parsing
ai
VTVL
vtvl stm32 and rtos based vertical landing rocket control system this repository is a development archive for an rtos based vertical landing rocket control system estimator library https github com ibrahimcahit vtvl tree main estimator 20library filter library https github com ibrahimcahit vtvl tree main filter 20library matlab exports https github com ibrahimcahit vtvl tree main matlab 20exports stm32cubeide projects https github com ibrahimcahit vtvl tree main stm32cubeide 20projects sensor library https github com ibrahimcahit vtvl tree main sensor 20library https media wired com photos 5a7cb68fa2d3835392e1b469 4 3 w 2133 h 1600 c limit spacexrocketreturn jpg
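the estimator and filter libraries in this repo are c code targeting stm32 purely to illustrate the kind of state estimation such a controller relies on here is a minimal scalar kalman filter sketch in python all constants are made up for the example and nothing below is taken from the repo s code

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter: estimate a slowly varying quantity
    (e.g. altitude) from noisy measurements.  Illustrative only; the
    VTVL repo's estimator/filter libraries are C code for STM32."""

    def __init__(self, x0, p0, q, r):
        self.x = x0   # state estimate
        self.p = p0   # estimate variance
        self.q = q    # process noise variance (made-up constant)
        self.r = r    # measurement noise variance (made-up constant)

    def update(self, z):
        self.p += self.q                 # predict: uncertainty grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct toward measurement
        self.p *= (1.0 - k)              # uncertainty shrinks
        return self.x

# Noisy altitude readings around 10 m; the estimate converges toward them.
kf = ScalarKalman(x0=0.0, p0=1.0, q=1e-4, r=0.25)
for z in [10.2, 9.8, 10.1, 9.9, 10.0]:
    est = kf.update(z)
print(round(est, 2))
```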
os
rosgpt
rosgpt chatgpt interface for ros2 for human robot interaction rosgpt is a pioneering approach that combines the power of chatgpt and ros robot operating system to redefine human robot interaction by leveraging large language models like chatgpt rosgpt enables the conversion of unstructured human language into actionable robotic commands this repository contains the implementation of rosgpt allowing developers to explore and contribute to the project reference paper doi https img shields io badge doi 10 20944 2fpreprints202304 0827 v2 blue https www preprints org manuscript 202304 0827 v2 author anis koubaa citation koubaa a 2023 rosgpt next generation human robot interaction with chatgpt and ros preprints org 2023 2023040827 https www preprints org manuscript 202304 0827 v2 bibtex citation bibtex article koubaa2023rosgpt title rosgpt next generation human robot interaction with chatgpt and ros author koubaa anis journal preprints org year 2023 volume 2023 pages 2023040827 doi 10 20944 preprints202304 0827 v2 video demo explore rosgpt in action with this video demonstration showcasing the process of getting started and the capabilities of the system rosgpt video demonstration https img youtube com vi urkqd hb5hg 0 jpg https www youtube com watch v urkqd hb5hg rosgpt ros2 package description the rosgpt ros2 package includes a collection of scripts that work together to provide a convenient way of translating natural human language text into structured json commands which can be utilized by robots like turtlesim and turtlebot3 below is a brief overview of each script rosgpt py this script creates the rosgpt node which is a ros2 node with a rest server that takes in post requests containing natural human language text it then translates the text into structured json commands via an api call to chatgpt the script also defines an ontology based prompt that helps chatgpt convert human commands into json commands the rosgpt node publishes the json command on the voice cmd 
topic rosgpt client node py this script establishes a ros2 client node that sends post requests with natural human language text to the rosgpt rest server it waits for the structured json commands and displays them upon receipt use the ros2 run command to execute this node rosgpt client py similar to rosgpt client node py this script sends post requests with natural human language text to the rosgpt rest server but without implementing a ros2 node it solely functions as a rest client for rosgpt use the python command not ros2 run to execute this script rosgptparser turtlesim py this script implements the rosgptparser which subscribes to the voice cmd topic and receives json commands the node parses the json command and determines the ros2 primitives required to execute the specified tasks in this script a simple navigation task for the turtlesim robot is considered including move and rotate functions rosgptparser tb3 nav py this script also implements the rosgptparser subscribing to the voice cmd topic and receiving json commands the json commands are parsed and transformed into navigation goal tasks for the turtlebot3 robot getting started to get started with rosgpt follow these steps 1 clone the repository to your local machine 2 install the dependencies listed in the environment setup section 3 after following the environment setup steps run the rosgpt flask server using bash ros2 run rosgpt rosgpt 4 run the turtlesim node using bash ros2 run turtlesim turtlesim node 5 run the rosgptparser turtlesim py using bash ros2 run rosgpt rosgptparser turtlesim 6 run the rosgpt client node py using bash ros2 run rosgpt rosgpt client node 7 now you can start giving commands to the robot using the rosgpt client node terminal for example you can say i want you to move forward 1 meter with speed 1 and the robot will move forward 1 meter with speed 1 environment setup this ros 2 package was tested using ros 2 humble with ubuntu 22 04 it should also work with ros 2 foxy and other
ros 2 versions you need to install the following dependencies add your openai api key in your bashrc as an environment variable bash echo export openai api key your api key bashrc install the dependencies required for the text to speech functionality bash sudo apt get install libespeak1 sudo apt install ros humble turtlesim for ros2 humble version ubuntu 22 04 downgrading the setuptools is required bash pip3 install upgrade setuptools 58 0 2 install python dependencies bash cd rosgpt pip3 install r requirements txt then build the package bash colcon build packages select rosgpt source the workspace bash source install setup bash to get started with rosgpt go to this section getting started rosgpt rest api the rosgpt rest api is a convenient way of interacting with rosgpt it allows you to send post requests with natural human language text to the rosgpt server which will then translate the text into structured json commands the json commands can be used to control robots like turtlesim and turtlebot3 to use the rosgpt rest api follow these steps 1 run the rosgpt flask server using ros2 run rosgpt rosgpt 2 run the turtlesim node using ros2 run turtlesim turtlesim node 3 run the rosgptparser turtlesim py using ros2 run rosgpt rosgptparser turtlesim 4 run the rosgpt client py using python rosgpt client py 5 send a post request to the rosgpt server using curl x post h content type application json d text move forward http localhost 5000 rosgpt you can replace move forward with any natural human language text you want the rosgpt server will translate the text into structured json commands and send them back to the client 6 the client will display the json commands on the terminal you can use these commands to control the turtlesim robot ros1 support ros1 is an earlier version of the robot operating system ros which is still widely used in many robotics applications while rosgpt was originally developed for ros 2 humble on ubuntu 22 04 we recognize the
importance of supporting ros1 as well to use rosgpt with ros1 you will need to modify the ros 2 code in the scripts to the corresponding ros 1 code we are actively working on developing this functionality but it is still a work in progress if you have already developed an extension to enable rosgpt to work with ros1 we would love to hear from you please create a pull request in a new branch and we will review it for inclusion in the rosgpt repository license this project is licensed under the creative commons attribution noncommercial 4 0 international license you are free to use share and adapt this material for non commercial purposes as long as you provide attribution to the original author s and the source contribute as this project is still under progress contributions are welcome to contribute please follow these steps 1 fork the repository on github 2 create a new branch for your feature or bugfix 3 commit your changes and push them to your fork 4 create a pull request to the main repository before submitting your pull request please ensure that your changes do not break the build and adhere to the project s coding style for any questions or suggestions please open an issue on the github issue tracker https github com aniskoubaa rosgpt issues
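the curl example in the rest api section implies the server expects a post to localhost 5000 rosgpt with a json body containing a text field a minimal python client sketch under that assumption using only the standard library the actual request of course requires the rosgpt flask server from step 1 to be running

```python
import json
from urllib import request

# Default endpoint from the steps above; field name "text" is
# inferred from the curl example, not confirmed beyond it.
ROSGPT_URL = "http://localhost:5000/rosgpt"

def build_payload(text):
    """Wrap a natural-language command as the JSON body the curl
    example sends: {"text": "..."}."""
    return json.dumps({"text": text}).encode("utf-8")

def send_command(text, url=ROSGPT_URL):
    """POST the command and return the server's JSON reply (the
    reply schema is defined by the rosgpt server)."""
    req = request.Request(
        url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    try:
        print(send_command("Move forward 1 meter with speed 1"))
    except OSError:
        print("rosgpt server not running on localhost:5000")
```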
ai
machinehearing
machine hearing machine hearing or machine listening is the use of machine learning and audio sensors to derive meaningful information from sound this includes listening for and diagnosing problems in machinery understanding events and activities that cause noise and estimation of how humans perceive certain sounds here you can find some notes on the topic compiled by jon nordby http jonnor com soundsensing logo img soundsensing banner png this research is sponsored by soundsensing https soundsensing no a provider of iot audio sensors with built in machine learning used for noise monitoring and condition monitoring the sensors are ideal for continuous monitoring of audible noises and events and can perform tasks such as audio classification audio event detection and acoustic anomaly detection their sensors can transmit compressed and privacy preserving spectrograms allowing machine learning to be done in the cloud using familiar tools like python or models can be deployed onto the sensor itself for a highly efficient on edge ml solution pages some information is found in sub pages audio quality audio quality recent work europython 2021 sound event detection with machine learning a href https youtu be jrhsffcol s img src https github com jonnor machinehearing raw master europython2021 cover jpg height 200 alt youtube sound event detection with machine learning europython 2021 a july 26 2021 presented at europython 2021 https ep2021 europython eu video recording https youtu be jrhsffcol s slides https jonnor github io machinehearing europython2021 slides html notes europython2021 todo add icsv27 presentation tinyml emea 2021 perfect coffee roasting with tinyml sound sensing a href https youtu be muzy1ecke40 img src https github com jonnor machinehearing raw master tinymlemea2021 cover jpg height 200 alt perfect coffee roasting with tinyml sound sensing a june 7 2021 presented at tinyml emea technical forum 2021 https www tinyml org event emea 2021 video recording
coming slides https jonnor github io machinehearing tinymlemea2021 slides html notes tinymlemea2021 tinyml summit 2021 environmental sound classification on microcontrollers a href https www youtube com watch v carhrotq5ha t 0s img src https github com jonnor machinehearing raw master tinyml2021 cover jpg height 200 alt environmental sound classification on microcontrollers a march 25 2021 video recording https www youtube com watch v carhrotq5ha t 0s slides https jonnor github io machinehearing tinyml2021 slides html notes tinyml2021 classifying sound using machine learning a href https www youtube com watch v 1h63pewtdbo img src https github com jonnor machinehearing raw master knowit2020 video png height 200 alt youtube classifying sound using machine learning a at knowit oslo 2020 video recording https www youtube com watch v 1h63pewtdbo slides https jonnor github io machinehearing knowit2020 slides html notes knowit2020 environmental sound classification on microcontrollers using convolutional neural networks a href https github com jonnor esc cnn microcontroller img src https github com jonnor esc cnn microcontroller raw master report img frontpage png height 200 alt github jonnor esc cnn microcontroller a master thesis report and code available in the github repository https github com jonnor esc cnn microcontroller europython2019 audio classification using machine learning a href https www youtube com watch v ucgroouo wy img src https github com jonnor machinehearing raw master europython2019 video png height 200 alt youtube audio classification using machine learning by jon nordby europython 2019 a presentation at europython2019 video recording https www youtube com watch v ucgroouo wy notes europython2019 pycode2019 recognizing sounds with machine learning and python a href https jonnor github io machinehearing pycode2019 slides html img src https github com jonnor machinehearing raw master pycode2019 slides png height 200 alt slides a a href https youtu 
be 2fmmessd2cm t 8470 img src https github com jonnor machinehearing raw master europython2019 video png height 200 alt youtube audio classification using machine learning by jon nordby europython 2019 a presentation at pycode conference 2019 in gdansk slides https jonnor github io machinehearing pycode2019 slides html notes pycode2019 video recording coming maybe in november sensecamp2019 classification of environmental sound using iot sensors a href https jonnor github io machinehearing sensecamp2019 slides html img src https github com jonnor machinehearing raw master sensecamp2019 slides png height 200 alt slides a presentation at sensecamp 2019 hosted by force technology senselab slides web https jonnor github io machinehearing sensecamp2019 slides html pdf https github com jonnor machinehearing raw master sensecamp2019 slides pdf nmbu lecture on audio classification report and lecture at nmbu https nmbu no data science report https github com jonnor datascience master raw master dat390 merged pdf slides https jonnor github io datascience master dat390 slides html stack overflow answers with example code in python loading youtube audio data with youtube dl and librosa https stackoverflow com a 57832701 1967571 extracting fixed size analysis windows from audio https stackoverflow com a 54326750 1967571 classifying an audio clip of many analysis windows using keras timedistributed and globalaveragepooling https stackoverflow com a 55286629 1967571 classifying an audio clip by voting over analysis windows https stackoverflow com a 55267520 1967571 mean majority voting annotating labeling audio data using audacity https datascience stackexchange com a 56372 54096 preprocessing audio into mel spectrograms https stats stackexchange com a 403051 201327 multi core preprocessing of audio files using joblib https stackoverflow com a 55680757 1967571 compute mfcc or mel spectrogram from existing stft spectrograms https stackoverflow com a 57833078 1967571 converting mel 
spectrograms into png images https stackoverflow com a 57204349 1967571 converting mel spectrogram or mfcc back to audio waveform using librosa https stackoverflow com a 57323359 1967571 https stackoverflow com questions 57443870 stream binary audio data from http request for librosa analysis 57672134 57672134 streaming audio from http to for audio classification supports real time streaming chunk download audio from youtube benefit can classify in parallel can get a particular location in time https unix stackexchange com questions 230481 how to download portion of video with youtube dl command notes rough notes on various topics applications applications md practical applications of machine hearing tasks tasks md established problem formulations audio quality audio quality metrics for measuring audio quality explainable models for audio explainable features features md feature representations preprocessing preprocessing md preprocessing techniques dcase2018 dcase2018 md notes from dcase2018 challenge and conference commercial solutions commercial md companies and products in machine hearing speech speech md speech specific techniques and applications music music md music specific techniques and applications compressive sensing compressive sensing md resources useful resources to learn more presentations audio event detection w deep learning https www youtube com watch v 9x66iweqsyi by robert coop ph d head of ai and ml stanley b d from data science connect 2028 books computational analysis of sound scenes and events tuomas virtanen mark d plumbley dan ellis 2018 human and machine hearing extracting meaning from sound richard f lyon 2017 revised 2018 an introduction to audio content analysis applications in signal processing and music informatics alexander lerch 2012 companion website https www audiocontentanalysis org machine learning for audio image and video analysis theory and applications advanced information and knowledge processing francesco camastra 3 
sections from perception to computation machine learning applications online courses csc 83060 speech and audio understanding http mr pc org t csc83060 brooklyn college cuny deep learning for audio with python https www youtube com playlist list pl watfeyamnrtbkcnslcpoaybbrjzvlnf by valerio velardo pytorch for audio music processing https www youtube com playlist list pl watfeyamnoirn4idjev6aru8iszyvwm by valerio velardo software feature extraction librosa http librosa github io the go to python module essentia https essentia upf edu c library with python bindings lots of music analysis extractors used by freesound and acousticbrainz kapre https github com keunwoochoi kapre on demand gpu computation of melspectrograms for keras torchaudio https pytorch org audio stable index html audio processing in pytorch data augmentation muda python library for augmenting annotated audio data https github com bmcfee muda audiomentations https github com iver56 audiomentations scaper https github com justinsalamon scaper soundscape synthesis tool with automatic label handling lecture notes audio classification http www cs tut fi sgn24006 pdf l04 audio classification pdf covers low level features mfcc classification by distance metrics gmm hmm speech signal analysis lecture 2 https www inf ed ac uk teaching courses asr 2016 17 asr02 signal handout pdf january 2017 hiroshi shimodaira and steve renals great diagrams of audio discretization mel filters wide versus narrow band spectrograms competitions kaggle whale detection kaggle freesound tagging 2018 kaggle freesound dcase2014 dcase2018 dcase2019 dcase2020 dcase2021 datasets audio cats and dogs kaggle https www kaggle com mmoreaux audio cats and dogs heartbeat anomalies kaggle https www kaggle com kinguistics heartbeat sounds respiratory sounds kaggle https www kaggle com vbookshelf respiratory sound database online communities https mircommunity slack com music information retrieval the sound of ai https valeriovelardo com the
sound of ai community slack community lists awesome deep learning music https github com ybayle awesome deep learning music fast ai forums deep learning with audio https forums fast ai t deep learning with audio thread 38123 large lists of resources both in first post and popular links feb 2019 315 replies over 4 months
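two of the stack overflow recipes linked above extracting fixed size analysis windows from audio and classifying a clip by majority voting over windows can be sketched in plain python the framing convention below drop trailing samples that do not fill a whole frame is the usual one but the function is illustrative not code from the linked answers

```python
from collections import Counter

def analysis_windows(samples, frame_length, hop_length):
    """Split a 1-D sample sequence into fixed-size, possibly
    overlapping windows.  Trailing samples that do not fill a whole
    frame are dropped, mirroring the common framing convention."""
    return [
        samples[start:start + frame_length]
        for start in range(0, len(samples) - frame_length + 1, hop_length)
    ]

def majority_vote(window_labels):
    """Clip-level label = most common per-window prediction."""
    return Counter(window_labels).most_common(1)[0][0]

# 10 samples, frames of 4 with hop 2 -> windows start at 0, 2, 4, 6.
frames = analysis_windows(list(range(10)), frame_length=4, hop_length=2)
print(len(frames))                            # 4
print(majority_vote(["dog", "cat", "dog"]))   # dog
```

in a real pipeline each window would be turned into a feature such as a mel spectrogram and classified separately before the vote.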
machine-learning audio-analysis audio-processing notes audio-classsification
ai
sdsmdg.github.io
sdsmdg github io jekyll powered blog for sds mobile development group
front_end
cheatsheet-translation
translation of vip cheatsheets goal this repository aims at collaboratively translating our machine learning https github com afshinea stanford cs 229 machine learning deep learning https github com afshinea stanford cs 230 deep learning and artificial intelligence https github com afshinea stanford cs 221 artificial intelligence cheatsheets into a ton of languages so that this content can be enjoyed by anyone from any part of the world contribution guidelines the translation process of each cheatsheet contains two steps the translation step where contributors follow a template of items to translate the review step where contributors go through each expression translated by their peers on top of which they add their suggestions and remarks translators 0 check for existing pull requests https github com shervinea cheatsheet translation pulls to see which cheatsheet is yet to be translated 1 fork the repository 2 copy the template https github com shervinea cheatsheet translation tree master template of the cheatsheet you wish to translate into the language folder with a naming that follows the iso 639 1 notation https www loc gov standards iso639 2 php code list php e g es for spanish zh for mandarin chinese 3 translate sentences by keeping the following structure 34 english blabla &#10230; translated blabla 4 commit the changes to your forked repository 5 submit a pull request https help github com articles creating a pull request and call it language code file name for example the pr related to the translation in spanish of the template cs 229 deep learning md cheatsheet will be entitled es cs 229 deep learning reviewers 1 go to the list of pull requests https github com shervinea cheatsheet translation pulls and filter them by your native language 2 locate pull requests where help is needed those contain the tag reviewer wanted 3 review the content line per line and add comments and suggestions when necessary important note please make sure to propose the
translation of only one cheatsheet per pull request it simplifies a lot the review process progression cs 221 artificial intelligence reflex models https github com shervinea cheatsheet translation blob master template cs 221 reflex models md states models https github com shervinea cheatsheet translation blob master template cs 221 states models md variables models https github com shervinea cheatsheet translation blob master template cs 221 variables models md logic models https github com shervinea cheatsheet translation blob master template cs 221 logic models md deutsch not started not started not started not started espa ol not started not started not started not started in progress https github com shervinea cheatsheet translation pull 200 not started not started not started fran ais done done done done not started not started not started not started italiano not started not started not started not started not started not started not started not started not started not started not started not started portugu s not started not started not started not started in progress https github com shervinea cheatsheet translation pull 218 in progress https github com shervinea cheatsheet translation pull 219 in progress https github com shervinea cheatsheet translation pull 220 in progress https github com shervinea cheatsheet translation pull 217 t rk e done done done done ti ng vi t not started not started not started done not started not started not started not started not started not started not started not started cs 229 machine learning deep learning https github com shervinea cheatsheet translation blob master template cs 229 deep learning md supervised https github com shervinea cheatsheet translation blob master template cs 229 supervised learning md unsupervised https github com shervinea cheatsheet translation blob master template cs 229 unsupervised learning md ml tips https github com shervinea cheatsheet translation blob master template cs 229 machine 
learning tips and tricks md probabilities https github com shervinea cheatsheet translation blob master template cs 229 probability md algebra https github com shervinea cheatsheet translation blob master template cs 229 linear algebra md done done done done done done catal not started not started not started in progress https github com shervinea cheatsheet translation pull 47 in progress https github com shervinea cheatsheet translation pull 47 in progress https github com shervinea cheatsheet translation pull 47 deutsch done not started not started in progress https github com shervinea cheatsheet translation pull 135 not started in progress https github com shervinea cheatsheet translation pull 136 not started not started not started in progress https github com shervinea cheatsheet translation pull 209 not started not started espa ol done done done done done done eesti not started not started not started done not started not started done done done done done done suomi in progress https github com shervinea cheatsheet translation pull 34 not started not started not started not started not started fran ais done done done done done done in progress https github com shervinea cheatsheet translation pull 156 not started not started not started not started not started in progress https github com shervinea cheatsheet translation pull 37 in progress https github com shervinea cheatsheet translation pull 46 not started in progress https github com shervinea cheatsheet translation pull 40 not started not started magyar in progress https github com shervinea cheatsheet translation pull 124 in progress https github com shervinea cheatsheet translation pull 124 in progress https github com shervinea cheatsheet translation pull 124 in progress https github com shervinea cheatsheet translation pull 124 in progress https github com shervinea cheatsheet translation pull 124 in progress https github com shervinea cheatsheet translation pull 124 bahasa indonesia in progress 
https github com shervinea cheatsheet translation pull 154 not started in progress https github com shervinea cheatsheet translation pull 139 not started done done italiano in progress https github com shervinea cheatsheet translation pull 78 in progress https github com shervinea cheatsheet translation pull 207 not started not started done done done done done done done done done done done done done done polski in progress https github com shervinea cheatsheet translation pull 8 in progress https github com shervinea cheatsheet translation pull 8 not started in progress https github com shervinea cheatsheet translation pull 8 in progress https github com shervinea cheatsheet translation pull 208 not started portugu s done done done done done done in progress https github com shervinea cheatsheet translation pull 221 in progress https github com shervinea cheatsheet translation pull 225 in progress https github com shervinea cheatsheet translation pull 226 in progress https github com shervinea cheatsheet translation pull 223 in progress https github com shervinea cheatsheet translation pull 224 in progress https github com shervinea cheatsheet translation pull 222 t rk e done done done done done done not started not started not started not started done in progress https github com shervinea cheatsheet translation pull 95 ti ng vi t done done done done done done in progress https github com shervinea cheatsheet translation pull 12 done in progress https github com shervinea cheatsheet translation pull 48 in progress https github com shervinea cheatsheet translation pull 7 in progress https github com shervinea cheatsheet translation pull 73 in progress https github com shervinea cheatsheet translation pull 72 done done done done done done cs 230 deep learning convolutional neural networks https github com shervinea cheatsheet translation blob master template cs 230 convolutional neural networks md recurrent neural networks https github com shervinea cheatsheet 
translation blob master template cs 230 recurrent neural networks md deep learning tips https github com shervinea cheatsheet translation blob master template cs 230 deep learning tips and tricks md not started not started not started catal not started not started not started deutsch not started not started not started espa ol not started not started in progress https github com shervinea cheatsheet translation pull 210 done done done suomi not started not started not started fran ais done done done not started not started not started not started not started not started magyar not started not started not started bahasa indonesia done in progress https github com shervinea cheatsheet translation pull 152 in progress https github com shervinea cheatsheet translation pull 153 italiano not started not started not started done done done done in progress https github com shervinea cheatsheet translation pull 107 in progress https github com shervinea cheatsheet translation pull 108 polski not started not started not started portugu s done not started not started in progress https github com shervinea cheatsheet translation pull 227 in progress https github com shervinea cheatsheet translation pull 229 in progress https github com shervinea cheatsheet translation pull 228 t rk e done done done not started not started not started ti ng vi t done done done in progress https github com shervinea cheatsheet translation pull 212 in progress https github com shervinea cheatsheet translation pull 181 not started done not started not started acknowledgements thank you everyone for your help please do not forget to add your name to the contributors file so that we can give you proper credit in the cheatsheets official website https stanford edu shervine teaching
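the templates pair each numbered item like 34 english blabla with a translated line marked by the &#10230; arrow entity a small illustrative checker not part of the repo s tooling can verify that a translated file kept that one to one structure

```python
import re

ARROW = "&#10230;"  # arrow entity used in the cheatsheet templates

def check_translation(markdown_text):
    """Return (n_items, n_arrows, ok): every numbered item like
    '**34. ...**' should be paired with exactly one line marked by
    the &#10230; entity.  Illustrative helper, not repo tooling."""
    n_items = len(re.findall(r"\*\*\d+\.", markdown_text))
    n_arrows = markdown_text.count(ARROW)
    return n_items, n_arrows, n_items == n_arrows

sample = "**1. Hello**\n\n&#10230; Bonjour\n\n**2. World**\n\n&#10230; Monde\n"
print(check_translation(sample))  # (2, 2, True)
```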
ai
antivirus_demo
antivirus demo overview this project helps train a classifier to be able to detect pe files https en wikipedia org wiki portable executable as either malicious or legitimate it tries out 6 different classification algorithms before deciding which one to use for prediction by comparing their results this is the code for build an antivirus in 5 min on youtube https youtu be ilnhvwsu9ea dependencies pandas pip install pandas numpy pip install numpy pickle pip install pickle scipy pip install scipy scikit pip install u scikit learn use pip https pypi python org pypi pip to install any missing dependencies basic usage 1 run python learning py to train the model it will train on the dataset included called data csv 2 once trained you can test the model via python checkpe py your pe file it will output either malicious or legitimate that s it credits credit for the vast majority of code here goes to te k https github com te k i ve merely created a wrapper around all of the important functions to get people started
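learning py trains several classifiers and keeps the one with the best results a hedged sketch of that compare and select step the best model helper is pure python while the comparison loop is illustrative it assumes scikit learn and a synthetic feature matrix not the repo s actual data csv pipeline or its six specific algorithms

```python
def best_model(scores):
    """Given {model_name: accuracy}, return the winning name --
    the 'compare results, keep the best' step described above."""
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Illustrative comparison loop; requires scikit-learn and stands
    # in for features parsed from data.csv in the real project.
    try:
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=500, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        models = {
            "DecisionTree": DecisionTreeClassifier(random_state=0),
            "RandomForest": RandomForestClassifier(random_state=0),
        }
        scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
                  for name, m in models.items()}
        print(scores, "->", best_model(scores))
    except ImportError:
        print("scikit-learn not installed; see the dependencies above")
```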
ai
translate-shell
translate-shell

[CI, pre-commit.ci, codecov, readthedocs, deepsource, downloads, issues/PRs, release, license, language, and PyPI status badges]

Translate text by:

- online translators: google, bing, youdaozhiyun, haici
- offline dictionaries: stardict
- LLM: openai, llama (use your local model)

Supports:

- CLI, GUI (GNU/Linux, android, macOS, windows), REPL
- script: python, shell, vim

Deprecation: the vim port will be replaced by a language server.

Usage

UI

CLI:

    trans --translators=google,bing,haici,stardict crush

[cli screenshot]

REPL: an interactive console with commands to change the source and target languages (e.g. english to japanese), swap source and target languages, use stardict to translate text, execute a shell command, translate a file, and enter/exit a subshell (e.g. /usr/bin/zsh). [repl screenshot]

TUI: vim: translate "free as in freedom" with the google and bing translators. [vim screenshot]

GUI: GNU/Linux [screenshot]; android (toast) [screenshot].

Script

python:

    >>> from translate_shell.translate import translate
    >>> translate("the mythical man month", "zh_TW")
    Translations(status=1, results=[...], text="the mythical man month", to_lang="zh_TW", from_lang="auto")

Each result carries translator, sl, tl, text, phonetic, paraphrase, explains, details, and alternatives fields.

shell script:

    xsel -o | trans --format json | jq -r '.results[].paraphrase'

vim script: json-decode the output of translate-shell called with json format and read results[0].paraphrase, e.g. to echo that "just for fun" is (its paraphrase) in chinese.

Language server

- [x] document hover: display translated results
- [x] completions: complete translated words

CI/CD: GitHub Action

This repo provides an action to translate the po of a repository (see the inputs in action.yml: https://github.com/freed-wu/translate-shell/blob/main/action.yml). For example, if you have a repository which contains translations of another project's documents (upstream), you can write a GitHub workflow to detect if upstream has an update; if a new version exists, update the version, generate new po files (https://www.gnu.org/software/gettext/manual/gettext.html#PO-Files), then translate the changed po's and git commit. Example: tmux-zh (https://github.com/freed-wu/tmux-zh/blob/main/.github/workflows/version.yml). That workflow runs on a weekly schedule (cron "0 0 * * 5", i.e. 0:00 on friday) or workflow_dispatch; its job checks out the repo (actions/checkout@v3), bumps the version and generates new po files, translates the po's with the freed-wu/translate-shell action, then git commits, tags, and pushes using a token.

You can use the following commands to get the new version:

    # get a github repo's version
    curl https://api.github.com/repos/user/repo/releases/latest | jq -r .tag_name
    # get a gitlab repo's version
    curl 'https://gitlab.com/api/v4/projects/41218592/repository/tags?per_page=1' | jq -r '.[].name'

You can use the following tools to generate the new po's:

- sphinx-intl (https://sphinx-intl.readthedocs.io): generate po for any project using sphinx to generate documents
- po4a (https://po4a.org): generate po for any project which uses markdown/latex/man to write documents

Similar projects: see the comparison (https://translate-shell.readthedocs.io/en/latest/resources/translator.html).

Features:

- translate with different translators at the same time, like translator (https://github.com/skywind3000/translator)
- translate clipboard contents automatically, like ydcv (https://github.com/felixonmars/ydcv)
- speak the pronunciation of words
- support online translate engines
- support offline dictionaries
- many methods to use: from shell, python, and vim
- magic text to change the source language (en), change the target language (zh_CN), translate a file, etc.
- allow customization by config.py
- good shell completions, especially for zsh (https://github.com/zsh-users/zsh): complete options and translation history
- manpage: man trans
- beautiful UI; cross platform
- rich API, can be easily called from shell and python
- good documentation; unit tests keep code quality; CI/CD; clean code
- respect PEP 484 (https://peps.python.org/pep-0484), PEP 621 (https://peps.python.org/pep-0621), and XDG (https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html)
- last but not least, it is libre software

See the documentation (https://translate-shell.readthedocs.io) to know more.

PS: PRs are welcome; please keep the code clean and the tests passing.
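The shell-script usage above pipes `trans --format json` through `jq -r '.results[].paraphrase'`. The same extraction can be sketched in stdlib Python, assuming the result shape shown in the python example (a `results` list whose items carry a `paraphrase` field; the placeholder values below are not real translator output):

```python
import json

# Sample payload in the shape described above for `trans --format json`
# (hypothetical values, for illustration only).
raw = json.dumps({
    "status": 1,
    "results": [
        {"translator": "google", "sl": "auto", "tl": "zh_TW",
         "text": "the mythical man month", "paraphrase": "<translated text>"}
    ],
})

def paraphrases(payload: str) -> list:
    """Equivalent of `jq -r '.results[].paraphrase'`."""
    return [r["paraphrase"] for r in json.loads(payload)["results"]]

print(paraphrases(raw))  # -> ['<translated text>']
```

This is handy when consuming the JSON output from a Python script instead of a shell pipeline.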
commandline-tool python translate vim lsp-server bing chatgpt google llama llamacpp openai repl shell stardict youdao haici lftp prompt
ai
USB-PD-Firmware-FreeRTOS
usb-pd firmware (freertos)

Flash with:

    dfu-util -a 0 -s 0x08000000:leave -D build/usb-pd/otterfirmware.bin

TODO:

- usb pd
  - [x] lib
  - [ ] implement
  - [ ] test
- [ ] adc
- [x] fet
- [x] leds
- [x] usb cdc
- [x] usb dfu
- [ ] display
stm32f072 usb-pd freertos fusb302b
os
base-ui
[bit components, Apache-2.0 license, and PRs-welcome badges]

Base component design system of bit.dev: the reusable set of infra-level React components (https://bit.dev/bit/base-ui) used to build bit.dev (https://bit.dev). [scope screenshot]

Components

All components in this frontend codebase were contained and exposed using Bit (https://github.com/teambit/bit) as a set of independently usable components. See the base collection on bit.dev (https://bit.dev/teambit/base-ui) to explore and integrate any component into your project:

- install independent components with npm/yarn
- use `bbit import` to source and edit components locally for quick editing and integration
- try any component hands-on in a live playground

This is a component-based micro frontend. Wait, what?

- install independent components with `bbit install`
- use `bbit import` to explore components in your local workspace and modify them to your own needs
- try any component hands-on in the docs live playground and in the compositions page

Show me an example

Take a look at the bit.dev homepage (https://bit.dev): you will notice that it's built from components that live in different front-end codebases: evangelist (marketing components, https://github.com/teambit/evangelist), base-ui (components, https://github.com/teambit/base-ui), container (application, private), etc. We use Bit (https://github.com/teambit/bit) to contain and expose components from any codebase as a set of APIs in bit.dev (https://bit.dev) that can be integrated into different pages and applications. For example: exposed evangelist marketing components on bit.dev, exposed base-ui components on bit.dev (https://bit.dev/teambit/base-ui).

Structure

- theme: all shared styles (colors, sizes, fonts, and css variables) belong here. theme-provider (https://bit.dev/teambit/base-ui/theme/theme-provider) applies all of these styles at the root of your app, and different apps may implement their own unique theme.
- constants: hard-coded singleton values, like storage url, and enums. In case of change, this central location could update all other components.
- layout: components controlling the position of elements in the document (grid, breakpoints, etc.).
- atoms: generic building blocks for any front-end application. These components are vanilla, meaning they should not contain content like texts or icons, and no specific styles. This is because different designs could look entirely different, so any styles in the base component could lead to a css specificity war. So add the bare minimum of css here and keep these components purely logical.
- utils: pure logic components and helpers, no visual components.

Setup

1. install bit: npm install @teambit/bit --global
2. clone this bit workspace: git clone https://github.com/teambit/evangelist.git evangelist
3. go to the workspace directory: cd evangelist
4. install all packages and import all components: bbit install
5. run the workspace ui to explore the evangelist components: bbit start, and go to https://localhost:3000
6. enjoy
os
654fa16
Information technologies at Pratt SI. Welcome to 654! I'm using GitHub this semester as an experimental tool for distributing files and as a method of helping folks get more acquainted with code editors, open tools, etc. See below for basic headline info, but see the syllabus section for the formal, full course information documents, and the assignments folder for the descriptions of materials you will turn in. See you in class!

- identifier: LIS 654-03
- credits: 3
- day and time: Wednesday, 06:30pm to 09:30pm
- location: Manhattan, room 606
- instructor: Josh Hadro
- office hours: by appointment
- course etherpad: https://public.etherpad-mozilla.org/p/lis654fa16-1 (increment the last digit for subsequent class numbers, e.g. 2 for class session 2, 6 for class session 6, etc.)
- course wordpress site: https://lis654fa16.wordpress.com (invitations and logins to come)
- course hashtag: #654fa16 (optional)

Publishing: everything in this repo is CC-BY, see LICENSE.md. This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0).
server
adaptnlp
welcome to adaptnlp a high level framework and library for running training and deploying state of the art natural language processing nlp models for end to end tasks p align center a href https github com novetta adaptnlp img src https raw githubusercontent com novetta adaptnlp master docs assets images company logo png width 400 a p ci https github com novetta adaptnlp workflows ci badge svg pypi https img shields io pypi v adaptnlp color blue label pypi 20version https pypi org project adaptnlp description what is adaptnlp adaptnlp is a python package that allows users ranging from beginner python coders to experienced machine learning engineers to leverage state of the art natural language processing nlp models and training techniques in one easy to use python package utilizing fastai https docs fast ai with huggingface s transformers https github com huggingface transformers library and humboldt university of berlin s flair https github com flairnlp flair library adaptnlp provides machine learning researchers and scientists a modular and adaptive approach to a variety of nlp tasks simplifying what it takes to train perform inference and deploy nlp based models and microservices what is the benefit of adaptnlp rather than just using transformers despite quick inference functionalities such as the pipeline api in transformers it still is not quite as flexible nor fast enough with adaptnlp s easy inference modules these tend to be slightly faster than the pipeline interface bare minimum the same speed while also providing the user with simple intuitive returns to alleviate any unneeded junk that may be returned along with this with the integration of the fastai library the code needed to train or run inference on your models has a completely modular api through the fastai callback https docs fast ai callbacks core system rather than needing to write your entire torch loop if there is anything special needed for a model a callback can be written in less than 10 
lines of code to achieve your specific functionalities. Finally, when training your model, fastai is on the forefront of being a library constantly bringing in the best practices for achieving state-of-the-art training, with new research methodologies heavily tested before integration. As such, AdaptNLP fully supports training with the One-Cycle policy, and using new optimizer combinations such as the ranger optimizer with cosine annealing training, through simple one-line fitting functions (fit_one_cycle and fit_flat_cos).

Installation directions

PyPI: to install with pypi, please use:

    pip install adaptnlp

or, if you have pip3:

    pip3 install adaptnlp

Conda: coming soon.

Developmental builds: to install any developmental-style builds, please follow the below directions to install directly from git.

Stable (master branch): the master branch generally is not updated much except for hotfixes and new releases. To install, please use:

    pip install git+https://github.com/novetta/adaptnlp

Developmental branch: generally this branch can become unstable, and it is only recommended for contributors or those that really want to test out new technology. Please make sure to see if the latest tests are passing (a green checkmark on the commit message) before trying this branch out. You can install the developmental builds with:

    pip install git+https://github.com/novetta/adaptnlp@dev

Docker images: there are actively updated docker images hosted on Novetta's DockerHub (https://hub.docker.com/r/novetta/adaptnlp). The guide to each tag is as follows:

- latest: this is the latest pypi release and installs a complete package that is cuda capable
- dev: these are occasionally built developmental builds at certain stages; they are built by the dev branch and are generally stable
- api: the api builds are for the rest api (https://novetta.github.io/adaptnlp/rest)

To pull and run any AdaptNLP image immediately, you can run:

    docker run -itp 8888:8888 novetta/adaptnlp:TAG

replacing TAG with any of the aforementioned tags. Afterwards, check localhost:8888 or localhost:8888/lab to access the notebook containers.

Navigating the documentation

The AdaptNLP library is built with nbdev (https://nbdev.fast.ai), so any documentation page you find, including this one, can be directly run as a jupyter notebook. Each page at the top includes an "Open in Colab" button as well, that will open the notebook in Google Colaboratory to allow for immediate access to the code. The documentation is split into six sections, each with a specific purpose:

- getting started (https://novetta.github.io/adaptnlp): this group contains quick access to the homepage, what the adaptnlp cookbooks are, and how to contribute
- models and model hubs (https://novetta.github.io/adaptnlp/model.html): these contain any relevant documentation for the AdaptiveModel class, the huggingface hub model search integration, and the Result class that various inference apis return
- class api: this section contains the module documentation for the inference framework, the tuning framework, as well as the utilities and foundations for the adaptnlp library
- inference and training cookbooks (https://novetta.github.io/adaptnlp/cookbook.html): these two sections provide quick access to single-use recipes for starting any adaptnlp project for a particular task, with easy-to-use code designed for that specific use case; there are currently over 13 different tutorials available, with more coming soon
- nlp services with fastapi (https://novetta.github.io/adaptnlp/rest): this section provides directions on how to use the adaptnlp rest api for deploying your models quickly with fastapi

Contributing

There is a contribution guide available here: https://novetta.github.io/adaptnlp/contributing.

Testing

AdaptNLP is run on the nbdev framework. To run all tests, please do the following:

1. pip install nbverbose
2. git clone https://github.com/novetta/adaptnlp
3. cd adaptnlp
4. pip install -e .
5. nbdev_test_nbs

This will run every notebook and ensure that all tests have passed. Please see the
nbdev documentation https nbdev fast ai for more information about it contact please contact zachary mueller at zmueller novetta com with questions or comments regarding adaptnlp follow us on twitter at thezachmueller https twitter com thezachmueller and adaptnlp https twitter com adaptnlp for updates and nlp dialogue license this project is licensed under the terms of the apache 2 0 license
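The fit_one_cycle function mentioned above trains with the One-Cycle policy: the learning rate warms up to a peak and then anneals back down over one run. A simplified, illustrative sketch of such a schedule (this is not fastai's actual implementation, just the shape of the policy):

```python
import math

def one_cycle_lr(step: int, total_steps: int, lr_max: float,
                 pct_start: float = 0.25, div: float = 25.0) -> float:
    """Toy 1cycle schedule: cosine warm-up from lr_max/div to lr_max
    for the first pct_start of training, then cosine anneal back down."""
    lr_min = lr_max / div
    warmup = int(total_steps * pct_start)
    if step < warmup:
        t = step / max(1, warmup)
        return lr_min + (lr_max - lr_min) * (1 - math.cos(math.pi * t)) / 2
    t = (step - warmup) / max(1, total_steps - warmup)
    return lr_min + (lr_max - lr_min) * (1 + math.cos(math.pi * t)) / 2

# learning rate over a hypothetical 100-step run
lrs = [one_cycle_lr(s, 100, 1e-3) for s in range(100)]
```

The resulting curve starts low, peaks at lr_max about a quarter of the way through, and decays to near lr_min by the end, which is the behavior fit_one_cycle automates for you.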
nlp pytorch transformers natural-language-processing machine-learning deep-learning deep-learning-tutorial docker fine-tuning language-models easy api-rest bert gpt xlnet gpu ulmfit
ai
python-cv-samples
discontinuation of project this project will no longer be maintained by intel intel has ceased development and contributions including but not limited to maintenance bug fixes new releases or updates to this project intel no longer accepts patches to this project python cv samples python computer vision samples
ai
befake-backend
This project is the Spring Boot backend of a photo-sharing app. It is for my thesis, which has a focus on cloud-native, scalable application development. To start it up, follow the instructions in the install.txt file.

Ports:

- rabbitmq: 15672, 5672
- zipkin: 9411
- kafka: 9092
- zookeeper: 2181
- api gateway: 8765
- naming server: 8761
- notification service: 8101
- authentication service: 8082
- time service: 8081
- user service: 8000
- friend service: 8003
- interaction service: 8002
- post service: 8001
- postgres: 5432

Ignore this section if you are not planning on running this project on Google Kubernetes Engine.

Service host name environment variables: this table is only useful if the services are running on GKE. GKE generates environment variables for the services, and these variables can be used for service discovery in the FeignClient. When being used, FeignClient must have the url tag in the following form: url = ${<environment variable>:http://localhost:<the port that the service is running on>}.

- user service: USER_SERVICE_SERVICE_HOST
- post service: POST_SERVICE_SERVICE_HOST
- interaction service: INTERACTIONS_SERVICE_SERVICE_HOST
- friend service: FRIEND_SERVICE_SERVICE_HOST
- time service: TIME_SERVICE_SERVICE_HOST
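The variable names in the table follow the Kubernetes service-discovery convention: the Service name upper-cased, with dashes turned into underscores, plus the `_SERVICE_HOST` suffix. A small sketch deriving them (note that the interaction service's variable comes from a Service apparently named "interactions-service", so the input must be the Kubernetes Service name, not the app name):

```python
def service_host_var(service_name: str) -> str:
    """Derive the Kubernetes-style service-discovery variable name,
    e.g. user-service -> USER_SERVICE_SERVICE_HOST."""
    return service_name.upper().replace("-", "_") + "_SERVICE_HOST"

for name in ["user-service", "post-service", "friend-service", "time-service"]:
    print(name, "->", service_host_var(name))
```

This is why the FeignClient url placeholder can be written once per service and resolved from the environment on GKE.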
aws cloud-native docker java kubernetes microservices postgresql spring-boot ibm-cloud scalability
server
Self-healing-test
Applitools example: basic Selenium WebDriver in Java. This is the example project for the Selenium Java basic tutorial (https://applitools.com/tutorials/quickstart/web/selenium/java/basic). It shows how to start automating visual tests with Applitools Eyes (https://applitools.com/platform/eyes) and Selenium WebDriver (https://www.selenium.dev) in Java. It uses:

- Java (https://www.java.com) as the programming language
- Selenium WebDriver (https://www.selenium.dev) for browser automation
- Google Chrome (https://www.google.com/chrome/downloads) as the local browser for testing
- Apache Maven (https://maven.apache.org/index.html) for dependency management
- Applitools Eyes (https://applitools.com/platform/eyes) for visual testing

It can also run tests with:

- Applitools Ultrafast Grid (https://applitools.com/platform/ultrafast-grid) for cross-browser execution
- Applitools Execution Cloud (https://applitools.com/platform/execution-cloud) for self-healing remote WebDriver sessions

To run this example project, you'll need:

1. an Applitools account (https://auth.applitools.com/users/register), which you can register for free
2. the Java Development Kit (JDK) (https://www.oracle.com/java/technologies/downloads), version 8 or higher
3. a good Java editor, such as JetBrains IntelliJ IDEA (https://www.jetbrains.com/idea)
4. Apache Maven (https://maven.apache.org/download.cgi), typically bundled with IDEs
5. an up-to-date version of Google Chrome (https://www.google.com/chrome/downloads)
6. a corresponding version of ChromeDriver (https://chromedriver.chromium.org/downloads)

The main test case is AcmeBankTests.java (src/test/java/com/applitools/example/AcmeBankTests.java). By default, the project will run tests with Ultrafast Grid but not Execution Cloud. You can change these settings in the test class.

To execute tests, set the APPLITOOLS_API_KEY environment variable to your account's api key (https://applitools.com/tutorials/guides/getting-started/registering-an-account). You can launch the test from your IDE, or you can run it from the command line with Maven like this:

    # run the tests
    mvn exec:exec -Dexec.classpathScope=test

For full instructions on running this project, take our Selenium Java basic tutorial (https://applitools.com/tutorials/quickstart/web/selenium/java/basic).

self-healing test
cloud
stash-plugin-performer-creator
stash-plugin-performer-creator

This is a plugin for Stash. It adds a "parse all scenes for performers" task. This task processes all scenes and, using natural language processing, tries to detect performer names and tries to find/add them.

How to set it up: add the python files to your stash plugins directory, then create a virtualenv:

    virtualenv -p python3 --system-site-packages stash-plugins/env
    source stash-plugins/env/bin/activate
    pip install -r stash-plugins/requirements.txt
    python -m spacy download en_core_web_md

How to use: rescan the plugins; you will find a new button in the tasks section in the settings.

Setup on Unraid: ssh to your server, or if you use the webui, open Stash's console; then start at the second line:

    docker exec -it stash sh
    apk update
    apk add git
    cd /root/.stash/plugins
    git clone https://github.com/com1234475/stash-plugin-performer-creator.git
    cd stash-plugin-performer-creator
    # don't know if subversion/python3-dev is needed, but this worked
    apk add make automake gcc g++ subversion python3-dev
    python3 -m venv env
    source env/bin/activate
    python -m pip install -r requirements.txt
    python -m spacy download en_core_web_md

Known limitations/gotchas: it uses the file names, not the titles.
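The plugin's core idea, extracting candidate performer names from scene file names with NLP (spaCy's en_core_web_md model), can be approximated without any model by a naive heuristic. A toy stand-in under that assumption (this is not the plugin's actual code, and it also illustrates the "file names, not titles" limitation above):

```python
import re

def candidate_names(filename: str) -> list:
    """Naive stand-in for the NER pass: strip the extension, split the
    file name on common separators, and treat adjacent alphabetic word
    pairs as candidate person names."""
    stem = re.sub(r"\.[a-z0-9]+$", "", filename, flags=re.I)
    words = [w for w in re.split(r"[._\-\s]+", stem) if w.isalpha()]
    return [f"{a} {b}" for a, b in zip(words, words[1:])]

print(candidate_names("Jane.Doe.Scene.01.mp4"))  # -> ['Jane Doe', 'Doe Scene']
```

Note the false positive ("Doe Scene"), which is exactly the kind of noise a real NER model is used to filter out.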
ai
embeddings
clarin embeddings warning this file was autogenerated do not edit installation bash pip install clarinpl embeddings example text classification with polemo2 dataset and transformer based embeddings python from embeddings pipeline lightning classification import lightningclassificationpipeline pipeline lightningclassificationpipeline dataset name or path clarin pl polemo2 official embedding name or path allegro herbert base cased input column name text target column name target output path print pipeline run warning as for now default pipeline model hyperparameters may provide poor results it will be subject to change in further releases we encourage users to use optimized pipelines optimized pipelines to select appropriate hyperparameters conventions we use many of the huggingface concepts such as models https huggingface co models or datasets https huggingface co datasets to make our library as easy to use as it is possible we want to enable users to create customise test and execute nlp nlu slu tasks in the fastest possible manner moreover we present easy to use static embeddings that were trained by clarin pl pipelines we share predefined pipelines for common nlp tasks with corresponding scripts for transformer based pipelines we utilize pytorch lighting https www pytorchlightning ai trainers with transformers automodels https huggingface co docs transformers master en model doc auto transformers automodel for static embedding based pipelines we use flair https github com flairnlp flair library under the hood transformer embedding based pipelines e g bert roberta herbert task class script text classification lightningclassificationpipeline embeddings pipeline lightning classification py evaluate lightning document classification py examples evaluate lightning document classification py sequence labelling lightningsequencelabelingpipeline embeddings pipeline lightning sequence labeling py evaluate lightning sequence labeling py examples evaluate lightning 
sequence labeling py run classification task the example with non default arguments bash python evaluate lightning document classification py embedding name or path allegro herbert base cased dataset name clarin pl polemo2 official input columns name text target column name target run sequence labeling task the example with default language model and dataset bash python evaluate lightning sequence labeling py compatible datasets as most datasets in huggingface repository should be compatible with our pipelines there are several datasets that were tested by the authors dataset name task type input column name s target column name description clarin pl kpwr ner https huggingface co datasets clarin pl kpwr ner sequence labeling named entity recognition tokens ner kpwr ner is a part of the polish corpus of wroc aw university of technology kpwr its objective is recognition of named entities e g people institutions etc clarin pl polemo2 official https huggingface co datasets clarin pl polemo2 official classification sentiment analysis text target a corpus of consumer reviews from 4 domains medicine hotels products and school clarin pl 2021 punctuation restoration https huggingface co datasets clarin pl 2021 punctuation restoration punctuation restoration text in text out dataset contains original texts and asr output it is a part of poleval 2021 competition clarin pl nkjp pos https huggingface co datasets clarin pl nkjp pos sequence labeling part of speech tagging tokens pos tags nkjp pos is a part of the national corpus of polish its objective is part of speech tagging e g nouns verbs adjectives adverbs etc clarin pl aspectemo https huggingface co datasets clarin pl aspectemo sequence labeling sentiment classification tokens labels aspectemo corpus is an extended version of a publicly available polemo 2 0 corpus of polish customer reviews used in many projects on the use of different methods in sentiment analysis laugustyniak political advertising pl https huggingface 
co datasets laugustyniak political advertising pl sequence labeling political advertising tokens tags first publicly open dataset for detecting specific text chunks and categories of political advertising in the polish language laugustyniak abusive clauses pl https huggingface co datasets laugustyniak abusive clauses pl classification abusive clauses text class dataset with polish abusive clauses examples allegro klej dyk https huggingface co datasets allegro klej dyk pair classification question answering question answer target the did you know pol czy wiesz dataset consists of human annotated question answer pairs allegro klej psc https huggingface co datasets allegro klej psc pair classification text summarization extract text summary text label the polish summaries corpus contains news articles and their summaries allegro klej cdsc e https huggingface co datasets allegro klej cdsc e pair classification textual entailment sentence a sentence b entailment judgment the polish sentence pairs which are human annotated for textualentailment br sup only pair classification task is supported for now sup passing task model and task training parameters to predefined flair pipelines model and training parameters can be controlled via task model kwargs and task train kwargs parameters that can be populated using the advanced config tutorial on how to use configs can be found in tutorials directory of the repository two types of config are defined in our library basicconfig and advancedconfig in summary the basicconfig takes arguments and automatically assign them into proper keyword group while the advancedconfig takes as the input keyword groups that should be already correctly mapped the list of available config can be found below lightning lightningbasicconfig lightningadvancedconfig example with polemo2 dataset lightning pipeline python from embeddings config lightning config import lightningbasicconfig from embeddings pipeline lightning classification import 
```python
config = LightningBasicConfig(
    learning_rate=0.01,
    max_epochs=1,
    max_seq_length=128,
    finetune_last_n_layers=0,
    accelerator="cpu",
)

pipeline = LightningClassificationPipeline(
    embedding_name_or_path="allegro/herbert-base-cased",
    dataset_name_or_path="clarin-pl/polemo2-official",
    input_column_name="text",
    target_column_name="target",
    load_dataset_kwargs={
        "train_domains": ["hotels", "medicine"],
        "dev_domains": ["hotels", "medicine"],
        "test_domains": ["hotels", "medicine"],
        "text_cfg": "text",
    },
    output_path=".",
    config=config,
)
```

You can also define an advanced config with populated keyword arguments. In general, the keywords are passed to the object when constructing specific pipelines. We can identify and trace the keyword arguments to find the possible arguments that can be set in the config kwargs.

```python
from embeddings.config.lightning_config import LightningAdvancedConfig

config = LightningAdvancedConfig(
    finetune_last_n_layers=0,
    task_train_kwargs={
        "max_epochs": 1,
        "devices": "auto",
        "accelerator": "cpu",
        "deterministic": True,
    },
    task_model_kwargs={
        "learning_rate": 5e-4,
        "use_scheduler": False,
        "optimizer": "adamw",
        "adam_epsilon": 1e-8,
        "warmup_steps": 100,
        "weight_decay": 0.0,
    },
    datamodule_kwargs={
        "downsample_train": 0.01,
        "downsample_val": 0.01,
        "downsample_test": 0.05,
    },
    dataloader_kwargs={"num_workers": 0},
)
```

### Available embedding models for Polish

Instead of the `allegro/herbert-base-cased` model, the user can pass any model from the [HuggingFace Hub](https://huggingface.co/models) that is compatible with [Transformers](https://huggingface.co/transformers) or with our library.

| Embedding | Type | Description |
| --- | --- | --- |
| [clarin-pl/herbert-kgr10](https://huggingface.co/clarin-pl/herbert-kgr10) | bert | HerBERT Large trained on supplementary data (the KGR10 corpus) |

## Optimized pipelines

Transformers embeddings:

| Task | Optimized Pipeline |
| --- | --- |
| Lightning Text Classification | [OptimizedLightingClassificationPipeline](embeddings/pipeline/lightning_hps_pipeline.py) |
| Lightning Sequence Labeling | [OptimizedLightingSequenceLabelingPipeline](embeddings/pipeline/lightning_hps_pipeline.py) |

### Example with Text Classification

Optimized pipelines can be run via the following snippet of code:

```python
from embeddings.config.lighting_config_space import LightingTextClassificationConfigSpace
from embeddings.pipeline.lightning_hps_pipeline import OptimizedLightingClassificationPipeline

pipeline = OptimizedLightingClassificationPipeline(
    config_space=LightingTextClassificationConfigSpace(
        embedding_name_or_path="allegro/herbert-base-cased"
    ),
    dataset_name_or_path="clarin-pl/polemo2-official",
    input_column_name="text",
    target_column_name="target",
).persisting(best_params_path="best_params.yaml", log_path="hps_log.pickle")
df, metadata = pipeline.run()
```

### Training model with obtained parameters

After the parameter search process, we can train a model with the best parameters found. But firstly we have to set the `output_path` parameter, which is not automatically generated from `OptimizedLightingClassificationPipeline`:

```python
metadata["output_path"] = "."
```

Now we are able to train the pipeline:

```python
from embeddings.pipeline.lightning_classification import LightningClassificationPipeline

pipeline = LightningClassificationPipeline(**metadata)
results = pipeline.run()
```

### Selection of best embedding model

Instead of performing a search with a single embedding model, we can search with multiple embedding models by passing them as a list to the config space:

```python
pipeline = OptimizedLightingClassificationPipeline(
    config_space=LightingTextClassificationConfigSpace(
        embedding_name_or_path=[
            "allegro/herbert-base-cased",
            "clarin-pl/roberta-polish-kgr10",
        ]
    ),
    dataset_name_or_path="clarin-pl/polemo2-official",
    input_column_name="text",
    target_column_name="target",
).persisting(best_params_path="best_params.yaml", log_path="hps_log.pickle")
```

## Citation

The paper describing the library is available on [arXiv](https://arxiv.org/abs/2211.13112). It will shortly be published in the [proceedings of NeurIPS 2022](https://neurips.cc/Conferences/2022/ScheduleMultitrack?event=55618).

Augustyniak, Ł., Tagowski, K., Sawczyn, A., Janiak, D., Bartusiak, R., Szymczak, A., Wątroba, M., Janz, A., Szymański, P., Morzy, M., Kajdanowicz, T., & Piasecki, M. (2022). This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish. NeurIPS 2022. arXiv. https://doi.org/10.48550/arXiv.2211.13112

BibTeX:

```bibtex
@article{https://doi.org/10.48550/arxiv.2211.13112,
  doi = {10.48550/ARXIV.2211.13112},
  url = {https://arxiv.org/abs/2211.13112},
  author = {Augustyniak, Łukasz and Tagowski, Kamil and Sawczyn, Albert and Janiak, Denis and Bartusiak, Roman and Szymczak, Adrian and Wątroba, Marcin and Janz, Arkadiusz and Szymański, Piotr and Morzy, Mikołaj and Kajdanowicz, Tomasz and Piasecki, Maciej},
  keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), Machine Learning (cs.LG), FOS: Computer and information sciences},
  title = {This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
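The training-with-best-parameters step above works because `metadata` is a plain dict of pipeline keyword arguments, so once `output_path` is added it can be unpacked with `**` straight into the pipeline constructor. A minimal sketch of that pattern with a stand-in function — the keys shown are illustrative assumptions, not the full set the library returns:

```python
# Hypothetical stand-in for the metadata dict returned by the optimized
# pipeline's run(); the real dict also carries the best hyperparameters found.
metadata = {
    "embedding_name_or_path": "allegro/herbert-base-cased",
    "dataset_name_or_path": "clarin-pl/polemo2-official",
    "input_column_name": "text",
    "target_column_name": "target",
}

# output_path is not generated automatically, so it must be set before training.
metadata["output_path"] = "."

def make_pipeline(embedding_name_or_path, dataset_name_or_path,
                  input_column_name, target_column_name, output_path):
    """Stand-in for LightningClassificationPipeline(**metadata)."""
    return locals()

kwargs = make_pipeline(**metadata)
print(kwargs["output_path"])
```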
languagemodel nlp nlp-machine-learning fine-tuning classification sequence-tagging benchmark lm
ai
seurat-unity-plugin
# Importing Seurat Meshes Into Unity

Seurat is a scene simplification technology designed to process very complex 3D scenes into a representation that renders efficiently on mobile 6DoF VR systems. This document covers how to import Seurat meshes into Unity. To learn more about the Seurat pipeline, visit the main [Seurat GitHub page](https://github.com/googlevr/seurat).

## Introduction

This document is organized into two sections. The first describes the steps to load a mesh produced by Seurat into Unity. The second provides detailed diagnostic steps to examine if the imported Seurat mesh shows artifacts: gaps or cracks in various places, typically along the edges of the mesh.

The document assumes some familiarity with the Unity editor, and is written against version 5.6.

## Importing Seurat Meshes

The instructions in this section assume the following file layout:

* `c:\Unity_Projects\SeuratImport` contains a blank Unity project.
* `c:\Seurat_Output` contains a set of files produced by Seurat: in particular `seurat.obj` and `seurat.png`.

Follow these steps to import the Seurat output into Unity:

1. Import prerequisites: open the SeuratImport project in Unity, then import the Seurat Unity capture package into the project with Assets | Import Package | Custom Package.
2. Import the Seurat mesh and texture as an asset:
   * Use Asset | Import New Asset to copy `seurat.obj` and `seurat.png` into the Unity project's Assets folder.
   * Browse the Assets folder in the Project window.
   * Locate the Seurat output model `seurat.obj` in the Assets folder.
3. Add the Seurat mesh to the scene: drag and drop the `seurat.obj` model from the Assets folder into the Scene window or Hierarchy window, as appropriate. Note: Unity may split the mesh into several parts to fit under vertex count limits. Unity should then display a solid shaded version of the Seurat mesh.
4. Apply the Seurat shader to the Seurat mesh:
   * Locate the new node instancing the `seurat.obj` in the Hierarchy window, and expand the hierarchy it contains until the leaf nodes are visible. The hierarchy should contain something like the following nodes, and the leaf nodes will have Mesh Render components attached:
     * Seurat
       * default
         * default_MeshPart0
         * default_MeshPart1
         * default_MeshPart2
   * Select the first leaf node with a Mesh Render component, `default_MeshPart0`.
   * Locate the Mesh Render component in the Inspector panel.
   * Apply the Seurat shader to the geometry: click the Shaders popup at the bottom of the panel, navigate the menu to the shader GoogleVR | SoftServe | AlphaBlended, and click that menu option to apply the alpha blended material.
5. Apply the Seurat texture atlas to the mesh:
   * Locate the Seurat output texture atlas `seurat.png` in the Assets folder.
   * Apply the texture atlas to the chunks of the Seurat mesh: drag and drop `seurat.png` onto each of the leaf nodes, here named `default_MeshPart*`.
6. Configure texture atlas settings:
   * Select the `seurat.png` texture in the Assets browser.
   * Locate the Inspector panel for the texture.
   * Expand the Advanced rollup.
   * Disable the option Generate Mip Maps.
   * Change Wrap Mode to Clamp.
   * Locate the build platform subpanel.
   * Enable Override for PC, Mac & Linux Standalone.
   * Change Max Size to a resolution greater than or equal to the dimensions of the `seurat.png`. Typically this will be 4096, but it depends on Seurat processing settings. Note: Seurat requires that Unity not resize the texture!
   * Click the Apply button at the bottom of the panel.

Unity will reprocess the texture, and should now display the Seurat mesh correctly.

If the Seurat output has artifacts, or does not look correct, please continue on to the next section. The section provides detailed instructions on configuring both the imported assets and Unity project settings to correctly render Seurat meshes.

## Diagnosing Cracks

This section illustrates what crack artifacts may appear, and lists many Unity settings that can trigger these artifacts.

![Example of cracks in Unity](images/cracks_01.png)
![Example of cracks in Unity](images/cracks_02.png)

### Determine the cause

The easiest way to determine the cause of crack or gap artifacts in Seurat output is to set the camera background color to something with great contrast to the scene (e.g. bright red) and see if there are holes in the mesh generated by Seurat.

If you see holes in the mesh, you should try to rebake with higher quality settings. If you do not see holes, adjust texture and shader settings.

### Texture settings

* Bilinear filtering.
* For premultiplied alpha, uncheck Alpha is Transparency. Otherwise Unity will inpaint the transparent areas of the texture; this process can be lengthy and will show artifacts in areas that are supposed to be completely transparent.
* No mip maps.
* Low or no anisotropic filtering: 1-2 in Unity; any higher may cause cracks.
* Do not autoresize to power of 2.
* Wrap mode: Clamp.

A Unity project setting can affect the texture resolution during the Unity application build. Check that the Texture Quality option under Edit | Project Settings | Quality is set to Full Res.

### Mesh settings

Make sure mesh compression is turned off for the UV0 channel in Project Settings | Player | Android | Vertex Compression.

### Shader settings

Centroid and anti-aliasing: if you are using MSAA, you may notice edge artifacts. Centroid interpolation will fix edge sampling errors caused by MSAA. For more information, see Fabien Giesen's post. In Unity, this can be done by appending `_centroid` to the `TEXCOORD` interpolator semantic, like so:

```glsl
struct VertexToFragment {
  float4 position : SV_POSITION;
  float2 uv : TEXCOORD0_centroid;
}
```

Fragment shader texture coordinate precision is important: use `highp` or `float` precision for texture coordinate variables, rather than the `lowp` modifier or the HLSL `min16` prefix.

Important: `_centroid` requires OpenGL ES 3.0 and is performance intensive. Only use centroid interpolation if you are using MSAA and absolutely need it. Currently the centroid modifier is implicated in GPU driver issues on Pixel devices; workarounds / bug fixes are in progress.

Unless you absolutely need depth write (e.g. you are doing something fancy like casting dynamic shadows off Seurat geometry), you should prefer alpha blending.

Alpha blended:

* UV0 set to centroid interpolation, or disable MSAA
* Cull Off
* ZWrite Off
* ZTest LEqual
* Queue: Transparent
* Blend SrcAlpha OneMinusSrcAlpha

Alpha tested:

* UV0 set to centroid interpolation, or disable MSAA
* Cull Off
* ZWrite On
* ZTest LEqual
* Queue: Transparent
* Blend SrcAlpha OneMinusSrcAlpha
* Alpha-to-coverage (Unity: AlphaToMask On)

### Skybox, clear color, and background

Some Seurat scenes can have gaps (cracks, you could say) of varying size against the background. You should let the team know if you encounter these. Still, colors from the background color can bleed through and appear as cracks.

Several things in Unity can generate a background color:

1. Geometry in the scene drawn before Seurat's mesh. Try toggling it on and off to see if a skybox mesh is generating cracks, for example.
2. The Skybox Material option of the Scene tab of the Lighting inspector panel (Window | Lighting | Settings) can control the background color. To evaluate if this feature is contributing to the problem, try selecting a black material or a bright red material to see if this changes any of the cracks.
3. In the Camera inspector panel of the node containing the LDI headbox for the capture, Clear Flags and Background Color control buffer color initialization for the capture.

### Capture settings

If none of the above fixes the issue, or you see holes in the mesh, try rebaking with higher quality capture settings.

DISCLAIMER: This is not an officially supported Google product.
os
Go-Web-Development-Cookbook
# Go Web Development Cookbook

This is the code repository for [Go Web Development Cookbook](https://www.packtpub.com/web-development/go-web-development-cookbook?utm_source=github&utm_medium=repository&utm_campaign=9781787286740), published by [Packt](https://www.packtpub.com/?utm_source=github). It contains all the supporting project files necessary to work through the book from start to finish.

## About the Book

Go is an open source programming language that is designed to scale and support concurrency at the language level. This gives you the liberty to write large, concurrent web applications with ease. From creating web applications to deploying them on Amazon Cloud Services, this book will be your one-stop guide to learning web development in Go.

The Go Web Development Cookbook teaches you how to create REST services, write microservices, and deploy Go Docker containers. Whether you are new to programming or a professional developer, this book will help get you up to speed with web development in Go. We will focus on writing modular code in Go; in-depth, informative examples build the base, one step at a time. You will learn how to create a server; work with static files, SQL and NoSQL databases, and Beego; create and secure REST services; and create and deploy Go web applications and Go Docker containers on Amazon Cloud Services. By the end of the book, you will be able to apply the skills you've gained in Go to create and explore web applications in any domain.

## Instructions and Navigation

All of the code is organized into folders. Each folder starts with a number followed by the application name. For example, Chapter02.

The code will look like the following:

```go
for {
    conn, err := listener.Accept()
    if err != nil {
        log.Fatal("Error accepting: ", err.Error())
    }
    log.Println(conn)
}
```

Readers should possess basic knowledge of Go, and have Go installed on the machine, to execute the instructions and the code.

## Related Products

* [Go Network Programming Cookbook](https://www.packtpub.com/application-development/go-network-programming-cookbook?utm_source=github&utm_medium=repository&utm_campaign=9781788392860)
* [Mastering Go](https://www.packtpub.com/networking-and-servers/mastering-go?utm_source=github&utm_medium=repository&utm_campaign=9781788626545)
* [Security with Go](https://www.packtpub.com/networking-and-servers/security-go?utm_source=github&utm_medium=repository&utm_campaign=9781788627917)

### Suggestions and Feedback

[Click here](https://docs.google.com/forms/d/e/1faipqlse5qwunkgf6puvzpirpdtuy1du5rlzew23ubp2s-p3wb-gcwq/viewform) if you have any feedback or suggestions.

### Download a free PDF

<i>If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.<br>Simply click on the link to claim your free PDF.</i>
<p align="center"> <a href="https://packt.link/free-ebook/9781787286740">https://packt.link/free-ebook/9781787286740</a> </p>
front_end
PreTTI
# PreTTI: Improving text-to-image models with large language models

The goal of this project is to explore potential uses of large language models for the task of improving current state-of-the-art [text-to-image models](https://en.wikipedia.org/wiki/Text-to-image_model) such as [Stable Diffusion](https://github.com/CompVis/stable-diffusion).

## The state of prompt writing

Writing optimal text prompts to best guide a text-to-image model towards a desired result can be a complex task, often requiring the use of seemingly arbitrary keywords and various style modifiers. Heavy use of these modifiers is common practice among experienced users, due to their frequent positive effect on subjective aesthetic quality as well as their ability to generate images more closely aligned with the desired result. Even subtle changes in word placement can have a significant effect, creating potentially unnecessary work for even the most skilled prompt writers. Given this complexity and lack of intuitiveness, prompt input as UI for text-to-image models is currently less than ideal.

## Next steps

This project is currently in the exploratory phase. We welcome any and all feedback from the community, and would love to discuss potential proposals with anyone interested in the project. Check out the [Discussions](https://github.com/scf4/PreTTI/discussions) tab to get started!

## Proposals

| Name | Description | Status |
| --- | --- | --- |
| [Initial experiment](proposals/000-initial.md) | Expand prompt detail with a LLM | Complete |
| [Trained unsimplification model](proposals/001-unsimplify-model.md) | Train a model to unsimplify prompts | Feedback requested |
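One way to make prompt writing less arbitrary, as discussed above, is to apply style modifiers programmatically rather than by hand. The sketch below is a hypothetical, rule-based stand-in for the LLM-driven expansion the proposals explore — the modifier list and the `expand_prompt` function are illustrative assumptions, not part of this project:

```python
# Hypothetical rule-based prompt expander, standing in for an LLM.
# The modifier list and function name are illustrative assumptions.
STYLE_MODIFIERS = ["highly detailed", "dramatic lighting", "4k", "trending on artstation"]

def expand_prompt(prompt: str, modifiers=STYLE_MODIFIERS) -> str:
    """Append style modifiers that are not already present in the prompt."""
    missing = [m for m in modifiers if m.lower() not in prompt.lower()]
    return ", ".join([prompt] + missing)

print(expand_prompt("a castle on a hill, 4k"))
# the "4k" modifier is skipped because it is already present in the prompt
```

A learned model (the "unsimplification" proposal) would replace the fixed modifier list with expansions conditioned on the prompt's content.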
ai-art diffusion-models generative-art gpt-3 large-language-models openai prompt-engineering stable-diffusion text-to-image
ai
AI_Grand_Challenge_For_Resiliency
## RFF Info Session 1

June 3rd, 2021, 3:00pm-4:30pm. [Register here](https://www.eventbrite.com/e/information-session-ai-grand-challenge-information-session-registration-157164385909)

# Request for Feedback: AI Grand Challenge for Resilience — Impact of U.S. Government Policy on COVID-19 Using Natural Language Processing & Text Analytics

**Agency:** General Service Administration's (GSA) Office of Technology Transformation Services (TTS)

**Action:** Request for Feedback (RFF)

## Summary

GSA TTS is planning for an AI Challenge for Resilience, to help to understand the impact of U.S. government policy on the COVID-19 pandemic within the U.S., to be posted on Challenge.gov in the June-July timeframe. The AI Challenge for Resilience is TTS' first AI challenge and the first step to expanding the federal AI community with public, private, academic, and other partners.

The primary objective of this challenge is to bring together AI practitioners — with particular emphasis on those who work in the natural language processing and text analytics fields — and policy experts who understand and navigate the policy domain. The secondary objective of this challenge is to glean insight from the collective impact of regulations across government on our national resilience during the COVID-19 pandemic. In essence, the policy response is a reflection of where stressors and shocks on the nation begin to show. Rather than waiting until the pandemic is over, the challenge team hopes to begin the retrospective now, so the United States can be better prepared for unpredictable crises in the future.

Through this RFF, we seek input on approaches to constructing the challenge to better achieve the above goals. The public input provided in response to this RFF will inform the TTS team who will conduct the challenge, and will be beneficial for the development of future challenges. We are particularly interested in alternative or complementary data sources, analytic approaches, and relevant policy domains, as well as more general feedback on the tentative challenge process defined below.

## Dates

Response deadline: June 4th, 2021.

## To Request Further Information and Submit Feedback

For further information and to submit feedback, contact the AI challenge team at the team email: tts-ai-challenge@gsa.gov. We have provided 8 questions for your feedback at the bottom of this RFF. Questions, comments, or RFF submissions via email should include (1) "AI Grand Challenge for Resilience" in the subject line of the message, and (2) how you came across this RFF in the body of the email. Responses to questions will be posted publicly in the challenge <a href="https://github.com/GSA/AI_Grand_Challenge_For_Resiliency">GitHub</a> repository and accessible to all. Please refer to the proposed challenge document below in your RFF submissions.

## Proposed Challenge

TTS is considering launching the AI Challenge for Resilience: natural language processing / text analytics to understand the impacts of the pandemic, assess the U.S. government response, and strengthen the resilience of communities across the United States.

Since the start of the global pandemic, demand for federal government services has dramatically increased. In 2020, a wide range of policy changes were made to respond to COVID-19, exposing limitations of the existing system. Are executive orders, new regulations, or changes to existing regulations resulting in their intended outcomes?

With the AI Grand Challenge for Resilience series, GSA's primary objective is to bring AI practitioners and policy experts together for one or more challenges to assess resilience in the face of COVID-19. The first proposed AI Challenge for Resilience would seek to understand the landscape and impact of COVID-19 on federal policy making, and the impact of policies enacted during the pandemic. The objective of this challenge would be to use natural language processing (NLP), text analytics, and artificial intelligence / machine learning (AI/ML) methods to improve the analysis of existing unstructured policy information, demonstrate ways that structure can be applied for improved analysis, and lead to recommendations for improving data-driven regulatory review. We know that the data created and managed at the federal level is not exhaustive, but a representation of data collected across the country. We invite challenge participants to leverage additional data in order to fill gaps that would impede the ability to analyze policy outcomes.

The top-level questions we are interested in better understanding through this challenge are:

1. How can we characterize the federal response to COVID-19 through policy actions like new regulations or changes to existing regulations?
2. What are the relationships between the federal policy responses and outcomes? Are there data gaps that would impede the ability to analyze the outcomes of policies? How can practices for regulatory review be improved and better supported with data?
3. In what ways has COVID-19-related regulatory activity impacted Americans?

What can we add or change to the methods for representing or processing regulatory information that will improve our ability to answer the above questions? Regulations are primarily expressed in unstructured natural language text, and regulatory analysis involves human-resource-intensive knowledge work. Legal and policy analysts must discern general characteristics of policies, as well as interpret nuance and context within a single policy domain. They often must also navigate complex semantics and interrelationships across policy domains. Overall, changes to policies and rules span dozens if not hundreds of organizations, are fragmented across silos, and have downstream effects that may not be immediately visible to either rulemakers or the public.

## Data

The following datasets associated with this challenge are key to understanding the government response to the pandemic:

* Documents published on <a href="https://www.regulations.gov">regulations.gov</a>, reflecting federal-level rule changes, additions, and exemptions. Information on the regulations.gov API <a href="https://open.gsa.gov/api/regulationsgov/">can be found here</a>. The challenge team pulled 4GB of <a href="https://ai-challenge-regulations-gov-data.s3.amazonaws.com/regulations_gov_data.zip">starter data here</a>. This data contains documents pulled from regulations.gov under the keyword search terms "covid", "covid-19", and "coronavirus" from Nov 2020 - Mar 2021. The docs are a mix of JSON, HTML, and PDF formats.
* The Code of Federal Regulations (CFR), from both <a href="https://github.com/GSA/AI_Grand_Challenge_For_Resiliency/tree/main/data/CFR-2019">2019</a> and <a href="https://github.com/GSA/AI_Grand_Challenge_For_Resiliency/tree/main/data/CFR-2020">2020</a>.
* Search <a href="https://github.com/GSA/AI_Grand_Challenge_For_Resiliency/blob/main/data/coronavirus.gov%20Search%20Logs-20210309T185408Z-001.zip">logs</a> from coronavirus.gov, which provide some insight into the questions the federal government received from the public at different stages of the pandemic.
* An <a href="https://github.com/GSA/AI_Grand_Challenge_For_Resiliency/blob/main/data/2020%20COVID%20Relevant%20EOs-20210309T190708Z-001.zip">archive</a> of executive orders post-pandemic (2020).

We anticipate that solvers will bring in additional data sets, such as census, public-health-related, and economic data sets, depending on their approach to a solution.

## Tools

Regulatory analytics is a narrow domain, but to the extent possible we encourage solvers to leverage existing open source tools such as <a href="https://18f.gsa.gov/what-we-deliver/eregulations/">eRegulations</a>, <a href="https://www.quantgov.org/">QuantGov</a>, <a href="https://github.com/dod-advana/gamechanger-data">GAMECHANGER</a>, and <a href="https://github.com/mitre/policynet">PolicyNet</a> in their solutions to this challenge. Also, we are interested in solutions that leverage ontologies, XML, and other formalisms, and contribute to knowledge generation in the regulatory ontology space. We encourage solvers to present solutions that leverage prior work in this area. Research presented at <a href="http://jurix.nl/">JURIX</a> and <a href="https://www.remep.net/">ReMeP</a>, and packages such as <a href="https://github.com/RinkeHoekstra/lkif-core">LKIF</a>, <a href="https://jogracia.github.io/ontolex-lexicog/">lexicog</a>, <a href="http://www.metalex.eu/">CEN MetaLex</a>, <a href="http://www.akomantoso.org/">AkomaNtoso</a>, and <a href="https://github.com/usgpo/uslm">USLM</a>, are good places to start when considering technical approaches to this challenge.

## Stakeholders

Personnel from the GSA Office of Regulation Management are key stakeholders for this challenge, have helped to frame it, and will be part of the cohort of challenge evaluators. Part of the Office of Regulation Management's mission is to support federal agencies as they develop regulations, including management of agencies' recordkeeping systems (known as dockets), as well as review of public comments. During the course of the challenge, personnel from the Office of Regulation Management will be made available during public meetings, such as webinars, to ensure that solvers have the information they need to ensure their solutions are relevant to the regulatory process.

Additionally, organizations like the Undersecretary of Defense (Comptroller)'s Digital Transformation Office are sponsoring technology modernization efforts around policy discovery, evaluation, and analytics. The GAMECHANGER initiative is an example of one such activity, which has <a href="https://github.com/dod-advana/gamechanger-data">open sourced their data science and engineering tools</a> to the public.

## Challenge Goals and Categories

Data-driven policy analysis has the potential to improve and better coordinate policymaking, leading to improved outcomes across a range of issues in food supply, education, health care, economic recovery, and resiliency. The TTS challenge team is considering accepting submissions across the following three categories once the challenge competition begins:

* **Text summarization, classification & topic modeling:** using natural language processing techniques, describe the types of changes and updates made to federal regulations, policies, and rules.
* **Named entity recognition, data annotation & document linking:** combining natural language processing (NLP) and knowledge representation techniques to create policy-domain-specific knowledge graphs which show relationships between organizations, topics, and policy issuances. Alternatively, leveraging labeling and annotation methods to create high-quality, enriched datasets to be used for training NLP or AI/ML models for policy-domain-specific tasks.
* **Impact modeling:** quantitatively understanding the impact of policy on resilience. The solver will leverage analytic techniques from disciplines like operations research, statistical analysis, and mathematical modeling to enable inferences about the social and economic impact of COVID-19-related policy changes. The provided solution must include NLP and/or other text analytics, but can be augmented with other data analytic techniques and data visualization.

## Judging Criteria

The challenge team is considering using the following evaluation criteria for the submissions:

* **Data preparation and analytic output:** creative use of at least one of the required data sets and, when necessary, supplementing with additional datasets to achieve a meaningful analytic output. Additionally, this score focuses on the performance, output, and interpretation of the NLP, text analytics, and AI/ML techniques employed in the project.
* **Technique selection and analytic methodology:** this score focuses on the technical approach to employing NLP, text analytics, and other AI/ML techniques in the project.
* **Project overview and analytic storytelling:** this score focuses on the overall communication of the project, its central thesis, and the "so what".
* **Explainability, transparency, responsibility, and trust:** this score also includes communication about strengths, weaknesses, and limitations behind the analytic techniques used, which could include things like bias in the data, limited data, and technical limitations of AI/ML techniques.
* **Future practical application:** this score focuses on how the solution provided could be applied, in combination with data practices, processes, and services, to support ongoing policy analysis.

## Provide Your Feedback to the Following Questions

1. Are there specific data sets (e.g. federal policy corpus, geospatial, COVID-19-related) that you think would be critical to include as part of the challenge?
2. While evaluating policy impact and outcomes is difficult, is there sufficient data and possible research areas to test and analyze hypotheses related to policy impact?
3. What are your thoughts on the level of guidance / access to policy domain expertise required to make this challenge a success?
4. Are there additional submission categories that you believe further the overall goals of the challenge that should be considered?
5. Are there additional evaluation criteria, or modifications to these criteria, that you think should be included/made?
6. What are your thoughts on our plan to use a publicly viewable repository (version control and collaboration platform) as the method for challenge dissemination and submission collection?
7. Currently the challenge is structured to provide monetary awards to winner(s). Do you think the challenge would be most effective as a prize competition (including monetary and/or honorary awards), or as a non-competitive crowdsourcing activity?
8. What have we missed? What other thoughts do you have in relation to the proposed challenge?
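For the text summarization / classification & topic modeling category above, a minimal illustration of the kind of analysis involved is scoring distinguishing terms in policy documents with TF-IDF. This sketch uses only the standard library and toy stand-in documents; real submissions would run over the regulations.gov starter data with proper NLP tooling:

```python
import math
from collections import Counter

# Toy stand-ins for regulatory documents; real input would come from the
# regulations.gov starter data described above (JSON/HTML/PDF documents).
docs = [
    "temporary rule waiving telehealth restrictions during covid",
    "rule extending visa deadlines during covid emergency",
    "rule waiving in person inspection requirements",
]

def tfidf(docs):
    """Score each term in each document by term frequency x inverse document frequency."""
    tokenized = [doc.split() for doc in docs]
    # document frequency: in how many documents does each term occur?
    df = Counter(term for tokens in tokenized for term in set(tokens))
    n = len(docs)
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores.append({
            term: (count / len(tokens)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return scores

scores = tfidf(docs)
# "rule" appears in every document, so its IDF (and score) is zero;
# document-specific terms like "telehealth" score higher.
```

Terms shared by every document (here, "rule") get zero weight, while document-specific terms rise to the top — exactly the property that makes TF-IDF a common first pass for characterizing what distinguishes one policy document from the rest of a corpus.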
ai
mario-gpt
<div align="center">

# MarioGPT: Open-Ended Text2Level Generation through Large Language Models

[![Paper](https://img.shields.io/badge/paper-arxiv.2302.05981-B31B1B.svg)](https://arxiv.org/abs/2302.05981)
[![PyPI version](https://badgen.net/pypi/v/mario-gpt)](https://pypi.org/project/mario-gpt/)
<a href="https://huggingface.co/spaces/multimodalart/mariogpt"><img src="https://img.shields.io/badge/%20HuggingFace%20-Demo-blue.svg" alt="HuggingFace Spaces"></a>
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/16kr9idjuim6raiypasoqaac768avogxp?usp=sharing)

| Playing generated level | Interacting with levels |
| --- | --- |
| ![Generated level](static/example_interactive.gif) | ![Test level](static/test_level.png) |

</div>

## How does it work?

| Architecture | Example prompt generations |
| --- | --- |
| ![Architecture](static/architecture.png) | ![Prompt samples](static/prompt-samples.png) |

MarioGPT is a finetuned GPT-2 model (specifically, [distilgpt2](https://huggingface.co/distilgpt2)) that is trained on a subset of Super Mario Bros and Super Mario Bros: The Lost Levels levels, provided by [The Video Game Level Corpus](https://github.com/TheVGLC/TheVGLC). MarioGPT is able to generate levels guided by a simple text prompt. This generation is not perfect, but we believe this is a great first step toward more controllable and diverse level / environment generation!

### Forward generation

![Timelapse](static/timelapse_0.gif)

## Requirements

* python3.8+

## Installation

from pypi:

```bash
pip install mario-gpt
```

or from source:

```bash
git clone git@github.com:shyamsn97/mario-gpt.git
python setup.py install
```

## Generating Levels

Since our models are built off of the amazing [transformers](https://github.com/huggingface/transformers) library, we host our model in https://huggingface.co/shyamsn97/Mario-GPT2-700-context-length

This code snippet is the minimal code you need to generate a mario level!

```python
from mario_gpt import MarioLM, SampleOutput

# pretrained_model = "shyamsn97/Mario-GPT2-700-context-length"
mario_lm = MarioLM()

# use cuda to speed stuff up
# import torch
# device = torch.device('cuda')
# mario_lm = mario_lm.to(device)

prompts = ["many pipes, many enemies, some blocks, high elevation"]

# generate level of size 1400; pump temperature up to ~2.4 for more stochastic but playable levels
generated_level = mario_lm.sample(
    prompts=prompts,
    num_steps=1400,
    temperature=2.0,
    use_tqdm=True
)

# show string list
generated_level.level

# show PIL image
generated_level.img

# save image
generated_level.img.save("generated_level.png")

# save text level to file
generated_level.save("generated_level.txt")

# play in interactive
generated_level.play()

# run Astar agent
generated_level.run_astar()

# continue generation
generated_level_continued = mario_lm.sample(
    seed=generated_level,
    prompts=prompts,
    num_steps=1400,
    temperature=2.0,
    use_tqdm=True
)

# load from text file
loaded_level = SampleOutput.load("generated_level.txt")

# play from loaded (should be the same level that we generated)
loaded_level.play()
```

## Training

The code to train MarioGPT is pretty simple and straightforward. The training class is located [here](mario_gpt/trainer.py), with a small example notebook: [notebooks/Train.ipynb](notebooks/Train.ipynb).

```python
import torch
from mario_gpt import MarioDataset, MarioLM, TrainingConfig, MarioGPTTrainer

# create basic gpt model
BASE = "distilgpt2"
mario_lm = MarioLM(lm_path=BASE, tokenizer_path=BASE)

# create dataset
dataset = MarioDataset(mario_lm.tokenizer)

# create training config and trainer
config = TrainingConfig(save_iteration=10)
trainer = MarioGPTTrainer(mario_lm, dataset, config=config)

# train for 100 iterations!
trainer.train(100, batch_size=1)
```

See the notebook [notebooks/Sampling.ipynb](notebooks/Sampling.ipynb) for a more in-depth tutorial on generating levels.

## Interacting with levels

Right now there are two ways to interact with generated levels:

1. [HuggingFace demo](https://huggingface.co/spaces/multimodalart/mariogpt) — thanks to the amazing work by [multimodalart](https://github.com/multimodalart), you can generate and play levels interactively in the browser! In addition, GPUs are provided, so you don't have to own one yourself.
2. Using the `play` and `run_astar` methods in [mario_gpt/simulator](mario_gpt/simulator/simulator.py). These require you to have Java installed on your computer (Java 8 tested). For interactive play, use the `play()` method, and for A* use the `run_astar` method. Example:

```python
from mario_gpt import MarioLM

mario_lm = MarioLM()

prompts = ["many pipes, many enemies, some blocks, high elevation"]

generated_level = mario_lm.sample(
    prompts=prompts,
    num_steps=1400,
    temperature=2.0,
    use_tqdm=True
)

# play in interactive
generated_level.play()

# run astar agent
generated_level.run_astar()
```

## Future Plans

Here's a list of some stuff that will be added to the codebase!

- [x] Basic inference code
- [x] Add MarioBert model
- [x] Add interactive simulator
- [x] Training code from paper
- [ ] Inpainting functionality from paper
- [ ] Open-ended level generation code
- [ ] Different generation methods (eg. constrained beam search, etc.)

## Authors

* Shyam Sudhakaran <shyamsnair@protonmail.com>, [github](https://github.com/shyamsn97), [website](https://shyamsn97.github.io)
* Miguel González-Duque <migd@itu.dk>, [github](https://github.com/miguelgondu)
* Claire Glanois <clgl@itu.dk>, [github](https://github.com/claireaoi)
* Matthias Freiberger <matfr@itu.dk>, [github](https://github.com/matfrei)
* Elias Najarro <enaj@itu.dk>, [github](https://github.com/enajx)
* Sebastian Risi <sebr@itu.dk>, [github](https://github.com/sebastianrisi), [website](https://sebastianrisi.com)

## Citation

If you use the code for academic or commercial use, please cite the associated paper:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.05981,
  doi = {10.48550/ARXIV.2302.05981},
  url = {https://arxiv.org/abs/2302.05981},
  author = {Sudhakaran, Shyam and González-Duque, Miguel and Glanois, Claire and Freiberger, Matthias and Najarro, Elias and Risi, Sebastian},
  keywords = {Artificial Intelligence (cs.AI), Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {MarioGPT: Open-Ended Text2Level Generation through Large Language Models},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
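Since `generated_level.save("generated_level.txt")` writes the level as plain text (one row of tile characters per line), a saved level can be inspected with ordinary string processing. The sketch below counts tile characters in a tiny hand-written level string; the specific tile characters used here (`-` for sky, `X` for ground, `?` for a question block) are an assumption for illustration, following the VGLC convention rather than any guarantee about MarioGPT's exact vocabulary:

```python
from collections import Counter

# A tiny hand-written level string; levels saved via generated_level.save(...)
# have the same row-per-line shape, though the real tile vocabulary comes from
# the VGLC corpus and may differ from this sketch.
level = "\n".join([
    "----------",
    "----?-----",
    "---XXX----",
    "XXXXXXXXXX",
])

def tile_counts(level_text: str) -> Counter:
    """Count each tile character, ignoring row separators."""
    return Counter(level_text.replace("\n", ""))

counts = tile_counts(level)
print(counts["X"], counts["?"])  # → 13 1
```

Summary statistics like these are a cheap way to sanity-check whether a prompt such as "many pipes, some blocks" actually shifted the distribution of generated tiles.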
ai
ADL-NLP
# ADL-NLP

Notebooks for the course Applied Deep Learning for Natural Language Processing at TUM.
ai
MOS
mos 1 microos arm m3 m4 fpu shell stm32f103 log 1 20211007 shell stm32 demo 2 20211008 stm32f103rct6 mini freertos lcd demo freertos 14 4 freertos 2 mos 2 1 mos 1 img src images frame png alt frame style zoom 33 2 2 mos img src images frame 1 png alt frame 1 style zoom 67 2 3 mos img src images frame 2 png alt frame 2 style zoom 67 3 1 mos user config h 2 core include mos init h mos list h mos misc h mos sys h mos task h mos tick h mos heap h mos misc h mos shell h mos init c mos misc c mos sys c mos task c mos tick c mos heap c mos misc c printf mos shell c shell 3 ports mos hw c mos hw h mos port asm s pendsv svc mos port c mos port h mos typedef c 4 4 1 1 config h 4 2 1 2 3 4 5 shell 4 3 1 2 shell 3 5 stm32f103 5 1 5 1 1 1 svc handler pendsv handler systick handler 2 mos user config h cpu mos define mos config use dynamic heap no 3 mos init h mos task h mos port h 5 1 2 1 2 3 4 5 6 7 5 2 5 2 1 1 svc handler pendsv handler systick handler 2 mos user config h cpu mos config cpu frequency mos mos config tick per second define mos config use dynamic heap yes mos config heap size shell debug define mos config use shell yes define mos config use debug printf yes define mos shell debug port usart1 3 mos h mos init h mos task h mos port h mos ipc h 4 port h cpu data cpu 5 port c mos port output shell debug usart1 irqhandler 1 shell debug mos port bsp init shell 5 2 2 1 2 3 4 5 6 7 6 6 1 shell c help cmd help information ls hardware and os information task os task information ipc os ipc information heap os heap information time os time information
os
pets-front
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg)](CODE_OF_CONDUCT.md) ![Integrate](https://github.com/pets-oss/pets-front/workflows/Integrate/badge.svg)

# pets-front

Pets information system front end. The live site is available at https://app.petbook.lt

## Setup

Before starting the setup, choose what you will do with your code changes: if you will keep them as a local copy, make a repository clone; if you will contribute to the project, make a repository fork and read the [contribution guideline](CONTRIBUTING.md).

- The project codebase is optimized for Visual Studio Code, which can be [downloaded](https://code.visualstudio.com) and used with most popular OSes.
- Install the [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) and [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) extensions, as these are mandatory for project codebase consistency.
- Get your local copy of the repository by cloning or forking.
- If not yet installed, get [Node.js](https://nodejs.org/en/download/) (v10+) and [npm](https://www.npmjs.com/get-npm). Run `node -v` in your terminal to check the actual Node version. If you need to use various Node versions for your projects, consider installing and using a [Node version manager](https://www.npmjs.com/package/n).
- Install project package dependencies: run `npm install` in the terminal. Running the same command is recommended after each repository update, as more dependencies may be added as the project develops.
- Make sure to set up local environment variables to run the project (see the next section).
- To start the development version with hot reload, run `npm start` in the terminal.

## Environment variables

App configuration values are stored in the `.env` file. For your local build, create an `.env.local` file from `.env.sample` and set custom values there. Only variables with the `REACT_APP_` prefix will be included.

## Auth0 variables and configs

To set up a local build, register with Auth0 and follow the [Auth0 React integration guide](https://auth0.com/docs/quickstart/spa/react/01-login):

- Set the `REACT_APP_AUTH0_DOMAIN` and `REACT_APP_AUTH0_CLIENT_ID` variables according to your created Auth0 app.
- Follow the other steps of the guide to configure callback URLs: in your Auth0 application section, set Allowed Callback URLs, Allowed Web Origins, and Allowed Origins (CORS).
- The value of the `REACT_APP_AUTH0_AUTH_AUDIENCE` parameter needs to match an existing API service identifier configured in the APIs section of your dashboard.
- Continue setting params in your Auth0 application section: set Application Type to "Regular Web Application" and Token Endpoint Authentication Method to "None".

After the env variables are defined and the Auth0 service itself is configured, you're good to go.

## GraphQL

GraphQL types used by TypeScript should not be defined manually, but generated from the schema. To re-generate GraphQL types, run the `npm run codegen` command. Types are stored in the `src/graphql/types.ts` file. The latest API features are exposed in the development [GraphQL playground](https://petbook-back-dev.herokuapp.com/graphql).

Started by @kayak, #wecancode academy 2021, Kaunas.
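As a concrete illustration of the environment-variable setup described above, an `.env.local` might look like the following. The variable names come from this README; every value is a placeholder to be replaced with your own Auth0 settings:

```shell
# .env.local — placeholder values, not real credentials
REACT_APP_AUTH0_DOMAIN=your-tenant.eu.auth0.com
REACT_APP_AUTH0_CLIENT_ID=yourAuth0ClientId
REACT_APP_AUTH0_AUTH_AUDIENCE=https://your-api-identifier
```

Remember that only `REACT_APP_`-prefixed variables are picked up by the build, and `.env.local` should stay out of version control.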
front_end
tracking.js
banner banner svg div align center point right https github com eduardolundgren tracking js issues 395 point left div tracking js build status http img shields io travis eduardolundgren tracking js svg style flat https travis ci org eduardolundgren tracking js devdependencies status http img shields io david dev eduardolundgren tracking js svg style flat https david dm org eduardolundgren tracking js info devdependencies the tracking js library brings different computer vision algorithms and techniques into the browser environment by using modern html5 specifications we enable you to do real time color tracking face detection and much more all that with a lightweight core 7 kb and intuitive interface official website http trackingjs com documentation http trackingjs com docs html api docs http trackingjs com api install install via bower http bower io npm https www npmjs com or download as a zip https github com eduardolundgren tracking js archive master zip bower install tracking npm install tracking examples demo 1 https cloud githubusercontent com assets 398893 3709347 ec72876c 1453 11e4 8450 149d06d487f2 jpg http trackingjs com examples face tag friends html demo 2 https cloud githubusercontent com assets 398893 3709357 1a1c2e16 1454 11e4 804d e6ada6c65997 jpg http trackingjs com examples face fish tank html demo 3 https cloud githubusercontent com assets 398893 3709361 38f86e8a 1454 11e4 811d 52bd21b37e85 jpg http trackingjs com examples color hexgl html demo 4 https cloud githubusercontent com assets 398893 3709464 5447a302 1456 11e4 96b2 d2fae28e2a01 jpg http trackingjs com examples color draw something html demo 5 https cloud githubusercontent com assets 398893 3709469 6a3e859a 1456 11e4 982a d46a55890e1e jpg http trackingjs com examples color fish tank html features trackers http trackingjs com docs html trackers color tracker http trackingjs com docs html color tracker object tracker http trackingjs com docs html object tracker utilities http trackingjs 
com docs html utilities feature detection fast http trackingjs com docs html feature detection feature descriptor brief http trackingjs com docs html feature descriptor convolution http trackingjs com docs html convolution gray scale http trackingjs com docs html gray scale image blur http trackingjs com docs html image blur integral image http trackingjs com docs html integral image sobel http trackingjs com docs html sobel viola jones http trackingjs com docs html viola jones web components http trackingjs com docs html web components color element http trackingjs com docs html color element object element http trackingjs com docs html object element browser support you can plug tracking js into some well supported html elements such as canvas video and img ie https cloud githubusercontent com assets 398893 3528325 20373e76 078e 11e4 8e3a 1cb86cf506f0 png chrome https cloud githubusercontent com assets 398893 3528328 23bc7bc4 078e 11e4 8752 ba2809bf5cce png firefox https cloud githubusercontent com assets 398893 3528329 26283ab0 078e 11e4 84d4 db2cf1009953 png opera https cloud githubusercontent com assets 398893 3528330 27ec9fa8 078e 11e4 95cb 709fd11dac16 png safari https cloud githubusercontent com assets 398893 3528331 29df8618 078e 11e4 8e3e ed8ac738693f png ie 9 latest latest latest latest however the browser support may vary if you request the user s camera which relies on getusermedia api http caniuse com feat stream roadmap optical flow face recognition pose estimation faster keypoint descriptor brief more trainings hand car plate etc contributing 1 fork it 2 create your feature branch git checkout b my new feature 3 commit your changes git commit m add some feature 4 push to the branch git push origin my new feature 5 submit a pull request d history for detailed changelog check releases https github com eduardolundgren tracking js releases team tracking js is maintained by these people and a bunch of awesome contributors https github com eduardolundgren 
tracking js graphs contributors eduardo lundgren https 2 gravatar com avatar 42327de520e674a6d1686845b30778d0 https github com eduardolundgren thiago rocha https 2 gravatar com avatar 09c627c62a26a770200819a41a71a3eb https github com thiago rocha zeno rocha https 2 gravatar com avatar e190023b66e2b8aa73a842b106920c93 https github com zenorocha pablo carvalho https 2 gravatar com avatar ae10d2692a6adbf051c6d4255e222df8 https github com pablocp maira bello https 2 gravatar com avatar 97e0e62c9c02badba4c321f7613e6acf https github com mairatma jerome etienne https 2 gravatar com avatar b381880f9f81065247ba9a0b7ff68358 https github com jeromeetienne eduardo lundgren https github com eduardolundgren thiago rocha https github com thiago rocha zeno rocha https github com zenorocha pablo carvalho https github com pablocp maira bello https github com mairatma jerome etienne https github com jeromeetienne license bsd license https github com eduardolundgren tracking js blob master license md eduardo lundgren
ai
Hands-on-Nuxt.js-Web-Development
hands on nuxt js web development a href https www packtpub com web development learn nuxt js utm source github utm medium repository utm campaign 9781789952698 img src https www packtpub com media catalog product cache 4cdce5a811acc0d2926d7f857dceb83b 9 7 9781789952698 original 408 jpeg alt hands on nuxt js web development height 256px align right a this is the code repository for hands on nuxt js web development https www packtpub com web development learn nuxt js utm source github utm medium repository utm campaign 9781789952698 published by packt build universal and static generated vue js applications using nuxt js what is this book about nuxt js is a progressive web framework built on top of vue js for server side rendering ssr with nuxt js and vue js building universal and static generated applications from scratch is now easier than ever before this book covers the following exciting features integrate nuxt with the latest version of vue js extend vuejs apps using nuxt s pages components routing middleware plugins and modules use nuxt to talk to apis or data platforms written in any server side language create a basic real time web app using node js koa js and rethinkdb develop universal and static generated web apps with nuxtjs headless cms and graphgl if you feel this book is for you get your copy https www amazon com dp 1789952697 today a href https www packtpub com utm source github utm medium banner utm campaign githubbanner img src https raw githubusercontent com packtpublishing github master github png alt https www packtpub com border 5 a instructions and navigations all of the code is organized into folders for example chapter02 the code will look like the following name nuxt app scripts dev nuxt following is what you need for this book the book is for any javascript or full stack developer who wants to build server side rendered vue js apps a basic understanding of the vue js framework will assist with understanding key concepts covered in the book 
with the following software and hardware list you can run all code files present in the book chapter 1 18 software and hardware list chapter software required os required 1 18 php v7 4 5 axios v0 19 2 windows mac os x and linux any 1 18 node js v12 18 2 lts at least v8 9 0 windows mac os x and linux any 1 18 npm v6 14 7 koa js v2 13 0 windows mac os x and linux any 1 18 mongodb v4 2 6 mysql v10 3 22 mariadb windows mac os x and linux any 1 18 foundation v6 6 3 swiper js v6 0 0 windows mac os x and linux any 17 rethinkdb v2 4 0 mac os x and linux any 18 keystone js v11 2 0 socket io v2 3 0 windows mac os x and linux any related products other books you may enjoy clean code in javascript packt https www packtpub com web development clean code in javascript utm source github utm medium repository utm campaign 9781789957648 amazon https www amazon com dp 1789957648 building forms with vue js packt https www packtpub com business other building forms with vue js utm source github utm medium repository utm campaign 9781839213335 amazon https www amazon com dp 1839213337 get to know the author lau tiam kok aka lau thiam kok is a cross disciplinary full stack web developer designer and analyst he was born in penang malaysia his studies include a bachelor of applied arts degree at university malaysia sarawak 1996 1999 and an msc in digital futures at the institute of digital art and technology university of plymouth uk 2002 2003 lau has freelanced for more than 10 years for various individuals institutions and companies he works with designers or independently from designing layouts to coding the frontend and server side programs to produce responsive websites he also works collaboratively on air quality monitoring projects for citizen sense based in the united kingdom which uses r openair shiny mongodb rethinkdb express js koa js socket io and nuxt js for data analysis web apps and iot data platforms suggestions and feedback click here https docs google com forms d e 
1faipqlsdy7datc6qmel81fiuuymz0wy9vh1jhkvpy57oimekgqib ow viewform if you have any feedback or suggestions download a free pdf i if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost br simply click on the link to claim your free pdf i p align center a href https packt link free ebook 9781789952698 https packt link free ebook 9781789952698 a p
front_end
My-Wallet-V3-iOS
blockchain wallet for ios banner documentation other github banner png tooling homebrew 4 0 21 xcode 14 3 ruby 3 2 1 ruby gems 3 4 0 swiftlint 0 51 0 swiftformat 0 51 4 building install xcode after installing xcode open it to begin the command line tools installation after finished make sure that a valid cl tool version is selected in xcode preferences locations command line tools install homebrew https brew sh install ruby install a ruby version manager such as rbenv https github com rbenv rbenv brew update brew install rbenv rbenv init install a recent ruby version rbenv install 3 2 1 rbenv global 3 2 1 eval rbenv init install ruby dependencies then the project ruby dependencies fastlane etc gem install bundler bundle install install build dependencies brew sh scripts install brew dependencies sh add production config file clone the wallet ios credentials https github com blockchain wallet ios credentials repository and copy it s config directory to this project root directory it contains a xcconfig for each environment config authenticationkitconfig dev xcconfig config authenticationkitconfig production xcconfig config authenticationkitconfig staging xcconfig config authenticationkitconfig alpha xcconfig config blockchainconfig dev xcconfig config blockchainconfig production xcconfig config blockchainconfig staging xcconfig config blockchainconfig alpha xcconfig config networkkitconfig dev xcconfig config networkkitconfig production xcconfig config networkkitconfig staging xcconfig config networkkitconfig alpha xcconfig for example this is how authenticationkitconfig production xcconfig looks like blockchain url blockchain com login url login blockchain com google recaptcha site key 00000000 for example this is how blockchainconfig production xcconfig looks like include authenticationkitconfig authenticationkit production xcconfig include networkkitconfig networkkit production xcconfig assetcatalog compiler appicon name appicon openssl cert url blockchain info 
sift account id 00000000 sift beacon key 00000000 product bundle identifier com rainydayapps blockchain bundle display name blockchain login universal link login blockchain com universal link mode intercom api key 00000000 intercom app id 00000000 blockchain wallet page link blockchainwallet page link google recaptcha bypass relay host relay walletconnect com wallet connect product id 00000000 for example this is how networkkitconfig production xcconfig looks like api url api blockchain info checkout env live everypay api url pay every pay eu exchange url blockchainexchange page link exchange explorer server blockchain com iterable api key 00000000 pin certificate 1 retail core url api blockchain info nabu gateway wallet server blockchain info websocket server ws blockchain info add firebase config files clone wallet ios credentials repository and copy it s firebase directory into blockchain directory it contains a googleservice info plist for each environment firebase dev googleservice info plist firebase prod googleservice info plist firebase staging googleservice info plist firebase alpha googleservice info plist add environment variables for scripts clone wallet ios credentials repository and copy the env to the root folder of the project hide the file by using mv env env xcodegen we are integrating xcodegen and despite still committing project files in git we should generate project files using the following script installing brew install xcodegen generate projects dependencies sh scripts bootstrap sh you may need to run the following command if you encounter an xcode select error sudo xcode select s applications xcode app contents developer build the project cmd r modules please refer to the readme modules readme md in the modules directory please also refer to the readme testkit readme md in the testkit directory troubleshooting lfs in case git shows some files as modified when they were not and displays this warning encountered x file s that should have 
been pointers but weren t when trying to checkout try first this command sequence git lfs uninstall git reset hard git lfs install git lfs pull if it didn t work this sequence should git rm cached r git reset hard git rm gitattributes git reset git checkout contributing if you would like to contribute code to the blockchain ios app you can do so by forking this repository making the changes on your fork and sending a pull request back to this repository when submitting a pull request please make sure that your code compiles correctly and all tests in the blockchaintests target passes be as detailed as possible in the pull request s summary by describing the problem you solved and your proposed solution additionally for your change to be included in the subsequent release s change log make sure that your pull request s title and commit message is prefixed using one of the changelog types the pull request and commit message format should be changelog type component brief description for example fix create wallet fix email validation for a full list of supported types see changelogrc https github com blockchain my wallet v3 ios blob master changelogrc l6 l69 license source code license lgpl v3 artwork images remain copyright blockchain luxembourg s a r l security security issues can be reported to us in the following venues email security blockchain info bug bounty https hackerone com blockchain
bitcoin wallet ios-app swift objective-c
blockchain
Community-driven-initivates
# Tech Talk Codes

The project contains source code implemented in various open-source tech talks.
opensource arduino esp32 raspberry-pi cmakelists
os
Guilanet
guilanet guillant information technology
server
secondbrain
p align center img height 60 src assets secondbrain png p p align center img height 120 src assets secondbrain icon png p h2 align center multi platform desktop app to download and run large language models llm locally in your computer h2 br p align center img width 600 src assets video gif p p align center a href https secondbrain sh target blank download a nbsp nbsp give it a star nbsp a href https twitter com intent tweet text if you want to easily download and use local llms try this app https github com juliooa secondbrain target blank share it on twitter a p features the power of ai in your computer local it can work without internet privacy first your messages don t leave your computer uncensored you can talk whatever you want open source try the app if you just want to get the app installers and try the app go to a href https secondbrain sh target blank secondbrain sh a how to use the first time you open the app you will need to download a model and then activate it download a model secondbrain comes with some models ready to download that we know work you can check or modify the models json src tauri configs models json to see their details img width 600 src assets screenshot1 png you can also add your own model files to the models folder and then activate them from within secondbrain app the model needs to be in ggml format activate the model just select the model and press activate model and you are ready to start using the model the prompt is important language models are predictive machines you throw some words tokens actually at them and they try to predict what is the most likely token to come after that and after that new one and so on not all the models work so smooth as chatgpt it depends on the pre training the fine tuning and the under the hood prompting when using models you need to take into account what format they understand better for example alpaca models were trained with this format https github com tatsu lab stanford alpaca data release 
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```

So if you want to download and use your own models, take the prompt into account and change it in the configuration screen. With foundational models like LLaMA, things get crazy: there is no fine-tuning, so you can finally flex your prompt-engineering skills and play around.

## How to run from source

The app is built with Tauri, so basically you need to follow this guide: https://tauri.app/v1/guides/getting-started/prerequisites

## Techstack

- [GGML](https://github.com/ggerganov/ggml)
- [llm](https://github.com/rustformers/llm)
- [Rust](https://www.rust-lang.org)
- [Tauri](https://tauri.app)
- [SvelteKit](https://kit.svelte.dev)
- [Skeleton](https://www.skeleton.dev)

## Contribution

Yes please! Just send your PRs. Gracias!

## Updates

For updates, follow [@julioandresdev](https://twitter.com/julioandresdev).
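To see how such a template is assembled in practice, here is a small, self-contained Python sketch that builds the Alpaca-style prompt shown above from an instruction string. The helper name is ours, not part of Secondbrain:

```python
# Hypothetical helper: reproduce the Alpaca prompt template so you can
# check what a custom model expects before pasting it into the
# configuration screen.
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Name three facts about llamas."))
```

If a model was fine-tuned with a different template, swap the surrounding text accordingly — the point is that the wrapper text matters as much as the instruction itself.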
ai
lyne-components
h1 align center lyne components h1 lyne components are the building blocks of the lyne design system https github com lyne design system lyne blob master docs terminology md lyne and are based on standard compliant web components https github com lyne design system lyne blob master docs terminology md web components compiled by stenciljs https github com lyne design system lyne blob master docs terminology md stenciljs and browsable through storybook https github com lyne design system lyne blob master docs terminology md storybook br notice lyne components are experimental at the moment with enthusiasm br don t use the project until it has left infancy br what we re working on right now check the current tasks we re working on over here https github com lyne design system lyne components projects 4 but since the project is still experimental and in rapid development not all tasks we re working on are reflected as issues or tickets since this would be too cumbersome at the current development stage also check out over todos todos md what all needs to be done at some point browser and screen reader support this library supports the most recent two versions of all major browsers chrome including android firefox safari including ios and edge we aim for great user experience with the following screen readers windows nvda and jaws with ff chrome macos voiceover with safari chrome ios voiceover with safari android android accessibility suite formerly talkback with chrome storybook the latest still experimental version of the storybook component browser for lyne components can be found here https lyne storybook app sbb ch npm the current experimental build of lyne components can be found on npm https www npmjs com package sbb esta lyne components getting started to see how to use lyne components in react angular vue svelte or plain javascript please refer to this https github com lyne design system lyne getting started repo to see example implementations documentation 
check the docs docs readme md directory for our documentation which we will continuously enhance contributing see our contributing guide github contributing md and check also our code of conduct github code of conduct md license this software is published by sbb cff ffs under the mit license licence and unsupported unless otherwise clearly stated use at your own risk
design-system stenciljs javascript travis-ci sbb-cff-ffs web-components typescript netlify storybook design-tokens lyne-components lyne lyne-design-tokens
os
cs733
# CS 733

This repository contains project assignments for the course Cloud Computing (CS 733), IIT Bombay (www.cse.iitb.ac.in). The instructor for the course is Prof. Sriram Srinivasan. For any queries, feel free to get in touch at 153050019@iitb.ac.in.
cloud
bartender
**Check out the dev branch for the new (unstable) versions.**

# Bartender

A window addon for the game Age of Empires II HD. The addon displays additional information about the game, such as buildings, currently researched technologies, or units in training.

![screenshot](https://i.imgur.com/jwstbwb.jpg)

Bartender is a customizable overlay, and it can be divided into four parts:

1. **Bartender's research panels** — displays current researches and researched technologies.
2. **Bartender's offscreen unit icons** — shows you a unit which is out of the screen: its icon is shown on the side of the screen, indicating the object's location.
3. **Bartender's info panels** — gives you statistics about your villagers, ships and trade at the top of your screen. It can display the sum of carried resources and the number of villagers (or other units) gathering each resource, and it shows you idle units. Furthermore, it provides K/D ratios of units and buildings, the number of civilians and military, the number of owned relics, and reseeded farms.
4. **Bartender's bars** — provides information about your civilization. Each bar can show different things: one bar may provide data about your buildings, another about your army composition. Some building icons may change to the currently trained unit or researched technology; moreover, they show the time in seconds until training or research completes, or the length of the training queue, the number of garrisoned units, current HP, attack, etc. Furthermore, you can filter these icons (see the next image).

## Dependencies

- Python 3.6 — https://www.python.org/downloads/ (download Python from python.org)
- PyQt5 — https://www.riverbankcomputing.com/software/pyqt/download5 — `pip3 install pyqt5`
- pywin32 — https://pypi.org/project/pywin32/ — `pip3 install pywin32`

## FAQ

- **Can I use it in singleplayer / records / multiplayer?** Yes / yes / no (but it works).
- **How does it work?** Bartender reads the memory of the game; there is no API (boo at you, Forgotten Empires).
- **Voobly version or AoE 2 DE version?** Voobly is possible to do — just replace some offsets. AoE 2 DE: nah, the game crashes as soon as you attach the debugger.
- **Why did you create this addon?** Age of Empires II uses UI from the previous century. I just wanted to simplify the UI for me and for other players.

## TODO

- Watch for crashes and consistency of displayed data.
- UI:
  1. Fix lag problems caused by the transparency of the window (in my case it decreases AoE's FPS to ~25).
  2. Make it more user friendly.
- Mechanics:
  1. Detect if the game is a record game or an SP/MP game; add training/research info about other players. It can be done by comparing three game pointers in `aoc_game.py` (two of them are null while watching a recorded game).
  2. It gets laggy if the game loads a lot of data; we need to change the approach used for memory reading.
- This file:
  1. Update screenshots.
  2. Add a UI explanation.

## Known limitations

1. Needs an update when the AoE2HD version changes.

## License

Bartender, Copyright (C) 2018 Flea (blk panther). This program (located in this folder) is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/
aoe2hd aoe2 aoc aok age-of-empires spectator spectatoroverlay addon bartender
server
sinosteel
spring boot react node js ui a href https ant design index cn ant design a web 90 http 47 93 233 254 9016 admin admin img src https github com dimitrizhao screenshots blob master sinosteel framework0 png img src https github com dimitrizhao screenshots blob master sinosteel framework1 png img src https github com dimitrizhao screenshots blob master sinosteel framework2 png img src https github com dimitrizhao screenshots blob master sinosteel framework3 png img src https github com dimitrizhao screenshots blob master sinosteel framework4 png img src https github com dimitrizhao screenshots blob master sinosteel framework5 png json orm es6 spring boot orm spring data jpa mybatis redis apache shiro apache maven react ant design react redux router redux thunk webpack nginx server client server client server framework framework example java framework framework example framework example mysql 3306 root redis 6379 mysql redis framework example src main resources config redis framework framework sql ide import maven cd framework example mvn package cd framework example target java jar framework example 1 0 0 jar 9016 client nodejs react cd framework webclient npm install npm run dev 3000 localhost 3000 cd framework webclient npm install npm run build dist nginx a href https github com dimitrizhao sinosteel blob master readme dev guide md a https github com owlaford easy react desktop https github com davezuko react redux starter kit mit
java spring-boot es6 react ant-design spring-data-jpa mybatis shiro redis
front_end
go-cv
# go-cv

go-cv is a computer vision and image processing library for Go using Golang assembly. It is a work-in-progress wrapper around the [Simd](https://github.com/ermig1979/simd) library. For now, most work has been done on the SSE2 version.

## Simd

The [Simd](https://github.com/ermig1979/simd) library is a highly optimized image processing library. It provides many useful high-performance algorithms for image processing, such as: pixel format conversion, image scaling and filtration, extraction of statistic information from images, motion detection, object detection (HAAR and LBP classifier cascades) and classification, and neural networks.

The algorithms are optimized using different SIMD CPU extensions. In particular, the library supports the following CPU extensions: x86/x64 — SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX and AVX2; ARM — NEON.

## Installation

```
go get -u github.com/fwessels/go-cv
```

## Samples

See the `samples` directory for various sample algorithms. For example:

```
cd samples
go run filtering.go
```

## Performance compared to OpenCV 2.x

A comparison against [go-opencv](https://github.com/lazywei/go-opencv) shows the following results:

```
benchmark                  old ns/op (OpenCV)   new ns/op (SSE2)   delta
BenchmarkGaussian-8        74338                18481              -75.14%
BenchmarkGaussianRGB-8     186024               57169              -69.27%
BenchmarkBlur-8            110155               16623              -84.91%
BenchmarkBlurRGB-8         293017               53716              -81.67%
BenchmarkMedian3x3-8       129268               23270              -82.00%
BenchmarkMedian3x3RGB-8    169857               65896              -61.21%
BenchmarkMedian5x5-8       883311               131812             -85.08%
BenchmarkMedian5x5RGB-8    1246845              388415             -68.85%
```

## go-cv

See the underlying package [go-cv](https://github.com/fwessels/go-cv) for more information.

## License

go-cv is released under the Apache License v2.0. You can find the complete text in the file LICENSE.
simd golang opencv visualization image-processing
ai
cotrain-prompting
co train large language models this repo contains the code for our icml 2022 paper co training improves prompt based learning for large language models https arxiv org abs 2202 00828 and updates extensions including tuning based on t few https github com r three t few this code is useful for boosting the zero shot and few shot performance of large language models distilling large models like gpt 3 and t0 into smaller task specific models large parts of the repo are built on top of the excellent t few https github com r three t few repository if you find this code useful please consider citing our paper inproceedings lang2022co title co training improves prompt based learning for large language models author lang hunter and agrawal monica n and kim yoon and sontag david booktitle international conference on machine learning pages 11985 12003 year 2022 organization pmlr setup conda create n cotrain conda activate cotrain pip install r requirements txt f https download pytorch org whl cu113 torch stable html zero shot co training bert and t0 3b with t few since the publication of our icml paper t few has emerged as a better technique for fine tuning t0 than soft prompt tuning we have included code for co training t0 using t few with bert using regular head tuning method model rte cb t0 3b no training 62 1 51 8 t0 3b co training 86 1 0 6 78 9 9 5 deberta large co training 87 1 0 3 79 3 9 4 the median performances of cotrained t0 and bert on cb are 82 1 and 85 7 respectively the large standard deviation is because 2 5 seeds get stuck at 67 accuracy for both models even these low seeds are near the best performance of cotraining with soft prompt tuning reproducing to run co training for all seeds and datasets cuda visible devices 0 bin cotrain tfew sh once this is finished we can look in dev scores json model to get mean performance across seeds after 5 iterations cat exp out cotrain ia3 rte seed round5 dev scores json bert cut d f 2 cut d f 1 jq s add length cat exp out 
cotrain ia3 rte seed round5 dev scores json t0 awk nr 2 0 cut d f 2 cut d f 1 jq s add length cat exp out cotrain ia3 cb seed round5 dev scores json bert cut d f 2 cut d f 1 jq s add length cat exp out cotrain ia3 cb seed round5 dev scores json t0 awk nr 2 0 cut d f 2 cut d f 1 jq s add length to just do one run cuda visible devices 0 dataset rte seed 0 t03b json dataset json ia3 json k exp name cotrain ia3 dataset seed seed seed seed few shot false allow skip exp true train template idx 0 eval template idx 0 bert name microsoft deberta large mnli bert epochs 40 eval epoch interval 1 note the performance is sensitive to the choice of prompt for t0 train template idx eval template idx since this dictates the quality of the initial pseudo labeled data used for co training by default the code uses the first template with soft prompt tuning this is the original method from our icml paper which used soft prompt tuning since t few had not been released method model rte cb t0 3b co training 84 8 0 8 64 6 2 4 deberta large co training 86 4 0 7 72 9 1 3 reproducing to run all seeds and datasets cuda visible devices 0 bin cotrain spt sh once this is finished we can look in dev scores json model to get mean performance across seeds after 5 iterations cat exp out cotrain spt rte seed round5 dev scores json bert cut d f 2 cut d f 1 jq s add length cat exp out cotrain spt rte seed round5 dev scores json t0 awk nr 2 0 cut d f 2 cut d f 1 jq s add length cat exp out cotrain spt cb seed round5 dev scores json bert cut d f 2 cut d f 1 jq s add length cat exp out cotrain spt cb seed round5 dev scores json t0 awk nr 2 0 cut d f 2 cut d f 1 jq s add length to just do one run cuda visible devices 0 dataset rte seed 0 python m src cotrain c t03b json dataset json prompt tuning 10 prompts json k exp name cotrain spt dataset seed seed seed seed few shot false allow skip exp true train template idx 0 eval template idx 0 bert name microsoft deberta large mnli bert epochs 40 eval epoch 
interval 1 prompt tuning num prefix emb 20 prompt tuning decoder false num steps 30000 prompt tuning init with pad true cotrain load best true batch size 16 grad accum factor 2 the large number of steps for soft prompt tuning here is key to obtaining good performance replacing spt with t few thus maintains or improves the performance while being much more efficient due to requiring fewer steps using your own dataset 1 create a dataset reader for your dataset in src data dataset readers py inheriting from basedatasetreader your dataset reader should set self templates with appropriate templates to use with t0 see hswagreader for a good example to follow note the code uses validation as the name of the test split because following other work we report test performance on the public superglue validation sets make sure your test split is called validation the co training code samples a separate validation set for you already 2 add your reader to the get dataset reader function in src data dataset readers py 3 add a config file in configs your dataset name json configs rte json is a good one to copy 4 tell bert how to tokenize your data by adding an entry in task text field map task name input column name target column name for your task in src data dataset module py co training bert and gpt 3 this code is useful for distilling the outputs of gpt 3 into a smaller performant model method model rte cb trec label model no cotrain 62 8 76 8 77 2 label model cotrain 67 2 1 3 82 1 2 3 79 2 1 8 deberta large cotrain 80 1 4 2 84 6 1 4 81 6 1 6 these results differ from table 1 in the paper because we replaced https github com clinicalml cotrain prompting blob 7747e6a4092713f6f9b9e724889aa51c1cba7e7c src cotrain gpt py l93 the more sensitive confidence based data selection for the label model by using the cut statistic on the bert representations in each iteration this selects higher quality pseudo labeled training data based on the label model pseudolabels and removes the need 
to set a constraint on the minimum label frequency reproducing to run all seeds and datasets cuda visible devices 0 bin cotrain gpt sh once this is finished we can look in dev scores json model to get mean performance across seeds after 5 iterations cat exp out cotrain gpt rte seed round5 dev scores json bert cut d f 2 cut d f 1 jq s add length cat exp out cotrain gpt rte seed round5 dev scores json lm awk nr 2 0 cut d f 2 cut d f 1 jq s add length cat exp out cotrain gpt cb seed round5 dev scores json bert cut d f 2 cut d f 1 jq s add length cat exp out cotrain gpt cb seed round5 dev scores json lm awk nr 2 0 cut d f 2 cut d f 1 jq s add length cat exp out cotrain gpt trec seed round5 dev scores json bert cut d f 2 cut d f 1 jq s add length cat exp out cotrain gpt trec seed round5 dev scores json lm awk nr 2 0 cut d f 2 cut d f 1 jq s add length using your own dataset 1 get gpt 3 or other llm probabilities for each output token in your desired vocabulary i e the feature set of tokens you want to use for the label model for each input example you should have a num prompts x num tokens matrix turn this into a vector with reshape 1 and add it as a new column to your huggingface dataset note make sure the initial verbalizer tokens are the first columns see paper figure 2 3 obtain calibrate before use output matrix for each prompt and add it to cbu mat in src cotrain gpt py this should be num prompts x num initial verbalizer tokens each row corresponds to the diagonal of the initial calibration matrix 4 add a config file for your dataset to configs gpt your dataset name json you can copy gpt trec but update the config with the number of prompts you used 5 add your dataset to get dataset reader in src data dataset readers py map it to gptreader 6 tell bert how to tokenize your data by setting task text field map for your task in src data dataset modules py
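the co-training procedure the readme describes — one model pseudo-labels the unlabeled pool, the most confident examples train the partner model, and the roles alternate each round — can be sketched with toy stand-ins. this shows only the shape of the loop; it is not the repo's t0/bert code, and it uses top-k confidence selection in place of the cut statistic:

```python
def confidence_label(x):
    """Toy 'view A' model: label by sign, confidence by magnitude.
    Returns (confidence, example, pseudo_label)."""
    return (abs(x), x, int(x > 0))

def fit_nearest(pseudo_labeled):
    """Toy trainer for 'view B': nearest-confident-example classifier."""
    def model(x):
        y = min(pseudo_labeled, key=lambda p: abs(p[0] - x))[1]
        return (1.0, x, y)
    return model

def cotrain(pool, model_a, fit_b, rounds=2, k=2):
    """Minimal co-training loop (the paper's idea, not its code):
    each round, the current model pseudo-labels the pool, the k most
    confident examples (re)train the partner, and the partner takes over."""
    model_b = None
    for _ in range(rounds):
        scored = sorted((model_a(x) for x in pool), reverse=True)
        confident = [(x, y) for _, x, y in scored[:k]]
        model_b = fit_b(confident)            # train partner on confident pseudo-labels
        model_a, model_b = model_b, model_a   # swap roles for the next round
    return model_a
```

in the real pipeline the two "views" are t0 (prompted) and deberta (head-tuned), the confidence heuristic is the cut statistic over bert representations, and each round retrains from scratch on the freshly selected pseudo-labeled data.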
ai
banda-todo
bandatodo this project was generated with angular cli https github com angular angular cli version 9 1 6 development server run ng serve for a dev server navigate to http localhost 4200 the app will automatically reload if you change any of the source files code scaffolding run ng generate component component name to generate a new component you can also use ng generate directive pipe service class guard interface enum module build run ng build to build the project the build artifacts will be stored in the dist directory use the prod flag for a production build running unit tests run ng test to execute the unit tests via karma https karma runner github io running end to end tests run ng e2e to execute the end to end tests via protractor http www protractortest org further help to get more help on the angular cli use ng help or go check out the angular cli readme https github com angular angular cli blob master readme md
cloud
amazon-sagemaker-mlops-workshop
amazon sagemaker mlops with classic ci cd tools workshop machine learning ops workshop with sagemaker and codepipeline lab guides and materials introduction img align left src imgs eyecatch sagemaker png data scientists and ml developers need more than a jupyter notebook to create a ml model to test it to put it into production and to integrate it with a portal and or a basic web mobile application in a reliable and flexible way br br br br there are two basic questions that you should consider when you start developing a ml model for a real business case 1 how long would it take your organization to deploy a change that involves a single line of code 2 can you do this on a repeatable reliable basis so if you re not happy with the answers you have mlops is a concept that can help you a to create or improve the organization culture for ci cd applied to ml b to create an automated infrastructure that will support your processes in this workshop you ll see how to create operate an automated ml pipeline using a traditional ci cd tool called codepipeline https aws amazon com codepipeline to orchestrate the ml workflow during the exercises you ll see how to create a docker container from scratch with your own algorithm start a training deployment job by just copying a zip file to an s3 repo run a b tests and more this is a reference architecture that can be used as an inspiration to create your own solution amazon sagemaker https aws amazon com sagemaker a service that supports the whole pipeline of a ml model development lifecycle is the heart of this solution around it you can add several different services as the aws code for creating an automated pipeline building your docker images train test deploy integrate your models etc here you can find more information about devops https aws amazon com devops at aws what is devops https aws amazon com pt devops what is devops some important references another aws service that can used for this purpose is step functions https 
aws step functions data science sdk readthedocs io en latest readmelink html getting started with sample jupyter notebooks in this link you ll also find the documentation of the python library that can be executed directly from your jupyter notebook apache airflow https airflow apache org is a powerful open source tool that can also be integrated with sagemaker curious just take a look at the sagemaker operators for airflow https sagemaker readthedocs io en stable using workflow html ah you have a kubernetes cluster and want to integrate sagemaker with it and manage the ml pipeline from the cluster no problem take a look at the sagemaker operators for kubernetes https aws amazon com blogs machine learning introducing amazon sagemaker operators for kubernetes anyway there are lots of workflow managers that can be perfectly integrated with sagemaker to do the same job pick yours and use your creativity to create your own mlops platform pre requisites services you should have some basic experience with train test a ml model python scikit learn https scikit learn org stable jupyter notebook https jupyter org aws codepipeline https aws amazon com codepipeline aws codecommit https aws amazon com codecommit aws codebuild https aws amazon com codebuild amazon ecr https aws amazon com ecr amazon sagemaker https aws amazon com sagemaker aws cloudformation https aws amazon com cloudformation some experience working with the aws console is helpful as well aws account in order to complete this workshop you ll need an aws account with access to the services above there are resources required by this workshop that are eligible for the aws free tier if your account is less than 12 months old see the aws free tier https aws amazon com free page for more details scenario in this workshop you ll implement and experiment with a basic mlops process supported by an automated infrastructure for training testing deploying integrating ml models it is composed of four parts 1 you ll start
with a warmup for reviewing the basic features of amazon sagemaker 2 then you will optionally create a customized docker image with your own algorithm we ll use scikit learn as our library 3 after that you will train the model using the built in xgboost or a custom container if you ran the step 2 deploy them into a dev environment approve and deploy them into a prd environment with high availability and elasticity 4 finally you ll run a stress test on your production endpoint to test the elasticity and simulate a situation where the number of requests on your ml model can vary parts 2 and 3 are supported by automated pipelines that reads the assets produced by the ml developer and execute control the whole process architecture for part 2 the following architecture will support the process in part 2 you ll create a docker image that contains your own implementation of a randomforest classifier using python 3 7 and scikit learn remember that if you are happy with the built in xgboost https docs aws amazon com sagemaker latest dg xgboost html you can skip this part build docker image imgs mlops buildimage jpg 1 the ml developer creates the assets for docker image based on scikit learn using sagemaker and pushes all the assets to a git repo codecommit 2 codepipeline listens the push event of codecommit gets the source code and launches codebuild 3 codebuild authenticates into ecr build the docker image and pushes it into the ecr repository 4 done for part 3 you ll make use of the following structure for training the model testing it deploying it in two different environments dev qa development simple endpoint and prd production ha elastic endpoint although there is an etl part in the architecture we ll not use glue or other etl tool in this workshop the idea is just to show you how simple it is to integrate this architecture with your data lake and or legacy databases using an etl process train deploy and test a ml model imgs mlops train deploy testmodel jpg 1 an etl 
process or the ml developer prepares a new dataset for training the model and copies it into an s3 bucket 2 codepipeline listens to this s3 bucket calls a lambda function for start training a job in sagemaker 3 the lambda function sends a training job request to sagemaker 4 when the training is finished codepipeline gets its status goes to the next stage if there is no error 5 codepipeline calls cloudformation to deploy a model in a development qa environment into sagemaker 6 after finishing the deployment in dev qa codepipeline awaits for a manual approval 7 an approver approves or rejects the deployment if rejected the pipeline stops here if approved it goes to the next stage 8 codepipeline calls cloudformation to deploy a model into production this time the endpoint will count with an autoscaling policy for ha and elasticity 9 done crisp dm img align left src imgs crisp png it is important to mention that the process above was based on an industry process for data mining and machine learning called crisp dm https en wikipedia org wiki cross industry standard process for data mining crisp dm stands for cross industry standard process data mining and is an excellent skeleton to build a data science project around br br br br br br br there are 6 phases to crisp business understanding don t dive into the data immediately first take some time to understand business objectives surrounding context ml problem category data understanding exploring the data gives us insights about the paths we should follow data preparation data cleaning normalization feature selection feature engineering etc modeling select the algorithms train your model optimize it as necessary evaluation test your model with different samples with real data if possible and decide if the model will fit the requirements of your business case deployment deploy into production integrate it do a b tests integration tests etc notice the arrows in the diagram though crisp frames data science as a cyclical 
endeavour more insights lead to better business understanding which kicks off the process again instructions first you need to execute a cloudformation script to create all the components required for the exercises 1 select the below to launch cloudformation stack region launch us east n virginia launch mlops solution in us east 1 imgs cloudformation launch stack png https console aws amazon com cloudformation home region us east 1 stacks new stackname aiworkshop templateurl https s3 amazonaws com aws ai ml aod latam mlops workshop m yml 1 then open the jupyter notebook instance in sagemaker and start doing the exercises 1 warmup lab 00 warmup 01 basicmodel part1 traindeploytest ipynb this is a basic exercise for exploring the sagemaker features like training deploying and optimizing a model if you already have experience with sagemaker you can skip this exercise 2 container image with a scikit classifier lab 01 createalgorithmcontainer 01 creating 20a 20classifier 20container ipynb in this exercise we ll create a docker image that encapsulates all the code required for training and deploying a randomforest classifier if you don t want to create a custom container skip this section 1 test the models locally lab 01 createalgorithmcontainer 02 testing 20our 20local 20model 20server ipynb this is part of the exercise 3 you can use this jupyter to test your local webservice to simulate how sagemaker will call it when you ask it to create an endpoint or launch a batch job for you 2 test the container using a sagemaker estimator lab 01 createalgorithmcontainer 03 testing 20the 20container 20using 20sagemaker 20estimator ipynb this optional exercise can be used for understanding how sagemaker estimators can encapsulate your container and abstract the complexity of the training tuning deploying processes 4 train your models lab 02 trainyourmodel 01 training 20our 20model ipynb in this exercise you ll use the training pipeline you ll see how to train or retrain a particular 
model by just copying a zip file with the required assets to a given s3 bucket 1 check training progress and test lab 02 trainyourmodel 02 check 20progress 20and 20test 20the 20endpoint ipynb here you can monitor the training process approve the production deployment and test your endpoints 5 stress test lab 03 testinghacking 01 stress 20test ipynb here you can execute stress tests to see how well your model is performing cleaning first delete the following stacks mlops deploy iris model dev mlops deploy iris model prd mlops training iris model job then delete the stack you created if you named it aiworkshop find this stack using the cloudformation console and delete it warning all the assets will be deleted including the s3 bucket and the ecr docker images created during the execution of this workshop suggested agenda introduction 1 00 mlstack aws sagemaker concepts features mlops etc warmup part 1 0 30 break 0 15 warmup parts 2 3 4 0 50 container image 0 40 lunch 1 00 mlops pipeline train deployment 0 30 stress tests auto scaling 0 30 wrap up discussion 0 20 total 5 45 license summary this sample code is made available under a modified mit license see the license file thank you
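in step 3 of the train/deploy pipeline above, a lambda function sends a training job request to sagemaker. a sketch of the request such a lambda might assemble is below; every name, path, and instance setting here is illustrative rather than the workshop's actual configuration, and the boto3 call itself appears only as a comment:

```python
def build_training_request(job_name, image_uri, role_arn, bucket, prefix):
    """Assemble the parameters a Lambda (pipeline step 3) would pass to
    sagemaker_client.create_training_job(**request). All values below
    (instance type, volume size, channel layout) are hypothetical."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,      # e.g. the ECR image built in part 2
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/{prefix}/train",
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": f"s3://{bucket}/{prefix}/output"},
        "ResourceConfig": {"InstanceType": "ml.m5.xlarge",
                           "InstanceCount": 1, "VolumeSizeInGB": 10},
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

# inside the Lambda handler the request would then be submitted with:
#   boto3.client("sagemaker").create_training_job(**request)
```

codepipeline then polls the job status (step 4) before moving on to the cloudformation deployment stages.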
ai
CS426_16CTT
cs436 16ctt mobile development 1 demo git 2 demo git flow 3 code by student boss 16ctt 4 code by mr buoi
front_end
eng-metrics-schema-migrations
eng metrics schema migrations database migrations for the engineering metrics dashboard
server
computer-vision
computer vision computer vision related programs either college assignments or personal development research
ai
iot_projects
esp8266 esp8266 related projects and reference material follow my channel for more iot videos https www youtube com c stechiezdiy support my channel https www buymeacoffee com stechiezdiy
server
schema-search
schema search an application designed to facilitate database schema model reverse engineering in short search through a schema for a provided value displaying all table columns that contain that value
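the core idea — scan every column of every table for a given value — can be sketched against sqlite's catalog tables using only the standard library. this is an illustration of the concept, not the application's implementation, and `find_value` is a hypothetical helper name:

```python
import sqlite3

def find_value(conn, needle):
    """Return (table, column) pairs where some row's column equals `needle`.
    Brute-force sketch of the schema-search idea: enumerate tables from
    sqlite_master, columns from PRAGMA table_info, then probe each column."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        cols = [r[1] for r in conn.execute(f'PRAGMA table_info("{table}")')]
        for col in cols:
            row = conn.execute(
                f'SELECT 1 FROM "{table}" WHERE "{col}" = ? LIMIT 1',
                (needle,)).fetchone()
            if row:
                hits.append((table, col))
    return hits
```

other engines would expose the same metadata through their information schema instead of sqlite_master; the probing loop stays the same.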
server
cog
cog containers for machine learning cog is an open source tool that lets you package machine learning models in a standard production ready container you can deploy your packaged model to your own infrastructure or to replicate https replicate com highlights docker containers without the pain writing your own dockerfile can be a bewildering process with cog you define your environment with a simple configuration file how it works and it generates a docker image with all the best practices nvidia base images efficient caching of dependencies installing specific python versions sensible environment variable defaults and so on no more cuda hell cog knows which cuda cudnn pytorch tensorflow python combos are compatible and will set it all up correctly for you define the inputs and outputs for your model with standard python then cog generates an openapi schema and validates the inputs and outputs with pydantic automatic http prediction server your model s types are used to dynamically generate a restful http api using fastapi https fastapi tiangolo com automatic queue worker long running deep learning models or batch processing is best architected with a queue cog models do this out of the box redis is currently supported with more in the pipeline cloud storage files can be read and written directly to amazon s3 and google cloud storage coming soon ready for production deploy your model anywhere that docker images run your own infrastructure or replicate https replicate com how it works define the docker environment your model runs in with cog yaml yaml build gpu true system packages libgl1 mesa glx libglib2 0 0 python version 3 11 python packages torch 1 8 1 predict predict py predictor define how predictions are run on your model with predict py python from cog import basepredictor input path import torch class predictor basepredictor def setup self load the model into memory to make running multiple predictions efficient self model torch load weights pth the 
arguments and types the model takes as input def predict self image path input description grayscale input image path run a single prediction on the model processed image preprocess image output self model processed image return postprocess output now you can run predictions on this model console cog predict i image input jpg building docker image running prediction output written to output jpg or build a docker image for deployment console cog build t my colorization model building docker image built my colorization model latest docker run d p 5000 5000 gpus all my colorization model curl http localhost 5000 predictions x post h content type application json d input image https input jpg note bfirsh development environment instructions intentionally left out of readme for now so as not to confuse the ship a model to production message in development you can also run arbitrary commands inside the docker environment console cog run python train py or spin up a jupyter notebook docs notebooks md console cog run p 8888 jupyter notebook allow root ip 0 0 0 0 why are we building this it s really hard for researchers to ship machine learning models to production part of the solution is docker but it is so complex to get it to work dockerfiles pre post processing flask servers cuda versions more often than not the researcher has to sit down with an engineer to get the damn thing deployed andreas https github com andreasjansson and ben https github com bfirsh created cog andreas used to work at spotify where he built tools for building and deploying ml models with docker ben worked at docker where he created docker compose https github com docker compose we realized that in addition to spotify other companies were also using docker to build and deploy machine learning models uber https eng uber com michelangelo pyml and others have built similar systems so we re making an open source version so other people can do this too hit us up if you re interested in using it or want 
to collaborate with us we re on discord https discord gg replicate or email us at team replicate com mailto team replicate com prerequisites macos linux or windows 11 cog works on macos linux and windows 11 with wsl 2 docs wsl2 wsl2 md docker cog uses docker to create a container for your model you ll need to install docker https docs docker com get docker before you can run cog if you install docker engine instead of docker desktop you will need to install buildx https docs docker com build architecture buildx as well install a id upgrade a if you re using macos you can install cog using homebrew console brew install cog you can also download and install the latest release of cog directly from github by running the following commands in a terminal console sudo curl o usr local bin cog l https github com replicate cog releases latest download cog uname s uname m sudo chmod x usr local bin cog alternatively you can build cog from source and install it with these commands console make sudo make install next steps get started with an example model docs getting started md get started with your own model docs getting started own model md using cog with notebooks docs notebooks md using cog with windows 11 docs wsl2 wsl2 md take a look at some examples of using cog https github com replicate cog examples deploy models with cog docs deploy md cog yaml reference docs yaml md to learn how to define your model s environment prediction interface reference docs python md to learn how the predictor interface works training interface reference docs training md to learn how to add a fine tuning api to your model http api reference docs http md to learn how to use the http api that models serve need help join us in cog on discord https discord gg replicate contributors thanks goes to these wonderful people emoji key https allcontributors org docs en emoji key all contributors list start do not remove or modify this section prettier ignore start markdownlint disable table tbody tr 
td align center valign top width 14 28 a href https fir sh img src https avatars githubusercontent com u 40906 v 4 s 100 width 100px alt ben firshman br sub b ben firshman b sub a br a href https github com replicate cog commits author bfirsh title code a a href https github com replicate cog commits author bfirsh title documentation a td td align center valign top width 14 28 a href https replicate ai img src https avatars githubusercontent com u 713993 v 4 s 100 width 100px alt andreas jansson br sub b andreas jansson b sub a br a href https github com replicate cog commits author andreasjansson title code a a href https github com replicate cog commits author andreasjansson title documentation a a href maintenance andreasjansson title maintenance a td td align center valign top width 14 28 a href http zeke sikelianos com img src https avatars githubusercontent com u 2289 v 4 s 100 width 100px alt zeke sikelianos br sub b zeke sikelianos b sub a br a href https github com replicate cog commits author zeke title code a a href https github com replicate cog commits author zeke title documentation a a href tool zeke title tools a td td align center valign top width 14 28 a href https rory bio img src https avatars githubusercontent com u 9436784 v 4 s 100 width 100px alt rory byrne br sub b rory byrne b sub a br a href https github com replicate cog commits author synek title code a a href https github com replicate cog commits author synek title documentation a a href https github com replicate cog commits author synek title tests a td td align center valign top width 14 28 a href https github com hangtwenty img src https avatars githubusercontent com u 2420688 v 4 s 100 width 100px alt michael floering br sub b michael floering b sub a br a href https github com replicate cog commits author hangtwenty title code a a href https github com replicate cog commits author hangtwenty title documentation a a href ideas hangtwenty title ideas planning feedback a td td 
align center valign top width 14 28 a href https bencevans io img src https avatars githubusercontent com u 638535 v 4 s 100 width 100px alt ben evans br sub b ben evans b sub a br a href https github com replicate cog commits author bencevans title documentation a td td align center valign top width 14 28 a href https shashank pw img src https avatars githubusercontent com u 778870 v 4 s 100 width 100px alt shashank agarwal br sub b shashank agarwal b sub a br a href https github com replicate cog commits author imshashank title code a a href https github com replicate cog commits author imshashank title documentation a td tr tr td align center valign top width 14 28 a href https victorxlr me img src https avatars githubusercontent com u 22397950 v 4 s 100 width 100px alt victorxlr br sub b victorxlr b sub a br a href https github com replicate cog commits author victorxlr title code a a href https github com replicate cog commits author victorxlr title documentation a a href https github com replicate cog commits author victorxlr title tests a td td align center valign top width 14 28 a href https annahung31 github io img src https avatars githubusercontent com u 39179888 v 4 s 100 width 100px alt hung anna br sub b hung anna b sub a br a href https github com replicate cog issues q author 3aannahung31 title bug reports a td td align center valign top width 14 28 a href http notes variogr am img src https avatars githubusercontent com u 76612 v 4 s 100 width 100px alt brian whitman br sub b brian whitman b sub a br a href https github com replicate cog issues q author 3abwhitman title bug reports a td td align center valign top width 14 28 a href https github com jimothyjohn img src https avatars githubusercontent com u 24216724 v 4 s 100 width 100px alt jimothyjohn br sub b jimothyjohn b sub a br a href https github com replicate cog issues q author 3ajimothyjohn title bug reports a td td align center valign top width 14 28 a href https github com ericguizzo img 
src https avatars githubusercontent com u 26746670 v 4 s 100 width 100px alt ericguizzo br sub b ericguizzo b sub a br a href https github com replicate cog issues q author 3aericguizzo title bug reports a td td align center valign top width 14 28 a href http www dominicbaggott com img src https avatars githubusercontent com u 74812 v 4 s 100 width 100px alt dominic baggott br sub b dominic baggott b sub a br a href https github com replicate cog commits author evilstreak title code a a href https github com replicate cog commits author evilstreak title tests a td td align center valign top width 14 28 a href https github com dashstander img src https avatars githubusercontent com u 7449128 v 4 s 100 width 100px alt dashiell stander br sub b dashiell stander b sub a br a href https github com replicate cog issues q author 3adashstander title bug reports a a href https github com replicate cog commits author dashstander title code a a href https github com replicate cog commits author dashstander title tests a td tr tr td align center valign top width 14 28 a href https github com hurricane eye img src https avatars githubusercontent com u 31437546 v 4 s 100 width 100px alt shuwei liang br sub b shuwei liang b sub a br a href https github com replicate cog issues q author 3ahurricane eye title bug reports a a href question hurricane eye title answering questions a td td align center valign top width 14 28 a href https github com ericallam img src https avatars githubusercontent com u 534 v 4 s 100 width 100px alt eric allam br sub b eric allam b sub a br a href ideas ericallam title ideas planning feedback a td td align center valign top width 14 28 a href https perdomo me img src https avatars githubusercontent com u 178474 v 4 s 100 width 100px alt iv n perdomo br sub b iv n perdomo b sub a br a href https github com replicate cog issues q author 3aiperdomo title bug reports a td td align center valign top width 14 28 a href http charlesfrye github io img src 
https avatars githubusercontent com u 10442975 v 4 s 100 width 100px alt charles frye br sub b charles frye b sub a br a href https github com replicate cog commits author charlesfrye title documentation a td td align center valign top width 14 28 a href https github com phamquiluan img src https avatars githubusercontent com u 24642166 v 4 s 100 width 100px alt luan pham br sub b luan pham b sub a br a href https github com replicate cog issues q author 3aphamquiluan title bug reports a a href https github com replicate cog commits author phamquiluan title documentation a td td align center valign top width 14 28 a href https github com tommydew42 img src https avatars githubusercontent com u 46992350 v 4 s 100 width 100px alt tommydew br sub b tommydew b sub a br a href https github com replicate cog commits author tommydew42 title code a td td align center valign top width 14 28 a href https m4ke org img src https avatars githubusercontent com u 27 v 4 s 100 width 100px alt jesse andrews br sub b jesse andrews b sub a br a href https github com replicate cog commits author anotherjesse title code a a href https github com replicate cog commits author anotherjesse title documentation a a href https github com replicate cog commits author anotherjesse title tests a td tr tr td align center valign top width 14 28 a href https whiteink com img src https avatars githubusercontent com u 3602 v 4 s 100 width 100px alt nick stenning br sub b nick stenning b sub a br a href https github com replicate cog commits author nickstenning title code a a href https github com replicate cog commits author nickstenning title documentation a a href design nickstenning title design a a href infra nickstenning title infrastructure hosting build tools etc a a href https github com replicate cog commits author nickstenning title tests a td td align center valign top width 14 28 a href https merrell io img src https avatars githubusercontent com u 14996837 v 4 s 100 width 100px alt 
justin merrell br sub b justin merrell b sub a br a href https github com replicate cog commits author justinmerrell title documentation a td td align center valign top width 14 28 a href https github com ruriky img src https avatars githubusercontent com u 19946546 v 4 s 100 width 100px alt rurik yl onnenvuori br sub b rurik yl onnenvuori b sub a br a href https github com replicate cog issues q author 3aruriky title bug reports a td td align center valign top width 14 28 a href https www youka club img src https avatars githubusercontent com u 59315275 v 4 s 100 width 100px alt youka br sub b youka b sub a br a href https github com replicate cog issues q author 3ayoukaclub title bug reports a td td align center valign top width 14 28 a href https github com afiaka87 img src https avatars githubusercontent com u 3994972 v 4 s 100 width 100px alt clay mullis br sub b clay mullis b sub a br a href https github com replicate cog commits author afiaka87 title documentation a td td align center valign top width 14 28 a href https github com mattt img src https avatars githubusercontent com u 7659 v 4 s 100 width 100px alt mattt br sub b mattt b sub a br a href https github com replicate cog commits author mattt title code a a href https github com replicate cog commits author mattt title documentation a a href infra mattt title infrastructure hosting build tools etc a td td align center valign top width 14 28 a href https github com juneezee img src https avatars githubusercontent com u 20135478 v 4 s 100 width 100px alt eng zer jun br sub b eng zer jun b sub a br a href https github com replicate cog commits author juneezee title tests a td tr tr td align center valign top width 14 28 a href https github com bbedward img src https avatars githubusercontent com u 550752 v 4 s 100 width 100px alt bb br sub b bb b sub a br a href https github com replicate cog commits author bbedward title code a td td align center valign top width 14 28 a href https github com 
williamluer img src https avatars githubusercontent com u 85975676 v 4 s 100 width 100px alt williamluer br sub b williamluer b sub a br a href https github com replicate cog commits author williamluer title documentation a td td align center valign top width 14 28 a href http sirupsen com img src https avatars githubusercontent com u 97400 v 4 s 100 width 100px alt simon eskildsen br sub b simon eskildsen b sub a br a href https github com replicate cog commits author sirupsen title code a td td align center valign top width 14 28 a href https erbridge co uk img src https avatars githubusercontent com u 1027364 v 4 s 100 width 100px alt f br sub b f b sub a br a href https github com replicate cog issues q author 3aerbridge title bug reports a a href https github com replicate cog commits author erbridge title code a td td align center valign top width 14 28 a href https github com philandstuff img src https avatars githubusercontent com u 581269 v 4 s 100 width 100px alt philip potter br sub b philip potter b sub a br a href https github com replicate cog issues q author 3aphilandstuff title bug reports a a href https github com replicate cog commits author philandstuff title code a td td align center valign top width 14 28 a href https github com joannejchen img src https avatars githubusercontent com u 33409024 v 4 s 100 width 100px alt joanne chen br sub b joanne chen b sub a br a href https github com replicate cog commits author joannejchen title documentation a td td align center valign top width 14 28 a href http technillogue github io img src https avatars githubusercontent com u 945691 v 4 s 100 width 100px alt technillogue br sub b technillogue b sub a br a href https github com replicate cog commits author technillogue title code a td tr tbody table markdownlint restore prettier ignore end all contributors list end this project follows the all contributors https github com all contributors all contributors specification contributions of any kind welcome
containers cuda deep-learning docker machine-learning pytorch tensorflow
ai
Windows-iotcore-samples
samplefwlink https go microsoft com fwlink linkid 860459 windows 10 iot core samples this repo contains the samples that demonstrate the usage patterns for microsoft s windows 10 iot core these code samples were created with templates available in visual studio and are designed but not limited to run on devices that run windows 10 iot core note if you are unfamiliar with git and github you can download the entire collection as a zip file archive master zip but be sure to unzip everything to access shared dependencies for more info see get started https docs microsoft com en us windows iot core getstarted windows 10 iot core development these samples require visual studio 2017 to build and windows 10 iot core to execute the setup and installation steps are different based on what hardware device you have get a free copy of visual studio 2017 community edition http go microsoft com fwlink p linkid 280676 additionally to stay on top of the latest updates to windows and the development tools become a windows insider by joining the windows insider program become a windows insider https insider windows com using the samples the easiest way to use these samples without using git is to download the zip file containing the current version using the following link or by clicking the download zip button on the repo page you can then unzip the entire archive and use the samples in visual studio 2017 download the samples zip archive master zip notes before you unzip the archive right click it select properties and then select unblock be sure to unzip the entire archive and not just individual samples the samples all depend on the sharedcontent folder in the archive in visual studio 2017 the platform target defaults to arm so be sure to change that to x64 or x86 if you want to test on a non arm device the samples use linked files in visual studio to reduce duplication of common files including sample template files and image assets these common files are stored in the 
sharedcontent folder at the root of the repository and are referred to in the project files using links reminder if you unzip individual samples they will not build due to references to other portions of the zip file that were not unzipped you must unzip the entire archive if you intend to build the samples for more info about the programming models platforms languages and apis demonstrated in these samples please refer to the guidance tutorials and reference topics provided in the windows 10 documentation available in the windows developer center http go microsoft com fwlink p linkid 532421 these samples are provided as is in order to indicate or demonstrate the functionality of the programming models and feature apis for windows contributions note when contributing make sure you are contributing from the develop branch and not the master branch your contribution will not be accepted if your pr is coming from the master branch if you find a bug in any of these samples please file it using the feedback hub app you can find instructions on how to use the feedback hub app here https social msdn microsoft com forums en us fad1c6a0 e578 44a7 8e8d 95cc28c06ccd need logs if your device hasnt updated to the latest iotcore version forum windowsiot these samples are direct from the feature teams and we welcome your input on issues and suggestions for new samples if you would like to see new coverage or have feedback please consider contributing you can edit the existing content add new content or simply create new issues we ll take a look at your suggestions and will work together to incorporate them into the docs this project has adopted the microsoft open source code of conduct https opensource microsoft com codeofconduct for more information see the code of conduct faq https opensource microsoft com codeofconduct faq or contact opencode microsoft com mailto opencode microsoft com with any additional questions or comments see also for additional windows samples see 
windows on github http microsoft github io windows samples by category devices sensors and samples that involve wiring table tr td a href samples appserviceblinky appserviceblinky a td td a href samples digitalsign digitalsign a td td a href samples helloblinky helloblinky a td tr tr tr td a href samples helloblinkybackground helloblinkybackground a td td a href samples nfcforiot nfcforiot a td td a href samples potentiometersensor potentiometer sensor a td tr tr td a href samples pushbutton push button a td td a href samples rgbled rgb led a td td a href samples accelerometer accelerometer a td tr tr td a href samples spidisplay spi display a td td a href samples tempforcesensor tempforcesensor a td td a href samples videocapturesample videocapturesample a td tr tr td a href samples i2ccompass i2c compass a td td a href samples containerwebsocket containerwebsocket a td td a href samples gpioonewire gpioonewire a td tr tr td a href samples i2cportexpander i2c port expander a td td a href samples iotblockly iot blockly a td tr table samples that demonstrate universal windows application features table tr td a href samples appservicesharednotepad appservicesharednotepad a td td a href samples companionapp companionapp a td td a href samples externalprocesslauncher externalprocesslauncher a td tr tr td a href samples foregroundappwithbackgroundapp foregroundappwithbackgroundapp a td td a href samples helloblinkybackground helloblinkybackground a td td a href samples helloworld helloworld a td tr tr td a href samples iotbrowser iot browser a td td a href samples iotcoredefaultapp iotcore defaultapp a td td a href samples iotcoremediaplayer iotcore mediaplayer a td tr tr td a href samples iotonboarding iot onboarding a td td a href samples cognitiveservicesexample cognitive services a td td a href samples companionapp companion app a td tr tr td a href samples opencvexample opencv example a td td a href samples serialuart serial uart a td td a href samples webcamapp 
webcam app a td tr tr td a href samples wificonnector wifi connector a td tr table samples that utilize microsoft azure features table tr td a href samples iotconnector iotconnector a td td a href samples speechtranslator speechtranslator a td td a href samples weatherstation weatherstation a td tr tr td a href samples azure hellocloud hellocloud a td td a href samples azure hellocloud headless hellocloud headless a td td a href samples azure readdevicetocloudmessages readdevicetocloudmessages a td tr tr td a href samples azure tpmdevicetest tpmdevicetest a td td a href samples azure weatherstation weatherstation a td td a href samples azure weatherstation powerbi weatherstation powerbi a td tr tr td a href samples azure iothubclients iot hub clients a td td a href samples edgemodules azure iot edge modules a td tr table samples that involve device drivers services or realtime processing table tr td a href samples iotcoreservice iotcoreservice a td td a href samples ntservicerpc ntservicerpc a td td a href samples serialuart serialuart a td tr tr td a href samples shiftregister shift register a td td a href samples memorystatus memory status a td td a href samples containerwebsocket container web socket a td tr tr td a href samples customdeviceaccessor custom device accessor a td td a href samples iotonboarding rfcomm iot onboarding bluetooth rfcomm a td td a href samples virtualmicrophonearraydriver virtual microphone array driver a td tr table
ms-iot iot internet-of-things
server
mlwscv2022
mlwscv2022 welcome to the duke machine learning winter school computer vision 2022 the mlws cv includes 3 hands on training sessions on implementing machine learning tools with the pytorch software platform you can see the full description at https plus datascience duke edu mlwscv2022 day 1 introduction to pytorch billy carson student notebook contains space for student to code along https github com dukeplusds mlwscv2022 blob main day1 student notebook ipynb full solutions notebook https github com dukeplusds mlwscv2022 blob main day1 solution notebook ipynb day 2 pytorch for image analysis with convolutional neural networks gavin karr student notebook contains space for student to code along https github com dukeplusds mlwscv2022 blob main day2 student notebook ipynb full solutions notebook https github com dukeplusds mlwscv2022 blob main day2 solution notebook ipynb day 3 pytorch for image analysis including image segmentation and object detection akhil ambekar part 1 student notebook contains space for student to code along https github com dukeplusds mlwscv2022 blob main day3 student notebook part1 ipynb full solutions notebook https github com dukeplusds mlwscv2022 blob main day3 solutions notebook part1 ipynb part 2 student notebook contains space for student to code along https github com dukeplusds mlwscv2022 blob main day3 student notebook part2 ipynb full solutions notebook https github com dukeplusds mlwscv2022 blob main day3 solutions notebook part2 ipynb we are using a virtual computing environment provided by duke s office of information technology oit for those that are interested the source code for the docker container can be found here https gitlab oit duke edu mccahill ml winter school
ai
ui-kit
xola ui kit xola s react component library with tailwind css for the next generation of xola apps see a preview at https ui xola io requirements node js v16 npm v7 or higher usage install the ui kit bash npm install xola ui kit install peer dependencies bash npm install autoprefixer postcss tailwindcss lodash create postcss and tailwind config files bash echo module exports require xola ui kit tailwind config tailwind config js echo module exports require xola ui kit postcss config postcss config js import main css files in your project js import xola ui kit index css import xola ui kit build style css ui kit expects you already have a working react dev environment with postcss support import and use the components js import button from xola ui kit development installation install all required dependencies bash nvm use project needs node js v16 with npm v7 npm install start the storybook development server bash npm start advanced integrate your app with a locally installed ui kit in order for this to work you will have to set up an npm workspace that means ui kit and your project have to be in the same directory start by creating a package json file in your workspace directory with the following content json workspaces ui kit your project your workspace directory should also contain npmrc and nvmrc files copy them from this project bash cd workspace cp ui kit npmrc cp ui kit nvmrc now we re ready to install the dependencies for both projects bash cd workspace npm install if all went well npm will use the locally installed ui kit in your project next start the build command from ui kit bash cd ui kit npm run build watch this will build the ui kit project and watch for changes any change made in the ui kit should be visible in your project if you don t see any changes in your project that probably means that npm installed a separate package in your project node modules directory to fix this just remove the whole package with the following command bash cd your project
rm rf node modules xola troubleshooting if you encounter some package related issues try removing the following directories and running the install command again bash cd workspace rm rf package lock json node modules ui kit node modules your project node modules npm install lint auto fix to automatically fix lint issues in this project you have the following commands bash npm run lint run lint on src and output issues npm run lint fix run lint and automatically fix any issues any that are not fixed are output to screen notes to avoid issues with how npm v7 resolves peer dependencies we enabled legacy peer deps rule in npmrc in order to avoid issues in your projects that are using this ui kit use the same npmrc file or always run installs with legacy peer deps flag for example bash npm install legacy peer deps or bash npm install some package legacy peer deps publishing the package install np https github com sindresorhus np readme which will help you publish the package bash npm g install np once you re ready run this command to publish your package bash npm run build np your new version tag latest yolo then make sure to push all the tags upstream to xola ui kit repo git push upstream remote master tags
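the two echo commands in the usage section above generate one line config files that re export the presets bundled with the ui kit written out they amount to the following sketch the exact package paths are assumptions inferred from the install instructions not verified against the published package

```javascript
// tailwind.config.js -- re-export the tailwind preset shipped with the ui kit
// (package path assumed from the install instructions above)
module.exports = require("xola-ui-kit/tailwind.config");

// postcss.config.js follows the same pattern:
// module.exports = require("xola-ui-kit/postcss.config");
```

keeping these files as thin re exports means the ui kit can evolve its tailwind and postcss settings without consuming projects having to update their own configs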
design-system tailwindcss uikit react
os
disneyplus-clone
title disney react clone description a disney front end clone made with react js author tobicorradi disney front end clone made with react js this is a disney front end clone that i ve built using reactjs you can visit the website by clicking here https disneyplus clone f8077 web app i don t own any rights to disney as a company this platform was made for educational purposes and to be shown as a portfolio piece there are no commercial or monetary purposes features get all the movies made by disney filter all these movies depending on the producer or saga marvel starwars national geographic etc get details from a specific movie like casting runtime overview director release date etc get a recommended list of movies depending on the movie that you ve chosen installation clone the project execute npm install command edit the env example file with your the movie db api key execute npm start done libraries technologies used react js react router the movie database api axios material ui firebase slick slider images disney clone home on a mac shown as a presentation https www corraditobias com ar img work 09 01 jpg disney clone viewed on an ipad pro to show how the content is presented on that screen https www corraditobias com ar img work 09 02 jpg disney clone home on an iphone 12 to show how the clone adapts to different devices https www corraditobias com ar img work 09 03 jpg different sections of the disney clone on the ipad pro https www corraditobias com ar img work 09 04 jpg collage of the disney clone and its sections and screens https www corraditobias com ar img work 09 05 jpg
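the movie lists above come from the movie database tmdb api a small url building helper sketches how a producer or saga filter marvel star wars etc can be expressed against the public v3 discover endpoint the function name and the company id used in the usage comment are illustrative not taken from the project

```javascript
// Hypothetical helper (not from the project): builds a TMDB v3 "discover"
// URL that filters movies by production company, which is one way to
// implement a producer/saga filter against the public API.
function buildDiscoverUrl(apiKey, { withCompanies, page = 1 } = {}) {
  const params = new URLSearchParams({ api_key: apiKey, page: String(page) });
  if (withCompanies !== undefined) {
    params.set("with_companies", String(withCompanies));
  }
  return `https://api.themoviedb.org/3/discover/movie?${params}`;
}

// Usage sketch (company id is illustrative):
// fetch(buildDiscoverUrl(process.env.TMDB_KEY, { withCompanies: 2 }))
//   .then((res) => res.json())
//   .then((data) => console.log(data.results));
```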
disneyplus reactjs frontend disney-front movie movies-depending disney react-js
front_end
design-systems-cli
div align center img src logo png h1 h1 p a cli toolbox for creating design systems in minutes p p uses typescript css styled components support p p outputs cjs and mjs p p no tooling configuration required p div div align center a href https circleci com gh intuit design systems cli tree master img src https img shields io circleci project github intuit design systems cli master svg style flat square logo circleci alt circleci a a href https www npmjs com package design systems cli img src https img shields io npm v design systems cli svg style flat square logo npm alt npm a a href https www npmjs com package design systems cli img src https img shields io npm dt design systems cli svg style flat square logo npm alt npm a img src https camo githubusercontent com 1e90782cb83e8540fdb707b54ce8d055c8224b07 68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6e6f64652d25334525334425323031302e31382e312d627269676874677265656e a href https github com intuit auto img src https img shields io badge release auto svg style flat square colora 888888 amp colorb 9b065a amp label auto amp logo data image png base64 ivborw0kggoaaaansuheugaaabqaaaaucayaaacnir0naaaczeleqvr4ayxbw2ivbqaa4o nlllo9nm7jsxasko2aszmakyhrkedh2ohxhvwy6eiiiilogizg9ctdgg0vnqojexrogvgzyyli1skikvitptttnv3m7 v8uvng3m r7aplirxstn69qzqebbrmyybdil4sd0vefmrwtrkri5ijp0f7rjzrsjvbtqwubilzffysrhrrsghbja8ebyy0nyljt8bdbotzbey72tldq1krm6otana8jk3 kzn 3v nbpu6hsnnnlzaz ukoalb0rbjkeqnykd7lix5fp yxuqlfuuhxbg8di5gl9jbxfq tla86ppxphaprwcyaiors8l uupjh1hzfbcr8mewrx0d7jshr3f7pnw4vx0grakkwvk7tadq7upvfww8ykmcpvb vfvfrz1i7zqfwjtmfoul72y6c 0l0ie3gvaqxryyvb3yzne32 a d9bvlcrb3yw3hkrcdadutfl6ykr20aalvkoqixudbmj6gfzamdxfwx9iirrkdr1f27cfongmuo gri jnbimyxjoor1cy0ogavpb5z9mlkbyjp esdmixvsfmm7ql42neblx3xi1bbybtkxcqrnxubgzpo4t7sqbnebg7zbaidi8nwfzdhqwycg4pfr hmbq6l5vpjyberyjxwsdyj crnljv0yb4zluytfqikmznst8frrpckezhcblz2iinmikpzbbyb9mw42nwinc2xme0y61aj06ogsxl5rcok1udcbexivwnxsey 6 
ebaivg8eeeafxvaosbnch61uod7bs1ul8eshbkwxcrdyd6eynkihgevrwoabqruoytubyiffac3gvn6iawhjkyncepyhvjxgbozaryau4hctyizq5ei1ygiuoilt1b7zjbyqmrwybwtdyjowon7 loiqefiqkawlzk6id69ggpqgwhhecwgguzfepaipqscxadfsaaaaasuvork5cyii alt auto release a div br overview design systems cli is basically a create react app https github com facebook create react app for design systems the main benefit it brings you as a developer is time savings setting up all of the monorepo storybook https storybook js org and build tools for a design system takes over a week if you piece it together yourself you can do it with this project in minutes features star scaffold components and entire design systems star build your components for multiple outputs cjs and esm star write styles with styled components or css modules star craft excellent components using storybook star let component consumer try your components with playroom star testing and linting support star typescript supported out of the box star track the size of your components and debug the changes for the full documentation go here https intuit github io design systems cli installation ensure you have the following softwares installed node 10 18 1 installation guide https nodejs org en download yarn installation guide https classic yarnpkg com en docs install if node gyp throws errors during installation installation may still be successful to get started to get set up fork and clone the project then run the following command sh yarn yarn start creating a new plugin to scaffold a new plugin inside this repo run the following command sh yarn run create plugin my plugin contributing feel free to open an issue https github com intuit design systems cli issues or a pull request https github com intuit design systems cli pulls make sure to read our code of conduct code of conduct md we actively welcome pull requests learn how to contribute contributing md contributors thank you to all these wonderful people emoji key https allcontributors org 
docs en emoji key all contributors list start do not remove or modify this section prettier ignore start markdownlint disable table tr td align center a href https adamdierkens com img src https avatars1 githubusercontent com u 13004162 v 4 s 100 width 100px alt br sub b adam dierkens b sub a br a href https github com intuit design systems cli commits author adierkens title code a a href design adierkens title design a a href ideas adierkens title ideas planning feedback a a href https github com intuit design systems cli commits author adierkens title documentation a a href https github com intuit design systems cli commits author adierkens title tests a td td align center a href http hipstersmoothie com img src https avatars3 githubusercontent com u 1192452 v 4 s 100 width 100px alt br sub b andrew lisowski b sub a br a href https github com intuit design systems cli commits author hipstersmoothie title code a a href design hipstersmoothie title design a a href https github com intuit design systems cli commits author hipstersmoothie title documentation a a href ideas hipstersmoothie title ideas planning feedback a a href infra hipstersmoothie title infrastructure hosting build tools etc a a href https github com intuit design systems cli commits author hipstersmoothie title tests a td td align center a href http tylerkrupicka com img src https avatars1 githubusercontent com u 5761061 v 4 s 100 width 100px alt br sub b tyler krupicka b sub a br a href https github com intuit design systems cli commits author tylerkrupicka title code a a href https github com intuit design systems cli commits author tylerkrupicka title documentation a a href https github com intuit design systems cli commits author tylerkrupicka title tests a td td align center a href https github com kendallgassner img src https avatars0 githubusercontent com u 15275462 s 400 v 4 s 100 width 100px alt br sub b kendall gassner b sub a br a href https github com intuit design systems cli commits 
author kendallgassner title code a a href https github com intuit design systems cli commits author kendallgassner title documentation a a href https github com intuit design systems cli commits author kendallgassner title tests a td td align center a href https github com kharrop img src https avatars0 githubusercontent com u 24794756 v 4 s 100 width 100px alt br sub b kelly harrop b sub a br a href design kharrop title design a td td align center a href http peter mikit sh img src https avatars3 githubusercontent com u 1571918 v 4 s 100 width 100px alt br sub b peter mikitsh b sub a br a href https github com intuit design systems cli commits author petermikitsh title documentation a td td align center a href https renovate whitesourcesoftware com img src https avatars0 githubusercontent com u 25180681 v 4 s 100 width 100px alt br sub b whitesource renovate b sub a br a href https github com intuit design systems cli commits author renovate bot title code a a href https github com intuit design systems cli commits author renovate bot title tests a td tr tr td align center a href https github com mishavp2001 img src https avatars2 githubusercontent com u 1007097 v 4 s 100 width 100px alt br sub b mishavp2001 b sub a br a href https github com intuit design systems cli commits author mishavp2001 title code a td td align center a href https github com vasikarla img src https avatars0 githubusercontent com u 1945958 v 4 s 100 width 100px alt br sub b raj vasikarla b sub a br a href https github com intuit design systems cli commits author vasikarla title code a a href https github com intuit design systems cli commits author vasikarla title documentation a a href https github com intuit design systems cli commits author vasikarla title tests a td td align center a href http uptrend tech img src https avatars3 githubusercontent com u 126236 v 4 s 100 width 100px alt br sub b brandon orther b sub a br a href https github com intuit design systems cli commits author 
orther title documentation a a href https github com intuit design systems cli commits author orther title code a td td align center a href https github com alan cruz2 img src https avatars3 githubusercontent com u 11319336 v 4 s 100 width 100px alt br sub b alan cruz2 b sub a br a href https github com intuit design systems cli commits author alan cruz2 title code a td td align center a href https github com hainessss img src https avatars1 githubusercontent com u 6373177 v 4 s 100 width 100px alt br sub b hainessss b sub a br a href https github com intuit design systems cli commits author hainessss title code a td td align center a href http athityakumar github io img src https avatars0 githubusercontent com u 17109060 v 4 s 100 width 100px alt br sub b athitya kumar b sub a br a href https github com intuit design systems cli commits author athityakumar title code a td td align center a href https jasonrundell com img src https avatars0 githubusercontent com u 524344 v 4 s 100 width 100px alt br sub b jason rundell he him b sub a br a href https github com intuit design systems cli commits author jasonrundell title documentation a a href https github com intuit design systems cli commits author jasonrundell title tests a a href https github com intuit design systems cli commits author jasonrundell title code a td tr tr td align center a href https github com reubenae img src https avatars1 githubusercontent com u 17691502 v 4 s 100 width 100px alt br sub b reuben b sub a br a href https github com intuit design systems cli commits author reubenae title documentation a td td align center a href https github com vzsky img src https avatars1 githubusercontent com u 20735983 v 4 s 100 width 100px alt br sub b my99n b sub a br a href https github com intuit design systems cli commits author vzsky title documentation a a href https github com intuit design systems cli commits author vzsky title tests a a href https github com intuit design systems cli commits author 
vzsky title code a td td align center a href https github com anjaliguptaz img src https avatars2 githubusercontent com u 13619573 v 4 s 100 width 100px alt br sub b anjaliguptaz b sub a br a href https github com intuit design systems cli commits author anjaliguptaz title documentation a td td align center a href https github com chaopan img src https avatars3 githubusercontent com u 7483159 v 4 s 100 width 100px alt br sub b chaopan b sub a br a href https github com intuit design systems cli commits author chaopan title tests a td td align center a href https github com talor a img src https avatars2 githubusercontent com u 11509865 v 4 s 100 width 100px alt br sub b talor anderson b sub a br a href https github com intuit design systems cli commits author talor a title code a a href https github com intuit design systems cli commits author talor a title documentation a a href https github com intuit design systems cli commits author talor a title tests a td td align center a href https github com spentacular img src https avatars2 githubusercontent com u 1043478 v 4 s 100 width 100px alt br sub b spencer hamm b sub a br a href https github com intuit design systems cli commits author spentacular title code a a href https github com intuit design systems cli commits author spentacular title documentation a a href https github com intuit design systems cli commits author spentacular title tests a td td align center a href https github com amalik2 img src https avatars githubusercontent com u 25858348 v 4 s 100 width 100px alt br sub b adil malik b sub a br a href https github com intuit design systems cli commits author amalik2 title tests a td tr tr td align center a href https github com salilbc img src https avatars githubusercontent com u 9673247 v 4 s 100 width 100px alt br sub b salil cuncoliencar b sub a br a href https github com intuit design systems cli commits author salilbc title documentation a a href https github com intuit design systems cli 
commits author salilbc title tests a a href https github com intuit design systems cli commits author salilbc title code a td td align center a href https github com gauravkesarwani img src https avatars githubusercontent com u 5545506 v 4 s 100 width 100px alt br sub b gaurav kesarwani b sub a br a href https github com intuit design systems cli commits author gauravkesarwani title documentation a a href https github com intuit design systems cli commits author gauravkesarwani title tests a a href https github com intuit design systems cli commits author gauravkesarwani title code a td td align center a href https nicolas hoizey com img src https avatars githubusercontent com u 78213 v 4 s 100 width 100px alt br sub b nicolas hoizey b sub a br a href https github com intuit design systems cli commits author nhoizey title documentation a td td align center a href https github com hborawski img src https avatars githubusercontent com u 1325154 v 4 s 100 width 100px alt br sub b harris borawski b sub a br a href https github com intuit design systems cli commits author hborawski title code a td td align center a href https github com fattslug img src https avatars githubusercontent com u 18297343 v 4 s 100 width 100px alt br sub b sean powell b sub a br a href https github com intuit design systems cli commits author fattslug title code a td td align center a href https github com melindali255 img src https avatars githubusercontent com u 29384338 v 4 s 100 width 100px alt br sub b melindali255 b sub a br a href https github com intuit design systems cli commits author melindali255 title documentation a a href https github com intuit design systems cli commits author melindali255 title tests a a href https github com intuit design systems cli commits author melindali255 title code a td td align center a href https yuchoho com img src https avatars githubusercontent com u 9959271 v 4 s 100 width 100px alt br sub b yucho ho b sub a br a href https github com intuit 
design systems cli commits author yucho title code a td tr tr td align center a href https github com sugarmanz img src https avatars githubusercontent com u 9255651 v 4 s 100 width 100px alt br sub b jeremiah zucker b sub a br a href https github com intuit design systems cli commits author sugarmanz title tests a td td align center a href https github com abdelgzali img src https avatars githubusercontent com u 16235866 v 4 s 100 width 100px alt br sub b abd el ghazali b sub a br a href https github com intuit design systems cli commits author abdelgzali title documentation a a href https github com intuit design systems cli commits author abdelgzali title code a td tr table markdownlint restore prettier ignore end all contributors list end this project follows the all contributors https github com all contributors all contributors specification contributions of any kind welcome
hacktoberfest
os
AppOpsX
appopsx build status https img shields io travis 8enet appopsx svg ci release version https img shields io github release 8enet appopsx svg releases issues https img shields io github issues 8enet appopsx svg issues software license https img shields io github license 8enet appopsx svg license crowdin https d322cqt584bo4o cloudfront net appopsx localized svg crowdin coolapk https img shields io badge coolapk download blue svg coolapk preview version https img shields io badge preview 20version download orange svg preview appopsx is a front end application for the android appopsservice it allows you to restrict app permissions root or adb shell access is required readme zh md img src https f droid org badge get it on png alt get it on f droid height 80 https f droid org packages com zzzmode appopsx features search applications group apps by permissions import export backup file automatically turn off permissions multi user support supports android 4 4 and newer reporting bugs bug reports and feature requests can be made via the public issue tracker issues contributing please fork this repository and contribute back using pull requests pr all contributions large or small major features bug fixes additional language translations unit integration tests are welcome translating see the translation page crowdin if you would like to contribute license appopsx is released under the mit license license pr https github com 8enet appopsx pulls issues https github com 8enet appopsx issues crowdin https crowdin com project appopsx license https github com 8enet appopsx blob master license ci https travis ci org 8enet appopsx releases https github com 8enet appopsx releases coolapk http www coolapk com apk com zzzmode appopsx preview https www zzzmode com appopsx apk
android appops permissions adb root
front_end
Natural-Language-Processing-with-Disaster-Tweets
natural language processing with disaster tweets abstract with today s technology each person s online footprint opens the door to a large treasure trove of information that can be used for many purposes ranging from analyzing market trends to understanding the general emotion of a group of people twitter data is especially useful for the latter use case mainly because there are more than 6000 new tweets every second with the advancement of technology and natural language processing methodologies the process of text and sentiment analysis has become much easier than a few years ago if a person tweets a message about an emergency or an impending disaster and this is recognized immediately by our nlp models we would be able to react more quickly which could help save lives this is the crux of our project the main aim of our project is to distinguish whether a tweet talks about a real disaster or not this is for a competition hosted by kaggle and the dataset provided consists of a training set of 10 000 hand classified tweets on which we built our models for the purpose of identifying whether a tweet pertains to a disaster or not we tried out a variety of different models like bag of words tf idf features count vectorizer followed by a ridge classifier a naive bayes approach svm and lstm models glove vectorization bert and k fold cross validation on our bert model out of all these we found that the bert model with k fold cross validation worked best for this dataset and gave us an f1 score of 0 83573 dataset https www kaggle com c nlp getting started data data pre processing and visualization the following were the steps we performed to understand the data better visualizations histograms bar charts etc embeddings vectorisation count vectors tf idf vectorization continuous bag of words glove fasttext topic modelling latent dirichlet allocation word cloud scatter text a special
type of interactive visualization implementation methods simple linear classification logistic regression multinomial naive bayes support vector machine bidirectional lstm bert bidirectional encoder representations from transformers bert with k fold cross validation google universal sentence encoder takeaway we implemented many interesting things like scatter text and the bert model we tried different libraries and explored many new topics in nlp despite being new to nlp we had fun note to access or view html files please refer to https htmlpreview github io
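The count-vectorizer-plus-classifier family of models listed above can be illustrated with a self-contained toy: a tiny multinomial naive Bayes over raw token counts with Laplace smoothing. This is a sketch only; the tweets and labels below are invented examples for illustration, not rows from the Kaggle dataset, and real pipelines would use a library such as scikit-learn.

```python
import math
from collections import Counter

def tokenize(text):
    # Minimal tokenizer: lowercase and split on whitespace.
    return text.lower().split()

class TinyNB:
    """Toy multinomial naive Bayes over raw token counts
    (a stand-in for a count vectorizer followed by a classifier)."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.priors = Counter(labels)                      # class frequencies
        self.counts = {c: Counter() for c in self.classes}
        for text, y in zip(texts, labels):
            self.counts[y].update(tokenize(text))          # per-class word counts
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        return self

    def predict(self, text):
        n = sum(self.priors.values())
        best, best_lp = None, -math.inf
        for c in self.classes:
            # Log prior plus smoothed log likelihood of each token.
            total = sum(self.counts[c].values()) + len(self.vocab)
            lp = math.log(self.priors[c] / n)
            for w in tokenize(text):
                lp += math.log((self.counts[c][w] + 1) / total)  # add-one smoothing
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Invented toy data: 1 = real disaster, 0 = not a disaster.
train = ["forest fire near la ronge", "earthquake hits the city",
         "i love this song on fire", "what a lovely day"]
labels = [1, 1, 0, 0]
clf = TinyNB().fit(train, labels)
print(clf.predict("earthquake near the city"))  # -> 1
```

The same interface shape (fit on hand-classified tweets, then predict per tweet) carries over to the ridge classifier, SVM, and BERT variants the project compares.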
bert-model disaster tweets natural-language-processing nlp scattertext svm kfold-cross-validation glove naive-bayes lstm count-vectorizer fasttext
ai
lib-python
swubanner https raw githubusercontent com vshymanskyy standwithukraine main banner direct svg https github com vshymanskyy standwithukraine blob main docs readme md blynk python library this library provides an api to connect iot hardware that supports micropython python to blynk cloud and communicate with blynk apps ios and android you can send raw and processed sensor data and remotely control anything that is connected to your hardware relays motors servos from anywhere in the world github version https img shields io github release blynkkk lib python svg lib release github download https img shields io github downloads blynkkk lib python total svg lib release github stars https img shields io github stars blynkkk lib python svg lib stars github issues https img shields io github issues blynkkk lib python svg lib issues build status https img shields io travis blynkkk lib python svg lib travis license https img shields io badge license mit blue svg lib licence if you like blynk give it a star or fork it and contribute github stars https img shields io github stars blynkkk lib python svg style social label star lib stars github forks https img shields io github forks blynkkk lib python svg style social label fork lib network blynk banner blynk banner blynk is the most popular internet of things platform for connecting hardware to the cloud designing apps to control them and managing your deployed devices at scale with blynk library you can connect over 400 hardware models including esp8266 esp32 nodemcu all arduinos raspberry pi particle texas instruments etc to the blynk cloud full list of supported hardware can be found here blynk hw with blynk apps for ios and android apps you can easily build graphic interfaces for all of your projects by simply dragging and dropping widgets on your smartphone it s a purely wysiwyg experience no coding on ios or android required hardware can connect to blynk cloud open source server over the internet using hardware connectivity on
board or with the use of various shields ethernet wifi gsm lte etc blynk cloud is available for every user of blynk for free installation of blynk python library installation via python pip check python availability in your system commandline python version to avoid compatibility issues the preferred versions are python 2 7 9 or greater or python 3 4 or greater if python is not present you can download and install it from here python org note to run python in a sandbox you can try the virtualenv module check this document virtual env how to do it if you re using the preferred versions of python mentioned above then pip comes installed with python by default check pip availability commandline pip version install blynk library commandline sudo pip install blynklib manual installation the library can be installed locally from git sources commandline git clone https github com blynkkk lib python git cd lib python pip install user e sudo pip install e if installation is needed for all users not just the current one testing you can run unit tests for the cpython version of the library blynklib py using the command python setup py test note blynklib version 0 2 6 should use pytest mock 1 11 2 version 1 11 2 added restrictions for context manager usage note unit tests for the micropython env are not available yet micropython installation some hardware platforms can use the micropython micropython org package this is helpful for preliminary testing and debugging of your code outside of real hardware supported platforms and related installation docs can be found here micropython pkg features this library supports python2 python3 blynklib py and micropython blynklib mp py communication with public or local blynk server blynk server exchange any data between your hardware and app tested to work with raspberry pi any esp32 esp8266 list of available operations subscribe to connect disconnect events ssl connection supported only by cpython lib subscribe to read write events of virtual pins blynk vpins virtual pin
blynk vpins write virtual pin blynk vpins sync send mobile app push notifications send email notifications send twitter notifications change widget gui parameters in blynk app based on hardware input quickstart 1 install blynk python library as described above 2 install blynk app img src https cdn rawgit com simple icons simple icons develop icons googleplay svg width 18 height 18 google play blynk app android img src https cdn rawgit com simple icons simple icons develop icons apple svg width 18 height 18 app store blynk app ios create new account in blynk app using your email address create a new project in blynk app you will get auth token delivered to your email account put this auth token within your python script to authenticate your device on public blynk server public or local blynk server python blynk auth yourauthtoken insert your auth token here usage example python import blynklib import blynklib mp as blynklib micropython import blynk auth yourauthtoken insert your auth token here base lib init blynk blynklib blynk blynk auth advanced options of lib init from future import print function blynk blynklib blynk blynk auth server blynk cloud com port 80 ssl cert none heartbeat 10 rcv buffer 1024 log print lib init with ssl socket connection blynk blynklib blynk blynk auth port 443 ssl cert path to local blynk server certificate current blynk cloud com certificate stored in project as https github com blynkkk lib python blob master certificate blynk cloud com crt note ssl feature supported only by cpython register handler for virtual pin v22 reading by blynk app when a widget in blynk app asks virtual pin data from server within given configurable interval 1 2 5 10 sec etc server automatically sends notification about read virtual pin event to hardware this notification captured by current handler blynk handle event read v22 def read virtual pin handler pin your code goes here example get sensor value perform calculations etc sensor data yoursensordata 
critical data value yourthresholdsensorvalue send value to virtual pin and store it in blynk cloud blynk virtual write pin sensor data you can define if needed any other pin example blynk virtual write 24 sensor data you can perform actions if value reaches a threshold e g some critical value if sensor data critical data value blynk set property pin color ff0000 set red color for the widget ui element blynk notify warning critical value send push notification to blynk app blynk email youremail email com email subject email body send email to specified address main loop that starts program and handles registered events while true blynk run other examples examples can be found here blynk py examples check them all to get familiar with main blynk api features core operations 01 write virtual pin py https github com blynkkk lib python blob master examples 01 write virtual pin py how to read incoming data from blynk app to virtual pin and use it in your code 02 read virtual pin py https github com blynkkk lib python blob master examples 02 read virtual pin py how to update value on virtual pin 03 connect disconnect py https github com blynkkk lib python blob master examples 03 connect disconnect py managing connection with blynk cloud 04 email py https github com blynkkk lib python blob master examples 04 email py how to send email and push notifications from your hardware 05 set property notify py https github com blynkkk lib python blob master examples 05 set property notify py how to change some of widget ui properties like colors labels etc 06 terminal widget py https github com blynkkk lib python blob master examples 06 terminal widget py communication between hardware and app through terminal widget 07 tweet and logging py https github com blynkkk lib python blob master examples 07 tweet and logging py how to post to twitter and log events from your hardware 08 blynk timer py https github com blynkkk lib python blob master examples 08 blynk timer py how
to send data periodically from hardware by using blynk timer blynktimer doc 09 sync virtual pin py https github com blynkkk lib python blob master examples 09 sync virtual pin py how to sync virtual pin states and properties 10 rtc sync py https github com blynkkk lib python blob master examples 10 rtc sync py how to perform rtc sync with blynk server 11 ssl socket py https github com blynkkk lib python blob master examples 11 ssl socket py ssl server connection feature supported only by cpython library 12 app connect disconnect py https github com blynkkk lib python blob master examples 12 app connect disconnect py managing app connect disconnect events with blynk cloud raspberry pi any read raspberry pi guide https github com blynkkk lib python tree master examples raspberry first 01 weather station pi3b py https github com blynkkk lib python blob master examples raspberry 01 weather station pi3b py connect dht22 bmp180 sensors and send data to blynk app esp32 read esp32 guide https github com blynkkk lib python tree master examples esp32 first 01 touch button py https github com blynkkk lib python blob master examples esp32 01 touch button py connect ttp223b touch sensor to esp32 and react to touch 02 terminal cli py https github com blynkkk lib python blob master examples esp32 02 terminal cli py communication between esp32 hardware and app through terminal widget 03 temperature humidity dht22 py https github com blynkkk lib python blob master examples esp32 03 temperature humidity dht22 py connect dht22 sensor to esp32 and send data to blynk app esp8266 read esp8266 guide https github com blynkkk lib python tree master examples esp8266 first 01 potentiometer py https github com blynkkk lib python blob master examples esp8266 01 potentiometer py connect a potentiometer to esp8266 and send resistance value to the app memory size limitations for hardware with limited memory size ex esp8266 you can use frozen modules or frozen bytecode approaches to load blynklib or
any other library to hardware read this document esp8266 readme to get more information documentation and other helpful links full blynk documentation https docs blynk io a complete guide on blynk features community forum https community blynk cc join a 1 000 000 blynk community to ask questions and share ideas official website https blynk io social media facebook https www fb com blynkapp twitter https twitter com blynk app youtube https www youtube com blynk instagram https www instagram com blynk iot linkedin https www linkedin com company b l y n k blynk libraries for other platforms c https github com blynkkk blynk library node js espruino browsers https github com vshymanskyy blynk library js python https github com vshymanskyy blynk library python by volodymyr shymanskyy particle https github com vshymanskyy blynk library spark lua openwrt nodemcu https github com vshymanskyy blynk library lua openwrt packages https github com vshymanskyy blynk library openwrt mbed https developer mbed org users vshymanskyy code blynk node red for blynk iot https flows nodered org node node red contrib blynk iot labview https github com juncaofish ni labviewinterfaceforblynk c https github com sverrefroy blynklibrary contributing you are very welcome to contribute stability bugfixes new hardware support or any other improvements please license this project is released under the mit license mit lib release https github com blynkkk lib python releases latest lib licence https github com blynkkk lib python blob master license lib travis https travis ci org blynkkk lib python lib issues https github com blynkkk lib python issues lib stars https github com blynkkk lib python stargazers lib network https github com blynkkk lib python network blynk io https github com blynkkk blynkkk github io blynk hw https github com blynkkk blynkkk github io blob master supportedhardware md blynk architecture https github com blynkkk blynkkk github io blob master images architecture png blynk 
banner https github com blynkkk blynkkk github io blob master images githubbanner jpg blynk server https github com blynkkk blynk server blynk server public http blynk cloud com blynk docs https docs blynk cc blynk py examples https github com blynkkk lib python blob master examples blynk app android https play google com store apps details id cloud blynk blynk app ios https apps apple com us app blynk iot id1559317868 blynk vpins http help blynk cc getting started library auth token code examples blynk basics what is virtual pins python org https www python org downloads micropython org https micropython org micropython pkg https github com micropython micropython wiki getting started virtual env https virtualenv pypa io en latest installation esp8266 readme https github com blynkkk lib python blob master examples esp8266 readme md blynktimer doc https github com blynkkk lib python blob master timers md
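The decorator-based event registration shown in the usage example above can be illustrated with a tiny stand-in dispatcher. `MiniBlynk` below is a hypothetical class written only to mimic the shape of blynklib's `handle_event` API; it is not part of blynklib and performs no networking.

```python
class MiniBlynk:
    """Tiny stand-in that mimics blynklib's decorator-based event registration."""

    def __init__(self):
        self._handlers = {}

    def handle_event(self, event_name):
        # Returns a decorator that registers func for event_name,
        # e.g. 'read v22' or 'write v4'.
        def decorator(func):
            self._handlers[event_name] = func
            return func
        return decorator

    def emit(self, event_name, *args):
        # Simulates the server notifying the hardware about an event.
        handler = self._handlers.get(event_name)
        return handler(*args) if handler else None

blynk = MiniBlynk()

@blynk.handle_event('read v22')
def read_virtual_pin_handler(pin):
    sensor_data = 42  # placeholder; real code would read a sensor here
    return (pin, sensor_data)

print(blynk.emit('read v22', 22))  # -> (22, 42)
```

In the real library the registered handler is invoked from `blynk.run()` inside the main loop whenever the server sends a read or write notification for that virtual pin.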
blynk iot iot-platform iot-application iot-device iot-cloud esp8266 esp32 python micropython mcu microcontroller hardware library raspberry-pi raspberry-pi-3 linux embedded
server
Medical-Blockchain
store private healthcare data off chain and manage medical data using blockchain electronic medical record and data management is overdue for innovation the way patient health records are stored and secured today does not reflect the technological advances of the past decade and hospitals continue to use age old data management systems for patient data this is partly due to strict regulations around privacy and security of medical data which have stifled the use of the latest technology to make medical data management more transparent and useful for both patients and doctors this code pattern showcases a medical data access management platform built using blockchain the application shows the platform from the point of view of 4 stakeholders the solution admin is the admin of a conglomerate of hospitals and has the highest access level in the hierarchy they have the ability to onboard a new organization hospital to the conglomerate and assign de assign hospital admins on their dashboard the organization hospital admin is the admin of a particular hospital which is part of the conglomerate solution they have the ability to onboard new users with the role of either patient or doctor or remove a user the doctor is a user in the organization with the appropriate role and has the ability to upload documents for their patients and download view documents of their patients to which they have been granted access the patient is a user in the organization with the appropriate role and has the ability to upload documents on their own view them view the document access logs and also manage access to their documents on their dashboard this code pattern is for developers who want to integrate with the blockchain solution manager blockchain document store and the ibm blockchain platform when you have completed it you will understand how to connect the blockchain solution manager and blockchain document store with the ibm blockchain platform create a vuejs web app
that has multiple dashboards on a single page application which can communicate in realtime with each other create a nodejs server that is deployed to kubernetes on ibm cloud and connected with a redis database deployed on the ibm cloud store and retrieve data from a redis datastore for persistent storage through a nodejs server make rest calls to an external service use jwt json web token tokens for user management architecture flow architecture flow docs doc images arch flow png raw true login flow 1 all the stakeholders of the application solution admin hospital admin doctor and patient begin the user flow by logging into their respective dashboards 2 clicking the login button leads to the login portal of the blockchain solution manager hosted on the ibm cloud 3 the login portal uses openapi connect and allows the user the login through any onboarded identity provider in our example we have on boarded ibmid ad googleid successful authentication leads to the jwt credentials for the user admin dashboard 4 the solution admin flow begins at the admin component and requires the user to authenticate themselves through the login flow described above 5 after successful authentication the user can access the solution admin dashboard they are able to view the solution and add remove hospitals from the solution using the admin api s 6 all the admin api s connect with the blockchain solution manager through rest to process the user queries 7 the blockchain solution manager connects with the ibm blockchain platform and updates the ledger appropriately organization dashboard 8 the hospital admin flow begins at the organization component and requires the user to authenticate themselves through the login flow described above 9 after successful authentication the user can access the hospital admin dashboard they are able to add remove any user in their respective hospital with the on boarded roles patient doctor in our case using the organization api s 10 all the organization 
api s connect with the blockchain solution manager through rest to process the user queries 11 the blockchain solution manager connects with the ibm blockchain platform and updates the ledger appropriately doctor dashboard 12 the doctor flow begins at the doctor component and requires the user to authenticate themselves through the login flow described above 13 after successful authentication the user can access the doctor dashboard they are able to upload a medical record for a patient who is part of their hospital and download any medical record associated with a patient to which they have access to using the doctor api s the acl s for all the patient documents is application level and is maintained through the document acl flow described below 14 all the doctor api s connect with the blockchain document store through rest to process the user queries 15 the blockchain document store connects with the ibm blockchain platform and updates the ledger appropriately patient dashboard 16 the patient flow begins at the patient component and requires the user to authenticate themselves through the login flow described above 17 after successful authentication the user can access the patient dashboard they are able to upload a medical record for themselves download any of their medical records view the access logs of their documents and view manage permissions to their documents using the patient api s the acl s for all the documents is application level and is maintained through the document acl flow described below 18 all the patient api s connect with the blockchain document store through rest to process the user queries 19 the blockchain document store connects with the ibm blockchain platform and updates the ledger appropriately document access control list acl flow 20 the doctor and patient component are connected with the redis api s that invoke methods to manage the document level access control across hospitals 21 the redis api s talk to a nodejs server deployed in 
a docker container in a kubernetes cluster on the ibm cloud 22 the server talks to two redis databases which hold the access per document and access per user permissions included components ibm blockchain platform https console bluemix net docs services blockchain howto ibp v2 deploy iks html ibp v2 deploy iks gives you total control of your blockchain network with a user interface that can simplify and accelerate your journey to deploy and manage blockchain components on the ibm cloud kubernetes service ibm blockchain solution manager https cloud ibm com docs services blockchain document store topic blockchain document store blockchain solution manager api acls the blockchain document store service includes the ibm blockchain solution manager component which enables organizations to easily manage blockchain networks solutions services and users ibm blockchain document store https cloud ibm com docs services blockchain document store topic blockchain document store getting started getting started is a comprehensive document management service for ibm blockchain platform business networks ibm cloud kubernetes service https www ibm com cloud container service creates a cluster of compute hosts and deploys highly available containers a kubernetes cluster lets you securely manage the resources that you need to quickly deploy update and scale applications ibm cloud databases for redis service https console bluemix net catalog services databases for redis redis is an open source in memory data structure store used as a database cache and message broker it supports data structures such as strings hashes lists sets sorted sets with range queries bitmaps hyperloglogs and geospatial indexes with radius queries featured technologies nodejs https www nodejs org is an open source cross platform javascript run time environment that executes javascript code server side vuejs https vuejs org is a progressive framework for building user interfaces redis https redis io is an open 
source bsd licensed in memory data structure store used as a database cache and message broker bootstrap https getbootstrap com is a free and open source front end web framework it contains html and css based design templates for typography forms buttons navigation and other interface components as well as optional javascript extensions docker https www docker com is a computer program that performs operating system level virtualization also known as containerization prerequisites we find that blockchain can be finicky when it comes to installing node we want to share this stackoverflow response https stackoverflow com questions 49744276 error cannot find module api hyperledger composer because many of the errors you see with composer stem from having installed the wrong node version or from taking an approach that composer does not support ibm cloud account https cloud ibm com registration target 2fdashboard 2fapps docker https www docker com products latest docker compose https docs docker com compose overview latest npm https www npmjs com get npm latest nvm latest node js https nodejs org en download node v8 9 x git client https git scm com downloads latest running the application manually deploy to local machine 1 set up your machine 1 set up your machine 2 create ibm cloud services 2 create ibm cloud services 3 create a solution 3 create a solution 4 clone the repository 4 clone the repository 5 modify the configuration files 5 modify the configuration files 6 run the application 6 run the application 1 set up your machine install the following dependencies docker https www docker com go to the docker website and download the installer after installation run docker git https git scm com install git which is a free and open source distributed version control system
pattern we can use the free cluster and give it a name note that the ibm cloud allows one instance of a free cluster and expires after 30 days br p align center img src docs doc gifs 1 gif p br create two instances of databases for redis service https cloud ibm com catalog services databases for redis you can find the service in the catalog br p align center img src docs doc gifs 2 gif p br note you can use just one instance of redis as well modify the code in the server repository to allow for this create the ibm blockchain service https cloud ibm com catalog services ibm blockchain 5 prod you can find the service in the catalog br p align center img src docs doc gifs 3 gif p br create the blockchain document store and blockchain solution manager services these services are not currently available publicly on the ibm cloud catalog you can reach out to rak joon choi rak joon choi us ibm com to provision these services for you follow the service documentation https cloud ibm com docs services blockchain document store topic blockchain document store getting started getting started to connect the blockchain document store to the blockchain service br p align center img src docs doc gifs 4 gif p br 3 create a solution after configuring your services in the previous step we now move on to creating a solution using our custom swagger url for the blockchain solution manager service go to the patch endpoint v1 solutions under solution and authorize using the api by going to the v1 logins url in a new tab logging in as administrator and getting the jwt add the token prepended by bearer such that it looks like bearer jwt after authorization click on try it out to execute the api and paste the following json in the on boarding section give the name medrec demo to the solution onboardingdata solution id medrec demo name demo for medrec pattern roles id role patient name patient solutionid medrec demo isblockchainrole true id role doctor name doctor solutionid medrec demo 
isblockchainrole true br p align center img src docs doc gifs 5 gif p br after creating the solution successfully add yourself as the admin of the solution go to the post endpoint v1 solutions solutionid administrators under solution and authorize using the api by going to the v1 logins url in a new tab logging in as administrator and getting the jwt add the token prepended by bearer such that it looks like bearer jwt after authorization click on try it out to execute the api and type your email id under solutionadministrators in the json object provide medrec demo as the solutionid br p align center img src docs doc gifs 6 gif p br 4 clone the repository git clone https github com ibm medical blockchain git cd medical blockchain 5 modify the configuration files modify the redis config file go to the previously provisioned redis services on ibm cloud click on service credentials click on new credential button once the new credentials are created click on view credentials from the json object extract the uri from connection rediss composed 0 from the json object extract the certificate from connection rediss certificate certificate base64 navigate to the server config json file in the cloned repository replace the uri and certificate values in the marked places repeat the steps for the second provisioned service and enter it in the second spot in the config file br p align center img src docs doc gifs 7 gif p br modify the blockchain config file go to the v1 logins url for your blockchain document store service login as administrator extract the iss field from the decoded jwt and remove onboarding string from it navigate to the src secrets config json file in the cloned repository replace the iss field with the extracted value above replace the blockchain channel field with the name of the channel provided during connecting the blockchain service to the document store br p align center img src docs doc gifs 8 gif p br 6 run the application running the application 
locally to run the application on the local system execute the run application sh file go to localhost 8080 to see the running application br p align center img src docs doc gifs 9 gif p br running the application on kubernetes navigate to the server directory cd server build the docker image for the server docker build -t <dockerhub-username>/medrec-server . replace the image name in manifest yml where indicated apply the manifest to the previously provisioned kubernetes cluster navigate to src apis redisapi js and replace the baseurl value with the kubernetes load balancer ip build and run the vue application by executing the below in the repository home go to localhost 8080 to see the running application docker build -t medrec-vue . docker run -d --restart always --name medrec-vue -p 8080:8080 medrec-vue note you can also deploy the vue app to kubernetes by modifying the manifest yml to support two pods license this code pattern is licensed under the apache software license version 2 separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses contributions are subject to the developer certificate of origin version 1 1 dco https developercertificate org and the apache software license version 2 http www apache org licenses license 2 0 txt apache software license asl faq http www apache org foundation license faq html whatdoesitmean
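The credential wiring in step 5 above is mechanical enough to script. The sketch below assumes the service-credentials object has the `connection.rediss.composed[0]` and `connection.rediss.certificate.certificate_base64` paths described in the walkthrough; the function name and the sample object are illustrative, not part of the code pattern:

```python
import json

def extract_redis_config(credentials_json):
    """Pull the connection URI and CA certificate out of an IBM Cloud
    Databases for Redis service-credentials JSON document."""
    rediss = json.loads(credentials_json)["connection"]["rediss"]
    return {
        "uri": rediss["composed"][0],
        "certificate": rediss["certificate"]["certificate_base64"],
    }

# hypothetical credentials object shaped like the one described above
sample = json.dumps({
    "connection": {
        "rediss": {
            "composed": ["rediss://admin:secret@host:32525/0"],
            "certificate": {"certificate_base64": "LS0tLS1CRUdJTi4uLg=="},
        }
    }
})

config = extract_redis_config(sample)
print(config["uri"])  # rediss://admin:secret@host:32525/0
```

Run this once per provisioned Redis instance and paste the two extracted values into the marked places in the server config json file.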
blockchain ibm-cloud kubernetes vuejs redis docker
blockchain
pabd_cv
computer vision results cv csv
1 bash git github flow python python fork services server xxx py labintsev pabd cv main main pull request
2 ml cookiecutter ds codestyle linters formatters function docs unittest classify imagenet
3 dvc dvc data raw kaggle dvc dvc remote
4 cli python https keras io examples vision image classification from scratch https drive google com file d 1pw9ufmww8g9 bwvfwntitdtfcusx4ouu view usp sharing train py
5 google colab s3 pinterest https github com ataknkcyn pinterest crawler 1 2 evaluate py 2 model zip evaluate py 3 precision recall accuracy 0 8 4 s3
6 docker docker tutorial dockerhub https storage yandexcloud net pabdcv 221675 model zip
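The acceptance criterion in step 5 (precision, recall and accuracy all at least 0.8) can be checked with a few lines of plain Python; this is a sketch of the metric computation only, not the course's actual evaluate py:

```python
def binary_metrics(y_true, y_pred):
    """Compute precision, recall and accuracy for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = correct / len(y_true)
    return precision, recall, accuracy

# toy labels standing in for the model's predictions on the test split
y_true = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1, 0, 1]
p, r, a = binary_metrics(y_true, y_pred)
print(p, r, a)  # 0.8 0.8 0.8 -- exactly at the threshold
```

In practice the same check can gate a CI job: fail the pipeline whenever `min(p, r, a) < 0.8`.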
ai
luminoth
luminoth https user images githubusercontent com 270983 31414425 c12314d2 ae15 11e7 8cc9 42d330b03310 png https luminoth ai jan 2020 luminoth is not maintained anymore we recommend switching to facebook s detectron2 https github com facebookresearch detectron2 which implements more modern algorithms supporting additional use cases build status https travis ci org tryolabs luminoth svg branch master https travis ci org tryolabs luminoth documentation status https readthedocs org projects luminoth badge version latest http luminoth readthedocs io en latest badge latest codecov https codecov io gh tryolabs luminoth branch master graph badge svg https codecov io gh tryolabs luminoth license https img shields io badge license bsd 203 clause blue svg https opensource org licenses bsd 3 clause luminoth is an open source toolkit for computer vision currently we support object detection but we are aiming for much more it is built in python using tensorflow https www tensorflow org and sonnet https github com deepmind sonnet read the full documentation here http luminoth readthedocs io example of object detection with faster r cnn https user images githubusercontent com 1590959 36434494 e509be42 163d 11e8 99c1 d1aa728929ec jpg disclaimer luminoth is still an alpha quality release which means the internal and external interfaces such as the command line are very likely to change as the codebase matures installation luminoth currently supports python 2 7 and 3 4 3 6 pre requisites to use luminoth tensorflow https www tensorflow org install must be installed beforehand if you want gpu support you should install the gpu version of tensorflow with pip install tensorflow-gpu or else you can use the cpu version using pip install tensorflow installing luminoth just install from pypi bash pip install luminoth optionally luminoth can also install tensorflow for you if you install it with pip install luminoth[tf] or pip install luminoth[tf-gpu] depending on the version of tensorflow you wish to
use google cloud if you wish to train using google cloud ml engine the optional dependencies must be installed bash pip install luminoth[gcloud] installing from source first clone the repo on your machine and then install with pip bash git clone https github com tryolabs luminoth git cd luminoth pip install -e . check that the installation worked simply run lumi --help supported models currently we support the following models object detection faster r cnn https arxiv org abs 1506 01497 ssd https arxiv org abs 1512 02325 we are planning on adding support for more models in the near future such as retinanet https arxiv org abs 1708 02002 and mask r cnn https arxiv org abs 1703 06870 we also provide pre trained checkpoints for the above models trained on popular datasets such as coco http cocodataset org and pascal http host robots ox ac uk pascal voc usage there is one main command line interface which you can use with the lumi command whenever you are unsure how you are supposed to do something just type lumi --help or lumi subcommand --help and a list of available options with descriptions will show up working with datasets see adapting a dataset http luminoth readthedocs io en latest usage dataset html training see training your own model http luminoth readthedocs io en latest usage training html to learn how to train locally or in google cloud visualizing results we strive to get useful and understandable summary and graph visualizations we consider them to be essential not only for monitoring duh but for getting a broader understanding of what s going on under the hood the same way it is important for code to be understandable and easy to follow the computation graph should be as well by default summary and graph logs are saved to jobs under the current directory you can use tensorboard by running bash tensorboard --logdir path/to/jobs why the name the dark visor is a visor upgrade in metroid prime 2 echoes designed by the luminoth during the war it was used by the
champion of aether a kul to penetrate dark aether s haze in battle against the ing dark visor wikitroid http metroid wikia com wiki dark visor license copyright 2018 tryolabs https tryolabs com released under the bsd 3 clause license
tensorflow sonnet deep-learning computer-vision object-detection python machine-learning toolkit faster-rcnn
ai
Qt-5-and-OpenCV-4-Computer-Vision-Projects
5 tech unlocked 2021 buy and download this book for only 5 on packtpub com https www packtpub com product qt 5 and opencv 4 computer vision projects 9781789532586 if you have read this book please leave a review on amazon com https www amazon com gp product 1789532582 potential readers can then use your unbiased opinion to help them make purchase decisions thank you the 5 campaign runs from december 15th 2020 to january 13th 2021 qt 5 and opencv 4 computer vision projects a href link img src https images na ssl images amazon com images i 518p6oye 2bhl sx404 bo1 204 203 200 jpg alt qt 5 and opencv 4 computer vision projects height 256px align right a this is the code repository for qt 5 and opencv 4 computer vision projects link published by packt get up to speed with cross platform computer vision app development by building seven practical projects what is this book about we are entering the age of artificial intelligence and computer vision plays an important role in the ai field this book combines opencv 4 and qt 5 as well as many deep learning models to develop many complete practical and functional applications through which the readers can learn a lot in cv gui and ai domains this book covers the following exciting features create an image viewer with all the basic requirements construct an image editor to filter or transform images develop a security app to detect movement and secure homes build an app to detect facial landmarks and apply masks to faces create an app to extract text from scanned documents and photos train and use cascade classifiers and dl models for object detection build an app to measure the distance between detected objects implement high speed image filters on gpu with open graphics library opengl if you feel this book is for you get your copy https www amazon com dp 1789532582 today a href https www packtpub com utm source github utm medium banner utm campaign githubbanner img src https raw githubusercontent com packtpublishing github 
master github png alt https www packtpub com border 5 a instructions and navigations all of the code is organized into folders for example chapter02 the code will look like the following qmenu editmenu qtoolbar edittoolbar qaction bluraction following is what you need for this book this book is for engineers and developers who are familiar with both qt and opencv frameworks and are capable of creating simple projects using them but want to build their skills to create professional level projects using them familiarity with the c language is a must to follow the example source codes in this book with the following software and hardware list you can run all code files present in the book chapter 1 8 software and hardware list chapter software required os required 1 qt 5 x windows mac os x and linux any 2 3 qt 5 x opencv 4 x windows mac os x and linux any 4 qt 5 x opencv 4 x opencv extra modules 4 x windows mac os x and linux any 5 qt 5 x opencv 4 x tesseract 4 x windows mac os x and linux any 6 qt 5 x opencv 3 4 5 opencv 4 x windows mac os x and linux any 7 qt 5 x opencv 4 x windows mac os x and linux any 8 qt 5 x mesa 18 x only on linux glfw 3 x glew 2 x windows mac os x and linux any we also provide a pdf file that has color images of the screenshots diagrams used in this book click here to download it http www packtpub com sites default files downloads 9781789532586 colorimages pdf code in action click on the following link to see the code in action click here to view the videos http bit ly 2ffysds related products computer vision with opencv 3 and qt5 packt https www packtpub com application development computer vision opencv 3 and qt5 utm source github utm medium repository utm campaign 9781788472395 amazon https www amazon com dp 178847239x packt mastering opencv 4 third edition utm source github utm medium repository utm campaign amazon https www amazon com dp 1789533570 get to know the author zhuo qingliang a k a kdr2 online is presently working at beijing 
paoding technology co ltd a start up fintech company in china that is dedicated to improving the financial industry by using artificial intelligence technologies he has over 10 years experience in linux c c python perl and java development he is interested in programming doing consulting work participating in and contributing to the open source community of course includes the julia community other books by the authors suggestions and feedback click here https docs google com forms d e 1faipqlsdy7datc6qmel81fiuuymz0wy9vh1jhkvpy57oimekgqib ow viewform if you have any feedback or suggestions download a free pdf i if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost br simply click on the link to claim your free pdf i p align center a href https packt link free ebook 9781789532586 https packt link free ebook 9781789532586 a p
qt opencv video book opengl ml
ai
MUENGR_ITCurriculum
information technology program course descriptions infotc 1000 introduction to information technology introduction to information technology introduces the field of information technology including foundation experiences and knowledge the history of digital technologies emphasis areas in the program career opportunities and ethical social issues students participate in activities that introduce students to digital media digital systems and software engineering students learn to use distributed version control systems and how to work on collaborative teams credits 3 prerequisites none note should not be restricted to information technology majors during early registration infotc 2040 programming languages and paradigms this course presents programming principles and their syntactical representation and implementation across languages including those that are compiled and interpreted the course shows how to implement algorithms and data structures to solve problems while utilizing paradigms offered by the programming languages such as procedural object oriented protocol oriented functional and declarative language support for strong and weak typing and type safety are covered along with support for optional values this course provides experience in developing algorithms and determining their efficiency designing application architecture and developing applications building and using libraries application programming interfaces is covered git and github are used for code versioning and collaboration integrated development environments ides are used for managing building debugging and testing applications credits 3 prerequisites infotc 1040 introduction to problem solving and programming cmp sc 1050 algorithm design and programming i or prior experience with programming and consent of instructor note should not be restricted to information technology majors during early registration infotc 2600 digital systems re this is a course change the title and topics of the 
course are being changed to more broadly cover digital concepts and topics related to digital systems rather than being specific to multimedia this course provides a foundation of knowledge of digital systems including terminology concepts architecture processes tools hardware and software credits 3 prerequisites none note should not be restricted to information technology majors during early registration infotc 3380 database systems and applications this course covers database management systems dbms and the development of applications that utilize databases including relational sql and nosql types topics include the evolution of data storage and databases data modeling relational and nosql databases sql document graph and key value storage and retrieval application development using databases database scaling database trends and popular database systems credits 3 prerequisites c or higher in cmp sc 2050 or infotc 2040 or experience developing applications and permission of the instructor note should not be restricted to information technology majors during early registration infotc 3600 user experience design 1 this course is a first in a series that focuses on user experience ux design for software applications this course introduces the beginner to processes techniques and methods of evaluation to design model and evaluate application designs and user interfaces credits 3 prerequisites none note should not be restricted to information technology majors during early registration infotc 4405 ios app development 1 re this is a course change the title of the course is being changed to reflect that it is part of three course series this is a first in a series of courses on developing ios applications using xcode and the swift programming language on the macos platform credits 3 prerequisites infotc 1040 introduction to problem solving and programming cmp sc 1050 algorithm design and programming i or prior experience with programming and consent of instructor note 
should not be restricted to information technology majors during early registration infotc 4410 android app development 1 this is a first in a series of courses on developing android applications using android studio and the java and kotlin programming languages credits 3 prerequisites infotc 1040 introduction to problem solving and programming cmp sc 1050 algorithm design and programming i or prior experience with programming and consent of instructor note should not be restricted to information technology majors during early registration infotc 4420 android app development 2 this is a second in a series of courses on developing android applications using android studio and the java and kotlin programming languages this course covers intermediate level topics in application design more complex ui implementations and data persistence credits 3 prerequisites infotc 4410 android app development 1 or permission of the instructor note should not be restricted to information technology majors during early registration infotc 4425 ios app development 2 this is the second in a series of courses on developing ios applications using xcode and the swift programming language on the macos platform this course covers intermediate level topics in application design more complex ui implementations and data persistence credits 3 prerequisites infotc 4405 ios app development 1 or infotc 4500 team based mobile device application development or permission of the instructor note should not be restricted to information technology majors during early registration infotc 4440 android app development 3 this is a third in a series of courses on developing android applications using android studio and the java and kotlin programming languages this course covers advanced topics in application architecture application design data persistence and client server architecture credits 3 prerequisites infotc 4420 android app development 2 or permission of the instructor note should not be 
restricted to information technology majors during early registration infotc 4445 ios app development 3 this is the third in a series of courses on developing ios applications using xcode and the swift programming language on the macos platform this course covers advanced topics in application architecture application design data persistence and client server architecture credits 3 prerequisites infotc 4425 ios app development 2 or permission of the instructor note should not be restricted to information technology majors during early registration infotc 4600 user experience design 2 this course is a second in a series that focuses on user experience ux design for software applications this course further develops the processes techniques and methods of evaluation to design model and evaluate application designs and user interfaces credits 3 prerequisites infotc 3600 user experience design 1 note should not be restricted to information technology majors during early registration
server
embedded
embedded systems ace caddie code for embedded systems design load main py onto esp8266 how to 1 connect the subscriber client to the broker eeerover 2 run subscriber py 3 reset the esp 4 press the button to start the game this is your 1st swing 5 play the game press the button every time you take a shot background main py is the code that runs on the esp8266 connected to the proximity sensor the esp8266 is the main publisher publishing messages to the topic esys anonymous how main works creates a client instance connects to the broker detects if the game has started indicated by a button press counts the number of swings until the ball goes in the hole swings are indicated by button presses publishes the score once a game has ended subscribe py is the code run by the client subscribed to esys anonymous e g an app on your smartphone how subscribe py works subscribes to the topic esys anonymous takes in user entered data username and par for the golf course decodes the message published by the esp when a game has finished calculates and returns the score returns the player s position in the ranking see our website https dharshana1407 wixsite com acecaddie for more info
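The scoring logic described for subscribe py (compare the swing count against the user-entered par, then rank players) can be sketched as below; the function names are illustrative, not the repo's actual API:

```python
def score_vs_par(swings, par):
    """Golf score relative to par: negative means under par."""
    return swings - par

def ranking(results):
    """Sort (username, swings) pairs best-first, fewest swings wins."""
    return sorted(results, key=lambda pair: pair[1])

# toy results as they might be decoded from the esys/anonymous topic
players = [("alice", 5), ("bob", 3), ("carol", 4)]

print(score_vs_par(3, 4))                       # -1, one under par
print([name for name, _ in ranking(players)])   # ['bob', 'carol', 'alice']
```

The real subscriber would call something like these after decoding each published game-over message, accumulating one `(username, swings)` entry per finished game.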
os
computer-vision-uiuc
cs543 ece549 computer vision in 2018 spring uiuc instructors svetlana lazebnik http www cs illinois edu slazebni overview in the simplest terms computer vision is the discipline of teaching machines how to see this field dates back more than fifty years but the recent explosive growth of digital imaging and machine learning technologies makes the problems of automated image interpretation more exciting and relevant than ever there are two major themes in the computer vision literature 3d geometry and recognition the first theme is about using vision as a source of metric 3d information given one or more images of a scene taken by a camera with known or unknown parameters how can we go from 2d to 3d and how much can we tell about the 3d structure of the environment pictured in those images the second theme by contrast is all about vision as a source of semantic information can we recognize the objects people or activities pictured in the images and understand the structure and relationships of different scene components just as a human would this course will provide a coherent perspective on the different aspects of computer vision and give students the ability to understand state of the art vision literature and implement components that are fundamental to many modern vision systems prerequisites basic knowledge of probability linear algebra and calculus matlab programming experience and previous exposure to image processing are highly desirable recommended textbooks computer vision a modern approach by david forsyth and jean ponce 2nd ed computer vision algorithms and applications by richard szeliski pdf available online
computer-vision deep-learning
ai
Sparks-Foundation-Internship-Tasks
sparks foundation internship tasks this repo contains all tasks submitted under the internship of the sparks foundation 1 basic banking app 2 payment gateway integration 3 social media integration
sparks-fo mobile-application-development flutter tasks social-media-integration payment-gateway-intigration basic-banking-app sqlite android-application
front_end
se-shopping
se shopping simple shopping app using local database
server
Minimult
minimult minimal multitask library or rtos for cortex m microcontrollers minimult cortex m the rust crate that provides minimult https crates io crates minimult cortex m documentation https docs rs minimult cortex m examples board specific examples of how to use minimult
rust cortex-m multitask rtos
os
amazon-mq-workshop
amazon mq workshop lab guide overview of workshop labs the amazon mq workshop readme md introduces the relevant concepts and features of a message driven application using amazon mq https aws amazon com amazon mq you will learn how to set up an amazon mq broker and how to connect senders and receivers to exchange messages this also includes using different protocols to demonstrate the protocol interoperability features amazon mq provides we will dive into the security and monitoring features apache activemq provides out of the box and what amazon mq adds on top of them we will also look at how broker failover works in a multi az setup and how long it takes at the end we will look at how to integrate the broker with aws lambda for instructions go to https amazon mq intro workshop aws
os
Android
android mobile development
front_end
Sparkify-ETL
sparkify etl project overview this is the first project in the udacity data engineering nanodegree program the purpose is to build data pipelines that read in two types of json files and create a relational postgres database consisting of 5 tables in a star schema sparkify is a fictional music streaming service the raw data files contain data on songs and user listening activity the goal of this project is to create a database organized for queries that analyze song plays some of the code was provided as part of the project i provided all queries in sql queries py and much of the code in etl py although the general layout and functions were provided by udacity how to run 1 to run locally follow the instructions at the bottom of the readme below to install a docker image that will allow the postgresql db to run https github com kenhanscombe project postgres 2 run create tables py 3 run etl py 4 run or alter test py to view the contents of the databases schema design fact table songplays records in log data associated with song plays i e records with page nextsong songplay id start time user id level song id artist id session id location user agent dimension tables users users in the app user id first name last name gender level songs songs in music database song id title artist id year duration artists artists in music database artist id name location latitude longitude time timestamps of records in songplays broken down into specific units user id hour day week month year weekday song id with the rubric s schema this table did not have a way to be linked to the other tables so i added in user id and song id other notes 1 if i were to devote more time to this i would optimize the datatypes researching whether smaller data types would be appropriate for some columns 2 i would also research and implement a way to insert multiple rows of data at a time since it would likely improve performance
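The star schema above can be prototyped quickly in SQLite before committing to Postgres. This sketch builds just the songplays fact table and the users dimension (columns taken from the lists above, types assumed) and runs a typical song-play analysis join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# dimension table: users in the app
cur.execute("""
    CREATE TABLE users (
        user_id INTEGER PRIMARY KEY,
        first_name TEXT, last_name TEXT, gender TEXT, level TEXT)
""")

# fact table: songplays references the dimension tables by id
cur.execute("""
    CREATE TABLE songplays (
        songplay_id INTEGER PRIMARY KEY,
        start_time TEXT,
        user_id INTEGER REFERENCES users(user_id),
        level TEXT, song_id TEXT, artist_id TEXT,
        session_id INTEGER, location TEXT, user_agent TEXT)
""")

cur.execute("INSERT INTO users VALUES (1, 'Ada', 'Lovelace', 'F', 'paid')")
cur.execute("INSERT INTO songplays VALUES "
            "(1, '2018-11-01', 1, 'paid', 'S1', 'A1', 7, 'NYC', 'UA')")

# a song-play analysis query: join the fact table to a dimension
rows = cur.execute("""
    SELECT u.first_name, s.song_id
    FROM songplays s JOIN users u ON s.user_id = u.user_id
""").fetchall()
print(rows)  # [('Ada', 'S1')]
```

The point of the star layout is exactly this shape of query: one join per dimension, with all analysis filters living on the fact table.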
server
Embedded-System-Design
embedded system design this is a compilation of exercises and coding examples for embedded system design
os
OSSDC-VisionAI-Core
ossdc visionai a set of computer vision and artificial intelligence algorithms for robotics and self driving cars this project has support for race ossdc org webrtc based platform to allow for extensive and quick testing of computer vision and neural nets algorithms against live real life or simulated or streamed not live videos from youtube or other datasets to contribute follow the approach in video processing files to add your own algorithm and create a pr to integrate it in this project p align center ossdc visionai demo reel run the algoritms in google colab p p align center a href https colab research google com github ossdc ossdc visionai core blob master ossdc visionai demo reel ipynb target parent img src https camo githubusercontent com 52feade06f2fecbf006889a904d221e6a730c194 68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667 alt open demo reel in colab data canonical src https colab research google com assets colab badge svg a p oak d spacial ai camera demos p align center img src https github com ossdc ossdc visionai core blob master docs oak d gaze 1 gif raw true width 300px img src https github com ossdc ossdc visionai core blob master docs oak d pedestrian reidentification 1 gif raw true width 300px img src https github com ossdc ossdc visionai core master docs gifs oak d ssd 1 gif raw true width 300px p gaze estimation video can be found here https www youtube com watch v xmgnwrwytok pedestrian re identification video can be found here https www youtube com watch v pb0bphieu3y ssd object detection video can be found here midas mono depth demos p align center img src https github com ossdc ossdc visionai core blob master docs midas person 1 gif raw true width 300px img src https github com ossdc ossdc visionai core blob master docs midas night 1 gif raw true width 300px img src https github com ossdc ossdc visionai core blob master docs midas objects 1 gif raw true width 300px p midas 
mono depth person demo video can be found here https www youtube com watch v 6a6bqjizuam midas mono depth night walk demo video can be found here https www youtube com watch v t0znw1crm7m midas mono depth objects demo video can be found here datasets and pretrained models are available in https github com ossdc ossdc visionai datasets project install prerequisites pip install opencv-python required for all video processors pip install opencv-contrib-python required for video processing opencv pip install aiortc aiohttp websockets python-engineio==3.14.2 python-socketio[client]==4.6.0 required for webrtc pip install dlib required for face landmarks pip install torch torchvision pip install tensorflow-gpu pip install youtube-dl required for youtube streaming sources install ossdc visionai android client app download and install the alpha version from here https github com ossdc ossdc visionai mobile releases demos prerequisite steps every time before running the python video processing scripts run visionai android app and set up the room name and password and start the webrtc conference update room info in signaling race py every time the room name or password is modified in the visionai android app segformer semantic segmentation with transformers demo install segformer https github com nvlabs segformer see install steps in video processing segformer py or ossdc visionai demo reel ipynb notebook run the segformer video processor on the video stream from visionai android app python race ossdc org webrtc processing py t segformer b3 512 ade room your room name demo reel sh your room name enable segformer line demo videos segformer semantic segmentation with transformers using ossdc visionai platform https www youtube com watch v 3ws irf4deq gansnroses demo install gansnroses https github com mchong6 gansnroses see install steps in video processing gansnroses py or ossdc visionai demo reel ipynb notebook run the gansnroses video processor on the video stream from visionai
android app python race ossdc org webrtc processing py t gansnroses room your room name demo reel sh your room name enable gansnroses line demo videos have fun with gansnroses using ossdc visionai realtime video processing platform https www youtube com watch v yztzjk qh4w depthai oak d stereo smart camera side by side 3d streaming demo install latest depthai api from https github com luxonis depthai python run the depthai video processor on the stereo or rgb video stream from oak d camera and stream it to visionai android app python race ossdc org webrtc processing py t depthai sbs room your room name demo reel sh your room name enable depthai sbs line python race ossdc org webrtc processing py t depthai rgb room your room name demo reel sh your room name enable depthai rgb line demo videos live 3d video streamed over internet from a depthai oak d with ossdc visionai https www youtube com watch v 28awrl5mipq use a vr head set to see the 3d depth detectron2 demo install detectron2 see install steps in video processing detectron2 py or ossdc visionai demo reel ipynb notebook run the detectron2 video processor on the video stream from visionai android app python race ossdc org webrtc processing py t detectron2 room your room name demo reel sh your room name enable detectron2 line demo videos tbd deepmind nfnets demo install deepmind nfnets see install steps in video processing deepmind py or ossdc visionai demo reel ipynb notebook run the deepmind nfnets video processor on the video stream from visionai android app python race ossdc org webrtc processing py t deepmind nfnets room your room name demo reel sh your room name enable deepmind nfnets line demo samples images https www linkedin com feed update urn li activity 6766007580679557120 commenturn urn 3ali 3acomment 3a 28activity 3a6766007580679557120 2c6768387554418016256 29 mediapipe holistic demo install mediapipe see install steps in video processing mediapipe py or ossdc visionai demo reel ipynb notebook run 
the mediapipe holistic video processor on the video stream from visionai android app python race ossdc org webrtc processing py t mediapipe holistic room your room name demo reel sh your room name enable mediapipe holistic line demo video mediapipe holistic demo isn t this fun mediapipe holistic neural net model processed in real time on google cloud https www youtube com watch v 0l9bb5ic86e oak d gaze estimation demo the proceessing is done on luxonis oak d camera vision processing unit https store opencv ai products oak d install oak d depthai see install steps in video processing oakd py run the oak d video processor on the video stream from visionai android app python race ossdc org webrtc processing py t oakd gaze demo video gaze estimation demo with processing done on luxonis oak d camera processor processing at 10 fps on 486 x 1062 video streamed at 30 fps https www youtube com watch v xmgnwrwytok oak d people reidentification demo the proceessing is done on luxonis oak d camera vision processing unit https store opencv ai products oak d run visionai android app and setup the room and start the webrtc conference install oak d depthai see install steps in video processing oakd py run the oak d video processor on the video stream from visionai android app python race ossdc org webrtc processing py t oakd pre demo video people reidentification demo with processing done on luxonis oak d camera processor processing at 9 fps on 486 x 1062 video streamed at 30 fps https www youtube com watch v pb0bphieu3y oak d age and genrer recognition demo the proceessing is done on luxonis oak d camera vision processing unit https store opencv ai products oak d install oak d depthai see install steps in video processing oakd py run the oak d video processor on the video stream from visionai android app python race ossdc org webrtc processing py t oakd age gen demo video upcomming midas mono depth processing is done on nvidia gpu run visionai android app and setup the room and 
start the webrtc conference install midas see install steps in video processing midas py run the midas video processor on the video stream from visionai android app python race ossdc org webrtc processing py t midas demo videos mono depth over webrtc using race ossdc org platform https www youtube com watch v 6a6bqjizuam ossdc visionai midas mono depth night demo https www youtube com watch v t0znw1crm7m dlib face landmarks processing is done on cpu install dlib and face landmarks pretrained model see instructions steps in video processing face landmarks py run the dlib face landmarks video processor on the video stream from visionai android app python race ossdc org webrtc processing py t face landmarks opencv edges detection processing is done on cpu run the opencv edges video processor on the video stream from visionai android app python race ossdc org webrtc processing py t opencv edges
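All of the demos above follow the same pattern: one script receives frames over WebRTC and hands each frame to the processor selected with `-t`. A minimal sketch of that kind of name-keyed processor registry is below; the registry, decorator, and the toy edge filter are illustrative assumptions for this sketch, not the project's actual API (in the real script, the processor would call a model such as `cv2.Canny` or MiDaS on each frame).

```python
# Sketch of a frame-processor registry keyed by the -t transform name.
# Hypothetical names; the real dispatch lives in the project's
# webrtc-processing script.
from typing import Callable, Dict, List

Frame = List[List[int]]  # grayscale frame as rows of pixel values

PROCESSORS: Dict[str, Callable[[Frame], Frame]] = {}

def register(name: str):
    """Decorator that registers a frame processor under a -t name."""
    def wrap(fn: Callable[[Frame], Frame]) -> Callable[[Frame], Frame]:
        PROCESSORS[name] = fn
        return fn
    return wrap

@register("opencv-edges")
def edges(frame: Frame) -> Frame:
    """Toy horizontal-gradient edge filter (stand-in for a real detector)."""
    return [
        [abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
        for row in frame
    ]

def process(transform: str, frame: Frame) -> Frame:
    """Look up the processor chosen with -t and apply it to one frame."""
    return PROCESSORS[transform](frame)

if __name__ == "__main__":
    print(process("opencv-edges", [[0, 0, 255, 255]]))  # -> [[0, 255, 0]]
```

With this shape, adding a demo is just registering one more function, which matches how each demo above only differs by its `-t` flag.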
computer-vision artificial-intelligence neural-network algorithms datasets ossdc-visionai videos
ai
silverstripe-betternavigator
# BetterNavigator for SilverStripe

![Diagram of module](images/demo.png)

This module is intended to replicate and expand upon the functionality provided by SilverStripe's built-in SilverStripeNavigator class. It provides a handy front-end menu for CMS users which offers these features.

For content authors:

- Indicates to a user that they are logged in
- Indicates whether they are viewing draft or live content
- Quickly edit the page you're viewing

For developers:

- When in dev mode, links are included for accessing most of SilverStripe's URL variable tools (http://doc.silverstripe.org/framework/en/reference/urlvariabletools)
- Developers can access these tools on a live website by nominating themselves as a developer in the site config

## Requirements

SilverStripe 5.0 (4.0 and 3.1+ through previous releases)

## Installation

Add jonom/silverstripe-betternavigator to your composer requirements:

```
composer require jonom/silverstripe-betternavigator
```

## Upgrading

### 6.0

The namespace for this module's templates and configuration was changed in v6 to include a `JonoM` prefix. You may need to update your template directory structure and/or app configuration accordingly.

## How to use

The navigator is auto-injected into your template and no code changes are needed. If your website uses caching, make sure BetterNavigator's output is excluded.

### Access developer tools on a live website

You can mark certain CMS users as developers in your site's config, so they can access developer tools when logged in. Example:

```yaml
JonoM\BetterNavigator\BetterNavigator:
  developers:
    - 'dev@yoursite.com'
    - 'otherdev@yoursite.com'
```

## Customisation

### Navigator display

You can control whether the navigator is displayed by defining a `showBetterNavigator()` (bool) method in any controller with the extension applied. By default the navigator will only show on controllers that have a `dataRecord` property that is an instance of `SilverStripe\CMS\Model\SiteTree`.

```php
public function showBetterNavigator()
{
    // A user defined setting
    return $this->showDebugTools;
}
```

### Layout options

BetterNavigator can be made translucent when collapsed by adding the following config setting:

```yaml
JonoM\BetterNavigator\BetterNavigator:
  translucent: true
```

BetterNavigator's default position is right top, but can be changed to right bottom, left top or left bottom. Example:

```yaml
JonoM\BetterNavigator\BetterNavigator:
  position: 'right bottom'
```

### Template additions / overrides

BetterNavigator's output is controlled by templates, so it can be easily overridden (https://docs.silverstripe.org/en/5/developer_guides/templates/template_inheritance/#cascading-themes). Some empty include placeholders are included to let you easily add more content (new buttons, for instance). Just create any of these templates in your theme or app directory and add your content:

- `templates/JonoM/BetterNavigator/Includes/BetterNavigatorExtraContent.ss`
- `templates/JonoM/BetterNavigator/Includes/BetterNavigatorExtraDebugging.ss`
- `templates/JonoM/BetterNavigator/Includes/BetterNavigatorExtraDevTools.ss`

The BetterNavigator.ss template's scope is set to the page that is being viewed, so any methods available in your page controller will be available in the BetterNavigator.ss template. This should allow you to add custom links by page type, and introduce complex logic if you want to.

### Overriding the Edit in CMS link

There may be occasions when you wish to override the 'Edit in CMS' link, for example to point to the edit form for a displayed DataObject rather than for the page itself. To do so, simply add a `BetterNavigatorEditLink()` method to your page's controller, e.g.:

```php
// EventsPageController.php

/**
 * Return an alternative URL for the BetterNavigator 'Edit in CMS' link.
 * @return string
 */
public function BetterNavigatorEditLink()
{
    $event = $this->displayedEvent;
    return $event->canEdit()
        ? CMSEditLinkAPI::find_edit_link_for_object($event)
        : false;
}
```

This example uses sunnysideup/cms-edit-link-field (https://github.com/sunnysideup/silverstripe-cms-edit-link-field) to automatically find an edit link for a specified DataObject, but you can return any URL.

### Overriding the permissions required for the CMS edit link

By default, users are required to have at least the `CMS_ACCESS_CMSMain` permission in order to see the edit link in BetterNavigator. You can override this by setting the `better_navigator_edit_permission` configuration option on your controller to another permission code, or an array of permission codes, e.g.:

```yml
My\Namespace\EventController:
  better_navigator_edit_permission:
    - 'CUSTOM_PERMISSION_CODE'
  better_navigator_edit_permission_mode: 'any' # optional, can be either 'any' or 'all'; defaults to 'all'
```

## Recommended companions

- **DebugBar**, for better debugging tools: this module provides quick access to SilverStripe's built-in URL variable tools (https://docs.silverstripe.org/en/developer_guides/debugging/url_variable_tools/#url-variable-tools), but reading their output isn't much fun. You can peek under SilverStripe's hood much more conveniently using Lekoala's silverstripe-debugbar (https://github.com/lekoala/silverstripe-debugbar).
- **Environment Awareness**, to save your sites from yourself: Environment Awareness (https://github.com/jonom/silverstripe-environment-awareness) makes it obvious which environment you're in, to make it less likely that you nuke something in prod. You can display the current environment right in the navigator (https://github.com/jonom/silverstripe-environment-awareness/blob/master/docs/en/how-to-use.md#include-front-end-environment-notice).

## Maintainer contact

Jono Menz (https://jonomenz.com)

## Sponsorship

If you want to boost morale of the maintainer, you're welcome to make a small monthly donation through GitHub (https://github.com/sponsors/jonom), or a one-time donation through PayPal (https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=Z5HEZREZSKA6A). Thank you! Please also feel free to get in touch (https://jonomenz.com) if you want to hire the maintainer to develop a new feature, or discuss another opportunity.
front_end