| names (stringlengths: 1-98) | readmes (stringlengths: 8-608k) | topics (stringlengths: 0-442) | labels (stringclasses: 6 values) |
---|---|---|---|
keystone | keystoneml the biggest baddest pipelines around example pipeline build the keystoneml project sbt sbt assembly make this builds the native libraries used in keystoneml example mnist pipeline get the data from s3 wget http mnist data s3 amazonaws com train mnist dense with labels data wget http mnist data s3 amazonaws com test mnist dense with labels data keystone mem 4g bin run pipeline sh keystoneml pipelines images mnist mnistrandomfft trainlocation train mnist dense with labels data testlocation test mnist dense with labels data numffts 4 blocksize 2048 running with spark submit to run keystoneml pipelines on large datasets you will need a spark http spark apache org cluster keystoneml pipelines run on the cluster using spark submit http spark apache org docs latest submitting applications html you need to export spark home to run keystoneml using spark submit having done that you can similarly use run pipeline sh to launch your pipeline export spark home spark 1 3 1 bin cdh4 should match the version keystone is built with keystone mem 4g bin run pipeline sh keystoneml pipelines images mnist mnistrandomfft trainlocation train mnist dense with labels data testlocation test mnist dense with labels data numffts 4 blocksize 2048 | ai |
|
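A runnable sketch of the KeystoneML build-and-run sequence flattened in the row above. The pipeline class path, flag casing, and data file names are reconstructed guesses from the lowercased text; adjust them against the real repository before use.

```python
# Hypothetical launcher for the MNIST example pipeline; flag casing and the
# pipeline class path are reconstructed from the flattened README text.
import os
import subprocess

env = dict(os.environ)
# Must match the Spark version KeystoneML was built against (1.3.1 per the row above).
env["SPARK_HOME"] = "/opt/spark-1.3.1-bin-cdh4"
env["KEYSTONE_MEM"] = "4g"

subprocess.run(
    [
        "bin/run-pipeline.sh",
        "keystoneml.pipelines.images.mnist.MnistRandomFFT",
        "--trainLocation", "train-mnist-dense-with-labels.data",
        "--testLocation", "test-mnist-dense-with-labels.data",
        "--numFfts", "4",
        "--blockSize", "2048",
    ],
    env=env,
    check=True,  # raise if the pipeline exits non-zero
)
```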
blockchain-http | blockchain http ci https github com helium blockchain http actions workflows ci yml badge svg https github com helium blockchain http actions workflows ci yml codecov https codecov io gh helium blockchain http branch master graph badge svg https codecov io gh helium blockchain http this is an erlang application to serve up the helium blockchain as stored by the blockchain etl https github com helium blockchain etl service and schema the two applications rely on the schema being compatible to work developer usage clone this repository create env file by copying env template and editing it to reflect your postgres read only and read write access urls run make release in the top level folder run make start to start the application logs will be at build default rel blockchain http log once started the application will start serving up the blockchain through a number of routes documentation for these routes will be added soon installing ubuntu required packages if running on ubuntu you will need the following packages installed before running make release bash wget https packages erlang solutions com erlang solutions 2 0 all deb sudo dpkg i erlang solutions 2 0 all deb sudo apt get update sudo apt install esl erlang 1 23 2 3 1 cmake libsodium dev libssl dev sudo apt install build essential warning this application does not serve up over tls and does not rate control or access control clients please run this service behind a load balancer that terminates ssl and does some rate and access control using docker building the docker image docker build t helium api running the docker container docker run d init restart unless stopped publish 8080 8080 tcp name api mount type bind source home api data target var data e database ro url postgresql user pass 127 0 0 1 5432 helium blockchain e database rw url postgresql user pass 127 0 0 1 5432 helium blockchain e database ro pool size 10 helium api updating docker navigate to your copy of the blockchain http repository cd path to blockchain http stop the docker container docker stop api remove the existing docker container docker rm api update the repository git pull rebuild the docker image docker build t helium api run the updated docker container docker run d init restart unless stopped publish 8080 8080 tcp name api mount type bind source home api data target var data e database ro url postgresql user pass 127 0 0 1 5432 helium blockchain e database rw url postgresql user pass 127 0 0 1 5432 helium blockchain e database ro pool size 10 helium api | helium-blockchain blockchain rest-api | blockchain |
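The `docker run` invocation in the row above has lost its punctuation; below is a best-effort reconstruction, driven from Python for consistency with the other examples in this file. The environment-variable names, postgres DSNs, and image tag (`helium/api`) are inferred from the flattened text, not verified against the repository.

```python
# Best-effort reconstruction of the flattened `docker run` command; env-var
# names and the image tag are guesses from the README text.
import subprocess

subprocess.run(
    [
        "docker", "run", "-d", "--init",
        "--restart", "unless-stopped",
        "--publish", "8080:8080/tcp",
        "--name", "api",
        "--mount", "type=bind,source=/home/api/data,target=/var/data",
        "-e", "DATABASE_RO_URL=postgresql://user:pass@127.0.0.1:5432/helium_blockchain",
        "-e", "DATABASE_RW_URL=postgresql://user:pass@127.0.0.1:5432/helium_blockchain",
        "-e", "DATABASE_RO_POOL_SIZE=10",
        "helium/api",
    ],
    check=True,
)
```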
CryptoVulhub | cryptovulhub analyze and reproduce attack events or vulnerabilities in the blockchain world list polygon double spend replay polygondoublespend compound tusd sweeptoken bypass replay compoundtusdsweeptokenbypass visor finance re hack visorfinance20211221 multichain re hack multichain20220118 treasuredao re hack treasuredao20220303 bacon protocol re hack baconprotocol20220305 fantasm finance re hack fantasmfinance20220309 paraluni re hack paraluni20220313 onering finance re hack oneringfinance20220321 auctus re hack auctus20220326 revest finance re hack revestfinance20220327 starstream finance re hack starstreamfinance20220408 gym network re hack gymnetwork20220409 elephant money re hack elephantmoney20220412 rikkei finance re hack rikkeifinance20220415 beanstalk farms re hack beanstalkfarms20220417 zeed finance re hack zeedfinance20220422 | blockchain |
|
creating-song-database | data modelling with postgres case study creating a database for sparkify the objective is to create a relational database using postgresql based in two datasets songs we have a subset from which we will extract information about songs and their artists in the future when running this project locally we ll bring on a bigger subset as in this case we found that several artists or songs were not found and our songplays table has too many nulls in artist and song ids log we will extract user play interactions to create the database and store a clean version of this data for future analysis target schema fact table songplays records in log data associated with song plays i e records with page nextsong columns songplay id start time user id level song id artist id session id location user agent dimension tables users users in the app user id first name last name gender level songs songs in music database song id title artist id year duration artists artists in music database artist id name location latitude longitude time timestamps of records in songplays broken down into specific units start time hour day week month year weekday steps taken 1 wrote all sql queries regarding drop and creation of all tables and revised the script in create tables py running this script will create a clean resetted version of the tables 2 started the etl process with etl ipynb using python which calls for postgresql functions written in sql queries py 3 completed the script in etl py which reads the datasets and populates the tables 4 during the process test ipynb was used to read what has been written in the tables to keep track of the progress how to run the scripts in order to run this project locally you will have to 1 create a postgresql server you can find the installation files here https www postgresql org download 2 from the installation you will define a username and password to your server to securely store them create a file called credentials py where you should declare two variables like this python user user example password password example it s important to include this script in your gitignore file this way your password won t be uploaded to github in the case you publish the project this script will be called by the functions to retrieve the server s username and password 3 run your scripts from your terminal careful make sure you don t have a database already called sparkifydb as this script will drop and create a new one as well as the tables in it first to create the database python3 create tables py then to fill the tables python3 etl py if all went well you will see printed that all files were loaded in the tables 4 to test if the data has been properly inserted you can run the jupyter notebook test ipynb and look into what has been inserted in the tables description of the files data where all json files are stored this data will be retrieved to populate the database create tables py script that will create the database and the tables dropping previous versions if they exist etl ipynb jupyter notebook where the first run of extraction transform and load is executed the code used in this script has been later used to create the final etl py open to see more details in the transformation process be careful not to leave the notebook running as it won t close the connection to the server etl py script used to extract transform and load the data from the data file in this repository to the local database sql queries py this python file stores all sql queries as 
variables and are called by other scripts for better readability in the code test ipynb run it to test if the tables have been created properly and to visualize the content reset or close to disconnect from the database to do to be improved make a diagram of the database for this readme file test this project with a bigger database and not a sample of it | server |
|
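The credentials pattern described in the row above, made concrete. The two variable names come straight from the README; the connection snippet is a minimal sketch assuming a local PostgreSQL server and the `sparkifydb` database that `create_tables.py` creates.

```python
# credentials.py -- keep this file in .gitignore, as the README advises.
user = "user_example"
password = "password_example"
```

```python
# Minimal sanity check after running etl.py: count rows in the fact table.
# Assumes a local server and the sparkifydb database from create_tables.py.
import psycopg2

from credentials import user, password

conn = psycopg2.connect(host="127.0.0.1", dbname="sparkifydb",
                        user=user, password=password)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM songplays")
print("songplays rows:", cur.fetchone()[0])
conn.close()
```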
node-ledger-web | ledger web web front end to access the ledger command line interface ledger cli org http ledger cli org ledger is a powerful double entry accounting system that is accessed from the unix command line ledger web doc home preview png income income compared to expenditure over time daily monthly or yearly income doc income preview png spending over time and grouped by expense daily monthly or yearly spending doc spending preview png net worth assets minus liabilities over time daily monthly or yearly net worth doc net worth preview png balance breakdown of transactions filterable by type balance doc balance preview png dependencies ledger 3 http ledger cli org node js nodejs org and npm installing ledger the simplest way to install ledger 3 is through homebrew http mxcl github com homebrew brew install ledger head the head option is required to install version 3 x usage clone the node ledger web git repository from github git clone https github com slashdotdash node ledger web git install the dependencies with npm cd node ledger web npm install bower is used to manage javascript and css dependencies install it and our dependencies npm install g bower bower install grunt is used for building the front end assets install grunt and run its default build task npm install g grunt cli grunt finally run the express application and open http localhost 3000 http localhost 3000 in a web browser node app js two http servers will be started one to listen on port 3000 for web requests and one on port 3001 for api requests configuration copy and edit the sample config cp sample config json config json vim config json binary specify the ledger binary path leave it as ledger if it s already on your path otherwise specify the absolute path file specify the path to your ledger file | front_end |
|
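Since the row above describes copying `sample-config.json` to `config.json` and editing the `binary` and `file` keys, here is a small sketch that writes such a config programmatically; the key names come from the README, both values are placeholders.

```python
# Write a node-ledger-web config.json; key names come from the README,
# the values are placeholders for your own setup.
import json

config = {
    "binary": "ledger",                     # or an absolute path if not on $PATH
    "file": "/home/me/finance/ledger.dat",  # path to your ledger file
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```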
SQL | employee database a mystery in two parts sql png sql png background it s a beautiful spring day and it s been two weeks since you were hired as a data engineer at pewlett hackard your first major task is a research project on company employees from the 1980s and 1990s all that remains from the database of employees from that period are six csv files you ve been asked to design the tables to hold data in the csvs import the csvs into a sql database and answer questions about the data in other words you will perform 1 data modeling 2 data engineering 3 data analysis instructions data modeling inspect the csvs and sketch out an erd of the tables feel free to use a tool like http www quickdatabasediagrams com http www quickdatabasediagrams com data engineering use the information you have to create a table schema for each of the six csv files remember to specify data types primary keys foreign keys and other constraints import each csv file into the corresponding sql table data analysis once you have a complete database do the following 1 list the following details of each employee employee number last name first name gender and salary 2 list employees who were hired in 1986 3 list the manager of each department with the following information department number department name the manager s employee number last name first name and start and end employment dates 4 list the department of each employee with the following information employee number last name first name and department name 5 list all employees whose first name is hercules and last names begin with b 6 list all employees in the sales department including their employee number last name first name and department name 7 list all employees in the sales and development departments including their employee number last name first name and department name 8 in descending order list the frequency count of employee last names i e how many employees share each last name bonus optional as you examine the data you are overcome with a creeping suspicion that the dataset is fake you surmise that your boss handed you spurious data in order to test the data engineering skills of a new employee to confirm your hunch you decide to take the following steps to generate a visualization of the data with which you will confront your boss 1 import the sql database into pandas yes you could read the csvs directly in pandas but you are after all trying to prove your technical mettle this step may require some research feel free to use the code below to get started be sure to make any necessary modifications for your username password host port and database name sql from sqlalchemy import create engine engine create engine postgresql localhost 5432 your db name connection engine connect consult sqlalchemy documentation https docs sqlalchemy org en latest core engines html postgresql for more information if using a password do not upload your password to your github repository see https www youtube com watch v 2uatpmnvh0i https www youtube com watch v 2uatpmnvh0i and https martin thoma com configuration files in python https martin thoma com configuration files in python for more information 2 create a bar chart of average salary by title 3 you may also include a technical report in markdown format in which you outline the data engineering steps taken in the homework assignment epilogue evidence in hand you march into your boss s office and present the visualization with a sly grin your boss thanks you for your work on your way out of the office you hear 
the words search your id number you look down at your badge to see that your employee id number is 499942 submission create an image file of your erd create a sql file of your table schemata create a sql file of your queries optional create a jupyter notebook of the bonus analysis copyright trilogy education services 2019 all rights reserved | server |
|
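The bonus step in the row above already sketches the SQLAlchemy connection; below is a cleaned-up version that also draws the requested bar chart of average salary by title. The join and column names are assumptions about the schema you design in the data-engineering step, and plotting requires matplotlib to be installed.

```python
# Bonus analysis sketch: average salary by title, read via SQLAlchemy into
# pandas. Table/column names (salaries, titles, emp_no, salary, title) are
# assumed; match them to your own schema.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://localhost:5432/your_db_name")

df = pd.read_sql(
    """
    SELECT t.title, AVG(s.salary) AS avg_salary
    FROM salaries AS s
    JOIN titles  AS t ON t.emp_no = s.emp_no
    GROUP BY t.title
    ORDER BY avg_salary DESC
    """,
    engine,
)
df.plot.bar(x="title", y="avg_salary", legend=False,
            title="Average salary by title")
```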
SQL-challenge | sql challenge data modeling inspect the csvs employees csv s data and sketch out an erd of the tables employees database erd employeesql employees db erd png data engineering using the information from the erd i have created a database table schema for each of the six csv files in the employees table schema employeesql employee tables schema sql file specifying the data types primary keys foreign keys and other constraints data analysis in the employees db queries employeesql employees db queries sql file you will find sql queries that display the requested information below 1 list the employee number last name first name sex and salary of each employee 2 list first name last name and hire date for employees who were hired in 1986 3 list department number department name the manager s employee number last name first name for the manager of each department 4 list the department of each employee with the following information employee number last name first name and department name 5 list first name last name and sex for employees whose first name is hercules and last names begin with b 6 list all employees in the sales department including their employee number last name first name and department name 7 list all employees in the sales and development departments including their employee number last name first name and department name 8 in descending order list the frequency count of employee last names i e how many employees share each last name data visualization with pandas in the employee db visualization in python employeesql employee db analysis ipynb file i have imported the employees sql database into pandas for further analysis and visualization the file contains the code for importing the sql database into pandas as well as the below graphs a histogram to visualize the most common salary ranges for employees a bar chart of average salary by title | postgresql python pandas sqlalchemy | server |
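For the sibling project above, one of the listed queries (employees hired in 1986) as it might look when pulled into pandas for the visualization step; the database and schema names are again assumptions.

```python
# Query 2 from the list above, assuming an `employees` table with
# first_name, last_name and hire_date columns.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://localhost:5432/employees_db")
hired_1986 = pd.read_sql(
    """
    SELECT first_name, last_name, hire_date
    FROM employees
    WHERE hire_date BETWEEN '1986-01-01' AND '1986-12-31'
    """,
    engine,
)
print(hired_1986.head())
```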
imageprocessing-labs | image processing and machine learning labs ensp license mit https img shields io badge license mit blue svg style flat square https opensource org licenses mit computer vision image processing and machine learning on the web browser or node note fast fourier transform 1d 2d fft stereo matching poisson image editing line segment detector corner detection fish eye transform image processing filters image histogram calculation image feature extraction decision tree learning k means clustering logistic regression adaptive regularization of weight vectors arow soft confidence weighted learning scw gradient boosting decision tree gbdt neural network denoising autoencoders t distributed stochastic neighbor embedding t sne 3d shape drawing mobius strip klein bottle heart surface webgl samples onnx runtime for web ort web etc demo demo site http rest term com labs html5 index html mirror https secret nether 01 herokuapp com license copyright copy 2017 wellflat licensed under the mit license mit mit http www opensource org licenses mit license php | javascript machine-learning computer-vision image-processing | ai |
bsc | bnb smart chain the goal of bnb smart chain is to bring programmability and interoperability to bnb beacon chain in order to embrace the existing popular community and advanced technology it will bring huge benefits by staying compatible with all the existing smart contracts on ethereum and ethereum tooling and to achieve that the easiest solution is to develop based on go ethereum fork as we respect the great work of ethereum very much bnb smart chain starts its development based on go ethereum fork so you may see many toolings binaries and also docs are based on ethereum ones such as the name geth api reference https camo githubusercontent com 915b7be44ada53c290eb157634330494ebe3e30a 68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667 https pkg go dev github com ethereum go ethereum tab doc discord https img shields io badge discord join 20chat blue svg https discord gg z2vpc455eu but from that baseline of evm compatible bnb smart chain introduces a system of 21 validators with proof of staked authority posa consensus that can support short block time and lower fees the most bonded validator candidates of staking will become validators and produce blocks the double sign detection and other slashing logic guarantee security stability and chain finality cross chain transfer and other communication are possible due to native support of interoperability relayers and on chain contracts are developed to support that bnb beacon chain dex remains a liquid venue of the exchange of assets on both chains this dual chain architecture will be ideal for users to take advantage of the fast trading on one side and build their decentralized apps on the other side the bnb smart chain will be a self sovereign blockchain provides security and safety with elected validators evm compatible supports all the existing ethereum tooling along with faster finality and cheaper transaction fees interoperable comes with efficient native dual chain communication optimized for scaling high performance dapps that require fast and smooth user experience distributed with on chain governance proof of staked authority brings in decentralization and community participants as the native token bnb will serve as both the gas of smart contract execution and tokens for staking more details in white paper https www bnbchain org en smartchain key features proof of staked authority although proof of work pow has been approved as a practical mechanism to implement a decentralized network it is not friendly to the environment and also requires a large size of participants to maintain the security proof of authority poa provides some defense to 51 attack with improved efficiency and tolerance to certain levels of byzantine players malicious or hacked meanwhile the poa protocol is most criticized for being not as decentralized as pow as the validators i e the nodes that take turns to produce blocks have all the authorities and are prone to corruption and security attacks other blockchains such as eos and cosmos both introduce different types of deputy proof of stake dpos to allow the token holders to vote and elect the validator set it increases the decentralization and favors community governance to combine dpos and poa for consensus bnb smart chain implement a novel consensus engine called parlia that 1 blocks are produced by a limited set of validators 2 validators take turns to produce blocks in a poa manner similar to ethereum s clique consensus engine 3 validator set are 
elected in and out based on a staking based governance on bnb beacon chain 4 the validator set change is relayed via a cross chain communication mechanism 5 parlia consensus engine will interact with a set of system contracts https docs bnbchain org docs learn system contract to achieve liveness slash revenue distributing and validator set renewing func light client of bnb beacon chain to achieve the cross chain communication from bnb beacon chain to bnb smart chain need introduce a on chain light client verification algorithm it contains two parts 1 stateless precompiled contracts https github com bnb chain bsc blob master core vm contracts lightclient go to do tendermint header verification and merkle proof verification 2 stateful solidity contracts https github com bnb chain bsc genesis contract blob master contracts tendermintlightclient sol to store validator set and trusted apphash native token bnb will run on bnb smart chain in the same way as eth runs on ethereum so that it remains as native token for bsc this means bnb will be used to 1 pay gas to deploy or invoke smart contract on bsc 2 perform cross chain operations such as transfer token assets across bnb smart chain and bnb beacon chain building the source many of the below are the same as or similar to go ethereum for prerequisites and detailed build instructions please read the installation instructions https geth ethereum org docs getting started installing geth building geth requires both a go version 1 19 or later and a c compiler gcc 5 or higher you can install them using your favourite package manager once the dependencies are installed run shell make geth or to build the full suite of utilities shell make all if you get such error when running the node with self built binary shell caught sigill in blst cgo init consult blst bindinds go readme md please try to add the following environment variables and build again shell export cgo cflags o d blst portable export cgo cflags allow o d blst portable executables the bsc project comes with several wrappers executables found in the cmd directory command description geth main bnb smart chain client binary it is the entry point into the bsc network main test or private net capable of running as a full node default archive node retaining all historical state or a light node retrieving data live it has the same and more rpc and other interface as go ethereum and can be used by other processes as a gateway into the bsc network via json rpc endpoints exposed on top of http websocket and or ipc transports geth help and the cli page https geth ethereum org docs interface command line options for command line options clef stand alone signing tool which can be used as a backend signer for geth devp2p utilities to interact with nodes on the networking layer without running a full blockchain abigen source code generator to convert ethereum contract definitions into easy to use compile time type safe go packages it operates on plain ethereum contract abis https docs soliditylang org en develop abi spec html with expanded functionality if the contract bytecode is also available however it also accepts solidity source files making development much more streamlined please see our native dapps https geth ethereum org docs dapp native bindings page for details bootnode stripped down version of our ethereum client implementation that only takes part in the network node discovery protocol but does not run any of the higher level application protocols it can be used as a lightweight bootstrap 
node to aid in finding peers in private networks evm developer utility version of the evm ethereum virtual machine that is capable of running bytecode snippets within a configurable environment and execution mode its purpose is to allow isolated fine grained debugging of evm opcodes e g evm code 60ff60ff debug run rlpdump developer utility tool to convert binary rlp recursive length prefix https ethereum org en developers docs data structures and encoding rlp dumps data encoding used by the ethereum protocol both network as well as consensus wise to user friendlier hierarchical representation e g rlpdump hex ce0183ffffffc4c304050583616263 running geth going through all the possible command line flags is out of scope here please consult our cli wiki page https geth ethereum org docs fundamentals command line options but we ve enumerated a few common parameter combos to get you up to speed quickly on how you can run your own geth instance hardware requirements the hardware must meet certain requirements to run a full node on mainnet vps running recent versions of mac os x linux or windows important 2 5 tb may 2023 of free disk space solid state drive ssd gp3 8k iops 250 mb s throughput read latency 1ms if node is started with snap sync it will need nvme ssd 16 cores of cpu and 64 gb of memory ram suggest m5zn 3xlarge instance type on aws c2 standard 16 on google cloud a broadband internet connection with upload download speeds of 5 mb s the requirement for testnet vps running recent versions of mac os x linux or windows 500g of storage for testnet 4 cores of cpu and 8 gigabytes of memory ram steps to run a fullnode 1 download the pre build binaries shell linux wget curl s https api github com repos bnb chain bsc releases latest grep browser grep geth linux cut d f4 mv geth linux geth chmod v u x geth macos wget curl s https api github com repos bnb chain bsc releases latest grep browser grep geth mac cut d f4 mv geth macos geth chmod v u x geth 2 download the config files shell mainnet wget curl s https api github com repos bnb chain bsc releases latest grep browser grep mainnet cut d f4 unzip mainnet zip testnet wget curl s https api github com repos bnb chain bsc releases latest grep browser grep testnet cut d f4 unzip testnet zip 3 download snapshot download latest chaindata snapshot from here https github com bnb chain bsc snapshots follow the guide to structure your files note if you can not download the chaindata snapshot and want to sync from genesis you have to generate the genesis block first you have already get the genesis json in step 2 so just run geth datadir datadir init genesis json 4 start a full node shell geth config config toml datadir node cache 8000 rpc allow unprotected txs txlookuplimit 0 it is recommand to run fullnode with tries verify mode none if you want high performance and care little about state consistency geth config config toml datadir node cache 8000 rpc allow unprotected txs txlookuplimit 0 tries verify mode none 5 monitor node status monitor the log from node bsc log by default when the node has started syncing should be able to see the following output shell t 2022 09 08t13 00 27 0000 lvl info msg imported new chain segment blocks 1 txs 177 mgas 17 317 elapsed 31 131ms mgasps 556 259 number 21 153 429 hash 0x42e6b54ba7106387f0650defc62c9ace3160b427702dab7bd1c5abb83a32d8db dirty 0 00 b t 2022 09 08t13 00 29 0000 lvl info msg imported new chain segment blocks 1 txs 251 mgas 39 638 elapsed 68 827ms mgasps 575 900 number 21 153 430 hash 
0xa3397b273b31b013e43487689782f20c03f47525b4cd4107c1715af45a88796e dirty 0 00 b t 2022 09 08t13 00 33 0000 lvl info msg imported new chain segment blocks 1 txs 197 mgas 19 364 elapsed 34 663ms mgasps 558 632 number 21 153 431 hash 0x0c7872b698f28cb5c36a8a3e1e315b1d31bda6109b15467a9735a12380e2ad14 dirty 0 00 b 6 interact with fullnode start up geth s built in interactive javascript console https geth ethereum org docs interface javascript console via the trailing console subcommand through which you can interact using web3 methods https web3js readthedocs io en note the web3 version bundled within geth is very old and not up to date with official docs as well as geth s own management apis https geth ethereum org docs rpc server this tool is optional and if you leave it out you can always attach to an already running geth instance with geth attach 7 more more details about running a node https docs bnbchain org docs validator fullnode and becoming a validator https docs bnbchain org docs validator create val note although some internal protective measures prevent transactions from crossing over between the main network and test network you should always use separate accounts for play and real money unless you manually move accounts geth will by default correctly separate the two networks and will not make any accounts available between them configuration as an alternative to passing the numerous flags to the geth binary you can also pass a configuration file via shell geth config path to your config toml to get an idea of how the file should look like you can use the dumpconfig subcommand to export your existing configuration shell geth your favourite flags dumpconfig programmatically interfacing geth nodes as a developer sooner rather than later you ll want to start interacting with geth and the bsc network via your own programs and not manually through the console to aid this geth has built in support for a json rpc based apis standard apis https ethereum github io execution apis api documentation and geth specific apis https geth ethereum org docs interacting with geth rpc these can be exposed via http websockets and ipc unix sockets on unix based platforms and named pipes on windows the ipc interface is enabled by default and exposes all the apis supported by geth whereas the http and ws interfaces need to manually be enabled and only expose a subset of apis due to security reasons these can be turned on off and configured as you d expect http based json rpc api options http enable the http rpc server http addr http rpc server listening interface default localhost http port http rpc server listening port default 8545 http api api s offered over the http rpc interface default eth net web3 http corsdomain comma separated list of domains from which to accept cross origin requests browser enforced ws enable the ws rpc server ws addr ws rpc server listening interface default localhost ws port ws rpc server listening port default 8546 ws api api s offered over the ws rpc interface default eth net web3 ws origins origins from which to accept websocket requests ipcdisable disable the ipc rpc server ipcapi api s offered over the ipc rpc interface default admin debug eth miner net personal txpool web3 ipcpath filename for ipc socket pipe within the datadir explicit paths escape it you ll need to use your own programming environments capabilities libraries tools etc to connect via http ws or ipc to a geth node configured with the above flags and you ll need to speak json rpc https www jsonrpc org 
specification on all transports you can reuse the same connection for multiple requests note please understand the security implications of opening up an http ws based transport before doing so hackers on the internet are actively trying to subvert bsc nodes with exposed apis further all browser tabs can access locally running web servers so malicious web pages could try to subvert locally available apis operating a private network bsc deploy https github com bnb chain node deploy deploy tool for setting up both bnb beacon chain bnb smart chain and the cross chain infrastructure between them bsc docker https github com bnb chain bsc docker deploy tool for setting up local bsc cluster in container running a bootnode bootnodes are super lightweight nodes that are not behind a nat and are running just discovery protocol when you start up a node it should log your enode which is a public identifier that others can use to connect to your node first the bootnode requires a key which can be created with the following command which will save a key to boot key bootnode genkey boot key this key can then be used to generate a bootnode as follows bootnode nodekey boot key addr 30311 network bsc the choice of port passed to addr is arbitrary the bootnode command returns the following logs to the terminal confirming that it is running enode 3063d1c9e1b824cfbb7c7b6abafa34faec6bb4e7e06941d218d760acdd7963b274278c5c3e63914bd6d1b58504c59ec5522c56f883baceb8538674b92da48a96 127 0 0 1 0 discport 30311 note you re using cmd bootnode a developer tool we recommend using a regular node as bootstrap node for production deployments info 08 21 11 11 30 687 new local node record seq 1 692 616 290 684 id 2c9af1742f8f85ce ip nil udp 0 tcp 0 info 08 21 12 11 30 753 new local node record seq 1 692 616 290 685 id 2c9af1742f8f85ce ip 54 217 128 118 udp 30311 tcp 0 info 09 01 02 46 26 234 new local node record seq 1 692 616 290 686 id 2c9af1742f8f85ce ip 34 250 32 100 udp 30311 tcp 0 contribution thank you for considering helping out with the source code we welcome contributions from anyone on the internet and are grateful for even the smallest of fixes if you d like to contribute to bsc please fork fix commit and send a pull request for the maintainers to review and merge into the main code base if you wish to submit more complex changes though please check up with the core devs first on our discord channel https discord gg bnbchain to ensure those changes are in line with the general philosophy of the project and or get some early feedback which can make both your efforts much lighter as well as our review and merge procedures quick and simple please make sure your contributions adhere to our coding guidelines code must adhere to the official go formatting https golang org doc effective go html formatting guidelines i e uses gofmt https golang org cmd gofmt code must be documented adhering to the official go commentary https golang org doc effective go html commentary guidelines pull requests need to be based on and opened against the master branch commit messages should be prefixed with the package s they modify e g eth rpc make trace configs optional please see the developers guide https geth ethereum org docs developers geth developer dev guide for more details on configuring your environment managing project dependencies and testing procedures license the bsc library i e all code outside of the cmd directory is licensed under the gnu lesser general public license v3 0 https www gnu org licenses lgpl 3 0 en html also 
included in our repository in the copying lesser file the bsc binaries i e all code inside of the cmd directory is licensed under the gnu general public license v3 0 https www gnu org licenses gpl 3 0 en html also included in our repository in the copying file | blockchain bnb ethereum | blockchain |
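Once a full node from the row above is serving HTTP JSON-RPC (enabled with the `--http` flags, default port 8545), a quick liveness check needs nothing beyond the Python standard library. `eth_blockNumber` is a standard Ethereum-compatible method, so this should work against geth unchanged.

```python
# Ask a local node for its current block height over HTTP JSON-RPC.
import json
import urllib.request

payload = json.dumps({
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",
    "params": [],
    "id": 1,
}).encode()

req = urllib.request.Request(
    "http://127.0.0.1:8545",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

print("current block:", int(result["result"], 16))  # result is a hex string
```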
RTOS | coursework of real time operating systems course assignment 1 this assignment is about getting familiar with smoothening of pulse sensor data which is given to a real time system and count the heart beat i e number of peaks br the solution and results are found under the assignment 1 https github com svradityareddy rtos tree master assignment 1 directory br assignment 2 this assignment is about getting familiar with asynchronous inputs and push inputs to a real time system here we are treating raspberry pi 2b as a real time system br the code and other information related to asynchronous inputs can be found under the assignment 2 asynchronous inputs https github com svradityareddy rtos tree master assignment 2 asynchronous inputs directory the code and other information related to push inputs can be found under the assignment 2 push inputs https github com svradityareddy rtos tree master assignment 2 push inputs directory br assignment 3 this assignment is about getting familiar with creation of child processess creation of threads and their address spaces br the code and other information related to process address space can be found under assignment 3 address space process https github com svradityareddy rtos tree master assignment 3 address space process directory the code and other information related to thread address space can be found under assignment 3 address space thread https github com svradityareddy rtos tree master assignment 3 address space thread directory br assignment 4 this assignment is about getting familiar with various ipc inter process communication mechanisms in this assignment we had used posix standard api calls instead of system v api calls for message queues and shared memory ipc mechanisms ipc mechanism directory of code and related files pipes assignment 4 pipes https github com svradityareddy rtos tree master assignment 4 pipes message queues assignment 4 message queues https github com svradityareddy rtos tree master assignment 4 message queues shared memory assignment 4 shared memory https github com svradityareddy rtos tree master assignment 4 shared memory sockets assignment 4 sockets https github com svradityareddy rtos tree master assignment 4 sockets references 1 savitzky golay filter https en wikipedia org wiki savitzky e2 80 93golay filter appendix br 2 smoothing derivative in c http www cplusplus com forum general 105692 br 3 median filter https en wikipedia org wiki median filter br 4 linux documentation gpio sysfs txt http elixir free electrons com linux latest source documentation gpio sysfs txt br 5 the sysfs filesystem by patrick mochel https www kernel org pub linux kernel people mochel doc papers ols 2005 mochel pdf br 6 man pages br 7 rpi gpio code samples https elinux org rpi gpio code samples br 8 raspberry pi gpio layout https www raspberrypi spy co uk 2012 06 simple guide to the rpi gpio header and pins raspberry pi gpio layout model b plus rotated 2700x900 prettyphoto 0 br 9 priority interrupts and threads http wiringpi com reference priority interrupts and threads br 10 rpi gpio code samples https elinux org rpi gpio code samples br 11 wiringpi pin mapping http wiringpi com pins br 12 closing the unwanted file descriptors https unix stackexchange com questions 132325 closing the unwanted file descriptors br 13 the linux programming interface by michael kerrisk https moodle2 units it pluginfile php 115306 mod resource content 1 the 20linux 20programming 20interface michael 20kerrisk pdf br 14 how to split a string in c c python 
and java https www geeksforgeeks org how to split a string in cc python and java br 15 c library limits h https www tutorialspoint com c standard library limits h htm br 16 catching sigchld https docs oracle com cd e19455 01 806 4750 signals 7 index html br | c processes thread savitzky-golay peak-detection wiringpi rpi2 rpi-gpio arduino-uno interprocess-communication | os |
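Assignment 1 in the row above (smooth pulse-sensor data, then count peaks as heartbeats) maps neatly onto scipy; this sketch uses `savgol_filter` and `find_peaks` instead of the course's C code, with illustrative parameters and a hypothetical capture file.

```python
# Smooth noisy pulse samples with a Savitzky-Golay filter, then count peaks.
# Window/prominence values are illustrative, not tuned for a real sensor.
import numpy as np
from scipy.signal import find_peaks, savgol_filter

samples = np.loadtxt("pulse_capture.txt")  # hypothetical one-column capture
smoothed = savgol_filter(samples, window_length=11, polyorder=3)
peaks, _ = find_peaks(smoothed, distance=20, prominence=0.5)
print("beats counted:", len(peaks))
```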
012_Qt | 012 qt qt for c c java python gui design embedded system webassembly webgl shader and cross platform | os |
|
iot_remote_access | iot link iot edge lite https github com alibaba iot remote access wiki link iot lite e7 89 88 e7 8e af e5 a2 83 e6 90 ad e5 bb ba e6 8c 87 e5 8d 97 ssh web shell ip windows android adb wiki https github com alibaba iot remote access wiki https github com alibaba iot remote access blob master docs protocol cloud md centos 64bit centos ubuntu make board centos macos osx 10 11 6 make board macos arm v7 32 toolchain https releases linaro org components toolchain binaries latest 7 arm linux gnueabi gcc linaro 7 3 1 2018 05 i686 arm linux gnueabi tar xz make board armv7 cc home yuehu toolchain gcc linaro 7 3 1 2018 05 i686 arm linux gnueabi bin arm linux gnueabi gcc strip home yuehu toolchain gcc linaro 7 3 1 2018 05 i686 arm linux gnueabi bin arm linux gnueabi strip 1 scripts support armv7 sh scripts support xxx sh toolchainrootdirectory toolchain toolchain include bin lib crossprefix gcc toolchain bin gcc arm linux gnueabi gcc host os gcc v host target target gcc v target targetbit bit 32 newboardname armv8 2 scripts support xxx sh 3 make board xxxx cc xxxxx bin arm linux gnueabi gcc strip xxxx bin arm linux gnueabi strip build remoteterminaldaemon dynamic nopoll openssl ld library path remoteterminaldaemon static start for dynamic sh remoteterminaldaemon dynamic shell remote terminal json shell cloud ip backend iotx remote debug aliyun com ip url cloud port 443 is tls on 1 tls 1 is debug on 0 0 1 services type ftp ftp ssh sftp rdp adb http name ftp localhost 32 utf 8 ip 127 0 0 1 ip ip 127 0 0 1 ip 192 168 1 138 port 21 ssh 22 http 80 type sftp name sftp localhost ip 127 0 0 1 port 22 type ssh name ssh localhost ip 127 0 0 1 port 22 type telnet name telnet local ip 127 0 0 1 port 23 type http name http localhost ip 127 0 0 1 port 80 type rdp name rdp localhost ip 127 0 0 1 port 3389 type http name openapi ip 100 69 166 91 port 26999 product key pk iot productkey https help aliyun com document detail 73729 html spm a2c4g 11174283 6 584 5fd91668dmxbzt device name dn iot devicename device secret ds iot devicesecret shell cd build bin remoteterminaldaemon static product key device name device secret product key device name device secret shell cd build bin start for dynamic sh product key device name device secret product key device name device secret ps asdf https cdn nlark com yuque 0 2019 png 209889 1557195802207 24b3bc61 de22 45ab ae91 afecd400f0eb png | ssh-agent remote-access alibaba aliyun remote-debug | server |
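The row above (whose Chinese text was stripped in preprocessing) describes a JSON config for the remote-terminal daemon with a cloud endpoint block and a list of local services. A best-effort reconstruction follows; the field names are guesses from the flattened text, and the credentials triple (product key / device name / device secret) is passed on the command line rather than in this file.

```python
# Guessed reconstruction of the remote-terminal config; field names are
# inferred from the flattened README and may not match the real schema.
import json

config = {
    "cloud": {
        "ip": "backend-iotx-remote-debug.aliyun.com",  # endpoint URL or IP
        "port": 443,
        "is_tls_on": 1,    # 1 = TLS enabled
        "is_debug_on": 0,  # 1 = verbose logging
    },
    "services": [
        {"type": "ssh",  "name": "ssh localhost",  "ip": "127.0.0.1", "port": 22},
        {"type": "http", "name": "http localhost", "ip": "127.0.0.1", "port": 80},
    ],
}

with open("remote-terminal.json", "w") as f:
    json.dump(config, f, indent=2)
```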
ChatGPT-Prompt-Engineering-for-Developers | chatgpt prompt engineering for developers https www deeplearning ai short courses chatgpt prompt engineering for developers in chatgpt prompt engineering for developers you will learn how to use a large language model llm to quickly build new and powerful applications using the openai api you ll be able to quickly build capabilities that learn to innovate and create value in ways that were cost prohibitive highly technical or simply impossible before now this short course taught by isa fulford openai and andrew ng deeplearning ai will describe how llms work provide best practices for prompt engineering and show how llm apis can be used in applications for a variety of tasks including summarizing e g summarizing user reviews for brevity inferring e g sentiment classification topic extraction transforming text e g translation spelling grammar correction expanding e g automatically writing emails in addition you ll learn two key principles for writing effective prompts how to systematically engineer good prompts and also learn to build a custom chatbot all concepts are illustrated with numerous examples which you can play with directly in our jupyter notebook environment to get hands on experience with prompt engineering | summarizing iterative inferring chatbot expanding transforming chatgpt prompt-engineering | ai |
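Since the row above lists the course's task types (summarizing, inferring, transforming, expanding), here is a minimal summarization call in the style the notebooks teach. It assumes the pre-1.0 `openai` Python client from the course era, your own API key, and an invented review string.

```python
# One summarizing prompt, following the course's delimiter convention.
# Assumes the pre-1.0 openai client (openai.ChatCompletion) and your own key.
import openai

openai.api_key = "sk-..."  # replace with your key

review = "Got this panda plush for my daughter's birthday. She loves it!"
prompt = f"""
Summarize the review below, delimited by triple dashes, in at most 10 words.

Review: ---{review}---
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output, as the course recommends
)
print(response["choices"][0]["message"]["content"])
```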
russian-troll-tweets-nlp | how to troll the world analyzing the russian troll tweets dataset through topic modeling sentiment analysis and word embeddings in 2018 twitter removed 3 000 russian linked accounts and their 3 million tweets investigators have traced the accounts to a kremlin linked propaganda outfit founded in 2013 known as the internet research agency ira a team at nbc was able to reconstruct and publish a subset of 200 000 tweets from 500 accounts it s a fantastic story and i d highly encourage checking it out https www nbcnews com tech social media now available more 200 000 deleted russian troll tweets n844731 in the spirit of the 2020 us presidential election i decided to dig into this dataset to explore the themes and sentiments of these tweets you can view the slides for my presentation at metis here https github com scrapfishies russian troll tweets nlp blob main presentation slides pdf methodologies and workflow i used natural language processing and unsupervised machine learning to tease out topics and sentiment and used word embeddings and scattertext to better understand word associations and tone within those topics workflow 1 text preprocessing lowercase characters and remove punctuation urls emojis remove stopwords standard english unique german words there were quite a few german tweets in this dataset and twitter stopwords break up some key hashtags into separate words lemmatize words with spacy for topic modeling 2 topic modeling discovery use lda and pyldavis to simultaneously find top keywords for topics and interpret their relationships through interactive visualization use semi supervised learning with corex topic anchoring to dig deeper into topics and tease out sub categories 3 word embeddings visualizations train a gensim word2vec model use word vectors and word similarities to visualize and understand keyword relationships in core topics 4 topics and sentiments trends over time plot lda topics over time to understand trends use vader sentiment analysis to quickly analyze tone of tweets and plot those trends by topic over time 5 scattertext visualizations compare word frequencies between clinton and trump topics using scattertext visualizations key findings implications and conclusions i was able to extract the following topics from this corpus right wing news pjnet patriot journalist network and other right wing outlets middle east terrorism and isis anti islamic sentiments refugee crises violence in the news lots of activity about blacklivesmatter and police violence brutality general twitter chit chat tweets about holidays gaming music media and other noise conservative chit chat similar to general twitter chit chat but with a more conservative right wing cultural lean hillary clinton generally anti clinton tweets focused on her scandals e g emails benghazi health concerns donald trump generally pro trump campaign related tweets indeed there is some overlap or fluidity between topics for example much of trump s campaign was focused on hillary clinton so tweets in the trump topic often contain references to her as well tracking topics over time is revealing we can see that these accounts are relavtively quiet for a few years and then a sudden flurry of activity around the 2016 election with the trump clinton and the general conservative chit chat categories as the most prevelant topics time series https github com scrapfishies russian troll tweets nlp blob main img top freq timeseries png unfortunately trolling and the spread of misinformation and 
disinformation are not going away anytime soon we can t rely on social media platforms to protect us though we can and should continue to demand more from them for now healthy skepticism remains our best line of defense as an online society we need to continue to develop ai that can detect misinformation disinformation and deep fakes we should also continue to analyze content from social media platforms especially from the time surrounding the 2020 us presidential election to continue to learn about troll behavior and build awareness tools libraries data science essentials python jupyter notebook pandas numpy machine learning pyldavis latent dirichlet allocation lda principal component analysis pca natural language processing spacy nltk sklearn corex vader sentiment analysis gensim word2vec emoji visualizations matplotlib seaborn scattertext wordcloud pyldavis sources and references nbc twitter deleted 200 000 russian troll tweets read them here ben popken https www nbcnews com tech social media now available more 200 000 deleted russian troll tweets n844731 kaggle russian troll tweets https www kaggle com vikasg russian troll tweets fivethirtyeight russian troll tweets https github com fivethirtyeight russian troll tweets | ai |
|
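Steps 1 and 4 of the workflow above in miniature: clean a tweet for topic modeling, then score its tone with VADER. The sample tweet is invented; note that VADER is run on the raw text, since it uses capitalization and punctuation as intensity cues.

```python
# Preprocess for topic modeling (lowercase, strip URLs/punctuation) and
# score sentiment with VADER on the raw text. Sample tweet is invented.
import re

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def preprocess(tweet: str) -> str:
    tweet = tweet.lower()
    tweet = re.sub(r"http\S+", "", tweet)     # drop URLs
    tweet = re.sub(r"[^a-z#@\s]", "", tweet)  # drop punctuation/emoji, keep tags
    return tweet.strip()

tweet = "Unbelievable!! The media is LYING to you again... #WakeUpAmerica"
print("cleaned:", preprocess(tweet))
print("tone:", SentimentIntensityAnalyzer().polarity_scores(tweet))
# polarity_scores returns neg/neu/pos plus a compound score in [-1, 1]
```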
ProAgent | the code will be coming soon | cooperative cooperative-ai human-ai human-ai-interaction language-model llm-agent overcooked | ai |
refactor-monolith-to-microservices | udagram image filtering application udagram is a simple cloud application developed alongside the udacity cloud engineering nanodegree it allows users to register and log into a web client post photos to the feed and process photos using an image filtering microservice the project is split into two parts 1 frontend angular web application built with ionic framework 2 backend restful api node express application getting started tip it s recommended that you start with getting the backend api running since the frontend web application depends on the api prerequisite 1 the depends on the node package manager npm you will need to download and install node from https nodejs com en download https nodejs org en download this will allow you to be able to run npm commands 2 environment variables will need to be set these environment variables include database connection details that should not be hard coded into the application code environment script a file named set env sh has been prepared as an optional tool to help you configure these variables on your local development environment we do not want your credentials to be stored in git after pulling this starter project run the following command to tell git to stop tracking the script in git but keep it stored locally this way you can use the script for your convenience and reduce risk of exposing your credentials git rm cached set env sh afterwards we can prevent the file from being included in your solution by adding the file to our gitignore file database create a postgresql database either locally or on aws rds set the config values for environment variables prefixed with postgres in set env sh s3 create an aws s3 bucket set the config values for environment variables prefixed with aws in set env sh backend api to download all the package dependencies run the command from the directory udagram api bash npm install to run the application locally run bash npm run dev you can visit http localhost 8080 api v0 feed in your web browser to verify that the application is running you should see a json payload feel free to play around with postman to test the api s frontend app to download all the package dependencies run the command from the directory udagram frontend bash npm install install ionic framework s command line tools for us to build and run the application bash npm install g ionic prepare your application by compiling them into static files bash ionic build run the application locally using files created from the ionic build command bash ionic serve you can visit http localhost 8100 in your web browser to verify that the application is running you should see a web interface tips 1 take a look at udagram api does it look like we can divide it into two modules to be deployed as separate microservices 2 the dockerignore file is included for your convenience to not copy node modules copying this over into a docker container might cause issues if your local environment is a different operating system than the docker image ex windows or macos vs linux 3 it s useful to lint your code so that changes in the codebase adhere to a coding standard this helps alleviate issues when developers use different styles of coding eslint has been set up for typescript in the codebase for you to lint your code run the following bash npx eslint ext js ts src to have your code fixed automatically run bash npx eslint ext js ts src fix 4 over time our code will become outdated and inevitably run into security vulnerabilities to 
address them you can run bash npm audit fix 5 in set env sh environment variables are set with export var value setting it this way is not permanent every time you open a new terminal you will have to run set env sh to reconfigure your environment variables to verify if your environment variable is set you can check the variable with a command like echo postgres username | database frontend udacity cloud-development | cloud |
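A small pre-flight check mirroring the `set_env.sh` step in the row above: verify the database and S3 variables are exported before starting the API. The README only says the names are prefixed `POSTGRES_` and `AWS_`, so the exact list below is illustrative.

```python
# Fail fast if set_env.sh has not been sourced; variable names are
# illustrative placeholders, not the project's actual list.
import os
import sys

required = [
    "POSTGRES_USERNAME", "POSTGRES_PASSWORD", "POSTGRES_HOST", "POSTGRES_DB",
    "AWS_BUCKET", "AWS_REGION",
]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    sys.exit("missing environment variables: " + ", ".join(missing))
print("environment looks good; run `npm run dev` in udagram-api/")
```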
UDA | unsupervised domain adaptation for computer vision tasks for many computer vision tasks e g image classification object detection existing deep learning based models usually suffer from significant performance degradation when directly applying them to testing data due to the existence of domain gaps shifts e g when deployed in new environments new edge devices new production lines where the characteristics of testing images are different from those in training our project aims to design unsupervised domain adaptation uda models that are capable to perform well on target domains by exploiting unlabeled target domain data such techniques are in highly demand in many practical applications products due to their low cost not requiring annotation of the target data and effectiveness in enhancing the performance we aim to provide a toolbox that contains a series of effective unsupervised domain adaptation methods currently it includes metaalign toalign etc this includes the official implementation for toalign task oriented alignment for unsupervised domain adaptation https arxiv org abs 2106 10812 guoqiang wei cuiling lan wenjun zeng zhizheng zhang zhibo chen neurips 2021 arxiv https arxiv org abs 2106 10812 metaalign coordinating domain alignment and classification for unsupervised domain adaptation https arxiv org abs 2103 13575 guoqiang wei cuiling lan wenjun zeng zhibo chen cvpr 2021 arxiv https arxiv org abs 2103 13575 introduction for metaalign for unsupervised domain adaptation uda to alleviate the effect of domain shift many approaches align the source and target domains in the feature space by adversarial learning or by explicitly aligning their statistics however the optimization objective of such domain alignment is generally not coordinated with that of the object classification task itself such that their descent directions for optimization may be inconsistent this will reduce the effectiveness of domain alignment in improving the performance of uda in this work we aim to study and alleviate the optimization inconsistency problem between the domain alignment and classification tasks we address this by proposing an effective meta optimization based strategy dubbed metaalign where we treat the domain alignment objective and the classification objective as the meta train and meta test tasks in a meta learning scheme metaalign encourages both tasks to be optimized in a coordinated way which maximizes the inner product of the gradients of the two tasks during training experimental results demonstrate the effectiveness of our proposed method on top of various alignment based baseline approaches for tasks of object classification and object detection metaalign helps achieve the state of the art performance img src assets pipeline png figure 1 illustration of our metaalign strategy which aims to encourage the optimization consistency between the domain alignment task and the object classification task for efficient uda a previous approaches directly combine the optimization objective functions of the two tasks together where the descent directions for optimizing the shared network parameters theta from the two tasks may be inconsistent b in contrast we treat one of these two tasks as meta train task and the other as meta test task we leverage this meta optimization based strategy to enforce the consistency between their optimization gradients metaalign is generic and applicable to various domain alignment based udas introduction for toalign unsupervised domain adaptive classifcation 
intends to improve the classifcation performance on unlabeled target domain to alleviate the adverse effect of domain shift many approaches align the source and target domains in the feature space however a feature is usually taken as a whole for alignment without explicitly making domain alignment proactively serve the classifcation task leading to sub optimal solution in this work we propose an effective task oriented alignment toalign for unsupervised domain adaptation uda we study what features should be aligned across domains and propose to make the domain alignment proactively serve classifcation by performing feature decomposition and alignment under the guidance of the prior knowledge induced from the classifcation task itself particularly we explicitly decompose a feature in the source domain into a task related discriminative feature that should be aligned and a task irrelevant feature that should be avoided ignored based on the classifcation meta knowledge img src assets toalign png figure 2 illustration of adversarial learning based a baseline and b our proposed toalign d and c denote domain discriminator and image classifier respectively a baseline e g dann directly aligns the target feature f t with the holistic source feature f s domain alignment and image classification tasks are optimized in parallel b our proposed toalign makes the domain alignment proactively serve the classification task where target feature f t is aligned with source task discriminative positive feature f s p which is obtained under the guidance of meta knowledge induced from the classification task usage dependency bash torch 1 7 0 torchvision 0 8 0 termcolor 1 1 0 yacs 0 1 8 train x single source uda on office home dataset bash source and target domains can be defined by source and target python main py configs uda office home toalign yaml data root root to office home source a c p r target a c p r output root exp x multi source uda on domainnet dataset bash python main py configs msda domainnet toalign yaml data root root to domainnet target c i p q r s output root exp semi supervised da on domainnet dataset citation inproceedings wei2021toalign title toalign task oriented alignment for unsupervised domain adaptation author wei guoqiang and lan cuiling and zeng wenjun and zhang zhizheng and chen zhibo booktitle neurips inproceedings wei2021metaalign title metaalign coordinating domain alignment and classification for unsupervised domain adaptation author wei guoqiang and lan cuiling and zeng wenjun and chen zhibo booktitle cvpr pages 16643 16653 year 2021 contributing this project welcomes contributions and suggestions most contributions require you to agree to a contributor license agreement cla declaring that you have the right to and actually do grant us the rights to use your contribution for details visit https cla opensource microsoft com when you submit a pull request a cla bot will automatically determine whether you need to provide a cla and decorate the pr appropriately e g status check comment simply follow the instructions provided by the bot you will only need to do this once across all repos using our cla this project has adopted the microsoft open source code of conduct https opensource microsoft com codeofconduct for more information see the code of conduct faq https opensource microsoft com codeofconduct faq or contact opencode microsoft com mailto opencode microsoft com with any additional questions or comments trademarks this project may contain trademarks or logos for projects 
products or services authorized use of microsoft trademarks or logos is subject to and must follow microsoft s trademark brand guidelines https www microsoft com en us legal intellectualproperty trademarks usage general use of microsoft trademarks or logos in modified versions of this project must not cause confusion or imply microsoft sponsorship any use of third party trademarks or logos are subject to those third party s policies acknowledgement we borrowed some code from gvb https github com cuishuhao gvb and da detection https github com visionlearninggroup da detection many thanks to the authors for their wonderful works | domain-adaptation discriminator dann domain-generalization | ai |
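As a hedged illustration of the ToAlign decomposition described in the UDA readme above, the task-discriminative "positive" source feature can be sketched roughly as follows; the function and variable names here are hypothetical, not the repository's actual API:

```python
# illustrative sketch only, assuming a pooled feature and a linear classifier
import torch
import torch.nn.functional as F

def positive_feature(feat, classifier_weight, labels):
    # feat: (B, C) source features; classifier_weight: (num_classes, C); labels: (B,)
    w = classifier_weight[labels]                        # class-specific weights
    attn = F.relu(w * feat)                              # channel relevance to the true class
    attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-6)
    return attn * feat                                   # task-discriminative part to align

# the domain discriminator would then compare positive_feature(f_s, W, y_s)
# against the plain target feature f_t, per figure 2 of the readme
```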
stubble | stubble a front end starter kit from bearded overview stubble is a default set of html and css that we base every new bearded project on it s our boilerplate we ve included some of our most often used markup patterns our goal with stubble is to eliminate the time it takes to get new projects off the ground and to give us a set of modular extensible pieces to refine and refactor rather than constantly reinventing the wheel our goal with stubble is simple to be kind to our future selves haml sass compass sinatra and ruby requirements 1 ruby http www ruby lang org 2 sinatra http www sinatrarb com 3 haml http haml info 4 sass http sass lang com and compass http compass style org to get started 1 cd into your project directory 3 install the project s dependencies by typing bundle install 4 start the project by typing rackup config ru 5 open a new tab in your browser and navigate to http localhost 9292 6 enjoy helpful tips stubble relies on the excellent compass extension breakpoint http breakpoint sass com to manage media queries please take a look at the breakpoint docs before using stubble for the first time we use haml why because it s what we use in our full site rails builds and it kind of stinks to write vanilla markup for our wireframes and design mockups only to have to convert it all to haml later plus if a page s markup gets lengthy it s pretty nice to not have to worry about closing tags in the appropriate places you may be wondering what the v1 directory is all about we version our mockups for clients and usually go through several iterations v1 is obviously version 1 we ve included several generic error pages that do not use a haml layout file because we typically don t make error pages look like the rest of our sites if you want to change this you ll need to tinker with the routes in app rb treat yourself if you find yourself changing the app rb config settings and constantly stopping and starting your local instance of the app check out shotgun https github com rtomayko shotgun which will automatically reload your app awesome if you re using this in production you should be minifying your css and javascript if you d like to have sinatra minify your css change the environment setting in compass rb from development to production here s a rough demo http pfulton github io stubble v1 of what stubble looks like when you run it out of the box if you have any questions feel free to contact patrickfulton http www twitter com patrickfulton elefontpress http twitter com elefontpress or beardedstudio http www twitter com beardedstudio on twitter | front_end |
|
CogAlg | cogalg intelligence is general cognitive ability ultimately the ability to predict that includes planning which is technically a self prediction any prediction is interactive projection of known patterns hence primary process must be pattern discovery aka unsupervised learning an obfuscating negation first term this perspective is not very radical pattern recognition is a core of any iq test that s also a default mode in neural nets but they work in a very coarse statistical fashion basic ann is multi layer perceptron https towardsdatascience com what the hell is perceptron 626217814f53 performing lossy chain rule fitting each node sums weighted inputs normalizes the sum into output then adjusts the weights in proportion to coincidence or similarity between input and output locally or via backprop modern anns combine such vertical training with lateral cross correlation within input vector cnns filters are generally designed to converge on edge detection in the 1st layer edge detection means computing lateral gradient same as weighted pixel cross comparison within kernels graph nns embed lateral edges representing similarity or and difference between nodes also produced by their cross comparison popular transformers https www quantamagazine org researchers glimpse how ai gets so good at language processing 20220414 can be seen as a variation https towardsdatascience com transformers are graph neural networks bca9f75412aa of graph nn the first step in transformers is self attention computing dot product between query key pairs within input vector it s a form of cross comparison dot product here serves as a measure of similarity in these pairs although an unprincipled one so basic operation in both trained cnn and self attention is what i call cross comparison but the former compute variation and the latter compute similarity i think the difference is due to relative rarity in respective target data sparse gradients in raw images and sparse similarities in compressed text but almost all text actually describes generalized images and objects therein so there should be gradual transition between the two in my scheme higher level cross comparison computes both difference and similarity for differential clustering gnn transformers and hinton s capsule networks https medium com ai c2 b3 theory practice business understanding hintons capsule networks part i intuition b4b559d1159b also have positional embeddings and i use explicit coordinates but they are still trained through destructive backprop indiscriminate summation first meaningful output to template comparison last primary summation degrades resolution of the whole learning process exponentially with the number of layers hence a ridiculous number of backprop cycles is needed to fit hidden layers into generalized representations aka patterns most practitioners agree that this process is not very smart we and evolution hit on it because it s relatively simple it s also easy to parallelize which is crucial for cell based biology i think the process should be reversed first cross comparison of atomic inputs then their summation into match defined patterns clusters that means lateral connectivity based clustering https en wikipedia org wiki cluster analysis connectivity based clustering hierarchical clustering vs vertical statistical fitting in nns this cross comp and clustering is recursively hierarchical forming patterns of patterns and so on resulting compositional hierarchy is an indefinitely extended pipeline where levels can 
operate in parallel patterns are eventually displaced from a level by new inputs and sent to the next level as new input elements higher level patterns are increasingly nested packing older and shallower inputs in their lower layers such higher order patterns are cross compared at longer range using higher power operations feedback adjusts hyper parameters to filter future inputs vs fitting them to some templates no training just learning below i describe the process in more detail then extend comparisons to ann and bnn this is an open project wiki https github com boris kz cogalg wiki we need help with design and implementation i pay for contributions or monthly if there is a track record see contributing https github com boris kz cogalg blob master contributing md longer but partly obsolete introduction www cognitivealgorithm info this content is published under creative commons attribution 4 0 international license outline of my approach initial clustering levels positional resolution macro lags value resolution micro by one quantization order input comparison positional resolution output conventionally known as unary intensity and none all in same coords pixels of intensity digitization integer pixels sub binary direction of comparison blobs of gradient edge detection flood fill float average blob params div compare blob params integer distance between blob centers graphs of blobs connectivity based clustering complex normalized graph params log compare graph params float distance between graph centers hierarchical graphs agglomerative clustering and so on higher levels should be added recursively such process is very complex and deeply structured there is no way it could evolve naturally since the code is supposed to be recursive testing before it is complete is almost useless which is probably why no one seems to work on such methods but once the design is done there is no need for interminable glacial and opaque training my feedback only adjusts hyperparameters so pattern is a cluster of matching items where match is compression achieved by replacing elements with their derivatives see comparison section below more commonly pattern is a recurring set or order of elements but to me this is 2nd order pattern if the elements co vary don t match but their derivatives do then the derivatives become elements of a higher derivation pattern consistent process must start with cross comp of adjacent atomic inputs sensory data at the limit of resolution such as pixels of video or equivalents in other modalities symbolic data is not a separate modality just a generalized and encoded sensory data the symbols must be decoded to discover meaningful patterns which is exponentially difficult with the level of encoding thus a start with raw sensory input is by far the easiest to implement part 0 this low level process directly translated into my code seems like quite a jump from the generalities above but it really isn t internally consistent pattern discovery must be strictly bottom up in the complexity of both inputs and operations and there is no ambiguity at the bottom initial predictive value that defines patterns is a match from cross comparison among their elements starting with pixels so i think my process is uniquely consistent with these definitions please let me know if you see any discrepancy in either comparison more in part 1 basic comparison is inverse arithmetic operation between single variable comparands of incremental power boolean subtraction division etc each order of comparison 
forms miss or loss xor difference ratio etc and match or similarity which can be defined directly or as inverse deviation of miss direct match is compression of represented magnitude by replacing larger input with corresponding miss between the inputs boolean and the smaller input in comp by subtraction integer part of ratio in comp by division etc these direct similarity measures work if input intensity represents some stable physical property which anti correlates with variation this is the case in tactile but not in visual input brightness doesn t correlate with inertia or invariance dark objects are just as stable as bright ones thus initial match in vision should be defined indirectly as inverse deviation of variation in intensity 1d variation is difference ratio etc while multi d comparison has to combine them into euclidean distance and gradient as in common edge detectors patterns more in part 2 cross comparison among patterns forms match and miss per parameter as well as dimensions and distances external match and miss these are separate parameters total value precision of what precision of where comparison is limited by max distance between patterns overall hierarchy has incremental dimensionality search levels param levels pattern levels and pattern comparison is selectively incremental per such level this is hard to explain in nl please see the code starting with line ps https github com boris kz cogalg blob master line 1d alg line ps py and line pps https github com boris kz cogalg blob master line 1d alg line pps py resulting matches and misses are summed into lateral match and miss per pattern proximate input patterns with above average match to their nearest neighbors are clustered into higher level patterns this adds two pattern levels of composition and derivation per level of search conditional cross comp over incremental range and derivation among the same inputs may also add sub levels in selected newly formed patterns on a pixel level incremental range is using larger kernels and incremental derivation starts with using laplacian feedback attention imagination action more in part 3 tentative not in the code higher level feedback will adjust filters starting with average match then ave per parameter derived by deeper cross comp more precisely these should be co averages values coincident with an average value of combined higher level param there are also positional or external filters starting with pixel size and kernel size which determine external dimensions of the input quantization bit integer float of internal and external filters corresponds to the order of comparison the filters are similar to hyperparameters in neural nets with the same values across a level the equivalent to weight matrix are links edges between nodes of a graph but they are lateral vs implicitly vertical when formed via backprop or hebbian learning https data flair training blogs learning rules in neural network text the 20hebbian 20rule 20was 20the of 20nodes 20of 20a 20network text for 20neurons 20operating 20in 20the weight 20between 20them 20should 20decrease in nns in a broader frame of reference the above mentioned external filters will define source locations for selective input to higher level patterns this is similar to conventionally understood attention and ultimately decision making and these locations can be projected vs actually observed generating input for imagination and hypothetical reasoning hierarchy part 4 is out of date there is a single global hierarchy feedforward 
inputs and feedback filters pass through the same levels of search and composition each higher level is a nested hierarchy with depth proportional to elevation but sub hierarchies are unfolded sequentially that s why i don t have many diagrams they are good at showing relations in 2d but i have a simple 1d sequence of levels nested sub hierarchies are generated by the process itself depending on elevation in a higher order hierarchy that means i can t show them in a generic diagram brain inspired schemes have separate sensory and motor hierarchies in mine they combined into one the equivalent of motor patterns in my scheme are positional filter patterns which ultimately move the sensor the first level is co located sensors targets of input filters and more coarse actuators targets of positional filters i can think of two reasons they are separated in the brain neurons and axons are unidirectional and training process has to take the whole hierarchy off line neither constraint applies to my scheme final algorithm will consist of first level operations recursive increment in operations per level the latter is a meta algorithm that extends working level algorithm to handle derivatives added to current inputs so the levels are 1st level g x 2nd level f g x 3rd level f f g x where f is the recursive code increment resulting hierarchy is a pipeline patterns are outputted to the next level forming a new level if there is none as long as there are novel inputs higher levels will discover longer range spatio temporal and then conceptual patterns please see system diagram https github com boris kz cogalg blob master frame 2d alg illustrations whole system 20hierarchy png some notes there should be a unique set of operations added per level hence a singular in cognitive algorithm core design must be done theoretically generality requires large upfront investment in process complexity which makes it a huge overkill for any specific task that s one reason why such schemes are not explored many readers note disconnect between abstractions in this outline and the amount of detail in current code that s because we are in space time continuum search must follow proximity in each dimension which requires specific processing it s not specific to vision the process is roughly the same for all raw modalities another complaint is that i don t use mathematical notation but it doesn t have the flexibility to express deeply conditional process math is not separable from logic here most people who aspire to work on agi think in terms behavior and robotics i think this is far too coarse to make progress the most significant mechanisms are on the level of perception feedforward perception must drive feedback action not the other way around other distractions are supervision and reinforcement these are optional task specific add ons core cognitive process is unsupervised pattern discovery and main problem here is scaling in complexity don t even start me on chatbots comparison to artificial and biological neural networks all unsupervised learning is some form of pattern discovery where patterns are some kind of similarity clusters there are two fundamentally different ways to cluster inputs centroid based and connectivity based all statistical learning https en wikipedia org wiki statistical learning theory including neural nets is best understood as distributed centroid based clustering https en wikipedia org wiki cluster analysis centroid based clustering centroid is whatever the model fits to not necessarily a 
single value template line in linear regression can be considered one dimensional centroid and the whole training set a multi dimensional centroid that usually means training cnn https en wikipedia org wiki convolutional neural network or transformer to perform some sort of edge detection or cross correlation same as my cross comparison but the former terms lose meaning on higher levels of search but cnn operations are initially random while my process is designed for cross comp from the start this is why it can be refined by my feedback updating the filters which is far more subtle and selective than weight training by backprop so i have several problems with basic process in ann vertical learning via feedback of error takes tens of thousands of cycles to form accurate representations that s because summation per layer degrades positional input resolution with each added layer the output that ultimately drives learning contains exponentially smaller fraction of original information my cross comp and clustering is far more complex per level but the output contains all information of the input lossy selection is only done on the next level after evaluation per pattern vs before evaluation in statistical methods both initial weights and sampling that feeds sgd https towardsdatascience com stochastic gradient descent clearly explained 53d239905d31 stochastic gradient descent are randomized also driven by random variation are rbms https en wikipedia org wiki restricted boltzmann machine gans https en wikipedia org wiki generative adversarial network vaes https golden com wiki variational autoencoder etc but randomization is antithetical to intelligence it s only useful in statistical methods because they merge inputs with weights irreversibly thus any non random initialization and variation will introduce bias all input modification in my scheme is via hyper parameters stored separately and then used to normalize remove bias inputs for comparison to inputs formed with different value hyper parameters sgd minimizes error top layer miss which is quantitatively different from maximizing match compression and that error is w r t some specific template while my match is summed over all past input experience the error here is plural lateral misses differences ratios etc computed by cross comparison within a level all inputs represent environment and have positive value but then they are packed compressed into patterns which have different range and precision thus different relative value per relatively fixed record cost representation in ann is fully distributed similar to the brain but the brain has no alternative there is no substrate for local memory or program in neurons computers have ram so parallelization is a simple speed vs efficiency trade off useful only for complex semantically isolated nodes such nodes are patterns encapsulating a set of co derived what and where parameters this is similar to neural ensemble but parameters that are compared together should be localized in memory not distributed across a network more basic neural learning mechanism is hebbian though it is rarely used in ml conventional spiking version is that weight is increased if the synapse often receives a spike just before the node fires else the weight is decreased but input and output don t have to be binary the same logic can be applied to scalar values the weight is increased decreased in proportion to some measure of similarity between its input and following output of the node that output is normalized sum of 
all inputs or their centroid such learning is local within each node but it s still a product of vertical comparison centroid is a higher order of composition than individual inputs this comparison across composition drives all statistical learning but it destroys positional information at each layer compared to autoencoders main backprop driven unsupervised learning technique hebbian learning lacks the decoding stage as does the proposed algorithm decoding decomposes hidden layers to equalize composition orders of output and compared template inspiration by the brain kept ann research going for decades before they became useful their neurons are mere stick figures but that s not a problem most of neuron s complexity is due to constraints of biology the problem is that core mechanism in ann weighted summation may also be a no longer needed compensation for such constraints neural memory requires dedicated connections that makes representation and cross comparison of individual inputs very expensive so they are summed but we now have dirt cheap ram other biological constraints are very slow neurons and the imperative of fast reaction for survival in the wild both favor fast though crude summation at the cost of glacial training reaction speed became less important modern society is quite secure while continuous learning is far more important because of accelerating progress summation also reduces noise which is very important for neurons that often fire at random to initiate and maintain latent connections but that s irrelevant for electronic circuits i see no way evolution could produce proposed algorithm it is extremely limited in complexity that can be added before it is pruned by natural selection and that selection is for reproduction while intelligence is distantly instrumental the brain evolved to guide the body with neurons originating as instinctive stimulus to response converters hence both sgd and hebbian learning is fitting driven by feedback of action triggering weighted input sum pattern discovery is their instrumental upshot not an original purpose uri hasson samuel nastase ariel goldstein reach a similar conclusion in direct fit to nature an evolutionary perspective on biological and artificial neural networks https www cell com neuron fulltext s0896 6273 19 31044 x we argue that neural computation is grounded in brute force direct fitting which relies on over parameterized optimization algorithms to increase predictive power generalization without explicitly modeling the underlying generative structure of the world although anns are indeed highly simplified models of bnns they belong to the same family of over parameterized direct fit models producing solutions that are mistakenly interpreted in terms of elegant design principles but in fact reflect the interdigitation of mindless optimization processes and the structure of the world atomic comparison quantifying match and miss between variables first we need to quantify predictive value algorithmic information theory defines it as compressibility of representation which is perfectly fine but compression is currently computed only for sequences of inputs while i think a logical start is analog input digitization a rock bottom of organic compression hierarchy the next level is cross comparison among resulting pixels commonly known as edge detection and higher levels will cross compare resulting patterns partial match computed by comparison is a measure of compression partial match between two variables is a complementary of 
miss in corresponding power of comparison boolean match is and and miss is xor two zero inputs form zero match and zero miss comparison by subtraction increases match to a smaller comparand and reduces miss to a difference comparison by division increases match to min integer part of ratio and reduces miss to a fractional part direct match works for tactile input but reflected light in vision requires inverse definition of initial match in other words match is a compression of larger comparand s magnitude by replacing it with miss which means that match smaller input a common subset of both inputs sum of and between their uncompressed unary code representations ultimate criterion is recorded magnitude rather than bits of memory it occupies because the former represents physical impact that we want to predict the volume of memory used to record that magnitude depends on prior compression which is not an objective parameter given incremental complexity initial inputs should have binary resolution and implicit shared coordinate being a macro parameter resolution of coordinate lags that of an input compression of bit inputs by and is well known as digitization substitution of two lower 1 bits with one higher 1 bit resolution of coordinate input summation span is adjusted by feedback to form integers that are large enough to produce above average match next order compression can be achieved by comparison between consecutive integers distinguished by binary before after coordinate basic comparison is inverse arithmetic operation of incremental power and subtraction division logarithm and so on additive match is achieved by comparison of a higher power than that which produced comparands comparison by and will not further compress integers previously digitized by and rather initial comparison between integers is by subtraction resulting difference is miss and smaller input is absolute match compression of represented magnitude is by replacing i1 i2 with their derivatives match min and miss difference if we sum each pair inputs 5 7 12 derivatives match 5 miss 2 7 compression by replacing match 12 7 5 difference is smaller than xor non zero complementary of and because xor may include opposite sign opposite direction bit pairs 0 1 and 1 0 which are cancelled out by subtraction comparison by division forms ratio which is a magnitude compressed difference this compression is explicit in long division match is accumulated over iterative subtraction of smaller comparand from remaining difference in other words this is also a comparison by subtraction but between different orders of derivation resulting match is smaller comparand integer part of ratio and miss is final reminder or fractional part of ratio a ratio can be further compressed by converting to a radix logarithm and so on but computational costs may grow even faster thus power of comparison should increase only for inputs sufficiently compressed by lower power and for bit inputs sub for integer inputs div for pattern inputs etc actual compression depends on input and on resolution of its coordinate input derivative summation span we can t control the input so average match is adjusted via coordinate resolution but the costs of operations and incidental sign fraction irrational fraction etc may grow even faster to justify the costs the power of comparison should only increase in patterns of above average match from prior order of comparison and for bit inputs sub for integer inputs div for pattern inputs etc inclusion into such patterns is by 
relative match match ave past match that co occurs with average higher level match match value should be weighted by the correlation between input intensity and its stability mass energy hardness of an observed object initial input such as reflected light is likely to be incidental such correlation is very low since match is the magnitude of smaller input its weight should also be low if not zero in this case projected match consists mainly of its inverse component match cancellation by co derived miss see below the above discussion is on match from current comparison but we really want to know projected match to future or distant inputs that means the value of match needs to be projected by co derived miss in comparison by subtraction projected match min i1 i2 weight fractional difference i1 i2 2 divide by 2 because the difference only reduces projected input thus min input projected input in the direction in which it is negative it doesn t affect min in the direction where projected input is increasing quantifying lossy compression there is a general agreement that compression is a measure of similarity but no one seems to apply it from the bottom up the bottom being single scalars also any significant compression must be lossy this is currently evaluated by perceived similarity of reconstructed input to the original input as well as compression rate which is very coarse and subjective compression in my level of search is lossless represented by match on all levels of pattern all derived representations are redundant so it s really an expansion vs compression overall the lossy part comes after evaluation of resulting patterns on the next level of search top level of patterns is cross compared by default evaluation is per lower level of incremental derivation and detail in each pattern loss is when low relative match buffered inputs or alternative derivatives are not cross compared such loss is quantified as the quantity of representations in these lower levels not some subjective quality compression also depends on resolution of coordinate input summation span and of input magnitude projected match can be kept above system s average by adjusting corresponding resolution filters most significant bits and least significant bits of both coordinate and magnitude implementation any prediction has two components what and where we must have both value of prediction precision of what precision of where that where is currently neglected statistical ml methods represent coordinates much more coarsely than the inputs hence precision of where spans of and distances between patterns is degraded and so is predictive value of combined representations that s not the case here because my top level patterns multi dimensional blobs are contiguous core algorithm is 1d time only our space time is 4d and average match is presumably equal over all dimensions that means patterns defined in fewer dimensions will be only slices of actual input fundamentally limited and biased by the angle of scanning slicing hence initial pixel comparison should also be over 4d at once or at least over 3d for video and 2d for still images this full d cycle level of search is a universe specific extension of core algorithm the dimensions should be discoverable by the core algorithm but coding it in is much faster this repository currently has three versions of 1st d cycle each analogous to connected component analysis 1d line alg https github com boris kz cogalg tree master line 1d alg 2d frame alg https github com boris kz cogalg 
tree master frame 2d alg and 3d video alg https github com boris kz cogalg tree master video 3d alg subsequent cycles will compare full d terminated input patterns over increasing distance in each dimension forming discontinuous patterns of incremental composition and range dimension here defines external sequence and distance among inputs this is different from conventional clustering which treats both external and internal parameters as dimensions complete hierarchical algorithm will have two level code 1st level algorithm contiguous cross comparison over full d cycle plus feedback to adjust most and least significant bits of the input recurrent increment in complexity extending current level alg to next level alg this increment will account for increasing internal complexity of input patterns on higher levels unfolding them for cross comparison and re folding results for evaluation and feedback initial testing could be on recognition of labeled images but video or stereo video should be much better we will then add colors maybe audio and text for more detailed account of current development see wiki https github com boris kz cogalg wiki suggestions and collaboration are most welcome see contributing https github com boris kz cogalg blob master contributing md | deep-learning image-recognition pattern-recognition computer-vision | ai |
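The CogAlg readme's atomic comparison (match = smaller comparand, miss = difference) can be made concrete with a small sketch; this is a simplification for illustration, not the repository's actual line_Ps code:

```python
# minimal sketch, assuming 1d pixel input and a fixed average-match filter
def cross_compare(pixels, ave=10):
    derts = [(min(a, b), b - a) for a, b in zip(pixels, pixels[1:])]  # (match, difference)
    patterns, sign = [], None
    for m, d in derts:               # cluster contiguous derts by sign of relative match
        s = (m - ave) > 0
        if s != sign:                # sign change terminates the current pattern
            patterns.append({"sign": s, "M": 0, "D": 0, "derts": []})
            sign = s
        P = patterns[-1]
        P["M"] += m
        P["D"] += d
        P["derts"].append((m, d))
    return patterns

# the readme's worked example: comparing 5 and 7 gives match 5 and miss 2,
# so the pair (sum 12) is represented by 5 + 2 = 7, a compression of 5
assert min(5, 7) == 5 and 7 - 5 == 2
```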
challenge_kogi | challenge kogi each point has its own readme | server |
|
FET-scheduling-system | fet scheduling system front end interface for the desktop fet timetabling software built with javascript fet fet is open source free software for automatically scheduling the timetable of a school high school or university it uses a fast and efficient timetabling algorithm it is licensed under the gnu gpl usually fet is able to solve a complicated timetable in a maximum of 5 20 minutes for simpler timetables it may take a shorter time under 5 minutes in some cases a matter of seconds for extremely difficult timetables it may take a longer time a matter of hours fet homepage http lalescu ro liviu fet | front_end |
|
freeRTOS | freertos this is a ported freertos based on the stm32f407 in which a message queue is used by the button task to synchronize the led task and uart is used to send messages built as a standard peripheral library mdk project stm32f407 freertos led uart mdk | os |
|
Mini-DALLE3 | p align center a href https minidalle3 github io img src https github com zeqiang lai mini dalle3 assets 26198430 9594f306 cc1a 4a92 bca2 0c64e8daf9c9 alt minidalle3 width 19 a ensp p p align center a href http arxiv org abs 2310 07653 paper a a href http 139 224 23 16 10085 demo a a href https minidalle3 github io project page a p https github com zeqiang lai mini dalle3 assets 26198430 5b6c0a0c ebbf 48db 981e f97d542a38b4 teaser4 https github com zeqiang lai mini dalle3 assets 26198430 1f17e3c3 6804 4c4e 9266 e902ecedeae8 an experimental attempt to obtain the interactive and interleave text to image and text to text experience of dall e 3 https openai com dall e 3 and chatgpt https openai com chatgpt try yourself download the checkpoint https huggingface co h94 ip adapter and save it as following bash checkpoints models sdxl models run the following commands and you will get a gradio based web demo bash export openai api key your key python m minidalle3 web todo x support generating image interleaved in the conversations support generating multiple images at once support selecting image support refinement support prompt refinement variation instruct tuned llm sd citation if you find this repo helpful please consider citing us bibtex misc minidalle3 author lai zeqiang and zhu xizhou and dai jifeng and qiao yu and wang wenhai title mini dalle3 interactive text to image by prompting large language models year 2023 url https github com zeqiang lai mini dalle3 acknowledgement ip adapter https github com tencent ailab ip adapter stable diffusion xl https huggingface co stabilityai stable diffusion xl base 1 0 visitors https api visitorbadge io api visitors path https 3a 2f 2fgithub com 2fzeqiang lai 2fmini dalle3 countcolor 23263759 style flat | dalle dalle-3 dalle3 interactive-text-to-image mini-dalle3 dall-e-3 | ai |
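A rough sketch of the interleaved chat loop the Mini-DALLE3 readme describes (prompting an LLM so that image requests become text-to-image calls); the tag convention and function names are assumptions, not the repository's code:

```python
# illustrative control flow only; `llm` and `t2i` stand for any chat model and
# any text-to-image model (the repo itself uses an OpenAI LLM plus SDXL/IP-Adapter)
import re

SYSTEM = ("when the user asks for a picture, answer normally but wrap a "
          "concise image prompt in <image>...</image> tags")

def chat_step(llm, t2i, history, user_msg):
    history.append({"role": "user", "content": user_msg})
    reply = llm(SYSTEM, history)                      # text, possibly with image tags
    images = [t2i(p) for p in re.findall(r"<image>(.*?)</image>", reply, re.S)]
    history.append({"role": "assistant", "content": reply})
    return reply, images                              # interleaved text and images
```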
picoos | pico os this is the latest version of pico os a highly configurable and very fast real time operating system it targets a wide range of architectures from small 8 bit processors with very low memory to huge architectures like 32 bit processors with lots of memory please see the documentation 3 for further information pico os was originally created by dennis kuschel and swen moczarski see the original site at sourceforge 1 for details they have given up on development of it but as pico os is great software i decided to pick up the maintenance and development of it compared to the latest sf net release 1 0 4 it has the following updates power management api to provide a framework for mcu power saving sleeping features suppression of timer interrupts during idle mode tickless idle implementations are available for cortex m stm32 and msp430 ports stdarg support for nano layer printf functions makefile system uses gnu make pattern rules for source directory handling otherwise projects that have many directories run into trouble pico nano layers are augmented by micro layer 4 which contains support for filesystems and some other things support for using cmake as the build system instead of gnu make i m also actively maintaining hardware support for some environments arm cortex m0 m3 m4 texas instruments msp430 microchip pic32 nxp lpc2xxx arm7tdmi generic unix using ucontext 3 there is some additional information in my blogs 2 additional libraries for networking 1 wire access etc can be found in my github account 5 getting started there are two methods to work with pico os the first is to build the pico os library and link it to your project that you may develop with some kind of integrated compiler debugger ide the second method is to use the makefile system that is provided with pico os to build all you need the pico os rtos library application libraries and your main program pico os makefiles need a working gnu make for your host machine for unix systems it is usually easily installable from source code or a prebuilt package for ms windows use of the msys2 package 6 is recommended building example programs when you have a working gnu make installed you are ready to build the example programs that are shipped with pico os assuming your host is an ms windows machine and you have ms visual studio or the mingw gcc package installed you can compile the rtos for ms windows simply change into the examples app directory and enter make port x86w32 if you have trouble building the examples please read the readme file in the appropriate port directory eg ports x86w32 readme txt you will find the generated executables in the directory bin x86w32 deb out building pico os as a library a library version of pico os might be useful when using an ide for development another alternative is to include the pico os source files in the ide project in its native way to build the library execute make at the pico os root directory the makefile takes two parameters port name of the port to build the subdirectory name build version to build possible values are debug and release example make port avr build debug builds the atmel avr port and includes debug information the generated library can be found in the directory lib avr deb the makefile knows the targets all clean and docu and all is the default all compiles the operating system clean removes all generated binaries docu generates the html help with use of the doxygen tool the makefile searches for the configuration file config mak in the pico os root directory you can put your build 
parameters in there start of file port avr build debug end of file contributing to pico os development development takes place at github 7 to submit code please follow these guidelines fork the project before making changes create a git branch in the forked project code must be under the same license 8 as the rest of pico os modified bsd license format code in pico os style indent with 2 spaces no tabs try to follow the style of existing source files before submitting a pull request consider squashing your commits submit a pull request 1 http picoos sf net 2 http stonepile fi tags picoos 3 http picoos github io picoos 4 http github com arizuu picoos micro 5 http github com arizuu 6 https sourceforge net projects msys2 7 http github com picoos picoos 8 http github com picoos license | rtos | os |
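For readability, the config.mak fragment quoted in the picoos readme (flattened by this dump) would normally be written as make-style variable assignments; the uppercase spelling is assumed from the usual convention:

```make
# config.mak in the pico]OS root directory (values from the example above)
PORT  = avr
BUILD = DEBUG
```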
Adaptive-MT-LLM | adaptive mt llm code for the paper adaptive machine translation with large language models https arxiv org abs 2301 13294 citation inproceedings moslem 2023 adaptive title adaptive machine translation with large language models author moslem yasmin and haque rejwanul and d kelleher john and way andy abstract consistency is a key requirement of high quality translation it is especially important to adhere to pre approved terminology and adapt to corrected translations in domain specific projects machine translation mt has achieved significant progress in the area of domain adaptation however real time adaptation remains challenging large scale language models llms have recently shown interesting capabilities of in context learning where they learn to replicate certain input output text generation patterns without further fine tuning by feeding an llm at inference time with a prompt that consists of a list of translation pairs it can then simulate the domain and style characteristics this work aims to investigate how we can utilize in context learning to improve real time adaptive mt our extensive experiments show promising results at translation time for example llms can adapt to a set of in domain sentence pairs and or terminology while translating a new sentence we observe that the translation quality with few shot in context learning can surpass that of strong encoder decoder mt systems especially for high resource languages moreover we investigate whether we can combine mt from strong encoder decoder models with fuzzy matches which can further improve translation quality especially for less supported languages we conduct our experiments across five diverse language pairs namely english to arabic en ar english to chinese en zh english to french en fr english to kinyarwanda en rw and english to spanish en es booktitle proceedings of the 24th annual conference of the european association for machine translation research technical year 2023 publisher european association for machine translation url https arxiv org abs 2301 13294 | ai |
|
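The Adaptive-MT-LLM abstract above describes feeding the LLM a list of translation pairs at inference time; a minimal sketch of that prompt construction (illustrative formatting only, not the paper's exact template) could be:

```python
# build a few-shot prompt from fuzzy matches retrieved from a translation memory
def build_prompt(src, fuzzy_matches, src_lang="English", tgt_lang="Spanish"):
    shots = "\n".join(f"{src_lang}: {s}\n{tgt_lang}: {t}" for s, t in fuzzy_matches)
    return f"{shots}\n{src_lang}: {src}\n{tgt_lang}:"

# fuzzy_matches would typically be (source, target) pairs ranked by
# edit-distance similarity between src and the stored source segments
```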
Information-Hiding-in-Video-Processing | project logo br p align center a href https github com joan0018 information hiding in video processing img src https icons iconarchive com icons papirus team papirus places 512 folder red video icon png alt logo width 80 height 80 a h4 align center information hiding in video processing h4 h4 align center master of information technology br supervisor assoc prof ts dr tew yiqi br contributors joan hau h4 description needed p align center need added after final report run originality a href https github com joan0018 information hiding in video processing wiki strong explore the docs strong a br a href table of content strong explore more strong a p p table of contents a id table of content details open open summary h2 style display inline block table of contents h2 summary ol li a href introduction introduction a ul li a href objective objectives a li li a href outcome expected outcome a li ul li li a href get start getting started a ul li a href prerequisites prerequisites a li li a href guide guidelines a ul li a href installation installation a li li a href build build a li li a href execute execute a li li a href testing testing a li ul li li a href a li ul li li a href usage usage a ul li a href a li li a href a li ul li li a href contact contact a li li a href acknowledgements acknowledgements a li ol details a project information a id introduction a introduction div align justify nbsp nbsp nbsp information security has gained tremendous importance over the past few decades due to the massive growth of digital communications dalal m and and juneja m 2021 video information hiding is one of the most important strategies for providing various video services such as video authentication and augmentation video information hiding process refer to inserting various forms of featured information into video streams for security purposes thampi s 2004 such as inserting the watermarking into the video to represent the originality of the video merchant shabbir et al 2003 or for video augmentation purposes such as depthmap embedding motion information embedding and extended color information as a result effective video information hiding techniques are important for a variety of new multimedia application br nbsp nbsp nbsp video information hiding technique for compressed video have recently gained a lot of attention as videos are often stored and transmitted in compressed format existing compressed video information hiding methods such as mpeg h 264 and h 265 high efficiency video coding hevc have been thoroughly investigated in compression standards bhaumik arup et al 2009 using the least significant bit lsb approach to hide the data for high resolution avi audio video interleave videos and singh et al 2010 using same method to hide each row of image pixels in multiple frames of the video paruchuri j k et al 2009 proposed an optimized framework by hiding the data into selective discrete cosine transform dct coefficient in compressed video y tew et al 2014 modified the structure of coding block and non zero transform coefficients to embed information br nbsp nbsp nbsp despite the success of the above information hiding techniques in previous video compression standards little work has been done on the proprietary tools provided in the latest video compression standard h 266 versatile video coding vvc to further improve information hiding methods h 266 vvc x liu et al 2021 uses a hybrid coding framework including prediction transformation quantization and 
entropy coding which was identified as the previous coding standard however vvc introduces many new compression tools to improve the compression efficiency for internal prediction vvc introduces some new mechanisms such as matrix weighted internal prediction mip multiple reference lines mrl and cross component linear model cclm which increases the flexibility of pattern selection and thus is potentially suitable for information hiding div a id objective a objectives ol li protect the secret data of the video by hiding the message in the video and not allowing anyone without the secret key to discover the presence of a second message in the played video li li provide proper protection on data during transmission li li provide an artistic and scientific way of communication for the video sending process li ol a id outcome a expected outcome div align justify nbsp nbsp nbsp due to the advancement of information technology the rapid advances in publishing and broadcasting technology require an alternative solution to hiding information copyrighting resources such as audio video and other digital forms can lead to large scale unauthorized copying throughout the research solve the problem of video information hiding in various information management at the same time providing an easier way for hiding information during information transmission through video div getting started a id get start h2 style display inline block getting started h2 a prerequisites a id prerequisites h2 style display inline block prerequisites h2 a macos xcode and cmake version 3 12 or higher are required br collision only applicable on macos collision br refer fraunhofer versatile video encoder vvenc https github com fraunhoferhhi vvenc for more information installation a id guide a guidelines a id installation a installation install xcode cpp https apps apple com us app xcode id497799835 mt 12 in terminal cpp cd path of the file install command line tools cpp xcode select install install homebrew and cmake cpp https code visualstudio com download install visual studio code cpp bin bash c curl fssl https raw githubusercontent com homebrew install master install sh a id build a build cpp https github com joan0018 information hiding in video processing wiki build a id execute a execute in terminal cpp cd path of the file cpp cd install bin cpp chmod x vvencapp cpp vvencapp a id testing a testing in terminal cpp vvencapp preset tooltest s 80x44 r 15 i test data rtn23 80x44p15 f15 yuv f 8 o out vvc installation demo a id installationdemo h2 style display inline block installation demo h2 a usage examples a id usage h2 style display inline block usage h2 a contact a id contact h2 style display inline block contact h2 a joan hau github https github com joan0018 acknowledgements a id acknowledgements h2 style display inline block acknowledgement h2 a a special thanks to fraunhofer versatile video encoder vvenc https github com fraunhoferhhi vvenc vvdec fraunhofer versatile video decoder v1 2 0 https www hhi fraunhofer de fileadmin departments vca mc vvc vvdec v1 2 0 v1 pdf fraunhofer versatile video decoder vvdec https github com fraunhoferhhi vvdec references a id references h2 style display inline block references h2 a div align justify a href https www hhi fraunhofer de en departments vca technologies and solutions h266 vvc html 1 a heinrich hertz institut f 2022 h 266 vvc fraunhofer heinrich hertz institute viewed 20 august 2022 https www hhi fraunhofer de en departments vca technologies and solutions h266 vvc html div | 
server |
|
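The Information-Hiding readme cites LSB embedding (bhaumik et al.; singh et al.) as a baseline technique; a toy pixel-domain illustration follows, noting that real VVC-domain hiding manipulates coding tools such as MIP/MRL mode selection, not raw pixels:

```python
# toy least-significant-bit embedding, for illustration only
import numpy as np

def embed_lsb(frame, bits):
    flat = frame.flatten()                               # flatten() copies the frame
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(frame.shape)

def extract_lsb(frame, n):
    return frame.flatten()[:n] & 1

frame = np.zeros((4, 4), dtype=np.uint8)
bits = np.array([1, 0, 1, 1], dtype=np.uint8)
assert (extract_lsb(embed_lsb(frame, bits), 4) == bits).all()
```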
sparkify-data-engineering | sparkify data engineering this project is done in partial fulfillment of the udacity data science nanodegree course on data engineering project detail a startup called sparkify wants to analyze the data they ve been collecting on songs and user activity on their new music streaming app the analytics team is particularly interested in understanding what songs users are listening to currently they don t have an easy way to query their data which resides in a directory of json logs on user activity on the app as well as a directory with json metadata on the songs in their app as a data engineer i was asked to create a postgres database with tables designed to optimize queries on song play analysis my role was to create a database schema and etl pipeline for this analysis i was able to test my database and etl pipeline by running queries given to me by the analytics team from sparkify and comparing my results with their expected results project description in this project i applied my skills in data modeling with postgres and built an etl pipeline using python in completing the project i defined fact and dimension tables for a star schema for a particular analytic focus and wrote an etl pipeline that transfers data from files in two local directories into these tables in postgres using python and sql | server |
|
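A hypothetical slice of the sparkify ETL described above, inserting one fact-table row with psycopg2; the table, columns, and sample record are illustrative, not necessarily the project's actual schema:

```python
import psycopg2

SONGPLAY_INSERT = """
    insert into songplays (start_time, user_id, level, song_id, artist_id,
                           session_id, location, user_agent)
    values (%s, %s, %s, %s, %s, %s, %s, %s)
"""

conn = psycopg2.connect("dbname=sparkifydb")
cur = conn.cursor()
# record would be parsed from one json log line during the etl run
record = ("2018-11-01 21:01:46", 8, "free", None, None, 139, "Phoenix", "Mozilla")
cur.execute(SONGPLAY_INSERT, record)
conn.commit()
```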
EDonation-SoftwareEngineering | edonation softwareengineering e donation is mobile application which i develop in software engineering project in 6th semister using firebase database and platform is flutter dart 17 https user images githubusercontent com 113015136 212621471 0ad37f33 0053 42b6 9151 0a6418cdc250 jpeg 18 https user images githubusercontent com 113015136 212621478 cd67d85f ba2c 4751 b2c8 869feb2f54d4 jpeg 19 https user images githubusercontent com 113015136 212621482 488a91aa 180c 43ac a4b2 6c6e36b63b14 jpeg 20 https user images githubusercontent com 113015136 212621484 8a15912f 01df 46a8 8adb a57c67ca3628 jpeg 21 https user images githubusercontent com 113015136 212621487 0908439f c865 4b59 ac13 96546f242415 jpeg 22 https user images githubusercontent com 113015136 212621489 30dd32e9 04f4 44d2 b7f6 861667fe6156 jpeg 23 https user images githubusercontent com 113015136 212621490 4394d0f0 9dfc 4fa1 b011 702017aa8fe1 jpeg 24 https user images githubusercontent com 113015136 212621496 5c303c57 895e 4da1 ad1d 258bfe3a6b64 jpeg 25 https user images githubusercontent com 113015136 212621504 670a5d3c 1475 4fbb 9bc9 a616691c099f jpeg 26 https user images githubusercontent com 113015136 212621508 eacbd4d9 bbd1 41bf 96cd ccc211e1c456 jpeg last https user images githubusercontent com 113015136 212621514 5e92af26 b7ed 4b52 88de 3eb046f1e616 jpeg 1 https user images githubusercontent com 113015136 212621521 0f389f73 5b04 419c 8409 7f083917fa3c jpeg 2 https user images githubusercontent com 113015136 212621524 1d62fd8c 7628 4e56 a20a 44d83ff4e8e5 jpeg 3 https user images githubusercontent com 113015136 212621527 d3de05cd 4b5e 494a 97d2 9553bfda0d35 jpeg 4 https user images githubusercontent com 113015136 212621530 e03192cf c82c 431a 90f4 d360bd48db4c jpeg 5 https user images githubusercontent com 113015136 212621536 49794cc4 6cb5 484e 9c0e 8161ef21e6d4 jpeg 6 https user images githubusercontent com 113015136 212621542 15cd7b1e d248 438c 912f 30fb24977fa1 jpeg 7 https user images githubusercontent com 113015136 212621550 fc6b5bff 860a 4769 b739 3f5990e37cf2 jpeg 8 https user images githubusercontent com 113015136 212621555 b4efce07 9d88 4bda ab6c affd199214b1 jpeg 9 https user images githubusercontent com 113015136 212621559 86b5121a 2187 464d b2f6 f2e1b5741c2c jpeg 10 https user images githubusercontent com 113015136 212621563 0926cae6 ef6d 44f4 9f07 10c75d31b59a jpeg 11 https user images githubusercontent com 113015136 212621569 66203263 70ce 4078 8372 3f4b397d59c2 jpeg 12 https user images githubusercontent com 113015136 212621571 71db3c6f d775 462a b1a6 163256f1305f jpeg 13 https user images githubusercontent com 113015136 212621575 7b4670c7 d35b 4a33 b083 7db62e240250 jpeg 14 https user images githubusercontent com 113015136 212621579 e1622386 eb2e 4555 acd1 25264ed00e66 jpeg 15 https user images githubusercontent com 113015136 212621581 07118e40 3bbc 4d9a 956a 321a7bb364a3 jpeg 16 https user images githubusercontent com 113015136 212621586 6799dfff e635 4175 b04b 16f5a60777b1 jpeg | server |
|
bbtrackpad_zephyr | blackberry trackpad zephyr rtos driver this is a zephyr driver in the form of an external module for blackberry trackpads specifically this supports the series of 20 pin trackpads which are based around an spi interface | os |
|
computer_vision | gesture emotions posture and face recognition using openpose dlib the purpose of this work is to demonstrate a few state of the art computer vision applications using the openpose dlib libraries acknowledgements this github repository work is greatly inspired by and has used code concepts presented in the following github repositories dlib https github com davisking dlib modern c toolkit for computer vision and other machine learning kerasify https github com moof2k kerasify small library for running keras models from a c application opencv https github com opencv opencv open source computer vision library openpose https github com cmu perceptual computing lab openpose a real time multi person keypoint detection and multi threading c library thanks to dr michael rinehart chief scientist at elastica for his mentorship and guidance through the project operating systems supported ubuntu 16 04 nvidia jetson tx2 https developer nvidia com embedded buy jetson tx2 requirements nvidia graphics card with at least 1 6 gb available the nvidia smi command checks the available gpu memory in ubuntu at least 2 gb of free ram memory highly recommended cudnn and a cpu with at least 8 cores install compile and run install compile and run https github com srianant computer vision blob master openpose installation md installation is a must design software design https github com srianant computer vision blob master openpose readme md demo gesture recognition https github com srianant computer vision blob master output hand gesture video gif emotions recognition https github com srianant computer vision blob master output emotions video gif pose recognition https github com srianant computer vision blob master output pose video gif dlib face recognition https github com srianant computer vision blob master output face rec gif | ai |
|
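As a hedged sketch of the kind of keypoint-based gesture logic the computer_vision repo demonstrates (not its actual implementation), one can count extended fingers from OpenPose-style hand keypoints:

```python
import numpy as np

def extended_fingers(hand_kpts):
    # hand_kpts: (21, 2) array in the OpenPose hand layout; index..pinky
    # fingertips are 8, 12, 16, 20 and their pip joints are 6, 10, 14, 18
    wrist = hand_kpts[0]
    count = 0
    for tip, pip in zip((8, 12, 16, 20), (6, 10, 14, 18)):
        # a finger counts as extended if its tip lies farther from the wrist
        # than its pip joint does
        if np.linalg.norm(hand_kpts[tip] - wrist) > np.linalg.norm(hand_kpts[pip] - wrist):
            count += 1
    return count  # 0 suggests a fist, 4 an open palm
```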
SheHeroes | h1 align center img width 464 alt screen shot 2020 11 21 at 10 28 41 pm src https user images githubusercontent com 56549294 99883790 a6f45100 2c4f 11eb 925b 7c155f3772d1 png h1 align center sheheroes h1 sheheroes women safety app label team members shagun goyal https github com shagun25 charu sachdeva https github com charu271 arshdeep singh https github com arshdeepsahni ayushi sharma https github com ayushi0014 label inspiration considering the safety and security of women in india in the recent times we wanted to give a try from our end to address the issue in a simpler and safer way looking at the recent trends and the most powerful weapon with the humanity technology we planned to use the same to give access to women in serious or dangerous situations to address the issue in a fast and easier way to ensure their security label tech stack flutter firestore google map api crimometer api label features main features point right map to track the current location of the user guiding for safe routes and crime prone areas br point right voice assistant executes features on voice commands br additional features point right sos sos call and sos messages to user provided contacts br point right shake detects the frequency of shakes and after a certain frequency sends help message with user location to provided contacts br point right camera to capture image or and record video and save it to the local storage br point right police stations locates all the nearest police stations br point right police siren rings the police siren br point right taxi one touch ola cab facility to books cabs for user br point right news to guide the users about self defense techniques br label emails email goyalsahagun25 gmail com br email charusachdeva271 gmail com br email arsh22sahni gmail com br email sharma14001 gmail com br label screenshots h3 align center authentication h3 img width 1440 alt screen shot 2020 11 21 at 10 00 58 pm src https user images githubusercontent com 56549294 99883797 aa87d800 2c4f 11eb 9bdf fe080280d319 png h3 align center voice assistant h3 img width 1440 alt screen shot 2020 11 21 at 10 00 45 pm src https user images githubusercontent com 56549294 99883798 abb90500 2c4f 11eb 881a af84d9c75f1e png h3 align center emergency dashboard h3 img width 1440 alt screen shot 2020 11 21 at 10 00 32 pm src https user images githubusercontent com 56549294 99883799 ac519b80 2c4f 11eb 8521 ebd283ce16a0 png h3 align center safe dashboard h3 img width 1440 alt screen shot 2020 11 21 at 10 00 20 pm src https user images githubusercontent com 56549294 99883801 acea3200 2c4f 11eb 96b1 aea436439ede png h3 align center switcher h3 img width 1440 alt screen shot 2020 11 21 at 10 00 06 pm src https user images githubusercontent com 56549294 99883803 ae1b5f00 2c4f 11eb 8a8e 85493ebd6aaf png | server |
|
TTK4155 | ttk4155 embedded and industrial computer systems design http www ntnu edu studies courses ttk4155 in collaboration with khuong huynh https github com khuongh and h vard olai kopperstad https github com haavardok this repository includes all software developed for the course project the project s goal was to develop an embedded system both software and hardware for a fully functional one player ping pong game using an atmel avr atmega162 https ww1 microchip com downloads en devicedoc atmel 2513 8 bit avr microntroller atmega162 datasheet pdf and an arduino uno the system included user controls with touchpads and a joystick an lcd display with menu game settings and statistics and a ping pong board including a motor encoder servo and solenoid for controlling the racket and an ir sensor for detecting the ball with pwm this work allowed me to develop my skills in building embedded systems including low level programming design of electrical circuits bus communication can bus and implementation of discrete control systems pid controller for the racket ttk4155 messy picture jpg excuse the messy picture | os |
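the racket control mentioned above boils down to a discrete pid loop; a minimal python sketch of that control law (the actual project ran in c on the atmega162, and the gains here are invented):

```python
class PID:
    """discrete-time pid: u = kp*e + ki*sum(e)*dt + kd*de/dt"""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)  # illustrative gains
u = pid.update(setpoint=100, measured=87)   # e.g. racket carriage command
```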
|
Astar | astar cover https user images githubusercontent com 40356749 135799652 175e0d24 1255 4c26 87e8 447b192fd4b2 gif div align center integration action https github com astarnetwork astar workflows integration badge svg https github com astarnetwork astar actions github tag latest by date https img shields io github v tag astarnetwork astar https github com astarnetwork astar tags substrate version https img shields io badge substrate 3 0 0 brightgreen logo parity 20substrate https substrate dev license https img shields io github license astarnetwork astar color green https github com astarnetwork astar blob production shiden license br twitter url https img shields io twitter follow astarnetwork style social https twitter com astarnetwork twitter url https img shields io twitter follow shidennetwork style social https twitter com shidennetwork youtube https img shields io youtube channel subscribers uc36jgef6gqatvsk9xlzzrvq style social https www youtube com channel uc36jgef6gqatvsk9xlzzrvq docker https img shields io docker pulls staketechnologies astar collator logo docker https hub docker com r staketechnologies astar collator discord https img shields io badge discord gray logo discord https discord gg astarnetwork telegram https img shields io badge telegram gray logo telegram https t me plasmofficial medium https img shields io badge medium gray logo medium https medium com astar network div astar network is an interoperable blockchain based on the substrate framework and the hub for dapps within the polkadot ecosystem with astar network and shiden network people can stake their tokens to a smart contract for rewarding projects that provide value to the network for contributing to this project please read our contribution guideline contributing md building from source this section assumes that the developer is running on either macos or a debian variant operating system for windows although there are ways to run it we recommend using wsl https docs microsoft com en us windows wsl install win10 or a virtual machine for stability execute the following command from your terminal to set up the development environment and build the node runtime bash install substrate development environment via the automatic script curl https getsubstrate io ssf bash s fast clone the git repository git clone recurse submodules https github com astarnetwork astar git change current working directory cd astar compile the node note you may encounter some errors if wasm32 unknown unknown is not installed or if the toolchain channel is outdated cargo build release show list of available commands target release astar collator help building with nix bash install nix package manager curl https nixos org nix install sh run from root of the project folder astar folder nix shell i nixpkgs channel nixos 21 05 third party nix shell nix run cargo build release running a collator node to set up a collator node you must have a fully synced node with the proper arguments which can be done with the following command bash start the shiden collator node with target release astar collator base path path to save blocks name node display name port 30333 rpc port 9944 telemetry url wss telemetry polkadot io submit 0 rpc cors all collator now you can obtain the node s session key by sending the following rpc payload bash send rotate keys request curl h content type application json data jsonrpc 2 0 method author rotatekeys id 1 localhost 9933 should return a long string of hex which is your session key jsonrpc 2 0 result
session key in hex id 1 after this step you should have a validator node online with a session key for your node for key management and validator rewards consult our validator guide online https docs astar network build validator guide configure node run rpc tests the rpc test suite can be run for any release to run tests go to https github com astarnetwork astar actions workflows rpctest yml click run workflow in the dropdown input the release version tag you want to run the test suite against then click the green run workflow button to start the test suite screenshot from 2022 07 07 15 28 46 https user images githubusercontent com 874046 177785570 330c6613 237d 4190 bfed 69876209daf6 png workspace dependency handling all dependencies should be listed inside the workspace s root cargo toml file this allows us to easily change the version of a crate used by the entire repo by modifying the version in a single place right now if no std is required default features false must be set in the root cargo toml file related to this issue https github com rust lang cargo pull 11409 otherwise it will have no effect causing your compilation to fail also package imports aren t properly propagated from root to sub crates so defining those should be avoided defining features in the root cargo toml is additive with the features defined in a concrete crate s cargo toml adding a dependency 1 check if the dependency is already defined in the root cargo toml 1 if yes nothing to do just take note of the enabled features 2 if no add it make sure to use default features false if the dependency is used in a no std context 2 add the new dependency workspace true to the required crate 3 in case a dependency is defined with default features false but you need it in an std context add features std to the required crate further reading official documentation https docs astar network whitepaper https github com astarnetwork plasmdocs blob master wp en pdf whitepaper jp https github com astarnetwork plasmdocs blob master wp jp pdf substrate developer hub https substrate dev docs en substrate glossary https substrate dev docs en knowledgebase getting started glossary substrate client library documentation https polkadot js org docs | plasm-network polkadot substrate evm blockchain kusama wasm web3 dapp astar-network staking | blockchain |
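the rotate-keys call above is plain json-rpc, so any http client works; a python equivalent of the curl request, assuming a local node exposing port 9933:

```python
import requests

payload = {"jsonrpc": "2.0", "method": "author_rotateKeys", "id": 1}
resp = requests.post("http://localhost:9933", json=payload, timeout=10)
print(resp.json()["result"])  # long hex string: your session key
```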
ThreatHunting | threathunting installation before installation unblocking the downloaded files may be required get childitem ps recurse unblock file option 1 install at the system level copy the project folder to windir system32 windowspowershell v1 0 modules option 2 install at the profile level copy the project folder to home documents windowspowershell modules option 3 import module at each powershell prompt to only use the scripts during the current powershell session use import module threathunting psm1 contact certanalysismitigation dla mil cert dla mil http seclist us threathunting powershell collection designed to assist in threat hunting windows systems html | os |
|
EngineeringNotationFormatter | engineeringnotationformatter ios project demoing a c based engineering notation formatter with an objective c wrapper which handles variable digits style and step capability notes martin moene has created a really nice c version see https github com martinmoene engformat cpp release notes v 1 3 fixed stepper display and updated project for xcode 11 v 1 2 after martin moene found some edge cases that produced failures redid much of the code log messages can be enabled or not in engnotation c handle input values which are not normal floats and return nan infinite etc as appropriate incorporated unit tests based on the ones written by martin moene we now share them v 1 0 initial release basically as i got it from jukka korpela historical several years ago i tripped on a great c function that formats a floating point number into engineering notation in either exponential or international system of units si notation this c code was written by jukka korpela and posted at http www cs tut fi jkorpela c eng html this function is useful when you want to display say length in units people not machines can understand thus instead of 1 67e4 meters you could show either 16 7e3 meters or using the si prefixes http physics nist gov cuu units prefixes html as 16 7 k meters as jukka points out there is no posix i e printf formatter that will do this so he wrote one in using his code i found an edge condition and provided updated code which jukka appended to his web page circa 2009 the code has a most interesting feature the ability to specify the number of significant digits for instance if you set the value to 4 every number will be rounded up or down so that it comprises exactly that many digits thus you can utilize this code to provide a floating point step function that exactly steps a number by one increment or decrement of its lowest significant digit there are convenience functions in both c and objective c to perform this step a reverse function is also provided that takes the string in either exponential or si units and returns the floating point number properly rounded for sure this feature is not one many people need but if you do need it this code is invaluable and i m sure jukka spent a good deal of time working on it the included ios project can be used to experiment with the various settings screenshot appscreenshot png we jointly offer this code here with an unattributed bsd style license see source files | os |
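the significant-digits idea is easy to see in a few lines; a small python sketch in the spirit of jukka korpela's routine (an illustration, not a port of the repo's c code):

```python
from math import floor, log10

SI = {-12: "p", -9: "n", -6: "u", -3: "m", 0: "", 3: "k", 6: "M", 9: "G", 12: "T"}

def eng(x, digits=3, si=True):
    if x == 0:
        return "0"
    exp3 = 3 * floor(log10(abs(x)) / 3)          # exponent as a multiple of 3
    mant = x / 10 ** exp3
    mant = round(mant, digits - 1 - floor(log10(abs(mant))))
    if abs(mant) >= 1000:                        # rounding crossed into the next band
        mant /= 1000
        exp3 += 3
    suffix = SI[exp3] if si and exp3 in SI else f"e{exp3}"
    return f"{mant:g} {suffix}".strip()

print(eng(16700))            # 16.7 k
print(eng(16700, si=False))  # 16.7 e3
```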
|
cartero | deprecation notice at long last cartero has been deprecated as the first multi page application build tool cartero served its purpose well for many years however now there are far more robust and better supported tools that can be used to achieve the same goals we recommend migrating any projects still using cartero to web pack https webpack js org thank you to everyone who contributed to this pioneering project cartero cartero is an asset pipeline built on npm https www npmjs org and browserify http browserify org that allows you to easily organize front end code in multi page web applications into reusable packages containing html javascript css and images build status https travis ci org rotundasoftware cartero svg branch master https travis ci org rotundasoftware cartero overview cartero eliminates much of the friction involved in the build process when applying modular design principles to front end assets in multi page web applications use directories to group together assets js css images etc for each ui component and cartero will take care of ensuring that all the appropriate assets are then loaded on the pages that require them and just those pages depending on a package is as simple as require pork and beans and since cartero is built on npm https www npmjs org the official node js package manager you can easily publish your packages and or depend on other npm packages in your own code a package might contain assets for a calendar widget a popup dialog a header or footer an entire web page cartero is a build tool it does not introduce many new concepts and the same modular organizational structure it facilitates could also be achieved by stringing together other build tools and the appropriate script link and img tags however using cartero is well a lot easier see this article https github com rotundasoftware cartero blob master comparison md for more info on how cartero compares to other tools and this tutorial http www jslifeandlove org intro to cartero to get started command line usage build all assets for a multi page application with the cartero command for example cartero views index js static assets this command will gather up the js and other assets required by each javascript entry point matching the first argument in this case all index js files in the views directory and drop the compiled assets into the output directory specified in the second argument along with information used at run time by the hook the hook to load the appropriate assets for each entry point cartero only processes each asset one time so compiling assets for many entry points at once is extremely efficient each additional entry point adds practically no overhead cartero also separates out any javascript assets that are used by all your entry points into a common bundle so that the browser cache can be leveraged to load any shared logic quickly on each page this magic is done by factor bundle https github com substack factor bundle thanks for your wizardry http cyber wizard institute james adding a w flag to the cartero command will run cartero in watch mode so that the output is updated whenever assets are changed again cartero s watch mode is extremely efficient only rebuilding what is necessary for a given change the hook at run time your application needs to be able to easily figure out where assets are located for this reason cartero provides a small 100 loc https github com rotundasoftware cartero node hook blob master index js runtime library that your server side logic can use to look up asset urls or paths based on a simple map output by cartero at build time at the time of this writing only a hook for node js https github com rotundasoftware cartero node hook is available but one can quickly be written for any server side environment for example if views page1 index js is an entry point the following call will return all the script and link tags needed to load the js and css bundles it requires javascript h gettagsforentrypoint views page1 index js function err scripttags styletags scripttags and styletags are strings of script and link tags respectively attach the tags to the express res locals so we can output them in our template to load the page s assets res locals script scripttags res locals style styletags you can also ask the cartero hook to look up the url of a specific asset for example to find the url of carnes png in that same page1 directory javascript var imageurl h getasseturl views page1 carnes png it s all in the package json cartero can gather and compile style and image assets from any module with a package json file just include a style and or image property in the package json that enumerates the assets the package requires of that type in glob notation https github com isaacs node glob glob primer for example name my module version 1 0 2 main lib my module js dependencies style scss styles image icon png images note that package json files can be in any location you can even put package json files in your views folder sound weird try it the javascript entry point that is used by any given view is after all just like a package it has its own js css and may depend on other packages or even be depended upon relax your brain does the below directory structure make sense node modules my module index js icon png package json style scss views page1 package json page1 jade server side template style css index js entry point for page 1 page2 package json style css page2 jade server side template index js entry point for page 2 usage npm install g cartero cartero entrypoints outputdir options the cartero command gathers up all assets required by the javascript files matching the entrypoints argument which is a glob string and transforms and concatenates them as appropriate and saves the output in outputdir at run time the html tags needed to load the assets required by a particular entry point can be found using the cartero hook s https github com rotundasoftware cartero node hook the cartero express middleware https github com rotundasoftware cartero express middleware can be used for an added level of convenience command line options transform t name or path of an application level transform see discussion of apptransforms option transformdir d path of an application transform directory see discussion of application transforms watch w watch mode watch for changes and update output as appropriate for dev mode postprocessor p the name of a post processor module to apply to assets e g uglifyify etc maps m enable javascript source maps in js bundles for dev mode keepseperate s keep css files separate instead of concatenating them for dev mode outputdirurl o the base url of the cartero output directory e g assets defaults to help h show this message transforms package specific local transforms the safest and most portable way to apply transforms to a package like sass css or coffeescript js is using the transforms key in a package s package json file the key should be an array of names or file paths of transform modules https github com substack module deps transforms for example name my module description example module version 1 5 0 style scss transforms sass css stream dependencies sass css stream 0 0 1 all transform modules are called on all assets including javascript files it is up to the transform module to determine whether or not it should apply itself to a file usually based on the file extension application level transforms you can apply transforms to all packages within an entire branch of the directory tree using the apptransforms and apptransformdirs options or their corresponding command line arguments packages inside a node modules folder located inside one of the supplied directories are not affected for example to transform all sass files inside the views directory to css cartero views index js static assets t sass css stream d views catalog of transforms any browserify javascript transform will work with cartero see the parcelify documentation https github com rotundasoftware parcelify catalog of transforms for a catalog of transforms that apply to other asset types built in transforms there are three built in transforms that cartero automatically applies to all packages the relative to absolute path transform style assets only cartero automatically applies a transform to your style assets that replaces relative urls with absolute urls after any local default transforms are applied this transform makes relative urls work even after css files are concatenated into bundles for example the following url reference in a third party module will work even after concatenation css div backdrop background url pattern png the asset url transform to resolve asset urls at times it is necessary to resolve the url of an asset at build time for example in order to reference an image in one package from another for this reason cartero applies a special transform to all javascript and style assets that replaces expressions of the form asset url path with the url of the asset at path after any local default transforms are applied the path is resolved to a file using the node resolve algorithm and then mapped to the url that file will have once in the cartero output directory for instance in page1 index js javascript mymodule require my module img my module attr src asset url my module icon png the same resolution algorithm can be employed at run time on the server side via the cartero hook https github com rotundasoftware cartero node hook using the getasseturl method the resolve transform to resolve node style paths to absolute paths just like the asset url transform but evaluates to the absolute path of the referenced asset for example in a nunjucks https mozilla github io nunjucks template this transform could be used to reference a template in one module from another jinja extends resolve other module template nunj block body endblock api c cartero entrypoints outputdir options entrypoints is a glob pattern or an array of glob patterns any javascript file matching the pattern s will be treated as an entry point outputdir is the path of the directory into which all of your processed assets will be dropped along with some meta data it should be a directory that is exposed to the public so assets can be loaded using script link tags e g the static directory in express applications options are as follows assettypes default style image the keys in package json files that enumerate assets that should be copied to the cartero output directory assettypestoconcatenate default style a subset of assettypes that should be concatenated into bundles note javascript files are special cased and are always both included and bundled by browserify outputdirurl default the base url of the output directory approotdir default undefined the root directory of your application you generally only need to supply this option if the directory structure of the system on which your application will be run is different than that of the system on which cartero is being run apptransforms default undefined an array of transform module names paths or functions to be applied to all packages in directories in the apptransformdirs array apptransformdirs default undefined apptransforms are applied to any packages that are within one of the directories in this array the recursive search is stopped on node modules directories packagetransform default undefined a function that transforms package json files before they are used the function should be of the signature function pkgjson pkgpath and return the parsed transformed package object this feature can be used to add default values to package json files or alter the package json of third party modules without modifying them directly sourcemaps default false enable js source maps passed through to browserify watch default false reprocess assets and bundles and meta data when things change postprocessors default an array of post processor functions or module names paths post processors should have the same signature as transform modules https github com substack module deps transforms these transforms are applied to all final bundles after all other transforms have been applied useful for minification uglification etc a cartero object is returned which is an event emitter c on done function called when all assets and meta data have been written to the destination directory c on error function err called when an error occurs c on browserifyinstancecreated function browserifyinstance called when the browserify watchify instance is created c on filewritten function path assettype isbundle watchmodeupdate called when an asset or bundle has been written to disk watchmodeupdate is true iff the write is a result of a change in watch mode c on packagecreated function package called when a new parcelify https github com rotundasoftware parcelify package is created faq q what is the best way to handle client side templates use a browserify transform like nunjucksify https github com rotundasoftware nunjucksify or node hbsfy https github com epeli node hbsfy to precompile templates and require them explicitly from your javascript files q what does cartero write to the output directory you generally don t need to know the anatomy of cartero s output directory since the cartero hook https github com rotundasoftware cartero node hook serves as a wrapper for the information and assets it contains but here is the lay of the land for the curious note the internals of the output directory are not part of the public api and may be subject to change static assets output directory 66e20e747e10ccdb653dadd2f6bf05ba01df792b entry point package directory assets json page1 bundle 14d030e0e64ea9a1fced71e9da118cb29caa6676 js page1 bundle da3d062d2f431a76824e044a5f153520dad4c697 css 880d74a4a4bec129ed8a80ca1f717fde25ce3932 entry point package directory assets json page2 bundle 182694e4a327db0056cfead31f2396287b7d4544 css page2 bundle 5066f9594b8be17fd6360e23df52ffe750206020 js 9d82ba90fa7a400360054671ea26bfc03a7338bf regular package directory robot png metadata json each subdirectory in the output directory corresponds to a particular package some of which are entry points is named using that package s unique id contains all the assets specific to that package and has the same directory structure as the original package directories that correspond to entry points also contain an assets json file which enumerates the assets used by the entry point the metadata json file maps package paths to package ids q is it safe to let browsers cache asset bundles yes the name of asset bundles generated by cartero includes a shasum of their contents when the contents of one of the files changes its name will be updated which will cause browsers to request a new copy of the content the rails asset pipeline http guides rubyonrails org asset pipeline html implements the same cache busting technique q will relative urls in css files break when cartero bundles them into one file well they would break but cartero automatically applies a transform to all your style assets that replaces relative urls with absolute urls calculated using the outputdirurl option so no they won t break contributors david beck https twitter com davegbeck james halliday https twitter com substack oleg seletsky https github com go oleg license mit | front_end |
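the asset url transform described above is conceptually just a source rewrite; a toy python illustration of the idea (cartero itself is a node js tool, and the ##asset_url marker syntax and the map below are assumptions for the example):

```python
import re

ASSET_MAP = {"my-module/icon.png": "/static/assets/9d82ba90/icon.png"}  # hypothetical

def resolve_asset_urls(source):
    # swap each ##asset_url('path') expression for its mapped output url
    pattern = r"##asset_url\(\s*['\"]([^'\"]+)['\"]\s*\)"
    return re.sub(pattern, lambda m: f"'{ASSET_MAP[m.group(1)]}'", source)

print(resolve_asset_urls("img.attr('src', ##asset_url('my-module/icon.png'));"))
```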
|
conversational-ai-chatbot | discontinuation of project this project will no longer be maintained by intel this project has been identified as having known security escapes intel has ceased development and contributions including but not limited to maintenance bug fixes new releases or updates to this project intel no longer accepts patches to this project conversational ai chat bot table of contents introduction block diagram prerequisites hardware recommended domain knowledge get started step 1 connect the devices step 2 clone the repository step 3 build the reference implementation step 4 start the reference implementation and set configuration step 5 run the reference implementation wav file ingestion recommended for first time configuration live speech ingestion step 6 stop the reference implementation step 7 remove the reference implementation references restart the alsa firmware general guidance expanding the reference implementation mandatory local repo configurations release notes introduction this guide helps you build and run the conversational ai chat bot reference implementation upon completing the steps in this guide you will be ready to integrate services to build your own complete solution block diagram docs diagram png prerequisites the following items are required to build the conversational ai chat bot you will need additional hardware and software when you are ready to build your own solution ubuntu 18 04 https releases ubuntu com 18 04 4 or ubuntu 20 04 https releases ubuntu com 20 04 8th generation and above intel core processor 32gb ram and 100gb of free space on hdd docker 19 03 5 https docs docker com install docker compose 1 25 3 https docs docker com compose git https git scm com gnu make https www gnu org software make here s how to install git curl and make on an ubuntu 18 04 system other operating systems may vary bash sudo apt get update y sudo apt get install y git curl build essential hardware 8th generation and above intel core processor 32gb ram and 100gb of hdd respeaker or equivalent usb mic array auxiliary aux port speaker or usb speaker network with speed greater than 60 mbps preferable while building the docker images recommended domain knowledge open bank project https www openbankproject com openvino toolkit https docs openvinotoolkit org latest index html to use the chatbot we need the credentials of an open bank project compatible server this reference implementation uses a sandbox server hosted here https apisandbox openbankproject com for this demo create credentials here https apisandbox openbankproject com consumer registration get started step 1 connect the devices before installing the application connect input and output devices on your linux host i e the system on which the ri is running 1 on the linux host connect a wired microphone and a wired speaker 2 open settings with the nine button menu bottom
right of screen and choose sound if a sound application is running it will appear in this dialog close any sound applications e g media players youtube identify audio devices use the aplay command to obtain the name of the connected available devices 1 open a terminal and run the command bash aplay l aplay output docs aplay output png 2 notice the name of available devices in the figure above the name of the seeedstudio respeaker device is arrayuac10 the name of the jabra sound card is usb the list of available recording and playback devices in step 4 start the reference implementation and set configuration will contain usb and arrayuac10 as options step 2 clone the repository bash git clone https github com intel conversational ai chatbot step 3 build the reference implementation you must build the provided component services and create local docker images to do so run bash setup install note this command may take a while to run depending on your internet connection and machine specifications step 4 start the reference implementation and set configuration the solution is to be deployed on a single node docker swarm it stores the keys as docker secrets bash setup start next you ll set the configuration options 1 use output from aplay to choose from the list of available recording and playback devices when prompted with choose a recording device and a playback device to test enter the device name for the recording and playback devices and then press enter recording and playback devices docs devices png note the names are case sensitive 2 to test device compatibility press the enter key to begin recording the application records for 20 seconds the recording will play automatically after the 20 seconds has elapsed if you hear playback the solution has found your device follow the prompts and enter yes continue with step 4 if you indicate you don t hear playback with no the installation process will stop connect another device and go back to the start of installation build and install if you don t have another device or the device s you ve tried do not work see restart the alsa firmware 3 set the ingestion type when prompted by entering the corresponding number 4 set the asr model when prompted by entering the corresponding number ingestion type and asr model docs ingestion type and asr model png note wave ingestion enter 1 is recommended to start and test the ri later sections cover live speech ingestion 5 for obp related configuration login docs login md using openbankproject credentials 6 the build will begin after configuration ends note the configuration uses the host system s proxy note a first time build and install may take up to 45 minutes to complete warning if you configured the solution for wav file ingestion the audio starts as soon as the message starting chatbot services appears adjust the speaker volume for the audio response of chatbot services
accounts with our bank to listen listen to output through the speaker device you chose you will hear the nlp responses listed above to read the log files to see input and output speech in the log files open two terminal windows in a terminal s list the container ids docker ps format table image t status t id view the log by running docker logs with the nlp container id docker logs f nlp container id view the log by running docker logs with the asr container id docker logs f asr container id live speech ingestion the application listens for speech input as soon as installation concludes the instructions below outline how to speak to the application read responses or listen to responses through a speaker to speak start by waking the application with the wake word a word that alerts the ri to start listening for speech the wake word is respeaker ree spee kr say respeaker pause good morning to listen listen to output through the speaker device you chose to read the log files to see input and output speech in the log files open two terminal windows in a terminal s list the container ids docker ps format table image t status t id view the log by running docker logs with the nlp container id docker logs f nlp container id view the log by running docker logs with the asr container id docker logs f asr container id step 6 stop the reference implementation the solution can be stopped using following command bash setup stop step 7 remove the reference implementation to remove built docker images use below command bash setup uninstall references restart the alsa firmware 1 if the recording test in build and install fails restart the advanced linux sound architecture alsa firmware bash pulseaudio k sudo alsa force reload 2 rebuild the application see build and install note if the build fails after restarting the alsa firmware the device is not supported general guidance after completing the steps in the getting started section it may be helpful to read through the remainder of this document for some further guidance on the conversational ai chat bot reference implementation please go through this link for an overview of the services used in this solution overview of services docs overview md security overview docs security md expanding the reference implementation the reference implementation you created is not a complete solution it provides the base components for creating a framework to run an openvino powered conversational ai chat bot this section provides information about components you might want to include or replace or change component description authz service currently we need to use linux commandline for login it doesn t represent an actual banking use case login it can be replaced or extended to give a ui web ui based interface for login mandatory local repo configurations user needs to update below repo specific git configurations immediately after cloning this repository cd current repo path git config core hookspath githooks release notes find the release notes here docs release notes md | ai |
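the device names in step 1 can also be pulled out of aplay programmatically; a hedged python helper, assuming the usual aplay -l line format card N: Name [...] (this helper is not part of the reference implementation):

```python
import re
import subprocess

out = subprocess.run(["aplay", "-l"], capture_output=True, text=True).stdout
names = sorted(set(re.findall(r"^card \d+: (\S+) \[", out, flags=re.M)))
print(names)  # e.g. ['ArrayUAC10', 'usb']; names are case sensitive
```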
|
AttackVLM | on evaluating adversarial robustness of large vision language models arxiv preprint 2023 project page https yunqing me github io attackvlm slides https yunqing me github io attackvlm arxiv https arxiv org pdf 2305 16934 pdf data repository https drive google com drive folders 118mtdlew0yefc z0egllknax aavbrfp usp sharing tl dr in this research we evaluate the adversarial robustness of recent large vision language generative models vlms under the most realistic and challenging setting with a threat model of black box access and a targeted goal our proposed method aims for the targeted response generation over large vlms such as minigpt 4 llava unidiffuser blip 2 img2prompt etc in other words we mislead and let the vlms say what you want regardless of the content of the input image query teaser image assets teaser 1 jpg teaser image assets teaser 2 jpg requirements platform linux hardware a100 pcie 40g lmdb tqdm wandb torchvision etc in our work we used dall e midjourney and stable diffusion for the target image generation and demonstration for the large scale experiments we apply stable diffusion https github com compvis stable diffusion for target image generation to install stable diffusion we init our conda https docs conda io en latest environment following latent diffusion models https github com compvis latent diffusion a suitable base conda environment named ldm can be created and activated with conda env create f environment yaml conda activate ldm note that for different victim models we will follow their official implementations and conda environments targeted image generation teaser image assets teaser 3 jpg as discussed in our paper to achieve a flexible targeted attack we leverage a pretrained text to image model to generate a targeted image given a single caption as the targeted text consequently in this way you can specify the targeted caption for attack by yourself we use stable diffusion https github com compvis stable diffusion dall e https openai com blog dall e now available without waitlist or midjourney https www midjourney com app as the text to image generators in our experiments here we use stable diffusion for demonstration thanks for open sourcing prepare the scripts git clone https github com compvis stable diffusion git cd stable diffusion then prepare the full targeted captions from ms coco https cocodataset org home or download our processed and cleaned version https drive google com file d 19tt036lbvqyonzi7pfu9qvi3jvgapkrg view usp sharing and move it to stable diffusion in experiments one can randomly sample a subset of coco captions e g 10 100 1k 10k 50k for the adversarial attack for example lets assume we have randomly sampled 10k coco captions as our targeted text c tar and stored them in the following file https drive google com file d 1e5w3yim7zjrw3 c64yqvzg na7doawaf view usp sharing generate the targeted images the targeted images h c tar can be obtained via stable diffusion by reading text prompt from the sampled coco captions with the script below and txt2img coco py https drive google com file d 1hthxlgdx97 uel3g9amvx qgngssjeiy view usp
sharing please move txt2img coco py to stable diffusion note that hyperparameters can be adjusted with your preference to generate h c tar python txt2img coco py ddim eta 0 0 n samples 10 n iter 1 scale 7 5 ddim steps 50 plms skip grid ckpt model pool sd v1 4 full ema ckpt from file name of your coco captions file txt outdir path of your targeted images where the ckpt is provided by stable diffusion v1 https github com compvis stable diffusion weights and can be downloaded here sd v1 4 full ema ckpt https huggingface co compvis stable diffusion v 1 4 original resolve main sd v1 4 full ema ckpt additional implementation details of text to image generation by stable diffusion can be found here https github com compvis stable diffusion adversarial attack black box query overview of our attackvlm strategy teaser image assets teaser 4 jpg prepare the vlm scripts there are two steps of adversarial attack for vlms 1 transfer based attacking strategy and 2 query based attacking strategy for further improvements for blip blip 2 img2prompt models please refer to the lavis tool minigpt 4 and llava will also be supported here we use unidiffuser https github com thu ml unidiffuser for an example example unidiffuser installation git clone https github com thu ml unidiffuser git cd unidiffuser cp unidff tool then create a suitable conda environment named unidiffuser following the steps here https github com thu ml unidiffuser and prepare the corresponding model weights we use uvit v1 pth as the weight of u vit transfer based attacking strategy conda activate unidiffuser python train adv img py output unidiff adv transfer batch size 250 num samples 10000 steps 100 epsilon 5 you can modify the perturb budget here cle data path path of your clean data folders tgt data path path of your tgt data folders output name of your output img folder the crafted adv images x trans will be stored in output img name of your output img folder then we perform image to text and store the generated response of x trans this can be achieved by python eval i2t dataset py batch size 10 mode i2t img path output img name of your trans img folder output name of your output txt file where the generated responses will be stored in output unidiffuser name of your output txt file txt we will use them for pseudo gradient estimation via the rgf estimator query based attacking strategy via rgf estimator python train adv img query py output unidiff trans query queried text will be stored data path output img unidiffuser trans text path output unidiffuser name of your output txt file txt batch size 1 num samples 1000 steps 3 you can modify the perturb budget here epsilon 3 sigma 8 delta zero num query 25 num sub query 25 wandb wandb project name unidiff wandb run name sigma 8 delta zero evaluation here we use wandb https wandb ai site to dynamically monitor the moving average of the clip score e g rn50 vit b 32 vit l 14 etc to evaluate the similarity between a the generated response of trans query images and b the predefined targeted text c tar an example is shown below where the dotted line denotes the moving average of the clip score of image captions after query teaser image assets example png meanwhile the image caption after query will be stored and the directory can be specified by output bibtex if you find this project useful
in your research please consider citing our paper article zhao2023evaluate title on evaluating adversarial robustness of large vision language models author zhao yunqing and pang tianyu and du chao and yang xiao and li chongxuan and cheung ngai man and lin min journal arxiv preprint arxiv 2305 16934 year 2023 meanwhile a related work that aims to embed a watermark into multi modal diffusion models https github com yunqing me watermarkdm article zhao2023recipe title a recipe for watermarking diffusion models author zhao yunqing and pang tianyu and du chao and yang xiao and cheung ngai man and lin min journal arxiv preprint arxiv 2303 10137 year 2023 acknowledgement we appreciate the wonderful base implementation of minigpt 4 https github com vision cair minigpt 4 llava https llava vl github io unidiffuser https github com thu ml unidiffuser lavis https github com salesforce lavis and clip https openai com research clip we also thank metaai https ai facebook com blog large language model llama meta ai for open sourcing their llama checkpoints we thank sisi for providing some enjoyable and visually pleasant images generated by midjourney https www midjourney com app in our research | adversarial-attack deep-generative-model generative-ai image-to-text-generation text-to-image-generation foundation-models large-language-models vision-language-model trustworthy-ai | ai |
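the rgf estimator named above is a standard random gradient-free scheme; a minimal numpy sketch of the idea (f, q and sigma are placeholders here, not the repo's actual interfaces):

```python
import numpy as np

def rgf_gradient(f, x, q=25, sigma=8 / 255):
    """estimate grad f(x) of a black-box score from q random directions."""
    fx = f(x)
    grad = np.zeros_like(x)
    for _ in range(q):
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)
        grad += (f(x + sigma * u) - fx) / sigma * u
    return grad / q

def step(x, x0, g, lr=1 / 255, eps=8 / 255):
    """one signed step, projected back into the l_inf ball around x0."""
    x = x + lr * np.sign(g)
    return np.clip(np.clip(x, x0 - eps, x0 + eps), 0.0, 1.0)
```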
ddd-vet-sample | ddd fundamentals sample app a sample meant to demonstrate domain driven design using a veterinary hospital management system this sample is used in the domain driven design fundamentals course on pluralsight https app pluralsight com library courses domain driven design fundamentals give a star if you like or are using this project to learn please give it a star thanks getting started the main application is in the frontdesksolution folder open the frontdesk sln file for the main sample the application relies on the public web site for sending emails and confirming appointments you must run the public site for this part of the demo to work open it from the vetclinicpublic web folder and the vetclinicpublic web sln solution you will need a test mail server to capture the email that would be sent to the user when they create a new appointment i recommend smtp4dev which you can get here i m still using the old version https github com rnwood smtp4dev releases after 3 0 264 master but the latest one probably works too https github com rnwood smtp4dev releases the communication between the two web apps is done using sql server service broker you must set this up using the setupsqlservicebroker sql file in the root of the repo finally if you have to adjust the connection strings for the database to use your version of localdb or sql express or whatever you will need to update these in several places frontdesk web config frontdesk shareddatabasemanagementtools shareddatabasetests app config vetclinicpublic web config messagingconfig cs in both solutions basically you should search both solutions for localdb mssqllocaldb and replace it with whatever local sql server database you re using to create the application domain database you should run the unit test in shareddatabasetests shareddatabasecontextshould buildmodel the message queue database is created by the setupsqlservicebroker script above and is called servicebrokertest | os |
|
THOR-APP | thor app this ios app is part of a senior design project in electrical amp computer engineering at boston university the app will enable a dji phantom 3 pro drone to semi autonomously navigate a wheat crop field capture images and process images in the cloud with an ndvi algorithm the final output will be a color coded ndvi map of the wheat field useful for wheat farmers the image storage and image processing will be done in the aws cloud the project is tailored to the customer s requirements graminor in norway | os |
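ndvi itself is just a band ratio, (nir - red) / (nir + red); a numpy sketch, assuming nir and red arrive as float arrays from the drone imagery (band extraction is app specific):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # eps avoids division by zero; values fall in [-1, 1], higher = greener
    return (nir - red) / (nir + red + eps)
```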
|
guideApp | trajectware timeline based navigation across computing heritage goal and scope the history of calculation information processing and computation is very rich quickly accelerating over the past century and driving the industrial revolution and digital transformation of our world it is marked by many events related to conceptual and technological breakthroughs driven by several people and companies in order to help in the analysis by researchers and explanation to citizens the nam ip computer museum developed and released an open source framework based on the structuring of events in the form of timeline fragments that can be explored using various navigation operations to focus on a specific period on aspects technological conceptual cultural contextual or on the people and organisations involved https github com namip computer museum guideapp blob main assets illustrations map png the current scope is computing heritage and its validation is still ongoing in this domain however the concept could be enlarged to any type of historical event and be used by other museums design ideas features the framework is designed in two main components a knowledge base back end to structure all the relevant information a conceptual model was developed based on different standards sem dolce spatial history ontology constructed past theory dbpedia and accessed through queries and or a specific api to extract a relevant timeline https github com namip computer museum guideapp blob main assets illustrations metamodel4 png a navigation front end currently based on reactnative that can easily be deployed as a mobile application https github com namip computer museum guideapp blob main assets illustrations protonav jpg typical timelines are the following actors at different granularity levels it can be the life of a person a group or a company possibly with a focus on common event characteristics object s also at different granularity levels it could relate to the precise history of a specific object e g the design of the lisa computer but also of a family of objects according to specific criteria e g micro computers of a specific period manufacturer using a specific cpu temporal spatial or thematic contexts respectively through specific event dates location or tag characteristics different granularity levels can also be considered e g to reflect the computer history related to micro computers in france from 1970 to 1985 this can come as an additional filter for the previous types of timelines some timeline navigation operations event pivoting between related entities or features e g from the amiga 500 computer to the commodore company or the 68k cpu or the gui timeline time zoom in out based on a defined period e g the micro computer history can be divided in early golden age and standardisation periods actor zoom in out from person level to company object zoom in out e g down to version variant level and up to product family level relations inclusion possibly iterative and with closure e g to follow causal relations to look for causes consequences related to some events combining multiple timelines together either merged or keeping them separated with an adequate visualisation temporal alignment shared events specific relations status and testing a full prototype was developed and is currently used in the museum with the following scope timelines of the micro computer period 1970 1990 as support of the micro computer meg revolution exhibition
covering machines user interface operating systems micro processor evolution trilingual french english dutch support for high quality images with zoom and video replay integrated quiz only in french a few random questions but not generated from the knowledge base resources are fully bundled internal sqlite pictures videos so it does not require wifi in principle but a react native dependency actually requires internet access even though nothing is exchanged android apk available for download here https drive google com file d 17yhnbce d gmcupei go5cmnyrl9eu7b view usp sharing ongoing and future work rest api openapi design and implementation for decoupling knowledge base back end and navigation front end event extraction from various open data sources generic dbpedia or more specialised museum inventory application deployment on the full exhibition timeline of our museum improved integration with physical artefacts qrcode more elaborated web client event learning reliability ranking documentation global design pre print available soon full documentation in french for now https docs google com document d 1msaaaxbrw0v6dpnfq5gp6qiruddsjgzykkbkelflyxs contributors aurélien masson eseo react native prototype christophe ponsard cetic nam ip design architecture thomas collignon unamur api ward desmet nam ip nl translations marie gevers unamur texts | front_end |
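a toy python model of the timeline operations described above (time zoom, pivoting on an actor); the field names are invented for illustration, not the app's sqlite schema:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    year: int
    title: str
    actors: list = field(default_factory=list)
    tags: list = field(default_factory=list)

def time_zoom(events, start, end):   # temporal context filter
    return [e for e in events if start <= e.year <= end]

def pivot(events, actor):            # follow a related entity
    return [e for e in events if actor in e.actors]

timeline = [Event(1987, "amiga 500 launch", ["commodore"], ["micro-computer"]),
            Event(1976, "apple i", ["apple"], ["micro-computer"])]
print([e.title for e in time_zoom(timeline, 1970, 1980)])  # ['apple i']
```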
|
llm | large language model transformer architecture attention is all you need https arxiv org pdf 1706 03762 pdf bloom bigscience 176b model https arxiv org pdf 2211 05100 pdf prompting chain of thought prompting elicits reasoning in large language models https arxiv org pdf 2201 11903 pdf pal program aided language models https arxiv org pdf 2211 10435 pdf react synergizing reasoning and acting in language models https arxiv org pdf 2210 03629 pdf pre training and scaling laws scaling laws for neural language models https arxiv org pdf 2001 08361 pdf model architectures and pre training objectives what language model architecture and pretraining objective work best for zero shot generalization https arxiv org pdf 2204 05832 pdf llama open and efficient foundation language models https arxiv org pdf 2302 13971 pdf scaling laws and compute optimal models language models are few shot learners https arxiv org pdf 2005 14165 pdf training compute optimal large language models https arxiv org pdf 2203 15556 pdf bloomberggpt a large language model for finance https arxiv org pdf 2303 17564 pdf instruction finetuning scaling instruction finetuned language models https arxiv org pdf 2210 11416 pdf introducing flan more generalizable language models with instruction fine tuning https ai googleblog com 2021 10 introducing flan more generalizable html parameter efficient fine tuning peft scaling down to scale up a guide to parameter efficient fine tuning https arxiv org pdf 2303 15647 pdf on the effectiveness of parameter efficient fine tuning https arxiv org pdf 2211 15583 pdf lora lora low rank adaptation of large language models https arxiv org pdf 2106 09685 pdf qlora efficient finetuning of quantized llms https arxiv org pdf 2305 14314 pdf prompt tuning the power of scale for parameter efficient prompt tuning https arxiv org pdf 2104 08691 pdf rlhf fine tuning fine tuning 20b llms with rlhf on a 24gb consumer gpu https huggingface co blog trl peft training language models to follow instructions with human feedback https arxiv org pdf 2203 02155 pdf learning to summarize from human feedback https arxiv org pdf 2009 01325 pdf proximal policy optimization algorithms https arxiv org pdf 1707 06347 pdf direct preference optimization your language model is secretly a reward model https arxiv org pdf 2305 18290 pdf constitutional ai harmlessness from ai feedback https arxiv org pdf 2212 08073 pdf model evaluation metrics holistic evaluation of language models https crfm stanford edu helm latest scenarios 1 general language understanding evaluation glue benchmark https openreview net pdf id rj4km2r5t7 superglue https super gluebenchmark com rouge a package for automatic evaluation of summaries https aclanthology org w04 1013 pdf measuring massive multitask language understanding mmlu https arxiv org pdf 2009 03300 pdf bigbench hard beyond the imitation game quantifying and extrapolating the capabilities of language models https arxiv org pdf 2206 04615 pdf application react synergizing reasoning and acting in language models https arxiv org pdf 2210 03629 pdf langchain https github com langchain ai langchain who owns the generative ai platform https a16z com 2023 01 19 who owns the generative ai platform | ai |
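among the peft papers above, lora (arxiv 2106.09685) is easy to show in a few lines: keep the pretrained weight w frozen and learn a low-rank update b @ a scaled by alpha / r; a numpy sketch with illustrative shapes:

```python
import numpy as np

d_out, d_in, r, alpha = 64, 64, 8, 16
W = np.random.randn(d_out, d_in)      # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))              # trainable, zero-init so the update starts at 0
x = np.random.randn(d_in)

h = W @ x + (alpha / r) * (B @ (A @ x))  # forward pass with the lora branch
```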
|
Blockchain | community: instructor's LinkedIn https www linkedin com in joanamengual7, Udemy profile https www udemy com user joan amengual mesquida, more courses and blockchain learning paths https blockstellart com. Complete blockchain course, from zero to expert https blockstellart com. Do you want to learn the pillars and fundamentals of blockchain technology? Does blockchain intimidate you and you don't know where to start? Are you an entrepreneur who wants to transform your business with the power of blockchain technology but doesn't know how? If the answer to any of these questions is yes, then this course is for you. Course description: blockchain is one of the most cutting-edge technology fields, one everyone wants to be part of, and blockchain is changing people's lives the way electricity did 100 years ago. The words blockchain, chain of blocks, Bitcoin and Ethereum are ever more present in all our lives; the cryptocurrency revolution is already a reality. This course is unique in that it is designed so you do not have to write a single line of code or have any programming or computing background, and yet you can still unleash the power of blockchain technology. It is the only course on all of Udemy in Spanish that covers the fundamentals and pillars of blockchain technology from zero to expert. The course covers everything from the basic principles of blockchain to the most advanced, starting with its origins and moving through the cryptographic concepts that made its construction possible: public-key cryptography, the elements involved in the flow of Bitcoin transactions, mining and the consensus protocols proof of work (PoW) and proof of stake (PoS), and the defining properties of the chain of blocks. As if that were not enough, we dive into Ethereum and the world of smart-contract and distributed-application (dApp) development, finishing with the ecosystem of the famous tokens. The course focuses on: understanding the origins of blockchain, starting with the details of the creation of Bitcoin; studying the architecture that distributes information and provides the immutability that has revolutionized the world; an introduction to public-key cryptography; the properties of the chain of blocks, such as transparency, privacy and anonymity, among others; developing the hashing algorithm that acts as the cornerstone of blockchain; understanding mining and the famous proof-of-work (PoW) and proof-of-stake (PoS) protocols; studying Ethereum and the Solidity programming language, which enables smart contracts and distributed applications (dApps), including a dApp project presented at a Spanish national congress of telematics engineers; building a blockchain with Python from scratch; building a cryptocurrency with Python from scratch; getting to know tokens in detail, especially ERC-20 and NFT tokens, and developing an ERC-20 token in Solidity; building a token payment system in Solidity for the Disney theme park; and finally reviewing the most interesting altcoin projects. Who is this course for? It is ideal for anyone, and especially recommended if you are a freelance consultant without programming skills who wants to transform companies with the power of blockchain; a visionary business owner who wants to take their company to the next level with blockchain; a blockchain beginner who wants to strengthen their portfolio with new projects; a technology enthusiast who wants hands-on experience with the pillars of blockchain; or anyone interested in improving and adapting to today's technological changes. Once you finish the course you will be a blockchain professional. We look forward to seeing you in class so you can finally enjoy the pillars of the technology that gave rise to Bitcoin and become a true professional. | blockchain solidity smart-contracts | blockchain
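The entry above mentions "building a blockchain with Python from scratch" and the hashing algorithm as blockchain's cornerstone. As a minimal illustrative sketch of that exercise (not the course's actual material; the function and field names here are hypothetical):

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the block."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def new_block(data: str, prev_hash: str) -> dict:
    """Create a block that commits to its predecessor via prev_hash."""
    return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}

# Chain two blocks: tampering with the first changes its hash and
# breaks the prev_hash link stored in the second.
genesis = new_block("genesis", prev_hash="0" * 64)
second = new_block("hello", prev_hash=hash_block(genesis))
print(hash_block(genesis), second["prev_hash"])
```

This hash-linking is the immutability property the course description refers to: any change to an earlier block invalidates every later block's stored `prev_hash`.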
pyRockets | py rockets django starter app to kickstart the development of backend apis | django python starter-kit create-django-app api-starter | server |
OCEChain | ocechain ocechain aims to create a decentralized autonomous content economy where content value can be recognized efficiently and all contributors can be incentivized directly and effectively to promote long term economic growth note requires go 1 11 https golang org dl build tendermint requires v0 26 1 rc0 other deps packages uncompress deps ocechain deps pkg tar bz2 into goroot src or gopath src run make in the top folder to start build blockchain command oce oce ocecli ocecli | blockchain |
nlp-resources | nlp resources: a curated list of awesome machine learning frameworks, libraries, courses, books and many more. star and fork our repository for the latest updates. table of contents: free books, courses, videos and lectures, papers, tutorials, sample code, datasets, conferences, libraries. free books: 1 nltk book http nltk org book 2 text mining with r (julia silge and david robinson) https www tidytextmining com. courses: 1 natural language processing (coursera) https www coursera org learn language processing 2 natural language processing (edx) https www edx org course natural language processing nlp 3 oxford cs deep nlp https github com oxford cs deepnlp 2017. videos and lectures: 1 2016 cs224d deep learning for natural language processing lecture videos https www youtube com playlist list plmimxx8char9ig0zhsytqgsdhb9weegam 2 natural language processing https www youtube com watch v miev29rvpuq list pl0ap34rkaadmjqjdskwold w2vscyruqc. papers: 1 breaking sticks and ambiguities with adaptive skip gram http arxiv org abs 1502 07257 2 distributed representations of words and phrases and their compositionality http papers nips cc paper 5021 distributed representations of words and phrases and their compositionality pdf 3 learning the dimensionality of word embeddings http arxiv org abs 1511 05392 4 emergence of language with multi agent games learning to communicate with sequences of symbols https papers nips cc paper 6810 emergence of language with multi agent games learning to communicate with sequences of symbols pdf 5 skip thought vectors http arxiv org abs 1506 06726. tutorials: 1 natural language processing http aiplaybook a16z com docs guides nlp 2 machine learning nlp text classification using scikit learn python and nltk https towardsdatascience com machine learning nlp text classification using scikit learn python and nltk c52b92a7c73a 3 multi class classification tutorial with the keras deep learning library https machinelearningmastery com multi class classification tutorial keras deep learning library 4 topic modeling with scikit learn https medium com mlreview topic modeling with scikit learn e80d33668730 5 data science with python r sentiment classification using linear methods https www codementor io jadianes data science python r sentiment classification machine learning du107otfg. sample code: 1 sentiment https github com vivekn sentiment 2 prediksi gender nama https github com vickydasta prediksi gender nama 3 topic modeling https github com piskvorky topic modeling tutorial 4 pos tagging with nltk (bahasa indonesia) https github com mrrizal pos tag indonesian 5 naive bayes document classifier (bahasa indonesia) https github com mrrizal document classifier. datasets: 1 amazon reviews https snap stanford edu data web amazon html 2 arxiv http arxiv org help bulk data s3 3 bimanlp https github com drr3d bimanlp tree old ver dataset. libraries: 1 nltk http www nltk org 2 gensim https github com rare technologies gensim 3 textblob https github com sloria textblob 4 spacy https github com explosion spacy 5 sastrawi https github com sastrawi sastrawi 6 nalapa https github com anpandu nalapa 7 polyglot https github com abosamoor polyglot. contributing: if you want to contribute to this repository, pull requests are strongly encouraged, preferably with Indonesian-language resources. frequently asked questions (FAQ): the FAQ answers common questions about this repository, from naming conventions and basic questions to advanced questions. | natural-language-processing natural-language-understanding natural-language-generation natural-language natural-language-inference python indonesia machine-learning indonesian-language data-science machine-intelligence | ai
task-tracker-app | preview of the app: screenshot 40 https user images githubusercontent com 49793696 132160653 fbe22701 25da 49e7 91b0 476daf56c6d9 png. link to the application: task tracker app https krishansingh1 github io task tracker app (after opening this link, follow step 2). step 1, getting started with task tracker app: this project was bootstrapped with create react app https github com facebook create react app. available scripts: in the project directory you can run npm start, which runs the app in development mode; open http localhost 3000 to view it in the browser. the page will reload if you make edits, and you will also see any lint errors in the console. step 2: this app is built on json server, a fake REST API. all the data comes from this API, and the app issues GET, POST, PUT and DELETE requests against it. to use the app, run the command npm run server in your terminal or cmd; it will start the json server. open a new tab at http localhost 5000 in your browser to see all of the app's data. | server
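Since json-server exposes plain REST endpoints, the GET/POST/PUT/DELETE calls described above can be exercised from any HTTP client. A sketch using Python's requests library; the `/tasks` resource name is an assumption, since the app's actual db.json schema is not shown in this entry:

```python
import requests  # pip install requests

BASE = "http://localhost:5000"
RESOURCE = f"{BASE}/tasks"  # hypothetical resource name; depends on db.json

# Create, list, update, and delete a task via json-server's REST API.
created = requests.post(RESOURCE, json={"text": "buy milk", "reminder": True}).json()
print(requests.get(RESOURCE).json())  # GET: all tasks, including the new one
requests.put(f"{RESOURCE}/{created['id']}", json={**created, "reminder": False})
requests.delete(f"{RESOURCE}/{created['id']}")
```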
ywen-mobile-ui | rc ywen mobile ui react ywenmobileui component npm version npm image npm url build status travis image travis url test coverage coveralls image coveralls url gemnasium deps gemnasium image gemnasium url node version node image node url npm download download image download url sauce test status https saucelabs com buildstatus rc ywen mobile ui https saucelabs com u rc ywen mobile ui sauce test status https saucelabs com browser matrix rc ywen mobile ui svg https saucelabs com u rc ywen mobile ui npm image http img shields io npm v rc ywen mobile ui svg style flat square npm url http npmjs org package rc ywen mobile ui travis image https img shields io travis react component ywen mobile ui svg style flat square travis url https travis ci org react component ywen mobile ui coveralls image https img shields io coveralls react component ywen mobile ui svg style flat square coveralls url https coveralls io r react component ywen mobile ui branch master gemnasium image http img shields io gemnasium react component ywen mobile ui svg style flat square gemnasium url https gemnasium com react component ywen mobile ui node image https img shields io badge node js 3e 0 10 green svg style flat square node url http nodejs org download download image https img shields io npm dm rc ywen mobile ui svg style flat square download url https npmjs org package rc ywen mobile ui browser support ie https raw github com alrra browser logos master internet explorer internet explorer 48x48 png chrome https raw github com alrra browser logos master chrome chrome 48x48 png firefox https raw github com alrra browser logos master firefox firefox 48x48 png opera https raw github com alrra browser logos master opera opera 48x48 png safari https raw github com alrra browser logos master safari safari 48x48 png ie 8 chrome 31 0 firefox 31 0 opera 30 0 safari 7 0 screenshots img src width 288 development npm install npm start example http localhost 8000 examples online example http yaliyingwy github io ywen mobile ui feature support ie8 ie8 chrome firefox safari keyboard install rc ywen mobile ui https nodei co npm rc ywen mobile ui png https npmjs org package rc ywen mobile ui usage js var ywenmobileui require rc ywen mobile ui var react require react react render ywenmobileui container api props table class table table bordered table striped thead tr th style width 100px name th th style width 50px type th th style width 50px default th th description th tr thead tbody tr td classname td td string td td td td additional css class of root dom node td tr tbody table test case http localhost 8000 tests runner html coverage coverage http localhost 8000 node modules rc server node modules node jscover lib front end jscoverage html w http localhost 8000 tests runner html coverage license rc ywen mobile ui is released under the mit license | front_end |
mECU | a miata ecu mecu overview i plan to develop an open source ecu which runs on an stm32 much like some of the existing ecu solutions available rusefi openecu i m pulling some design ideas and concepts from them but hopefully develop something of my own tailored to my own requirements a 1992 mazda mx 5 with a bp swap design goals i hope to support cop for at least 4 cylinders and with wasted spark up to 8 i don t want to develop a device that can do everything and the kitchen sink that s more suited to other more entrenched developers like haltech aem etc i hope to build a kit that someone can throw together really fast or buy preassmbled much like megasquirt is was q a q why not megasquirt i don t really agree with their design decisions and the price is a bit ridiculous for what you get and i believe that having more custom solutions for different car families is more what communities tend towards regardless on different forums you hear suggestions for this or that ems anyways so why not roll my own learn something and hopefully don t grenade my car | os |
nice-front-end-tutorial | p align center img src https cdn jsdelivr net gh nicejade nice front end tutorial assets images lotus svg alt nice front end tutorial width 100 height 100 p h1 align center nice front end tutorial h1 div align center a href https github com nicejade nice front end tutorial img src https img shields io github license nicejade nice front end tutorial svg alt license a a href https weibo com jeffjade img src https img shields io badge weibo jeffjade red svg style flat alt nice front end tutorial a a href https v2ex com t 449982 reply11 img src https img shields io badge chat on 20v2ex brightgreen svg alt chat on v2ex a a href https hacpai com article 1504767632550 img src https img shields io badge chat on 20hacpai brightgreen svg alt chat on hacpai a a href https www jeffjade com 2017 09 28 127 nice front end tutorial utm source github com img src https img shields io badge blog jeffjade com 23a696c8 svg alt blog homepage a a href https aboutme lovejade cn utm source github com img src https img shields io badge author nicejade 23a696c8 svg alt author nicejade a div div align center strong constantly updated front end resources tutorials opinions strong div https jeffjade com utm source nice front end tutorial https www jeffjade com 2017 09 28 127 nice front end tutorial the future ai ml dl https github com nicejade nice front end tutorial blob master tutorial ai ml dl tutorial md deno https github com nicejade nice front end tutorial blob master tutorial deno tutorial md flutter https github com nicejade nice front end tutorial blob master tutorial flutter tutorial md pwa https github com nicejade nice front end tutorial blob master tutorial pwa tutorial md python https github com nicejade nice front end tutorial blob master tutorial python tutorial md serverless https github com nicejade nice front end tutorial blob master tutorial serverless tutorial md webassembly https github com nicejade nice front end tutorial blob master tutorial webassembly md front end tutorial https github com nicejade nice front end tutorial blob master tutorial front end tutorial md ecmascript https github com nicejade nice front end tutorial blob master tutorial ecmascript tutorial md html 5 https github com nicejade nice front end tutorial blob master tutorial html tutorial md css 3 https github com nicejade nice front end tutorial blob master tutorial css3 tutorial md framework news https github com nicejade nice front end tutorial blob master tutorial framework news md vue https github com nicejade nice front end tutorial blob master tutorial vue tutorial md react https github com nicejade nice front end tutorial blob master tutorial react tutorial md angular https github com nicejade nice front end tutorial blob master tutorial angular tutorial md https github com nicejade nice front end tutorial blob master tutorial quickapp tutorial md https github com nicejade nice front end tutorial blob master tutorial wechat mini program tutorial md webpack https github com nicejade nice front end tutorial blob master tutorial webpack tutorial md gulp https github com nicejade nice front end tutorial blob master tutorial gulp tutorial md ui ui ui https github com nicejade nice front end tutorial blob master tutorial ui tutorial md optimization https github com nicejade nice front end tutorial blob master tutorial optimization tutorial md testing https github com nicejade nice front end tutorial blob master tutorial testing tutorial md back end tutorial nodejs https github com nicejade nice 
front end tutorial blob master tutorial nodejs tutorial md nginx https github com nicejade nice front end tutorial blob master tutorial nginx tutorial md mongodb https github com nicejade nice front end tutorial blob master tutorial mongodb tutorial md redis https github com nicejade nice front end tutorial blob master tutorial redis tutorial md front back end tutorial tools https github com nicejade nice front end tutorial blob master tutorial tools tutorial md chrome https github com nicejade nice front end tutorial blob master tutorial chrome tutorial md git github https github com nicejade nice front end tutorial blob master tutorial git tutorial md markdown https github com nicejade nice front end tutorial blob master tutorial markdown tutorial md docker https github com nicejade nice front end tutorial blob master tutorial docker tutorial md kubernetes https github com nicejade nice front end tutorial blob master tutorial kubernetes tutorial md graphql https github com nicejade nice front end tutorial blob master tutorial graphql tutorial md web security https github com nicejade nice front end tutorial blob master tutorial web security tutorial md other wizards list awesome list https github com nicejade nice front end tutorial blob master tutorial awesome list md front end channel https github com nicejade nice front end tutorial blob master tutorial front end channel md resume interviews https github com nicejade nice front end tutorial blob master tutorial resume interviews tutorial md interesting https github com nicejade nice front end tutorial blob master tutorial interesting tutorial md https nicelinks site utm source github about me https aboutme lovejade cn utm source github https www jeffjade com nicelinks utm source github https quickapp lovejade cn nicelinks utm source github https nice lovejade cn utm source github https docz lovejade cn utm source github https blog lovejade cn utm source github https weibo com jeffjade utm source github https www zhihu com people yang qiong pu https www jianshu com u 9aae3d8f4c3d segmentfault https segmentfault com u jeffjade twitter https twitter com nicejadeyang facebook https www facebook com yang gang jade web https image nicelinks site qrcode jqx jpg https image nicelinks site wqycx weixin png ver 1 img src https image nicelinks site nice links png width 300px alt img img src https camo githubusercontent com a4d1e07fce0639d0a43ebdb4074c5c1e67978934 68747470733a2f2f696d6167652e6e6963656c696e6b732e736974652f6e6963656c696e6b732d6d696e6970726f6772616d2d636f64652e6a706567 width 300px alt img mit http opensource org licenses mit copyright c 2018 present nicejade https aboutme lovejade cn utm source nice front end tutorial | front-end-development tutorial web webpack vue react mongodb redis pwa webassembly html5 css angular testing gulp git github docker quickapp miniprogram | front_end |
itba-tp-final | context: you have just been hired as the first data engineer at a small travel company. your first task is to demonstrate the value and insights that can be generated from data pipelines. the plan: once you show how valuable data can be, the company will start investing in a cloud instance provider; for now, your own computer will have to do. objective: create an Airflow DAG that acts as an ETL, extracting static data from S3 and loading it into a Postgres database. data used: development is based on the Kaggle airline delays and cancellations dataset, hosted in an S3 bucket created by the student. development: architecture (images arquitectura jpg), DAG (dag images dag 1 jpg). visualizations: a timeline showing average delays; at a glance, the average delay rises considerably in the last days of the year, December 23 in particular (dag images supersettimeline jpg). a tree map visualizing airports by average delay (dag images supersettreemap jpg). outliers: number of flights per day on the left axis versus outliers on the right axis (dag images fl count vs outliers 2009 jpg). | cloud
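A minimal sketch of what such an Airflow ETL DAG could look like, assuming boto3 and psycopg2 for the S3 and Postgres sides; the bucket, table, and task names here are hypothetical (the project's actual DAG is only shown as an image above):

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**ctx):
    # Hypothetical: download the Kaggle flights CSV from the student's S3 bucket.
    import boto3
    boto3.client("s3").download_file("my-flights-bucket", "flights.csv", "/tmp/flights.csv")

def load(**ctx):
    # Hypothetical: bulk-load the CSV into a Postgres table with COPY.
    import psycopg2
    with psycopg2.connect("dbname=flights user=airflow") as conn, conn.cursor() as cur:
        with open("/tmp/flights.csv") as f:
            cur.copy_expert("COPY delays FROM STDIN WITH CSV HEADER", f)

with DAG("s3_to_postgres_etl", start_date=datetime(2021, 1, 1),
         schedule_interval=None, catchup=False) as dag:
    PythonOperator(task_id="extract", python_callable=extract) >> \
        PythonOperator(task_id="load", python_callable=load)
```

The `extract >> load` chaining is what the DAG image above depicts: the load step only runs after the S3 download succeeds.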
DBMS_Project | dbms project database management system spring 2019 indian institute of information technology sri city iiits instructor dr prerana mukherjee tas ms r abhijit mohanta vennelakanti vyshnavi ug rohan sukumaran arvind deshraj class time location mwf 9 00 am 11 00 am 113 115 tutorial time location monday 2 30 pm 4 30 pm 113 115 deliverables for project 1 codebase readme to be hosted as github repository 2 report in ieee acm double column format in docx pdf 3 demo 2 min video showing the workflow of the system live working demo during evaluation 4 app links website links regulations in project work 1 form a team size of 6 inter section groups are allowed 2 clearly demarcate the work so that during evaluation marks can be alloted as per the work distribution 3 report and code should be free of any plagiarised content from any web sources in case you take a particular code snippet or text from any web sources for report writing so give proper credits in the reference section as well as cross reference it in the report draft 4 report should have following subsections i abstract outlining your problem statement workflow and key results in brief ii problem statement clearly out line the problem statement motivation key challenges key highlights contributions in the work how do you handle the various challenges iii methodology put er diagram uml class diagram use case diagram system design architecture diagram showing work flow set of tools used front end and back end explanation of the modules in the system iv results snapshots of the ui key tables if any results and analysis discussion robustness testing some performance numbers if you state of your system vis vis already existing systems strengths drawbacks if any v conclusion future direction 5 you can record the screen to show the workflow in your system for video note additional scoring points would be there on novelty and innovative packaging of the system refrain from using projects done in ase and same project topics you can use any software for developing your system s front end refer to syllabus to see few s w database element is must any mysql oracle mongodb dynamodb cassandra | server |
CST680-HW | cst680 hw hw for drexel university s graduate cs t680 course on cloud software engineering | cloud |
python-bitcoin-blockchain-parser | bitcoin blockchain parser build status https travis ci org alecalve python bitcoin blockchain parser svg branch master https travis ci org alecalve python bitcoin blockchain parser coverage status https coveralls io repos alecalve python bitcoin blockchain parser badge svg branch master service github https coveralls io github alecalve python bitcoin blockchain parser branch master this python 3 library provides a parser for the raw data stored by bitcoind features detects outputs types detects addresses in outputs interprets scripts supports segwit supports ordered block parsing installing using pip pip install blockchain parser using source requirements python bitcoinlib plyvel coverage for tests plyvel requires leveldb development libraries for leveldb 1 2 x on linux install libleveldb dev sudo apt get install libleveldb dev install dependencies contained in requirements txt pip install r requirements txt then just run python setup py install developing first setup a virtualenv and install dependencies virtualenv p python3 venv source venv bin activate pip install r requirements txt run the test suite by lauching tests sh examples below are two basic examples for parsing the blockchain more examples are available in the examples directory unordered blocks this blockchain parser parses raw blocks saved in bitcoin core s blk file format bitcoin core does not guarantee that these blocks are saved in order if your application does not require that blocks are parsed in order the blockchain get unordered blocks method can be used python import os from blockchain parser blockchain import blockchain instantiate the blockchain by giving the path to the directory containing the blk files created by bitcoind blockchain blockchain os path expanduser bitcoin blocks for block in blockchain get unordered blocks for tx in block transactions for no output in enumerate tx outputs print tx s outputno d type s value s tx hash no output type output value ordered blocks if maintaining block order is necessary for your application you should use the blockchain get ordered blocks method this method uses bitcoin core s leveldb index to locate ordered block data in it s blk files python import os from blockchain parser blockchain import blockchain to get the blocks ordered by height you need to provide the path of the index directory leveldb index being maintained by bitcoind it contains ldb files and is present inside the blocks directory blockchain blockchain os path expanduser bitcoin blocks for block in blockchain get ordered blocks os path expanduser bitcoin blocks index end 1000 print height d block s block height block hash blocks can be iterated in reverse by specifying a start parameter that is greater than the end parameter python for block in blockchain get ordered blocks os path expanduser bitcoin blocks index start 510000 end 0 print height d block s block height block hash building the leveldb index can take a while which can make iterative development and debugging challenging for this reason blockchain get ordered blocks supports caching the leveldb index database using pickle https docs python org 3 6 library pickle html to use a cache simply pass cache filename to the ordered blocks method if the cached file does not exist it will be created for faster parsing the next time the method is run if the cached file already exists it will be used instead of re parsing the leveldb database python for block in blockchain get ordered blocks os path expanduser bitcoin 
blocks index cache index cache pickle print height d block s block height block hash note you must manually programmatically delete the cache file in order to rebuild the cache don t forget to do this each time you would like to re parse the blockchain with a higher block height than the first time you saved the cache file as the new blocks will not be included in the cache | blockchain |
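The code snippets in this entry lost their line structure in the flattening above. Reconstructed, the unordered-block example reads roughly as follows (the path is the user's Bitcoin Core data directory, as in the original README):

```python
import os
from blockchain_parser.blockchain import Blockchain

# Point the parser at the blk*.dat files written by bitcoind.
blockchain = Blockchain(os.path.expanduser("~/.bitcoin/blocks"))
for block in blockchain.get_unordered_blocks():
    for tx in block.transactions:
        for no, output in enumerate(tx.outputs):
            # Each output exposes its detected type, value, and parent tx hash.
            print("tx=%s outputno=%d type=%s value=%s"
                  % (tx.hash, no, output.type, output.value))
```

The ordered-block variants work the same way but take the LevelDB index directory as an extra argument, as described in the entry.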
FreeRTOSCpp | freertosmingw basic freertos port for x86 mingw winmm library building toolchain used to build the project was mingw project is using cmake as a build mechanism proposed ide is clion | os |
OIBSIP | oibsip oasis infobyte internship | mongodb nodejs postman react-router reacthooks reactjs | server |
Win10As | win 10 assistant: make your windows 10 computer iot friendly with mqtt. published mqtt sensors: cpu processor load (cpuprosessortime) maintopic cpuprosessortime, returns string 0-100; free memory in mb (freememory) maintopic freememory, returns string of memory in mb; volume muted maintopic mute, 1 = muted, 0 = not muted; master volume maintopic volume, returns string of current volume setting 0-100; camera: screenshot of primary monitor, if enabled it publishes to a specified folder as a jpg file or publishes on the maintopic mqttcamera topic; battery sensors, if enabled published to maintopic power with subtopics batterychargestatus, batteryfulllifetime, batterylifepercent, batteryliferemaining, powerlinestatus; in use maintopic binary sensor inuse, message on if the api getlastinputinfo is less than 30 seconds, else off; disk sensors maintopic drive, a subtopic for each drive letter with the following subtopics: totalsize, percentfree, availablefreespace, example kjetilsv drive c totalsize. mqtt listeners (the predefined ones are optional for safety reasons): mute/unmute maintopic mute set, 1 = muted, 0 = not muted, published to maintopic mute after setting; volume maintopic volume set, volume 0-100, published to maintopic volume after setting; monitor maintopic monitor set, 0/1, published to maintopic monitor after setting; suspend pc maintopic suspend; shutdown maintopic shutdown; reboot maintopic reboot; hibernate maintopic hibrernate (topic string as spelled in the app); toast message maintopic toast, displays a message on the windows computer, message example: home assistant kom ned kjetil c temp iselin jpg (the image must be visible from the windows computer); tts maintopic tts, the mqtt message is sent to the synthesizer, currently the volume is set to 100; app running sensor maintopic app running, message appname, published back to maintopic app running appname with 0 = not running (not found in processes), 1 = found, tested with common applications like spotify, firefox, skype; example: mosquitto pub t kjetilsv app running m spotify; if spotify is running, kjetilsv app running spotify returns message 1; cmd commandstring chrome windowstyle 1 execparameters http vg no monitorid 1; custom commands maintopic customcommandname, message is currently not used, will be implemented in later versions; one example of a custom command is lockcomputer, thanks to fatbasta it's now added in the hass example file | mqtt iot win windows-10 | server
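The topics above can be exercised from any MQTT client, not just mosquitto_pub. A sketch with the paho-mqtt Python client (1.x API), assuming a broker on localhost and "kjetilsv" as the configured main topic, as in the entry's own example; the exact topic separators are an assumption, since the flattened text hides whether the segments are joined with "/", "-", or "_":

```python
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

client = mqtt.Client()
client.connect("localhost", 1883)

# Ask the Windows machine whether Spotify is running; per the entry, the
# reply comes back on <maintopic>/app-running/<appname> as "0" or "1".
client.subscribe("kjetilsv/app-running/#")          # topic layout assumed
client.on_message = lambda c, u, msg: print(msg.topic, msg.payload.decode())
client.publish("kjetilsv/app-running", "spotify")
client.loop_forever()
```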
FederatedGPT-Shepherd | h1 align center img src assets shepherd llamas png width 75 br shepherd br h1 h4 align center em span style font size 18pt a platform supporting federated instruction tuning span em h4 p align center a href overview overview a a href https arxiv org pdf 2305 05644 pdf paper a a href installation installation a a href data preparation data preparation a a href federated finetuning federated finetuning a a href inference inference a a href citation citation a p code license apache 2 0 https img shields io badge code 20license 20apache 202 0 blue https github com jayzhang42 federatedgpt shepherd blob main license usage and license notices the data code and checkpoints are intended and licensed for research use only overview recent advancements in fine tuning large language models llms have leveraged instructions created by humans or apis such as chatgpt and gpt 4 to revolutionize nlp research and industry applications however the collection of instructions from a wide array of individuals presents challenges in cost and privacy for instance collecting vast amounts of daily conversations from users is a valuable means of providing guidance for llms enabling them to generate authentic and genuine responses however privacy concerns may hinder users from sharing their conversations resulting in a limited quantity of instructions that are not fully representative of the target population federated learning a well studied and well developed learning approach provides a solution to addresses these challenges and paves the way for designing personalized llms tailored to individual users this repository shepherd offers a foundational framework for exploring federated finetuning of llms using heterogeneous instructions across diverse categories the framework is designed for ease of use adaptability and scalability to accommodate large datasets additionally it facilitates seamless integration of novel algorithms and configurations making it a convenient tool for researchers and practitioners in both the fl and the nlp community paper we are pleased to share our fedit https arxiv org pdf 2305 05644 pdf paper towards building the federated gpt federated instruction tuning we kindly invite you to read the paper for an in depth understanding of federated instruction tuning for llms and further insights into our repository p align center img src assets fedit png width 100 p installation the code requires some dependencies python 3 8 as specified in requirements txt please follow the relevant libraries to install or run bash pip install r requirements txt if bitsandbytes doesn t work install it from source https github com timdettmers bitsandbytes blob main compile from source md windows users can follow these instructions https github com tloen alpaca lora issues 17 data preparation prior to commencing the federated fine tuning make sure to create a data file for each individual client bash num client 10 the number of clients diff quantity 0 whether clients have different amounts of data python client data allocation py num client diff quantity running this command will save the data files in the folder data str num client the data file new databricks dolly 15k json for generating each client s local dataset is the first version of databricks dolly 15k which is a corpus of more than 15 000 records with 8 categeries generated by thousands of databricks lab https www databricks com learn labs employees please refer to their official repository dolly https github com databrickslabs dolly 
for the latest version of data categories distribution and heteogeneity the first version of databricks dolly 15k contains 8 categories with the distribution of each category shown in the following subfigure provided on the right p align center img src assets twodonuts png width 150 p without federated learning the model can be trained on only the particular local instruction categories of each user left due to privacy or cost issue by implementing our federated instruction tuning fedit https arxiv org pdf 2305 05644 pdf framework with this repo shepherd the llm can be trained on the local instruction datasets of all clients with greater diversity and quantity of data points that cover the entire range of the subject matter right the following figure presents an illustrative depiction of the category distributions among each client serving to exemplify the heterogeneity nature of clients instructions p align center img src assets hetero png width 150 p use your own data you can simply modify client data allocation py to load your own dataset for federated training federated finetuning to fully leverage the computational resources of each participating client our lightweight federated learning framework employs the well established parameter efficient method lora https github com microsoft lora for conducting local training the local training process is built upon the implementations of hugging face s peft https github com huggingface peft tim dettmers bitsandbytes https github com timdettmers bitsandbytes and the alpaca lora https github com tloen alpaca lora enabling the training to be completed within hours on a single nvidia titan rtx example usage bash python main py global model chavinlo alpaca native data path data output dir lora shepherd 7b num communication rounds 10 num clients 10 train on inputs group by length within the main py file the generalclient is a python class serves as a representation of the local client and encompasses five distinct sections that facilitate local training prepare local dataset build local trainer initiate local training train and terminate local training each of these sections is easy to comprehend and can be easily customized by adding your own functions to meet specific requirements we can also tweak the hyperparameters bash python main py global model chavinlo alpaca native data path data output dir lora shepherd 7b num communication rounds 10 num clients 10 client selection frac 0 1 local num epochs 2 local batch size 64 local micro batch size 32 local learning rate 0 0003 lora r 8 lora target modules q proj k proj v proj o proj train on inputs group by length our framework supports numerous popular llms such as llama https github com facebookresearch llama alpaca https github com tatsu lab stanford alpaca vicuna https vicuna lmsys org baize https github com project baize baize chatbot and others we welcome any pull requests that adapt our code to support additional models or datasets inference the globalmodel generate py file streamlines the inference process for the global model by utilizing a gradio interface this file loads the foundation model from the hugging face model hub and obtains the lora weights and configurations from the output directory bash python globalmodel generate py load 8bit base model chavinlo alpaca native lora weights path output path to lora weights lora config path output path to lora config citation please cite our fedit paper and this repo if you find our repository helpful for your research thank you misc 
zhang2023building title towards building the federated gpt federated instruction tuning author jianyi zhang and saeed vahidian and martin kuo and chunyuan li and ruiyi zhang and guoyin wang and yiran chen year 2023 eprint 2305 05644 archiveprefix arxiv primaryclass cs cl misc shepherdgithub author jianyi zhang and martin kuo and ruiyi zhang and guoyin wang and saeed vahidian and yiran chen title shepherd a lightweight github platform supporting federated instruction tuning year 2023 publisher github journal github repository howpublished url https github com jayzhang42 federatedgpt shepherd note we are constantly working to enhance this framework by resolving bugs and extending its functionality and simulation capabilities we welcome pull requests that adapt our code to support additional research goals such as benchmarking of models and datasets algorithmic enhancements and hardware simulation | federated-learning large-language-model | ai |
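Not spelled out in the README is the server-side step that makes this "federated": conceptually, FedIT aggregates the clients' LoRA updates each communication round. A toy sketch of FedAvg-style aggregation over plain PyTorch state dicts; the repo's real aggregation lives in main.py and may differ in detail:

```python
import torch

def fedavg(client_state_dicts, weights=None):
    """Weighted average of client LoRA state dicts (FedAvg)."""
    n = len(client_state_dicts)
    weights = weights or [1.0 / n] * n
    avg = {}
    for key in client_state_dicts[0]:
        avg[key] = sum(w * sd[key].float()
                       for w, sd in zip(weights, client_state_dicts))
    return avg

# Toy example: two "clients" contributing one LoRA tensor each.
c1 = {"lora_A": torch.ones(2, 2)}
c2 = {"lora_A": torch.zeros(2, 2)}
print(fedavg([c1, c2])["lora_A"])  # -> 0.5 everywhere
```

Averaging only the small LoRA matrices, rather than full model weights, is what keeps each communication round lightweight.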
GPTLens | gptlens this is the repo for the code and datasets used in the paper large language model powered smart contract vulnerability detection new perspectives https arxiv org pdf 2310 01152 pdf accepted by the ieee trust privacy and security tps conference 2023 if you find this repository useful please give us a star thank you if you wish to run your own dataset please switch to the release branch sh git checkout release getting start step 0 set up your gpt 4 api get gpt 4 api from https platform openai com account api keys replace openai api enter your openai api key in src model py line 4 with your api key step 1 run auditor sh python run auditor py backend gpt 4 temperature 0 7 topk 3 num auditor 1 parameter description backend the version of gpt temperature the hyper parameter that controls the randomness of generation topk identify k vulnerabilities per each auditor num auditor the total number of independent auditors step 2 run critic sh python run critic py backend gpt 4 temperature 0 auditor dir auditor gpt 4 0 7 top3 1 num critic 1 parameter description backend the version of gpt temperature the hyper parameter that controls the randomness of generation auditor dir the directory of logs outputted by the auditor num critic the total number of independent critics step 3 run ranker sh python run ranker py auditor dir auditor gpt 4 0 7 top3 1 critic dir critic gpt 4 0 1 strategy default parameter description auditor dir the directory of logs outputted by the auditor critic dir the directory of logs outputted by the critic strategy the strategy for generating the final score note we observe that the output from auditors can drift largely between different runs due to randomness we upload a set of results that we obtained on september 28 using gpt 4 with 1 auditor 1 critic and 3 outputs per each contract see src logs the composite score less than 5 can be deemed as not being a vulnerability citation misc hu2023large title large language model powered smart contract vulnerability detection new perspectives author sihao hu and tiansheng huang and fatih lhan and selim furkan tekin and ling liu year 2023 eprint 2310 01152 archiveprefix arxiv primaryclass cs cr q a if you have any questions you can either open an issue or contact me sihaohu gatech edu and i will reply as soon as i see the issue or email | gpt-4 large-language-models smart-contracts vulnerability-detection gpt gpt-35-turbo blockchain ethereum | ai |
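The ranker's "strategy for generating the final score" is not spelled out in this README beyond the note that composite scores below 5 can be discarded. As a purely illustrative sketch (the weights and field names below are hypothetical, not GPTLens's actual formula), a critic that scores each auditor finding on correctness and severity might be combined like this:

```python
def composite_score(finding, w_correctness=0.7, w_severity=0.3):
    """Blend critic sub-scores (0-10 each) into one ranking score."""
    return (w_correctness * finding["correctness"]
            + w_severity * finding["severity"])

findings = [
    {"name": "reentrancy", "correctness": 8, "severity": 9},
    {"name": "unchecked call", "correctness": 3, "severity": 6},
]
ranked = sorted(findings, key=composite_score, reverse=True)
# Per the README, composite scores under 5 can be treated as non-vulnerabilities.
print([(f["name"], round(composite_score(f), 1)) for f in ranked])
```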
dtask | dtask dtask is a scheduler for statically dependent tasks many embedded sensor applications can be structured as a set of dependent computations starting from an interrupt intermediate computations may be shared such as filtered data this is a type of data flow programming dtask is designed to make this easy to express and yet require minimal resources dtask could replace an rtos in some situations and is much less complicated dtask uses annotations in the source code to calculate task dependencies during compilation and then sorts them topologically at runtime the task are processed in waves so that results can be passed atomically without locking tasks can be enabled and disabled at runtime which will automatically enable disable other related tasks dtask allows definition of many small modular tasks that will be easier to test and yet can be inlined for performance this is not an official google product usage declare a task like this dtask task name result type bool z is valid result type z int x dref an integer int y dref another integer compute z from x and y set z is valid if the computation yields a new valid result if z is valid task name z return z is valid code can be added to run when the task is enabled or disabled dtask enable task name run when task name is enabled dtask disable task name run when task name is disabled tasks are declared in groups a task group is declared with dtask group group name the following tasks are part of this group this will create a struct type that contains the state named group name state which contains a field for every task api outside of a task or from another task group see below dtask enable state task1 task2 enable task1 and task2 and dependencies dtask disable state task1 task2 disable task1 and task2 and dependents dtask clear state task1 task2 only enable task1 and task2 if enabled tasks depend on them dtask switch state task1 task2 enable only task1 and task2 and depencencies disable all others dtask select state commit the changes from the above calls and run dtask enable dtask disable code for tasks that are enabled disabled dtask run state initial task run tasks running the task named initial unconditionally and within a task to get a value x dref task name to set the result of a task dref task name x task name should be the name of the current task only see main c and tasks c for an example run make test to build and run the example delays delays can be used to look at the history of a value useful to implement fir filters for example use delay type length as a task type to implement a delay then use the following api delay read delay type len i returns a pointer to the value i waves ago delay write delay type len value writes the value for the current wave delay fill delay type len value fills the delay with a value the delay must be declared with declare delay type length see factor tasks c for an example nesting task groups can be nested by calling dtask run from a task in another task group see toplevel tasks c for an example msp430 lauchpad pump demo click below for a video of the pump controller demo in operation pump demo video https github com google dtask blob master doc images pump youtube thumbnail jpg https youtu be qkpn5xx2aha | real-time rtos embedded frp dataflow | os |
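dtask itself is C, and the DTASK/DREF conventions above are its compile-time macros. The scheduling idea, though, is language-neutral: sort the statically known dependencies topologically, transitively enable what an enabled task needs, then run each wave in dependency order. A conceptual Python sketch (illustration only, not dtask's implementation):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Task -> set of tasks it depends on (DREFs, in dtask terms).
deps = {"x": set(), "y": set(), "z": {"x", "y"}, "out": {"z"}}
enabled = {"out"}

def closure(task):
    """Enabling a task transitively enables its dependencies."""
    return {task}.union(*(closure(d) for d in deps[task])) if deps[task] else {task}

to_run = set().union(*(closure(t) for t in enabled))
# One "wave": run tasks in topological order so each sees fresh inputs.
for task in TopologicalSorter({t: deps[t] & to_run for t in to_run}).static_order():
    print("run", task)
```

Running a whole wave before starting the next is what lets results pass between tasks atomically without locking, as the entry describes.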
ML_course | epfl machine learning course cs 433 machine learning course fall 2023 the course website and syllabus is available here https www epfl ch labs mlo machine learning cs 433 this repository contains all lecture notes labs and projects resources code templates and solutions videos will be available after each lecture on the mediaspace channel https mediaspace epfl ch here for 2022 lectures https tube switch ch switchcast epfl ch series 60d0234f e9b0 42c9 b727 35e518fe8833 and for 2022 exercises https www youtube com playlist list pl4o4bxki facbxnceagfovutetfyhsx6r and here for 2021 https www youtube com playlist list pl4o4bxki fad4nb7yyr5f8witmpxjpeaa contact us if you have any questions via the discussion forum https edstem org eu courses 797 discussion for epfl students or email to the assistants or teachers please create issues and pull requests here using the menu above | ai |
azad | p align center img src doc images azad big png height 100px p azad the game of life intro base structure installation | server |
AuDi-GIT-turtlebot3_autorace | audi git turtlebot3 autorace: program development for driving an autonomous mobile robot. br i participated in the 2018 r-biz challenge turtlebot3 autorace with kim seong-ho, yoo kyung-hyun and shim gyu-ho. br we used the ros (robot operating system) framework and opencv in python3 as software. br we also used robotis's turtlebot3, nvidia's jetson tx2 board and arduino's opencr board as hardware. br we referenced the book called ros robot programming. br if you're interested in our project, please see the following video. br img src img reference ros book jpg width 200 height 300. 2018 r-biz challenge turtlebot3 autorace: click below to watch the video https img youtube com vi jxrdtc2mzk8 0 jpg https www youtube com watch v jxrdtc2mzk8. mission01 img src img 1 jpg width 400 height 200, mission02 img src img 2 jpg width 400 height 200, mission03 img src img 3 jpg width 400 height 200, mission04 img src img 4 jpg width 400 height 200, mission05 img src img 5 jpg width 400 height 200, mission06 img src img 6 jpg width 400 height 200. modeling: img src img modeling cameramount png, img src img modeling ultrasonicmount bmp, img src img modeling field bmp, img src img modeling blockingbar bmp, img src img modeling trafficlight bmp, img src img modeling trafficsign bmp (each width 200 height 200). field: img src img reference field01 jpg, img src img reference field02 jpg, img src img reference realblockingbar bmp, img src img reference realtrafficsign bmp (each width 200 height 200). image processing: img src img reference nodedesign bmp width 800 height 400; br img src img reference lane01 jpg, img src img reference lane02 jpg, br img src img reference lane03 jpg, img src img reference lane04 png (each width 400 height 200). image processing vs cnn (convolutional neural network) https user images githubusercontent com 22444743 148670749 08df750c 5671 4e2d 8470 82dab4deb788 mp4. slam (simultaneous localization and mapping) https user images githubusercontent com 22444743 148670537 b4beebef 8242 4c46 b16a d907c70e6111 mp4 | front_end
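The lane-detection images above come from classic OpenCV processing. As a rough illustration of that style of pipeline (not the team's actual code, which isn't included in this summary; the color thresholds and file path are assumptions), isolating yellow and white lane markings might look like:

```python
import cv2
import numpy as np

def lane_mask(bgr):
    """Isolate white and yellow lane markings in HSV space."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))    # thresholds are guesses
    yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
    return cv2.bitwise_or(white, yellow)

frame = cv2.imread("img/reference/lane01.jpg")  # hypothetical path from the repo
if frame is not None:
    cv2.imwrite("lane_mask.png", lane_mask(frame))
```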
JC_Assignment | jumpcloud software engineer programming assignment qwex23 https circleci com gh qwex23 jc assignment svg style svg https app circleci com pipelines github qwex23 jc assignment the stats module has the ability to track the average time for a given action it works by calculating a running average of the time of each action as they are added specification https github com qwex23 jc assignment blob main software 20engineer 20 20backend 20assignment pdf usage stats hello world package main import github com qwex23 jc assignment stats func main st stats newstats call1 action jump time 100 call2 action run time 75 call3 action jump time 200 st addaction call1 st addaction call2 st addaction call3 statsjson err st getstats if err nil println bad news we had an error print statsjson will output similar to action jump avg 150 action run avg 75 downloading and running the code you should have the following installed go https golang org dl developed and tested on go version go1 16 5 windows amd64 but there is not reason to believe at time of writing that go 1 16 x for any os would be incompatible however they are not tested go exe will need to be updated into the path git https git scm com downloads open a terminal or cmd in the desired directory and run the following commands git clone https github com qwex23 jc assignment git cd jc assignment stats testing the module go test v to use the module standalone cd main open main go in the text editor and add your custom code to use the module go run will run the main program compile the code go build will compile the main program in to main exe using the module go get github com qwex23 jc assignment stats add the following import to your code import github com qwex23 jc assignment stats design map vs slice this implementation uses a map with key of the given action and value of a struct that contains the total number of actions and the running total of time units from this we can calculate the running average by dividing the two values map was chosen for its low memory footprint and fast lookup an alternative implementation could use a slice this would hold each action input as a struct in the array in memory upon the getstats call the program could then calculate the average of every action by making one or more passes through the slice this would provide the most extensibility the cost of this would be both memory usage and lookup time for averaging a map with key of action and value of slice where the slice is each action input could be used to reduce the lookup time but have little effect on the memory usage mutex for unsafe operations a mutex was chosen for this implementation to ensure thread safety this allowed for simple implementation and guaranteed thread safety for the shared memory operations an alternative implementation could be to use a database engine or to investigate more into thread safe data structures in golang assumptions no other statistics would be needed from the program no persistence of input is necessary json is case insensitive to go s standard the values passed for time will be relatively small in number of values or size of values because the program calculates the total of all values per action there is a possibility of overflowing uint64 18446744073709551615 assuming the use case specified in the document uint64 would have adequate headroom for the total of all specified values the program was designed under that assumption a mitigation for this would be to use the cumulative moving average function https 
en wikipedia org wiki moving average cma uses the last value and the total number of values to calculate the new average implementing this would allow for max uint64 number of times with value that is valid uint64 the time value cannot be negative the average returned will be an integer approximation based on go s rounding rules the order of the action averages in the return of getstats is unimportant adding a sort before we marshal the final slice would fix this at the cost of higher runtime complexity performance the benchmark tests can be run from the stats directory using the command note on windows this only ran successfully in powershell and not cmd go test bench benchmarking test provided key insight into the performance of the module the results showed that there was an increased compute time based on the number of unique actions in the core map the design decision was made to use a hashmap with key of action string and value of the average values the thought at the time was that there would be quick inserts in the addactions call and this would be advantageous for a hypothetical real world use case the speed that the map could have provided would make up for the lengthier getstats because hypothetically there would be more adds than gets the results of the test back up this hypothesis go test bench goos windows goarch amd64 pkg github com qwex23 jc assignment stats cpu amd ryzen 9 3900x 12 core processor benchmarkgetstatssmall 24 733446 1489 ns op benchmarkgetstatsmega 24 39 30190587 ns op benchmarkgetstatssmall direct 24 3347928 357 2 ns op benchmarkgetstatsmega direct 24 100 13974519 ns op benchmarkaddactionmega 24 799914 1632 ns op benchmarkaddactionsmall 24 799999 1542 ns op benchmarkaddactionmega direct 24 2580138 448 3 ns op benchmarkaddactionsmall direct 24 2927943 409 3 ns op benchmarkgetstats direct highvolume 24 12369030 97 23 ns op benchmarkgetstats direct lowvolume 24 12302934 96 90 ns op benchmarkaddactionmega direct highvolume 24 46109864 25 89 ns op benchmarkaddactionmega direct lowvolume 24 46153490 26 07 ns op pass ok github com qwex23 jc assignment stats 18 094s the benchmarks show that the performance of getstats is directly dependant on the numer of unique actions see getstatssmall vs getstatsmega we can also see that the amount of samples of the same action name has no effect on the performance of either addaction or getstats see tests with highvolume vs lowvolume during development of the benchmarks it is found that map lookup in go for string keys is at best o n log n https stackoverflow com questions 29677670 what is the big o performance of maps in golang because of the preprocessing necessary this could be the cause of the slightly larger processing time for the larger datasets | golang go jumpcloud | cloud |
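The overflow mitigation mentioned above, the cumulative moving average, avoids keeping an unbounded running total: the update rule is CMA_{n+1} = CMA_n + (x_{n+1} - CMA_n) / (n + 1), so only the current mean and count are stored. A sketch of the math in Python (the module itself is Go; this only illustrates the recurrence):

```python
class RunningAverage:
    """Cumulative moving average: stores only the current mean and count,
    so the sum of all samples is never materialized (no overflow risk)."""
    def __init__(self):
        self.mean = 0.0
        self.n = 0

    def add(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n

avg = RunningAverage()
for t in (100, 200):   # the README's "jump" action times
    avg.add(t)
print(avg.mean)        # -> 150.0, matching the example output above
```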
mistral | div align center img src https github com stanford crfm mistral raw main docs mistral components png height 300px div mistral mistral a strong and cool northwesterly wind that builds as it moves bringing good health and clear skies license https img shields io badge license apache 202 0 green svg https opensource org licenses apache 2 0 pre commit https img shields io badge pre commit enabled green logo pre commit logocolor white https github com pre commit pre commit a framework for transparent and accessible large scale language model training built with hugging face https huggingface co includes tools and helpful scripts for incorporating new pre training datasets various schemes for single node and distributed training including on cloud providers like gcp and importantly scripts for evaluation visit our read the docs https nlp stanford edu mistral for the full documentation a propulsion endeavor quickstart installation mistral has been tested with python 3 8 12 pytorch 1 11 0 compiled with cuda 11 3 cuda 11 3 nccl 2 10 transformers 4 17 0 and deepspeed 0 6 0 the environment can be easily built with the following commands bash conda create n mistral python 3 8 12 pytorch 1 11 0 torchdata cudatoolkit 11 3 c pytorch conda activate mistral pip install r setup pip requirements txt a yaml export of a tested environment is provided at environments environment gpu yaml environments and non python dependencies can be managed with conda and python dependencies can be managed with pip note conda was used for the pytorch install to get the version compiled with cuda 11 3 training gpt 2 micro prerequisites first make sure to update conf mistral micro yaml with the directories you want to store the hugging face cache and model runs artifacts caching artifacts cache dir path to artifacts run dir path to runs next make sure that path to mistral is on your pythonpath single node single gpu training for single node single gpu training run bash conda activate mistral cd mistral cuda visible devices 0 python train py config conf mistral micro yaml nnodes 1 nproc per node 1 training arguments fp16 true training arguments per device train batch size 2 run id tutorial gpt2 micro multi node multi gpu training with deepspeed modify job hostfile in the following way hostname of first machine slots number of gpus hostname of second machine slots number of gpus hostname of the nth machine slots number of gpus below is an example hostfile where we train on machine1 and machine2 with 8 gpus each machine1 slots 8 machine2 slots 8 to start distributed training run bash conda activate mistral cd mistral deepspeed num gpus 8 num nodes 2 master addr machine1 train py config conf tutorial gpt2 micro yaml nnodes 2 nproc per node 8 training arguments fp16 true training arguments per device train batch size 4 training arguments deepspeed conf deepspeed z2 small conf json run id tutorial gpt2 micro multi node note you may need to adjust your batch size depending on the capacity of your gpus if you are interested in training a model on google cloud check out our google cloud kubernetes tutorial https nlp stanford edu mistral tutorials gcp plus kubernetes html using the model model checkpoints will be stored in the directory specified by the artifacts run dir an example checkpoint might be in path to runs tutorial gpt2 micro checkpoint 1000 mistral stores model checkpoints in the hugging face format so models can be loaded and used in the same manner as if one had trained the model with hugging face for instance to 
generate text with transformers python from transformers import gpt2lmheadmodel gpt2tokenizer tokenizer gpt2tokenizer from pretrained gpt2 model gpt2lmheadmodel from pretrained stanford crfm eowyn x777 checkpoint 400000 input ids tokenizer encode hello world this is a language model prompt return tensors pt sample output model generate input ids do sample true max length 50 top k 50 print output n 100 print tokenizer decode sample output 0 skip special tokens true check out this google colab notebook https colab research google com github stanford crfm mistral blob main generate text ipynb to run this demo resources the propulsion team has trained 5 gpt 2 medium models and 5 gpt 2 small models on the openwebtext corpus https huggingface co datasets openwebtext as found in datasets https huggingface co datasets each model has 600 checkpoints subject to the following checkpoint schedule every 10 steps for the first 0 100 steps every 50 steps from 100 2000 steps every 100 steps from 2000 20 000 steps every 1000 steps from 20 000 400 000 steps checkpoints can be downloaded from hub https huggingface co stanford crfm run type seed download alias gpt 2 small 21 download https huggingface co stanford crfm alias gpt2 small x21 tree main battlestar gpt 2 small 49 download https huggingface co stanford crfm battlestar gpt2 small x49 tree main caprica gpt 2 small 81 download https huggingface co stanford crfm caprica gpt2 small x81 tree main darkmatter gpt 2 small 343 download https huggingface co stanford crfm darkmatter gpt2 small x343 tree main expanse gpt 2 small 777 download https huggingface co stanford crfm expanse gpt2 small x777 tree main arwen gpt 2 medium 21 download https huggingface co stanford crfm arwen gpt2 medium x21 tree main beren gpt 2 medium 49 download https huggingface co stanford crfm beren gpt2 medium x49 tree main celebrimbor gpt 2 medium 81 download https huggingface co stanford crfm celebrimbor gpt2 medium x81 tree main durin gpt 2 medium 343 download https huggingface co stanford crfm durin gpt2 medium x343 tree main eowyn gpt 2 medium 777 download https huggingface co stanford crfm eowyn gpt2 medium x777 tree main each model has a distinct git repo and each checkpoint is stored as a branch as an example here s how to get the battlestar model s checkpoint for step 300000 make sure you have git lfs installed https git lfs github com git lfs install get checkpoint 300000 for battlestar git clone https huggingface co stanford crfm battlestar gpt2 small x49 branch checkpoint 300000 single branch cd battlestar gpt2 small x49 git lfs pull for convenience every model and step checkpoint is listed in mistral models json issues to ask questions report issues or request features please use the github issue tracker https github com stanford crfm mistral issues before creating a new issue please make sure to search for existing issues that may solve your problem differences between mistral and hugging face please visit the following page https nlp stanford edu mistral hugging face differences html that outlines the differences between the two codebases contributing please see the following page https nlp stanford edu mistral contributing html for information on contributing | ai |
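Reconstructed from the flattened snippet above, the text-generation example reads roughly as follows. The exact argument for selecting checkpoint 400000 was partially lost in the flattening; since the entry says checkpoints are stored as git branches, passing it as a `revision` is a reasonable guess rather than a confirmed detail:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained(
    "stanford-crfm/eowyn-x777",
    revision="checkpoint-400000",  # assumed: branch name per the checkpoint scheme above
)

input_ids = tokenizer.encode("Hello world, this is a language model prompt.",
                             return_tensors="pt")
sample_output = model.generate(input_ids, do_sample=True, max_length=50, top_k=50)
print("Output:\n" + 100 * "-")
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
```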
|
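A runnable form of the Mistral generation example embedded in the row above, sketched under the row's own naming: the repository comes from the checkpoint table in the row, and since each checkpoint is stored as a git branch, a training step is selected with the `revision` argument.

```python
# Sketch of the Mistral generation example above. Checkpoints are released in
# Hugging Face format, one git branch per training step, so a specific step
# is selected via `revision`. Repository and branch names are taken from the
# checkpoint table in the row above.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("stanford-crfm/eowyn-gpt2-medium-x777",
                                        revision="checkpoint-400000")

input_ids = tokenizer.encode("Hello world, this is a language model prompt.",
                             return_tensors="pt")
sample_output = model.generate(input_ids, do_sample=True, max_length=50, top_k=50)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
```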
malware-detection | malware detection experiments in malware detection and classification using machine learning techniques 1 microsoft malware classification challenge https www kaggle com c malware classification 1 1 feature engineering initial feature engineering consisted of extracting various keyword counts from the asm files as well as the entropy and file size from the byte files of the 10868 malware samples in the training set image files of the first 1000 bytes of the asm and byte files were created and combined with keyword and entropy data this resulted in a set of 2018 features flow control graphs and call graphs were generated for each asm sample a feature set was then generated from the graphs including graph maximum delta density diameter and function counts etc 1 2 feature selection statistical analysis of the feature set using chi squared tests to remove features that are independent of the class labels or have low variance the byte file images were found to be weak learners and were removed from the feature set a comparison of the best features from the chi squared tests with reduced feature sets of between 10 50 of the original features 1 2 1 selection comparison testing with an extratreesclassifier and 10 fold cross validation produced the following results original asm keyword counts 1006 features logloss 0 034 10 best asm features with entropy and image features 202 features logloss 0 0174 20 best asm with entropy and image features 402 features logloss 0 0164 30 best asm with entropy and image features plus feature statistics 623 features multiclass logloss 0 0133 accuracy score 0 9978 confusion matrix 1540 0 0 0 0 1 0 0 0 1 2475 2 0 0 0 0 0 0 0 0 2942 0 0 0 0 0 0 1 0 0 474 0 0 0 0 0 2 0 0 0 38 2 0 0 0 3 0 0 0 0 748 0 0 0 1 0 0 0 0 0 397 0 0 0 0 0 0 0 0 0 1225 3 0 0 0 0 0 0 0 8 1005 40 best asm and image features with feature statistics extratreesclassifier with 1000 estimators on 10868 training samples and 823 features using 10 fold cross validation multiclass logloss 0 0135 accuracy score 0 9976 confustion matrix 1541 0 0 0 0 0 0 0 0 1 2475 2 0 0 0 0 0 0 0 0 2942 0 0 0 0 0 0 1 0 0 474 0 0 0 0 0 5 0 0 0 37 0 0 0 0 5 0 0 0 0 746 0 0 0 1 0 0 0 0 0 397 0 0 0 0 0 0 0 0 0 1227 1 0 0 0 0 0 0 0 9 1004 1 2 2 feature selection summary the performance of the extratreesclassifier is optimal at around 30 of asm and image features with highest variance plus sample statistics entropy and file size adding call graph features produced a marginal improvement it is possible that better classification accuracy would be achieved by using an ensemble of different classifiers with the asm image and call graph feature sets as separate inputs to the various classifiers 1 3 model selection selection of candidate models using gridsearchcv to find optimal classifier hyper parameters svm extratrees xgboost 30 best features logloss 0 0080 accuracy 0 9981 confusion matrix 1540 0 0 0 0 1 0 0 0 2 2475 0 1 0 0 0 0 0 0 0 2941 0 0 0 1 0 0 0 0 0 474 0 1 0 0 0 1 0 0 0 41 0 0 0 0 4 0 0 0 1 746 0 0 0 0 0 0 0 0 0 398 0 0 0 0 0 0 0 0 0 1227 1 0 0 0 0 0 0 0 8 1005 naivebayes knn 1 4 graphs file entropy graph 1 https github com dchad malware detection blob master resources file entropy by class png file entropy by malware class 1 shannon s entropy by malware class a score of 0 0 means the bytes are all the same value a score of 1 0 means every byte in the file has a different value file entropy graph 2 https github com dchad malware detection blob master resources file entropy by size png file entropy by 
file size 2 shannon s entropy by file size a score of 0 0 means the bytes are all the same value a score of 1 0 means every byte in the file has a different value asm registry counts https github com dchad malware detection blob master resources register counts png edx by esi registry counts 3 assembler register edx by esi counts 1 5 conclusions the best accuracy scores were achieved with xgboost 99 81 and extratreesclassifier 99 76 using a feature set of 623 asm image and entropy features marginal improvements could be achieved using additional features and ensemble methods however due to the limited sample size further efforts are unlikely to produce significant improvements in prediction accuracy analysis will now focus on much larger sample sizes from virusshare com as described in the following sections 2 virusshare com malware collection analysis virusshare com regularly publishes huge collections of malware binaries for use by researchers each malware archive is currently around 25gb in size several of the latest archives have been downloaded to use as training and test sets the archives used are training set virusshare 00251 zip and virusshare 00252 zip 131072 malware samples virusshare 00263 zip and virusshare 00264 zip 131072 malware samples virusshare apt1 293 zip 293 malware samples testing set 2 1 automated unpacking and disassembly of malware binaries using cuckoo sandbox and unpack py for behaviourial analysis unpacking the binaries and dumping process memory for intransigent samples manual unpacking with immunity debugger and ida pro tools cuckoo sandbox https github com cuckoosandbox cuckoo unpack py https malwaremusings com 2013 02 26 automated unpacking a behaviour based approach https github com malwaremusings unpacker ida pro 5 0 https www hex rays com products ida support download freeware shtml immunity debugger https www immunityinc com products debugger volatility https github com volatilityfoundation ildasm exe https msdn microsoft com en us library f7dy01k1 v vs 110 aspx ndisasm http www nasm us pub nasm releasebuilds 2 12 02 trid http mark0 net soft trid e html clamav clamav net windows defender malwarebytes anti malware virustotal com environment setup debian apt install virtualbox virtualbox dkms python dev libffi dev virtualenv virtualenvwrapper clamav pip install cython numpy scipy scikit learn matplotlib jupyter pandas xgboost git clone https github com cuckoosandbox cuckoo git clone https github com volatility environment setup windows todo 2 2 generating training labels clamav and windows defender used for initial training label generation or virustotal com aggregate classification if they cannot identify the culprit malwarebytes was also used but it crashed at the end of the scan and the log files could not be recovered av scan results results virusshare 00251 57529 files classified as malicious 8007 files classified as non malicious results virusshare 00252 56625 files classified as malicious 8911 files classified as non malicious results virusshare 00263 51612 files classified as malicious 13924 files classified as non malicious results virusshare 00264 42274 files classified as malicious 23262 files classified as non malicious results virusshare apt1 293 292 files classified as malicious 1 file classified as non malicious total malware types 8334 total malware families 2737 total files 262437 2 2 1 graphs malware counts https github com dchad malware detection blob master resources malware counts png malware counts 4 top 10 malware counts packer 
counts https github com dchad malware detection blob master resources packer counts png packer counts 5 top 10 compiler packer counts file call graph 1 https github com dchad malware detection blob master resources vs251 call graph vertex by edge graph png virusshare 251 pe coff call graph vertext x edge count 6 virusshare 251 call graph vertex by edge count file histogram graph 1 https github com dchad malware detection blob master resources vs251 entropy histogram png file entropy histogram 7 virusshare 251 shannon s file entropy histogram 2 3 converting to asm and feature extraction ida pro and objdump for disassembly of binaries to asm text files feature sets will consist of entropy and file size from packed binaries entropy and file size from unpacked binaries file magic signatures and trid signatures asm features from disassembled unpacked binaries executable header features call graph features function counts extracted from call graphs sample statistics behavioural features from cuckoo sandbox reports memory features from volatility reports 2 4 feature selection and reduction 1 pe coff binaries chi2 tests vs251 feature sets 54911 samples 240 pe asm and header features pe asm function count features vs252 feature sets 46165 samples 271 pe asm and header features pe asm function count features vs263 feature sets 40974 samples 203 pe asm and header features pe asm function count features vs264 feature sets 14366 samples 243 pe asm and header features pe asm function count features 2 elf binaries 3 java bytecode 4 javascript 5 html 6 pdf 2 5 model selection 2 5 1 pe coff model selection model selection with 10 fold cross validation 1 extratreesclassifier vs251 100 estimators accuracy score 0 912 500 estimators accuracy score 1000 estimators accuracy score memory fail vs252 100 estimators accuracy score 0 888 12 75 minutes 500 estimators accuracy score 1000 estimators accuracy score vs263 100 estimators accuracy score 0 903 9 63 minutes 500 estimators accuracy score 1000 estimators accuracy score vs264 100 estimators accuracy score 0 889 2 27 minutes 500 estimators accuracy score 0 890 14 57 minutes 1000 estimators accuracy score 2 xgboost vs251 100 estimators accuracy score xgboost vs252 100 estimators accuracy score xgboost vs263 100 estimators accuracy score xgboost vs264 100 estimators accuracy score 3 lightgbm vs251 100 estimators accuracy score 0 892 vs252 100 estimators accuracy score 0 676 171 23 minutes vs263 100 estimators accuracy score vs264 100 estimators accuracy score 0 758 9 26 minutes 200 estimators accuracy score 0 750 18 53 minutes 4 randomforestclassifier vs251 100 estimators accuracy score 0 903 500 estimators accuracy score 1000 estimators accuracy score vs252 100 estimators accuracy score 0 881 81 34 minutes vs263 100 estimators accuracy score vs264 100 estimators accuracy score 0 879 15 45 minutes model stacks ensembles 1 one input layer of classifiers 1 output layer classifier layer 1 six x layer one classifiers extratrees x 2 randomforest x 2 xgboost x 1 lightgbm x 1 layer 2 one classifier extratrees final labels 2 voting democratic and weighted democratic six x layer one classifiers extratrees x 2 randomforest x 2 xgboost lightgbm democratic vote geometric and sum means final labels weighted six x layer one classifiers extratrees x 2 randomforest x 2 xgboost lightgbm weighted vote extratrees double weight geometric and sum means final labels 3 multiple layers of classifiers layer one layer two layer 3 final labels layer 1 extratrees x 2 randomforest x 2 xgboost 
x 1 lightgbm x 1 layer 2 extratrees x 2 randomforest x 2 xgboost x 1 lightgbm x 1 layer 3 extratrees x 1 4 combined pe coff features function count features layer 1 layer 2 final labels layer 1 a models combined features layer one extratrees x 2 randomforest x 2 xgboost x 1 lightgbm x 1 layer 1 b models function count features layer one extratrees x 2 randomforest x 2 xgboost x 1 lightgbm x 1 layer 2 extratrees x 1 final labels 5 combine outputs from 1 2 3 and 4 vote final labels 2 5 2 elf model selection 2 5 3 java bytecode model selection 2 5 4 javascript model selection todo 2 6 conclusions todo 2 7 workflows 2 7 1 training label generation 1 antivirus scans using clamav and windows defender clamscan v r directory containing the nastiness clamav report txt windows defender see notes in section 7 on extracting windows defender logs 2 generate scalar training labels for each malware type and family process av reports py combine av reports py generate train labels py 2 7 2 feature engineering 2 7 2 1 pe coff malware features 1 file entropy feature generation feature extraction entropy py 2 file magic signature and trid signature feature generation trid check file py generate file ids py feature extraction file id py 3 packer identification feature generation generate packer ids py feature extraction packer id py 4 asm feature generation unpacked pe files disassemble pe py feature extraction pe asm py generate pe header tokens py feature extraction pe header py 5 asm feature generation packed pe files todo 6 call graph generation and feature extraction generate call graphs pe asm py generate function column names py function name clean py feature extraction pe function counts py feature reduction pe function counts py 7 behavioural analysis feature generation todo 8 memory analysis feature generation todo 2 7 2 2 elf malware features 1 file entropy feature generation feature extraction entropy py 2 file magic signature and trid signature feature generation trid check file py generate file ids py feature extraction file id py 3 packer identification feature generation generate packer ids py feature extraction packer id py 4 asm feature generation disassemble elf py feature extraction elf asm py 5 call graph generation 6 behavioural analysis feature generation 7 memory analysis feature generation 2 7 2 3 java bytecode features 1 convert bytecode to tokens 2 extract bytecode features tools javap https docs oracle com javase 7 docs technotes tools windows javap html 2 7 2 4 javascript html features 1 generate javascript html keywords 2 unpack javascript 3 extract javascript html features tools 2 7 2 5 pdf features 1 generate pdf keywords 2 extract javascript shellcode macros 3 extract pdf feature sets tools peepdf https github com jesparza peepdf 2 7 3 feature selection 2 7 3 1 pe coff feature selection 1 pe coff feature reduction feature reduction pe asm py feature reduction pe header py feature reduction pe function counts py 2 74 model selection todo 1 pe coff model selection model selection pe coff py 3 automated sensor malware detection todo 4 references todo 5 notes on installing xgboost for python 5 1 source install if installing from source after building and installing you have problems loading other packages it is because of the xgboost 0 4 py2 7 egg pth file that the install script dumps in the python dist packages directory you will have to delete the pth file then go change the installation of the xgboost egg and egg info files in the python dist packages directory from usr local 
lib python2 7 dist packages xgboost 0 4 py2 7 egg egg info to usr local lib python2 7 dist packages xgboost 0 4 py2 7 dist info and usr local lib python2 7 dist packages xgboost 0 4 py2 7 egg xgboost to usr local lib python2 7 dist packages xgboost now python will be able to find all the packages 5 2 pip install pip install xgboost now works for version 0 6a2 on debian ubuntu mint distros 5 3 anaconda install xgboost is not a part of the official distribution but several community members have created conda packages for it the most up to date package seems to be by user creditx the following command will install the package conda install c creditx xgboost 6 notes on installing cuckoo sandbox python 2 7 is preferred for cuckoo sandbox attempting with python 3 x will be a fail installing the python module requirements in requirements txt results in failure because the module dpkt is only compatible with python 2 x versions if using anaconda or python 3 x then revert to python 2 7 or use mkvirtualenv to create a virtual environment to run cuckoo for example mkvirtualenv p usr bin python cuckoosandbox note if using anaconda remove the anaconda bin directory from path or it will cause an error when setting up the virtual environment also ensure that libxml2 dev and libxslt1 dev are installed or there will be build errors when installing the requirements 7 notes on extracting windows defender logs open a powershell run as administrator enter the following commands cd program files windows defender mpcmdrun getfiles scan several log and cab files will be placed in c programdata microsoft windows defender support the windows defender malware detection log is called mpdetection yymmdd hhmm log 8 notes on multi architecture disassembly with objdump ensure binutils multi target support has been installed linux mint 18 note linux mint 17 does not have mips architecture in binutils have to install from sauce apt install binutils binutils aarch64 linux gnu binutils alpha linux gnu binutils arm linux gnueabi binutils arm linux gnueabihf binutils arm linux gnueabihf binutils arm none eabi binutils avr binutils dev binutils doc binutils gold binutils h8300 hms binutils hppa linux gnu binutils hppa64 binutils hppa64 linux gnu binutils m68hc1x binutils m68k linux gnu binutils mingw w64 binutils mingw w64 i686 binutils mingw w64 x86 64 binutils mips linux gnu binutils mips64 linux gnuabi64 binutils mips64 linux gnuabi64 binutils mips64el linux gnuabi64 binutils mips64el linux gnuabi64 binutils mipsel linux gnu binutils msp430 binutils multiarch binutils multiarch dev binutils powerpc linux gnu binutils powerpc linux gnuspe binutils powerpc linux gnuspe binutils powerpc64 linux gnu binutils powerpc64 linux gnu binutils powerpc64le linux gnu binutils powerpc64le linux gnu binutils s390x linux gnu binutils sh4 linux gnu binutils source binutils sparc64 linux gnu binutils z80 elf binutils 9 notes on installing lightgbm 1 clone build and install git clone recursive https github com microsoft lightgbm cd lightgbm mkdir build cd build cmake make j cd python package python setup py install 2 if you have problems with building or installing python module apt update apt upgrade apt install cmake pip install setuptools numpy scipy scikit learn u 3 if you have problems with updating setuptools sklearn etc and you probably will because pip train wreck apt purge y python pip wget https bootstrap pypa io get pip py python get pip py apt install python pip pip install setuptools numpy scipy scikit learn u | ai |
|
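A minimal sketch of the feature-selection and evaluation loop described in the malware-detection notes above: chi-squared scoring over the (non-negative) count, entropy, and size features, keep the k best, then a 10-fold cross-validated ExtraTreesClassifier. The CSV name, label column, and value of k are hypothetical placeholders.

```python
# Illustrative pipeline for the notes above: chi-squared feature scoring
# (valid here because keyword counts, entropy and file sizes are
# non-negative), then an ExtraTreesClassifier scored with 10-fold CV using
# multiclass log loss, the metric reported above.
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score

df = pd.read_csv("train-features.csv")          # hypothetical feature table
X, y = df.drop(columns=["class"]), df["class"]  # hypothetical label column

X_best = SelectKBest(chi2, k=200).fit_transform(X, y)  # keep the k best features
clf = ExtraTreesClassifier(n_estimators=1000)
print(-cross_val_score(clf, X_best, y, cv=10, scoring="neg_log_loss").mean())
```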
gcp-data-engineer-exam | google cloud professional data engineer exam study materials several engineers at leverege recently studied for and passed the google cloud professional data engineer https cloud google com certification data engineer certification exam the exam not only covers google s flagship big data and machine learning products e g bigquery bigtable cloud ml engine but also tests you on your ability to analyze and design for data engineering problems while we had experience with many of the gcp products tested on the exam more studying was necessary to encompass the entire scope of the exam we have put together a collection of study materials https github com leverege gcp data engineer exam blob master data 20engineering 20notes pdf that we used to prepare for the exam we hope that our study guides help you pass your exam on your first try exam format google cloud professional data engineer exam consists of 50 multiple choice questions you have two hours to complete the exam at a certified test location it s important to note that paper and pencils are not allowed in the exam we highly suggest going through the official practice exam https cloud google com certification practice exam data engineer without writing anything down to simulate an actual test environment some questions will ask you to pick multiple answers but the prompt will also let you know how many correct answers there are for example there may be a question on what types of machine learning algorithms you can use to given a dataset and given six choices you pick three correct options we found the official practice exam to be similar in difficulty as the actual exam the practice test included at the end of preparing for the google cloud professional data engineer exam https www coursera org learn preparing cloud professional data engineer exam on coursera was also helpful to see what types of questions might be asked we also took the 50 question practice exam on the linux academy course https linuxacademy com google cloud platform training course name google cloud data engineer but found it a bit misleading in terms of question style a sample linuxacademy question might ask what gcp product to use when you have an existing hadoop cluster but most of the actual exam questions had a customer scenario and focused on designing the solution rather than simply picking a product it was still good a resource to gauge your pace however as the other practice tests are only 25 questions each study plan since this is a gcp data engineering exam it s imperative to know the key gcp products it s best to get hands on experience by completing the data engineering track https google qwiklabs com quests 25 on qwiklabs but if you are pressed for time you can read our data engineering notes or other cheatsheets compiled by jorwalk https github com jorwalk data engineering gcp blob master study guide md and ml874 https github com ml874 data engineering on gcp cheatsheet once you are familiar with the gcp products it s good to study up on the hadoop ecosystem e g hadoop hive spark and its gcp equivalent as well as key ml concepts there were no in depth questions on tensorflow machine learning or deep neural networks but the exam did test on feature engineering strategies e g how to combat overfitting and identifying potential machine learning questions to solve there were several questions on the two case studies listed on the website https cloud google com certification guides data engineer i e flowlogistic mjtelco but the 
questions did not require you to re read the actual case study again during the exam all of the information needed to answer the question regarding the case studies was embeded within the question itself we suggest going through the video on the coursera course to dissect the case studies but not memorize or over analyze this portion too much overall there was a heavy emphasis on design troubleshooting and optimization of various data engineering scenarios a common type of problem was asking how to re design an existing solution at scale or implementing a fix given issues in the current architecture for example 1 a current cloud sql implementation has a single table with a few data points in the future if the throughput is 100x higher how can you partition shard the tables to improve performance 2 you need to design a global e commerce application to that can deal with multiple customers trying to buy the same item around the same time how do you deal with out of order data 3 a bigquery command is taking too long to read compute write how do you change your query to fix this there were also a fair number of iam related questions consistent with the types of questions on the practice exam it was really helpful to review all the iam roles per product knowing the different role types assigned to a human user vs a service account and encryption strategies 1 give an external consultant access to dataflow bigquery bigtable what role would you assign without giving access to the actual data 2 customer wants to encrpyt data at rest but doesn t want to store the keys on gcp where should you create keys and how do you encrypt that data finally not all questions were scenario based there were a fair number of questions simply asking for product specific details that tested on core concepts of gcp products 1 how to design the bigtable index to improve performance 2 how to avoid exploding index problem for datastore 3 what combination of gcp products to use for streaming data and storage 4 given technical requirements what open source hadoop products to use to process store data each exam probably draws from a larger pool of questions so it s hard to be definitive about what topics to study more but it s fair to expect more questions on bigquery machine learning bigtable and dataflow than cloud sql pubsub or stackdriver some members on our team mentioned a question or two about regulatory requirements e g hipaa gdpr but nothing too specific we hope that the exam study guide https github com leverege gcp data engineer exam blob master data 20engineering 20notes pdf we prepared and used also helps you pass the test if you notice any inaccuracies or want to contribute feel free to leave a issue resources cheatsheets https github com jorwalk data engineering gcp blob master study guide md https github com ml874 data engineering on gcp cheatsheet https medium com google cloud a tensorflow glossary cheat sheet 382583b22932 https www slideshare net guangxu5 gcp data engineer cheatsheet other exam overviews debriefs https medium com simonleewm a study guide to the google cloud professional data engineer certification path 9e83e41e311 https www linkedin com pulse google cloud certified professional data engineer writeup rix courses https linuxacademy com google cloud platform training course name google cloud data engineer https www coursera org learn preparing cloud professional data engineer exam https google qwiklabs com quests 34 https google qwiklabs com quests 25 | gcp data-engineering certification-prep 
google-cloud-platform | cloud |
ESD_PROJECT | esd project embedded system design project | os |
|
tritone | tritone a shuffle based music player for android designed for embedded entertainment systems currently still under development but debug builds are available in android build outputs apk tritone does not yet import music it is preloaded with several public domain pieces although preloading creative commons attribution music is a possibility it would require programming in a form of attribution and this is only for demo purposes in the first place so it s likely to stick to preloaded music for now more platforms coming soon tritone is available under the mozilla public license version 2 0 however i ask as a courtesy that you not submit exact clones of it to app stores without making significant changes or asking further permission | os |
|
emqx | emqx github release https img shields io github release emqx emqx color brightgreen label release https github com emqx emqx releases build status https github com emqx emqx actions workflows push entrypoint yaml badge svg https github com emqx emqx actions workflows push entrypoint yaml coverage status https img shields io coveralls github emqx emqx master label coverage https coveralls io github emqx emqx branch master docker pulls https img shields io docker pulls emqx emqx label docker 20pulls https hub docker com r emqx emqx slack https img shields io badge slack emq 39ae85 logo slack https slack invite emqx io discord https img shields io discord 931086341838622751 label discord logo discord https discord gg xygf3fqnes twitter https img shields io badge follow emq 1da1f2 logo twitter https twitter com emqtech youtube https img shields io badge subscribe emq ff0000 logo youtube https www youtube com channel uc5fjr77eraxvzenewzqao5q emqx is the world s most scalable open source mqtt broker https www emqx com en blog the ultimate guide to mqtt broker comparison with a high performance that connects 100m iot devices in 1 cluster while maintaining 1m message per second throughput and sub millisecond latency emqx supports multiple open standard protocols like mqtt http quic and websocket it s 100 compliant with mqtt 5 0 and 3 x standard and secures bi directional communication with mqtt over tls ssl and various authentication mechanisms with the built in powerful sql based rules engine https www emqx com en solutions iot rule engine emqx can extract filter enrich and transform iot data in real time in addition it ensures high availability and horizontal scalability with a masterless distributed architecture and provides ops friendly user experience and great observability emqx boasts more than 20k enterprise users across 50 countries and regions connecting 100m iot devices worldwide and is trusted by over 400 customers in mission critical scenarios of iot iiot connected vehicles and more including over 70 fortune 500 companies like hpe vmware verifone saic volkswagen and ericsson for more information please visit emqx homepage https www emqx io get started run emqx in the cloud the simplest way to set up emqx is to create a managed deployment with emqx cloud you can try emqx cloud for free https www emqx com en signup utm source github com utm medium referral utm campaign emqx readme to cloud continue https cloud intl emqx com console deployments 0 oper new no credit card required run emqx using docker docker run d name emqx p 1883 1883 p 8083 8083 p 8084 8084 p 8883 8883 p 18083 18083 emqx emqx latest next please follow the deploy with docker https www emqx io docs en v5 1 deploy install docker html guide for further instructions run emqx cluster on kubernetes please consult official emqx operator https github com emqx emqx operator blob main docs en us getting started getting started md documentation for details run emqx on macos emqx is available as core homebrew https brew sh package brew install emqx emqx start more installation options if you prefer to install and manage emqx yourself you can download the latest version from www emqx io downloads https www emqx io downloads for more installation options see the emqx installation documentation https www emqx io docs en v5 1 deploy install html documentation the emqx documentation is available at www emqx io docs en latest https www emqx io docs en latest the emqx enterprise documentation is available at docs emqx com en https 
docs emqx com en contributing please see our contributing md contributing md for more organised improvement proposals you can send pull requests to eip https github com emqx eip get involved follow emqtech on twitter https twitter com emqtech join our slack https slack invite emqx io if you have a specific question check out our discussion forums https github com emqx emqx discussions for general discussions join us on the official discord https discord gg xygf3fqnes team keep updated on emqx youtube https www youtube com channel uc5fjr77eraxvzenewzqao5q by subscribing resources mqtt client programming https www emqx com en blog tag mqtt client programming a series of blogs to help developers get started quickly with mqtt in php node js python golang and other programming languages mqtt sdks https www emqx com en mqtt client sdk we have selected popular mqtt client sdks in various programming languages and provided code examples to help you quickly understand the use of mqtt clients mqttx https mqttx app an elegant cross platform mqtt 5 0 client tool that provides desktop command line and web to help you develop and debug mqtt services and applications faster internet of vehicles https www emqx com en blog category internet of vehicles build a reliable efficient and industry specific iov platform based on emq s practical experience from theoretical knowledge such as protocol selection to practical operations like platform architecture design build from source the master branch tracks the latest version 5 for version 4 4 checkout the main v4 4 branch emqx 4 4 requires otp 24 emqx 5 0 and 5 1 can be built with otp 24 or 25 bash git clone https github com emqx emqx git cd emqx make build emqx rel emqx bin emqx console for 4 2 or earlier versions release has to be built from another repo bash git clone https github com emqx emqx rel git cd emqx rel make build emqx rel emqx bin emqx console license see license license | mqtt iot mqtt-broker erlang iot-middleware broker m2m pubsub messaging coap lorawan mqtt-server emqx manufacturing mqtt-protocol industry-40 message-queue aiot iiot lwm2m | server |
Simple-Unity-Audio-Manager | https github com jackyyang09 simple unity audio manager blob media media homepage 20promo jsam 20card 20image png tired of having to manage a billion audiosources lying about that will no longer be a reality never worry about sounds and music ever again jacky s simple audio manager aka jsam is a easy to use performant and decentralized audio playing system for unity jsam is perfect for game jams prototypes and is scaleable with your project features easily and intuitively add and play sounds and music individually control master volume sound volume and music volume music and sound fading built in loop point authoring interface powered by audio tools library for net https github com zeugma440 atldotnet spatialized 3d sound and audio audio that changes depending on the scale of time compatible with unity s built in audio effects drag and drop components that handle sound playback on collision events trigger events particle emission and death extensive in editor in code documentation for easy extensibility add jsam to your project install via git url you will need to have git https git scm com book en v2 getting started installing git installed and available in your system s path open the package manager window in unity click the symbol in the left hand corner choose the option to add package from git url input the following https github com jackyyang09 simple unity audio manager git master and click add https github com jackyyang09 simple unity audio manager blob media media installation package 20install gif also check out the wiki https github com jackyyang09 simple unity audio manager wiki 1 downloading and importing jsam to see how to simplify audio integration in your project check out the documentation https jackyyang09 github io simple unity audio manager to learn more about and extend jsam s functionality more info check out the releases https github com jackyyang09 simple unity audio manager releases page to see all the latest updates jsam is now on the unity asset store https assetstore unity com packages tools audio jacky s simple audio manager 176802 do check the github releases page for the latest bug fixes if you d like to see what parts of audiomanager i m actively working on you can check out the trello https trello com b r6237lmd audiomanager if jsam has helped you at all feel free to donate https www paypal com paypalme brogrammist my quest to streamline audio is neverending but your patronage is always appreciated screenshots https github com jackyyang09 simple unity audio manager blob media media homepage 20promo jsam 20promo 201 png https github com jackyyang09 simple unity audio manager blob media media homepage 20promo jsam 20promo 202 png https github com jackyyang09 simple unity audio manager blob media media homepage 20promo jsam 20promo 203 png https github com jackyyang09 simple unity audio manager blob media media homepage 20promo jsam 20promo 204 png https github com jackyyang09 simple unity audio manager blob media media homepage 20promo jsam 20promo 205 png https github com jackyyang09 simple unity audio manager blob media media homepage 20promo jsam 20promo 206 png https github com jackyyang09 simple unity audio manager blob media media homepage 20promo jsam 20promo 207 png https github com jackyyang09 simple unity audio manager blob media media homepage 20promo jsam 20promo 208 png | unity3d audiomanager sound unity music unity3d-plugin unity-asset jsam audio sounds | os |
Very-Deep-Convolutional-Networks-for-Natural-Language-Processing-in-tensorflow | very deep convolutional networks for natural language processing in tensorflow implement the paper very deep convolutional networks for natural language processing https arxiv org abs 1606 01781 in tensorflow just 9 layers parts of code are based on https github com amygdala tensorflow workshop tree master workshop sections cnn text classification which is based on the https github com dennybritz cnn text classification tf and other parts are based on https github com scharmchi char level cnn tf the data of the experiment is dbpedia the paper reports that the accuracy is 0 9865 | text-classification convolutional-neural-networks | ai |
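For reference, one convolutional block of the kind the paper above stacks into its 9-layer network, sketched in Keras: two temporal convolutions, each followed by batch normalization and a ReLU. Filter counts are illustrative rather than this repository's exact configuration.

```python
# One VDCNN-style temporal convolution block as described in the paper:
# Conv1D (kernel size 3) -> BatchNorm -> ReLU, applied twice. Layer sizes
# are illustrative; the repository's exact configuration may differ.
from tensorflow.keras import layers

def conv_block(x, filters):
    for _ in range(2):
        x = layers.Conv1D(filters, kernel_size=3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x
```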
instant18n | instant18n gem for rails use openai s gpt large language model to power internationalization of the text in your rails application extracted from real world usage in magmachat https github com magma labs magma chat installation install the gem and add to the application s gemfile by executing bundle add instant18n if bundler is not being used to manage dependencies install the gem by executing gem install instant18n make sure to set openai access token in your environment so that the library is able to access gpt usage invoke with i18n it or simply it in your view templates method is short for instant translation use in place of the standard t method for translating text the it method provides translation using the gpt 3 language model and caching the results to improve performance i18n it hello world espa ol hola mundo this will attempt to translate the text hello world to spanish using the gpt 3 language model if the translation is successful the translated text will be returned if the translation fails the original text or gpt error will be returned options i18n it text lang opts the it method accepts the following parameters key required the key associated with the text to be translated lang required the language to translate the text to defaults to the default language set in the i18n module class if you pass in css classes with the class option the method will return the translation wrapped in a div tag instead of plain text additional options that affect caching force force a cache miss expires in seconds how long to cache the translation additional options that are passed to the gpt 3 api model defaults to gpt 3 5 turbo temperature defaults to 0 25 max tokens defaults to 64 top p defaults to 0 1 frequency penalty defaults to 0 presence penalty defaults to 0 full description of these options is available here https platform openai com docs api reference chat create view helper it text opts this gem mixes in an it helper method into actionview base for convenience the helper method assumes the presence of a current user object with a preferred language attribute if current user is nil it will use the value of i18n default language instead default language the default language is set to english for performance and practical reasons if you pass in the default language gpt is not invoked change the default language in an initializer or at runtime by changing the value of the default language property on the i18n module i18n default language spanish anything goes because gpt is smart and can translate into almost anything that resembles a language all of the following options are known to work espa ol baby talk baseldeutsch braille ebonics emoji esperanto gregg shorthand klingon 1337 speak leetspeak newspeak morse code rhyming cockney slang sindarin singlish spanglish trumpisms t rk e uwu the limit is your imagination development after checking out the repo run bin setup to install dependencies then run rake spec to run the tests you can also run bin console for an interactive prompt that will allow you to experiment to install this gem onto your local machine run bundle exec rake install to release a new version update the version number in version rb and then run bundle exec rake release which will create a git tag for the version push git commits and the created tag and push the gem file to rubygems org https rubygems org testing the i18n extensions gem can be tested using the rspec testing framework the tests are located in the spec directory and can be run using the 
following command bundle exec rspec contributing bug reports and pull requests are welcome on github at https github com obie instant18n this project is intended to be a safe welcoming space for collaboration and contributors are expected to adhere to the code of conduct https github com obie instant18n blob main code of conduct md license the gem is available as open source under the terms of the mit license https opensource org licenses mit code of conduct everyone interacting in the instant18n project s codebases issue trackers chat rooms and mailing lists is expected to follow the code of conduct https github com obie instant18n blob main code of conduct md acknowledgments the i18n extensions gem uses the gpt 3 language model api provided by openai | ai |
|
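The instant18n gem itself is Ruby; purely to illustrate the kind of request it issues, here is a Python sketch of a chat-completion call using the option defaults listed in the row above (model gpt-3.5-turbo, temperature 0.25, max_tokens 64, top_p 0.1). The prompt wording is a guess, not the gem's actual prompt, and the pre-1.0 openai Python package is assumed.

```python
# Cross-language illustration only: the gem is Ruby, but its translation
# request resembles this chat-completion call with the defaults listed
# above. Prompt wording is a guess; assumes the pre-1.0 `openai` package.
import os
import openai

openai.api_key = os.getenv("OPENAI_ACCESS_TOKEN")

def instant_translate(text, lang):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.25,
        max_tokens=64,
        top_p=0.1,
        frequency_penalty=0,
        presence_penalty=0,
        messages=[{"role": "user",
                   "content": f"Translate the following text to {lang}: {text}"}],
    )
    return resp.choices[0].message["content"]

print(instant_translate("Hello World", "Español"))
```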
MobileAppDevelopment | bzu https bzu edu pk images logo1 png https www bzu edu pk mobileappdevelopment for students to understand mobile app development java download jdk from https www oracle com pk java technologies downloads set path in windows environment variable helloworld program https www baeldung com java could not find load main class 10 best java ides and editors in 2023 https www turing com blog best java ides and editors android developer guide https developer android com guide https apilevels com course outline chapter 3 android architecture chapter 4 application components chapter 5 hello world chapter 7 activities chapter 8 services https codeunplug com android service sample code for playing default ringtone chapter 9 broadcast recievers https www geeksforgeeks org broadcast receiver in android with example chapter 68 sqlite database https www tutorialspoint com android android sqlite database htm chapter 11 fragments https www javatpoint com android fragments chapter 12 intents filters chapter 13 ui layouts chapter 14 ui controls radio buttons https github com tahirabbas876 quiz using radiobutton chapter 15 event handling chapter 19 notifications https www tutorialspoint com how to create a notification with notificationcompat builder in android chapter 25 alert dialoges https www tutorialspoint com android android alert dialoges htm chapter 45 calling api and json parsing https www geeksforgeeks org json parsing in android using volley library listview and arrayadapter https guides codepath com android using an arrayadapter with listview reading qr code https github com journeyapps zxing android embedded web api bzuattendance backend server https github com qadir0108 bzuattendancebackend | android android-application android-development mobile-development | front_end |
tweeter | tweeter project tweeter is a simple single page twitter clone this repository is the starter code for the project students will fork and clone this repository then build upon it to practice their html css js jquery and ajax front end skills and their node express back end skills getting started 1 create https docs github com en repositories creating and managing repositories creating a repository from a template a new repository using this repository as a template 2 clone your repository onto your local device 3 install dependencies using the npm install command 4 start the web server using the npm run local command the app will be served at http localhost 8080 5 go to http localhost 8080 in your browser dependencies express node 5 10 x or above | front_end |
|
Code-LMs | large models of source code i occasionally train and publicly release large neural language models on programs including polycoder https arxiv org pdf 2202 13169 pdf here i describe how to use these october 2022 polycoder is available on huggingface thanks to ninedaywang https github com ninedaywang polycoder is available on the huggingface hub the available models are ninedaywang polycoder 160m ninedaywang polycoder 0 4b ninedaywang polycoder 2 7b to use in huggingface simply run requires the newest version of transformers pip install transformers 4 23 0 python import transformers from transformers import autotokenizer automodelforcausallm from packaging import version assert version parse transformers version version parse 4 23 0 tokenizer autotokenizer from pretrained ninedaywang polycoder 2 7b model automodelforcausallm from pretrained ninedaywang polycoder 2 7b the model can be used for example by python prompt def binarysearch arr left right x mid left input ids tokenizer encode prompt return tensors pt result model generate input ids max length 50 num beams 4 num return sequences 4 for res in result print tokenizer decode res table of contents 1 setup getting started 2 models incl polycoder models 3 datasets datasets 4 evaluation evaluation 5 how to cite citation getting started all current models were trained using the gpt neox toolkit https github com eleutherai gpt neox first download a pretrained checkpoint as described below and then use this either with a docker image via docker or through our fork of this toolkit from source from source to generate code code generation or replicate our evaluation evaluation retrieving checkpoints checkpoint files for training polycoder are hosted on this public zenodo repository https zenodo org record 6363556 see this section models for details on currently available models model checkpoints range up to 6gb which is also the amount of gpu memory they require to run running on cpu is neither tested nor recommended download and untar a checkpoint file in this case for a 2 7b parameter model trained for 150k steps to a directory called checkpoints using mkdir checkpoints cd checkpoints wget https zenodo org record 6363556 files 2 7b 150k tar tar xvf 2 7b 150k tar from source we maintain a public fork of the neox repository here https github com frankxu2004 gpt neox which includes the minor changes we made to the codebase to allow for tabs newlines in the tokenization and also includes instructions for running the perplexity and humaneval tasks note that this repository uses a forked version https github com frankxu2004 lm evaluation harness of the lm evaluation harness with the code benchmark from our work citation building this repository should match the process for gpt neox almost exactly you may also use the docker image mentioned next but mounting a checkout of the latest version of this fork over the gpt neox directory inside the container once set up generate py entrypoint described below code generation for free form code generation or use one of the commands here https github com frankxu2004 gpt neox a modified version for polycoder code pretraining to calculate perplexity and humaneval results as in the paper https arxiv org pdf 2202 13169 via docker a base docker image containing a slightly modified version of the gpt neox repository https github com eleutherai gpt neox is available via dockerhub https hub docker com repository docker vhellendoorn code lms neox docker pull vhellendoorn code lms neox base this image can be 
used together with a checkpoint file hosted on this public zenodo repository https zenodo org record 6363556 the base docker image size is 5 4gb once a checkpoint has been retrieved start the container with the following commands substituting another gpu device index if needed nvidia docker run rm it e nvidia visible devices 0 shm size 1g ulimit memlock 1 mount type bind src pwd checkpoints dst gpt neox checkpoints vhellendoorn code lms neox base code generation the following command can be used to generate code from a prompt sudo deepy py generate py configs text generation yml checkpoints configs local setup yml checkpoints configs 2 7b yml note if not using the 2 7b parameter model replace the final config file with the appropriate model size e g small 160m parameters medium 405m once the checkpoint has been loaded you can feed it an example such as def return1 n returns 1 n note the whitespace tokens and watch it predict return 1 and then probably a bunch of other returnx methods depending on the sample the modifications to gpt neox mentioned above center around the need to allow tabs and newlines in the prompt input for the interactive mode these can be added using their escaped versions t n when using file based input the project will read the entire file instead of treating each line as a prompt by default the command below will create an interactive prompt and return relatively short outputs 256 tokens with a sampling temperature of 0 5 this behavior can be changed in gpt neox checkpoints configs text generation yml a lower temperature e g 0 2 will produce more consistent and plausible to the model predictions a higher temperature such as the default may be useful for generating and evaluating many candidates see our paper https arxiv org pdf 2202 13169 for recommendations for the latter setting consider switching to the input file mode and providing an entire snippet without escaping whitespace in the corresponding file multi lingual models a name models a several models have been trained on a large corpus data characteristics of code spanning 12 programming languages this includes a 2 7b parameter model nick named polycoder trained for 100k and 150k steps a 405m parameter model 100k 150k steps and a 160m parameter model 150k steps available models all models are available at a public zenodo repository https zenodo org record 6363556 in the form of tar files with fairly self explanatory names e g 2 7b 100k a 2 7b parameter model trained for 100k steps currently available models include gpt2 2 7b https zenodo org record 6363556 files 2 7b 150k tar a 32 layer 2 560 dimensional transformer model trained with a batch size of 128 sequences 256k tokens models available both at 100k and at 150k steps steps note that gpt neox default config https github com eleutherai gpt neox blob main configs 2 7b yml for this model was modified to reduce the number of training steps and learning rate decay steps accordingly to 160k down from 320k to better match the available training resources hence this model may not have reached its peak performance gpt2 0 4b https zenodo org record 6363556 files 0 4b 150k tar a 24 layer 1 024 dimensional transformer model based on the medium config https github com eleutherai gpt neox blob main configs medium yml trained with 256k tokens per batch gpt2 160m https zenodo org record 6363556 files 160m 150k tar a 12 layer 768 dimensional transformer model based on the small config https github com eleutherai gpt neox blob main configs small yml trained with 256k 
tokens per batch training process training was done on 4 to 8 nvidia rtx 8000 gpus largely following the standard config values except also enabling scaled upper triang masked softmax fusion and bias gelu fusion for performance and slightly changing the batch size see model details available models data split changed to 98 9 0 1 1 initial loss scale 2 16 and print eval intervals the below image shows the loss curve of the various models training process in terms of validation loss image https user images githubusercontent com 1426353 153651075 a0ceb8ef 6207 4853 b801 40dd6172d5a6 png caveats the trained models come with a few minor known limitations this model was not trained to solve programming problems and may not perform well on a benchmark such as humaneval https github com openai human eval models like codex powering copilot are pretrained on natural language which may boost their ability to interpret nl prompts this model only learned language from comments in code the model appears to start generating a random new file once it reaches the predicted end of the current one it is possible that the end of document token was not properly added to the training data whitespace is very important to the model since no preprocessing was done on the input files for instance the following snippet will yield poor predictions because in java we would never expect an instance method at the top level as is indicated by the single level of t indentation of the two lines within this method public int gettotalweight list integer weights n t sum weights in parallel n treturn adjusting the indentation makes it predict more reasonable continuations public int gettotalweight list integer weights n t t sum weights in parallel n t treturn the codex model discusses controlling for this to increase usability this may be worth doing in a future version of the model datasets 249gb multi lingual corpus this is the corpus used to train polycoder the datasets were cloned overnight on october 9 10 2021 to mine a similar training set see data https github com vhellendoorn code lms tree main data the list of file paths can be downloaded from https zenodo org record 6363556 files index zip https zenodo org record 6363556 files index zip each row in the file is the file path along with its sha 256 hash to ease deduplication that is the hashes allow checking if files from any future test set were already contained in the training set the data collection and filtering process is described in detail in the paper https arxiv org pdf 2202 13169 pdf and below the final filtered dataset statistics are language repositories size gb files c 10 749 55g 3 037 112 c 9 511 21g 2 514 494 c 13 726 52g 4 289 506 go 12 371 15g 1 416 789 java 15 044 41g 5 120 129 javascript 25 144 22g 1 774 174 php 9 960 13g 1 714 058 python 25 446 16g 1 550 208 ruby 5 826 4 1g 674 343 rust 4 991 3 5g 304 842 scala 1 497 1 8g 245 100 typescript 12 830 9 2g 1 441 926 data collection filtering i cloned the most popular repositories for 12 popular programming languages with at least 50 stars stopping at 25k per language from github in october 2021 for each project each file belonging to the majority language of that project was extracted yielding the training set below after cleaning this initial unfiltered dataset spanned 631gb and 38 9m files next similar to codex and codeparrot very large 1mb and very short 100 tokens files were filtered out reducing the dataset to 424gb files were then deduplicated based on a hash of their content which reduced the 
number of files by another 30 or so leaving 249gb of data and 24 1m files no tokenization filters were applied the model processes entire files including all comments a code specific vocabulary was constructed on a random 5 subset of the files above evaluation please find detailed instructions for replicating our perplexity and humaneval results on our public fork https github com frankxu2004 gpt neox a modified version for polycoder code pretraining of the neox repository this in turn leverages our extension https github com frankxu2004 lm evaluation harness of the lm evaluation harness evaluating codex to download the test sets that we used in the paper 12 programming languages use wget https zenodo org record 6363556 files unseen test sets tar gz tar xvzf unseen test sets tar gz to get perplexity results on these samples using codex api use export openai api key your open ai api key python3 u evaluation eval codex all py dirs code sampled100 where your open ai api key is a private string that can be obtained by signing up for openai s beta https beta openai com account api keys as of march 2022 getting an api key is free for 3 months and afterwards a credit card needs to be entered however even after entering a credit card using our evaluation script does not lead to any costs results humaneval these are polycoder s results on the humaneval benchmark https github com openai human eval model pass 1 pass 10 pass 100 polycoder 160m 2 13 3 35 4 88 polycoder 400m 2 96 5 29 11 59 polycoder 2 7b 5 59 9 87 17 68 codeparrot 110m 3 80 6 57 12 78 codeparrot 1 5b 3 58 8 03 14 96 gpt neo 125m 0 75 1 88 2 97 gpt neo 1 3b 4 79 7 47 16 30 gpt neo 2 7b 6 41 11 27 21 37 gpt j 6b 11 62 15 74 27 74 codex 300m 13 17 20 37 36 27 codex 2 5b 21 36 35 42 59 50 codex 12b 28 81 46 81 72 31 results multilingual language modeling these are the perplexity results of polycoder on the multilingual test sets https zenodo org record 6363556 files unseen test sets tar gz language perplexity c 2 3464 c 2 5832 c 2 9189 go 2 567 java 2 9194 javascript 3 0611 php 3 6954 python 3 1767 ruby 3 9742 rust 3 2449 scala 3 8735 typescript 3 6143 a comparison with the other models is available in figure 6 in the paper image images fig6 png citation a systematic evaluation of large language models of code https arxiv org pdf 2202 13169 article xu2022systematic title a systematic evaluation of large language models of code author xu frank f and alon uri and neubig graham and hellendoorn vincent j journal arxiv preprint arxiv 2202 13169 year 2022 | gpt-2 deep-learning source-code | ai |
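The Hugging Face usage snippet at the top of the Code-LMs row, restated as a runnable script; the hub id capitalization (NinedayWang/PolyCoder-2.7B) is assumed from the flattened text.

```python
# Runnable restatement of the PolyCoder example above; these checkpoints
# need transformers >= 4.23.0, as noted in the row.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NinedayWang/PolyCoder-2.7B")
model = AutoModelForCausalLM.from_pretrained("NinedayWang/PolyCoder-2.7B")

prompt = "def binarySearch(arr, left, right, x):\n    mid = (left +"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
result = model.generate(input_ids, max_length=50,
                        num_beams=4, num_return_sequences=4)
for seq in result:
    print(tokenizer.decode(seq))
```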
Remote-Visual-Assistant-Embedded-Systems-Project | cop315 embedded systems design project designed a prototype to assist visually impaired people in navigating themselves using raspberry pi pi camera developed an android app for interaction via live video streaming motion library gps google maps api incorporated features like face detection opencv library and ocr tesseract library in the application demonstrated the project to professors industrialists in open house iit delhi 2019 link to blog https remotevisualassistant home blog 2019 04 21 the journey begins | os |
|
websoft | websoft join the chat at https gitter im webbprogrammering websoft https badges gitter im webbprogrammering websoft svg https gitter im webbprogrammering websoft utm source badge utm medium badge utm campaign pr badge utm content badge course material for software development for the web nicked as websoft visit the original course repo for websoft https github com webbprogrammering websoft not your forked version you can review this information and its examples on github pages for the websoft repo https webbprogrammering github io websoft directory structure here is a brief explanation the directory structure of the repo directory info slides slides slideshow for the course published on github pages slides example example code samples template code and code examples you can try out and review their source code create your own working directory once you have cloned this repo start by creating your own directory work and save all your own work in that wiki the wiki for this repo wiki contains additional course information issues forum the issues for this repo issues works like a forum and contains additional course information and can be used to ask questions about the course and course content chat there is a chat on gitter for this repo click the button to join it join the chat at https gitter im webbprogrammering websoft https badges gitter im webbprogrammering websoft svg https gitter im webbprogrammering websoft utm source badge utm medium badge utm campaign pr badge utm content badge copyright c 2020 mikael roos mos webbprogrammering se | front_end |
|
humor_detection | humor classifier

dataset description dataset used link https github com crowdtruth short text corpus for humor detection it contains five pickle files

positive samples
1 humorous one liners https github com iamdsc humor detection blob master datasets humorous oneliners pickle
2 longer jokes https github com iamdsc humor detection blob master datasets oneliners incl doubles pickle

negative samples
1 reuters headlines https github com iamdsc humor detection blob master datasets reuters headlines pickle
2 english proverbs https github com iamdsc humor detection blob master datasets proverbs pickle
3 wikipedia sentences https github com iamdsc humor detection blob master datasets wiki sentences pickle

conclusion

| # | model description | accuracy | f1 score |
|---|---|---|---|
| 1 | simple feed forward network with dense layers on top of embedding layer | 0.9660 | 0.9231 |
| 2 | without pre trained word embeddings | 0.9839 | 0.9568 |
| 3 | using simple rnn layer on top of embedding layer | 0.9686 | 0.9413 |
| 4 | using lstm layer on top of the embedding layer | 0.9587 | 0.9514 |
| 5 | using two conv1d layers on top of the embedding layer | 0.9674 | 0.9469 |
| 7 | using gru layer on top of conv1d layer | 0.9617 | 0.9462 |
| 8 | using two gru layers on top of two conv1d layers | 0.9599 | 0.9472 |

team members amit kumar https github com pymit aditya https github com adi160 yash chandra verma https github com ycv005 devesh kaushik https github com deveshkau | humor-detection python jupyter-notebook keras sklearn natural-language-processing pandas matplotlib | ai
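as a hedged illustration of the models in the table above here is a minimal keras sketch of the lstm-on-top-of-embedding classifier (model 4); the vocabulary size and layer dimensions are assumptions for the sketch, not the authors actual settings:

```python
# Hedged sketch of an "LSTM layer on top of the embedding layer" classifier.
# Vocabulary size and layer dimensions are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),  # assumed vocab/dim
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # humorous vs. non-humorous
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```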
3-Link-Robot-arm-control-using-Mobile-Device | 3 link robot arm control using mobile device source code with complete report for embedded systems design project fall 2017 | os |
|
system_design | system design my slides on system design related topics license cc by nc nd 4 0 | os |
|
Simple-login-exercise-React-Express | simple login with react and express a simple exercise from my web development course for connecting a frontend react app and a backend express server it sends login data to the server checks whether the credentials are correct i e whether there is a user with the same e mail and password and sends back a message and the user data disclaimer this is just a simple exercise the methods used for handling user data and especially passwords are not something to use in real life do not use them in real projects | server
|
nlp-ue4 | nlp ue4 natural language processing plugin for unreal engine 4 using tensorflow this plugin was built upon getnamo s tensorflow ue4 plugin https github com getnamo tensorflow ue4 which you can find here this plugin allows you to extract entities and intents from sentences fully in blueprints without touching c or python

installation
1 to use this nlp plugin you must first follow the instructions for installing tensorflow ue4 https github com getnamo tensorflow ue4 releases
2 download and add nlp ue4 plugin https github com glenn v w nlp ue4 to plugins
3 download and add googlenews vectors negative300 bin 3 39 gb https drive google com file d 0b7xkcwpi5kdynlnuttlss21pqmm edit usp sharing to content scripts

examples you can find a bare bones example project here https github com glenn v w nlp ue4 examples for a more in depth use of this plugin you can find a text based adventure game i ve been working on here https github com glenn v w nlp puzzlegame

feature overview this plugin s workings were heavily inspired by microsoft luis https eu luis ai similarly to it we work with entities and intents the main difference between luis and this plugin is that this plugin works offline without the need to pay for microsoft azure but it is missing a number of features that microsoft luis does have namely patterns regexes etc in an ideal world these will be added later but we ll see

so how to get started using this plugin there are two major parts to using this plugin an in engine part and an out of engine part let s start with the latter

out of engine

entities make your way to content entities in this folder you can have as many entities as you wish each type of entity must be a csv file and the name of the file will be the type of the entity the csv file must have the following structure colors csv https puu sh dcfzk 06892ba83b png
1 field a1 must be empty or contain
2 field b1 must contain entities
3 field a2 must contain either true or false this determines whether the entity has an impact on the intent of a sentence in other words it determines if the entity is meaningful colors and many adjectives may be described as meaningless as far as the intent is concerned true meaningful false meaningless
4 field b2 b3 b4 and onwards must be structured as seen in the image words that fall within the same entity category but have a different meaning should be in different fields while synonyms should be in the same field for example red and ruby are in the same field since as far as we re concerned here they re synonyms blue meanwhile is in a different field
5 field a3 a4 a5 and onwards must have unique names but the names are meaningless i suggest using row numbers for simplicity

the first word in b2 will be referred to as the base of that entity henceforth

trainingdata and intents make your way to content scripts this folder must contain 3 csv files for the plugin to function trainingdatasentences csv trainingdataintents csv intents csv the following screenshot has those files open in that order from left to right trainingdatasentences trainingdataintents and intents intents https puu sh dcg9y 593462f598 png so what s going on here

trainingdatasentences csv left hand file includes the sentences our neural net will be training on for intent recognition this should contain sentences similar to what players may be entering in the game but be careful there s a very strict way to structure them
1 change the sentence to lower case
2 remove all punctuation marks
3 remove all stop words ourselves hers
between yourself but again there about once during out very having with they own an be some for do its yours such into of most itself other off is s am or who as from him each the themselves until below are we these your his through don nor me were her more himself this down should our their while above both up to ours had she all no when at any before them same and been have in will on does yourselves then that because what over why so can did not now under he you herself has just where too only myself which those i after few whom t being if theirs my against a by doing it how further was here than
4 make sure you replace all words that belong to an entity with the base of that entity see entities
5 if a word belongs to an entity that was selected to be meaningless to the intent remove it

so for example imagine we have an entity of objects with base barrel an entity of colors with base red which is set to be meaningless and an intent of equipables with base key we would like to enter the following sentence into our training set open the green chest using the green key with the above structuring that would become open barrel using key this sentence can then be added to our csv file where each word is a separate field and all the fields after the sentence is complete are filled with none until column j max of 10 words

of course this sentence corresponds to an intent we must select the corresponding intent of this sentence we do this in trainingdataintents csv where we set the corresponding field to 1 and all incorrect fields to 0 these columns correspond directly with the rows in intents csv so for example if intents csv has the following fields b2 goto b3 gothrough b4 use and b5 pickup the first of those goto corresponds to the first column in trainingdataintents csv the second one gothrough corresponds to the second column and so on

intents csv has similar rules to entities
1 field a1 must be empty or contain
2 field b1 must contain intents
3 field a3 a4 a5 and onwards must have unique names but the names are meaningless i suggest using row numbers for simplicity

in engine to use natural language processing in a blueprint you must add a tensorflowcomponent and a naturallanguagecomponent x https puu sh dd4dp 302260f52f png in the tensorflowcomponent set the tensorflowmodule to glennvwsnaturallanguageprocessing x https puu sh dd4ds 55f7557bc9 png in the naturallanguagecomponent set the intent data table to a data table containing your intents there should be one by default which you can modify to your needs next all you need to do to use language processing is the following x https puu sh dd4dw 10b494c504 png anywhere in your code where you wish to process a sentence call process sentence from the naturallanguagecomponent and pass it your sentence string on beginplay bind an event to sentenceprocessed from the naturallanguagecomponent this event will be called when the net completes processing after you call process sentence and will receive the intent string and the detected entities array of entity type string and specific entity string in the order that they appeared in the original sentence you can parse this result as you wish

video overview that may sound like a lot so you can also watch this video for a quick summary of the plugin s features insert video here

troubleshooting command window pops up on first begin play on first play the plugin adds modules to the python virtual environment this may take a few minutes depending on internet connectivity the naturallanguagecomponent does not complete
training wait for a few minutes before pressing play again python modules are being installed in the background just be patient license https github com glenn v w nlp ue4 blob master license nlp and tensorflow plugin mit https opensource org licenses mit tensorflow and tensorflow icon apache 2 0 http www apache org licenses license 2 0 | ai |
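the sentence structuring rules above (lower-casing, stripping punctuation, dropping stop words, replacing entity synonyms with their base, and dropping meaningless entities) happen inside the ue4 plugin, but they can be illustrated outside the engine; here is a hedged python sketch in which the entity table and the abbreviated stop word list are toy assumptions:

```python
# Illustrative sketch of the training-sentence normalization rules described
# above. The plugin does this inside UE4; entity data here is a toy example.
import string

STOP_WORDS = {"the", "a"}  # heavily abbreviated for this sketch
# entity base -> (synonyms, meaningful-for-intent?)
ENTITIES = {
    "barrel": ({"barrel", "chest", "crate"}, True),
    "red":    ({"red", "ruby", "green", "blue"}, False),  # colors: meaningless
    "key":    ({"key", "keycard"}, True),
}

def normalize(sentence):
    words = sentence.lower().translate(
        str.maketrans("", "", string.punctuation)).split()
    out = []
    for w in words:
        if w in STOP_WORDS:
            continue
        for base, (synonyms, meaningful) in ENTITIES.items():
            if w in synonyms:
                if meaningful:
                    out.append(base)  # replace the synonym with the entity base
                break  # meaningless entity words are dropped entirely
        else:
            out.append(w)  # not an entity word: keep it as-is
    return out

print(normalize("Open the green chest using the green key"))
# -> ['open', 'barrel', 'using', 'key'], matching the worked example above
```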
|
carbon-website | carbon design system deployment status https github com carbon design system carbon website workflows deployment 20status badge svg this is the carbon design system website http www carbondesignsystem com it s built using the gatsby theme carbon https gatsby theme carbon now sh with gatsbyjs https www gatsbyjs org

structure src components data gatsby theme carbon images pages styles util

develop contribution guidelines github contributing md content and markdown guidelines https gatsby theme carbon now sh components markdown navigation guidelines https gatsby theme carbon now sh guides navigation sidebar

yarn install install dependencies
yarn dev start the development server
yarn dev clean use this if you have cache issues
lint js lint your javascript files
format run prettier

if you need more detailed information on how to set up your machine to develop locally please take a look at our wiki https github com carbon design system carbon website wiki

build running the build command generates all the files and places them in the public folder yarn build | hacktoberfest gatsby carbon ibm | os
aws-iot-device-sdk-python-v2 | aws iot device sdk v2 for python version https img shields io pypi v awsiotsdk svg style flat https pypi org project awsiotsdk this document provides information about the aws iot device sdk v2 for python this sdk is built on the aws common runtime https docs aws amazon com sdkref latest guide common runtime html jump to installation installation samples samples getting help getting help faq documents faq md api docs https aws github io aws iot device sdk python v2 mqtt5 user guide documents mqtt5 userguide md installation minimum requirements python 3 7 step by step instructions documents prerequisites md install from pypi macos and linux python3 m pip install awsiotsdk windows python m pip install awsiotsdk install from source bash 1 create a workspace directory to hold all the sdk files mkdir sdk workspace cd sdk workspace 2 clone the repository you could select the version of the sdk you desire to use git clone b sdk version https github com aws aws iot device sdk python v2 git 3 optional setup the version number of your local build the default version for awsiotsdk is set to 1 0 0 dev you can set the version number of the local build in aws iot device sdk python v2 awsiot init py sed i s version 1 0 0 dev version sdk version aws iot device sdk python v2 awsiot init py 4 install using pip use python instead of python3 on windows python3 m pip install aws iot device sdk python v2 samples samples readme samples getting help the best way to interact with our team is through github you can open a discussion https github com aws aws iot device sdk python v2 discussions for guidance questions or an issue https github com aws aws iot device sdk python v2 issues new choose for bug reports or feature requests you may also find help on community resources such as stackoverflow https stackoverflow com questions tagged aws iot with the tag aws iot https stackoverflow com questions tagged aws iot or if you have a support plan with aws support https aws amazon com premiumsupport you can also create a new support case please make sure to check out our resources too before opening an issue faq documents faq md api docs https aws github io aws iot device sdk python v2 iot guide https docs aws amazon com iot latest developerguide what is aws iot html source https github com awsdocs aws iot docs check for similar issues https github com aws aws iot device sdk python v2 issues aws iot core documentation https docs aws amazon com iot dev blog https aws amazon com blogs awsf blog master iot category internet of things 23amazon freertos 7ccategory internet of things 23aws greengrass 7ccategory internet of things 23aws iot analytics 7ccategory internet of things 23aws iot button 7ccategory internet of things 23aws iot device defender 7ccategory internet of things 23aws iot device management 7ccategory internet of things 23aws iot platform integration with aws iot services such as device shadow https docs aws amazon com iot latest developerguide iot device shadows html and jobs https docs aws amazon com iot latest developerguide iot jobs html is provided by code that been generated from a model of the service contributions guidelines documents contributing md license this library is licensed under the apache 2 0 license documents license latest released version v1 19 0 | hacktoberfest | server |
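for orientation here is a minimal connect-and-publish sketch in the style of the sdk s samples; the endpoint, certificate paths, client id, and topic are placeholders you must replace, and the exact builder arguments may vary between sdk versions:

```python
# Minimal connect-and-publish sketch in the style of the SDK samples.
# Endpoint, file paths, client id, and topic below are placeholders.
from awscrt import mqtt
from awsiot import mqtt_connection_builder

connection = mqtt_connection_builder.mtls_from_path(
    endpoint="YOUR_ENDPOINT-ats.iot.us-east-1.amazonaws.com",  # placeholder
    cert_filepath="certificate.pem.crt",
    pri_key_filepath="private.pem.key",
    ca_filepath="AmazonRootCA1.pem",
    client_id="my-device",
    clean_session=False,
    keep_alive_secs=30,
)
connection.connect().result()  # block until CONNACK or raise on failure
connection.publish(topic="test/topic",
                   payload='{"hello": "world"}',
                   qos=mqtt.QoS.AT_LEAST_ONCE)
connection.disconnect().result()
```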
MusicPlayer-Project | musical keyboard objective in this experiment you create isrs to handle interrupts on the ts7250 board using and combining most of what you have learned in previous assignments

part 1 the keyboard for this part you are to create a module that contains code to set up an isr and a real time task the purpose of the real time task is to create a square wave that will be played on a speaker to create the square wave you will need to toggle pin 1 of port f

part 2 master slave software interrupt for this part you are to use your code from the previous lab to decide on a master slave relationship with the other students boards in the lab this time however the master board sends the current note that it is playing to all of the other boards so that all of them play the same note as the master board this requires a few steps
1 you must program your master slave server program to accept messages that begin with these messages represent one of the five notes to be played a b c d e
2 if your board is a slave and it receives one of those messages it must change the frequency of the note being played to do this you will use software interrupts specifically you will use software interrupt 63 reference the ep93xx manual so you need to write a handler for this interrupt in your module to trigger the interrupt in your server program you simply write a 1 to the bit in the software interrupt register that corresponds to interrupt 63 your module should still change the notes when the buttons are pressed in other words your module should handle both interrupts
3 if your board is a master and it receives a note message it must also change the frequency of the note being played furthermore it should forward that message to all the slave boards in the network | os
|
ShipRSImageNet | shiprsimagenet a large scale fine grained dataset for ship detection in high resolution optical remote sensing images python https img shields io badge python 3 x ff69b4 svg https github com luyanger1799 amazing semantic segmentation git opencv https img shields io badge opencv 3 x 7c4 x orange svg https github com luyanger1799 amazing semantic segmentation git apache https img shields io badge apache 2 0 blue svg https github com luyanger1799 amazing semantic segmentation git

description shiprsimagenet is a large scale fine grained dataset for ship detection in high resolution optical remote sensing images the dataset contains 3,435 images from various sensors satellite platforms locations and seasons each image is around 930 × 930 pixels and contains ships with different scales orientations and aspect ratios the images are annotated by experts in satellite image interpretation and categorized into 50 object categories the fully annotated shiprsimagenet contains 17,573 ship instances

there are five critical contributions of the proposed shiprsimagenet dataset compared with other existing remote sensing image datasets images are collected from various remote sensors covering multiple ports worldwide and have large variations in size spatial resolution image quality orientation and environment ships are hierarchically classified into four levels and 50 ship categories the number of images ship instances and ship categories is larger than that in other publicly available ship datasets besides the number is still increasing we simultaneously use both horizontal and oriented bounding boxes and polygons to annotate images providing detailed information about direction background sea environment and location of targets we have benchmarked several state of the art object detection algorithms on shiprsimagenet which can be used as a baseline for future ship detection methods

examples of annotated images image https github com zzndream shiprsimagenet blob main imgs examples 20of 20annotated 20images jpeg

image source and usage license the shiprsimagenet dataset collects images from a variety of sensor platforms and datasets in particular images of the xview dataset are collected from worldview 3 satellites with 0.3 m ground resolution images in xview are pulled from a wide range of geographic locations we only extract images with ship targets from them since the images in xview are huge for training we slice them into 930 × 930 pixels with 150 pixels overlap to produce 532 images and relabel them with both horizontal and oriented bounding boxes we also collect 1,057 images from hrsc2016 and 1,846 images from fgsd datasets corrected the mislabeled ones and relabeled missed small ship targets plus 21 images from the airbus ship detection challenge and 17 images from chinese satellites such as gaofen 2 and jilin 1 use of the google earth images must respect the google earth terms of use https www google com permissions geoguidelines html all images and their associated annotations in shiprsimagenet can be used for academic purposes only but any commercial use is prohibited

object category the ship classification tree of the proposed shiprsimagenet is shown in the following figure level 0 distinguishes whether the object is a ship namely class level 1 further classifies the ship object category named as category level 2 further subdivides the categories based on level 1 level 3 is the specific type of ship named as
type image https github com zzndream shiprsimagenet blob main imgs shiprsimagenet categories tree jpeg

at level 3 ship objects are divided into 50 types for brevity we use the following abbreviations dd for destroyer ff for frigate ll for landing as for auxiliary ship lsd for landing ship dock lha for landing helicopter assault ship aoe for fast combat support ship epf for expeditionary fast transport ship and roro for roll on roll off ship these 50 object classes are other ship other warship submarine other aircraft carrier enterprise nimitz midway ticonderoga other destroyer atago dd arleigh burke dd hatsuyuki dd hyuga dd asagiri dd other frigate perry ff patrol other landing yuting ll yudeng ll yudao ll yuzhao ll austin ll osumi ll wasp ll lsd 41 ll lha ll commander other auxiliary ship medical ship test ship training ship aoe masyuu as sanantonio as epf other merchant container ship roro cargo barge tugboat ferry yacht sailboat fishing vessel oil tanker hovercraft motorboat and dock

dataset download baidu drive extraction code h2qk shiprsimagenet https pan baidu com s 1x6zrw39aozohebo1mm0rqq google drive shiprsimagenet https drive google com file d 1wapkasoa9mxrfxqiq6lttlvrv4csc6vv view usp sharing

benchmark code installation we keep all the experiment settings and hyper parameters the same as depicted in mmdetection v2.11.0 config files except for the number of categories and parameters this project is based on mmdetection https github com open mmlab mmdetection v2.11.0 mmdetection is an open source object detection toolbox based on pytorch it is a part of the openmmlab https openmmlab com project developed by multimedia laboratory cuhk

prerequisites linux or macos windows is in experimental support python 3.6+ pytorch 1.3+ cuda 9.2+ if you build pytorch from source cuda 9.0 is also compatible gcc 5+ mmcv https mmcv readthedocs io en latest installation

install mmdetection following the instructions https github com open mmlab mmdetection blob master docs get started md note that our code was checked with mmdetection v2.11.0 and pytorch v1.7.1

create a conda virtual environment and activate it

conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab

install pytorch and torchvision following the official instructions https pytorch org e g

conda install pytorch torchvision -c pytorch

note make sure that your compilation cuda version and runtime cuda version match you can check the supported cuda version for precompiled packages on the pytorch website https pytorch org

install mmcv-full we recommend installing the pre built package as below

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html

please replace {cu_version} and {torch_version} in the url with your desired ones for example to install the latest mmcv-full with cuda 11 and pytorch 1.7.1 use the following command

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.1/index.html

download this benchmark code

git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection2.11_shiprsimagenet

install build requirements and then install mmdetection

pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"

train with shiprsimagenet download the shiprsimagenet dataset it is recommended to symlink the shiprsimagenet dataset root to mmdetection2.11_shiprsimagenet/data
ln -s dataset/shiprsimagenet mmdetection2.11_shiprsimagenet/data

if your folder structure is different you may need to change the corresponding paths in config files mmdetection2 11 shiprsimagenet mmdet tools configs data shiprsimagenet coco format masks voc format annotations imagesets jpegimages

prepare a config file the benchmark config files of shiprsimagenet are already in the following directory mmdetection2.11_shiprsimagenet/configs/shiprsimagenet

example of training a model with shiprsimagenet

python tools/train.py configs/shiprsimagenet/faster_rcnn/faster_rcnn_r50_fpn_100e_shiprsimagenet_level0.py

models trained on shiprsimagenet we introduce two tasks detection with horizontal bounding boxes hbb for short and segmentation with oriented bounding boxes sbb for short hbb aims at extracting bounding boxes with the same orientation as the image it is an object detection task sbb aims at semantically segmenting the image it is a semantic segmentation task the evaluation protocol follows the same map and mar of area small medium large and map iou 0.50:0.95 calculation used by ms coco

level 0

| model | backbone | style | hbb map | sbb map | extraction code | download |
|---|---|---|---|---|---|---|
| faster rcnn with fpn | r-50 | pytorch | 0.550 | | 2vrm | model https pan baidu com s 1bavxp26ohdzm8gtsngqzow |
| faster rcnn with fpn | r-101 | pytorch | 0.546 | | f362 | model https pan baidu com s 1t0iqcfrlcooppv0k6ysl5q |
| mask rcnn with fpn | r-50 | pytorch | 0.566 | 0.440 | 24eq | model https pan baidu com s 1se hngc0vlng61fuv htq |
| mask rcnn with fpn | r-101 | pytorch | 0.557 | 0.436 | lbcb | model https pan baidu com s 1tfay8 8sutwfbqghbnav8a |
| cascade mask rcnn with fpn | r-50 | pytorch | 0.568 | 0.430 | et6m | model https pan baidu com s 1wvob8ms2zitwj w3hzhf9a |
| ssd | vgg16 | pytorch | 0.464 | | qabf | model https pan baidu com s 1yj0f20pjr9e2op0rx8vduw |
| retinanet with fpn | r-50 | pytorch | 0.418 | | 7qdw | model https pan baidu com s 1nzc2ukqns0hzdvp srxubq |
| retinanet with fpn | r-101 | pytorch | 0.419 | | vdiq | model https pan baidu com s 1nmseodcariirueynb q4oa |
| foveabox | r-101 | pytorch | 0.453 | | urbf | model https pan baidu com s 13vpp1lmoafak vr0s0nuzq |
| fcos with fpn | r-101 | pytorch | 0.333 | | 94ub | model https pan baidu com s 1ql 8i05og80jqrtvqqw9hq |

level 1

| model | backbone | style | hbb map | sbb map | extraction code | download |
|---|---|---|---|---|---|---|
| faster rcnn with fpn | r-50 | pytorch | 0.366 | | 5i5a | model https pan baidu com s 1ofnmgbchakg26iao1tnjya |
| faster rcnn with fpn | r-101 | pytorch | 0.461 | | 6ts7 | model https pan baidu com s 1ubaofgejxbavvqg5c uvca |
| mask rcnn with fpn | r-50 | pytorch | 0.456 | 0.347 | 9gnt | model https pan baidu com s 1vijgbte6z4udatzsqu7alq |
| mask rcnn with fpn | r-101 | pytorch | 0.472 | 0.371 | wc62 | model https pan baidu com s 18lzr9yek6tjivbns8 qgpa |
| cascade mask rcnn with fpn | r-50 | pytorch | 0.485 | 0.365 | a8bl | model https pan baidu com s 12rvdqciapqfc9sg0ni0tlq |
| ssd | vgg16 | pytorch | 0.397 | | uffe | model https pan baidu com s 19h43hbi1gi3n9rq bczh6q |
| retinanet with fpn | r-50 | pytorch | 0.368 | | lfio | model https pan baidu com s 1suhdueoeacftk8que48sbw |
| retinanet with fpn | r-101 | pytorch | 0.359 | | p1rd | model https pan baidu com s 1qeu4jwh1yajaov7wbuks4w |
| foveabox | r-101 | pytorch | 0.389 | | kwiq | model https pan baidu com s 12rkj3hevn qgefjabqabfg |
| fcos with fpn | r-101 | pytorch | 0.351 | | 1djo | model https pan baidu com s 1bwn3n9thik5 5vdgrmy6sw |

level 2

| model | backbone | style | hbb map | sbb map | extraction code | download |
|---|---|---|---|---|---|---|
| faster rcnn with fpn | r-50 | pytorch | 0.345 | | 924l | model https pan baidu com s 1auzf2zapklenwbvqfdfpkw |
| faster rcnn with fpn | r-101 | pytorch | 0.479 | | fb1b | model https pan baidu com s 1tdwonosgeudiji4huwtzpq |
| mask rcnn with fpn | r-50 | pytorch | 0.468 | 0.377 | so8j | model https pan baidu com s 1g35mrwqqsrmv7jogotwjqw |
| mask rcnn with fpn | r-101 | pytorch | 0.488 | 0.398 | 7q1g | model https pan baidu com s 1mgu88crwzgmwjcg1wjz0mw |
| cascade mask rcnn with fpn | r-50 | pytorch | 0.492 | 0.389 | t9gr | model https pan baidu com s 1g4qqlwkwp4alhxpuohsg2a |
| ssd | vgg16 | pytorch | 0.423 | | t1ma | model https pan baidu com s 1n7gt2emfzue54dmzhw8y9g |
| retinanet with fpn | r-50 | pytorch | 0.369 | | 4h0o | model https pan baidu com s 1rplxarnckn0p0ojgpq8qog |
| retinanet with fpn | r-101 | pytorch | 0.411 | | g9ca | model https pan baidu com s 1uyndcvyb p9m2h7k ql1iw |
| foveabox | r-101 | pytorch | 0.427 | | 8e12 | model https pan baidu com s 1qztaomrqxp6l5nvrbbmb4g |
| fcos with fpn | r-101 | pytorch | 0.431 | | 0hl0 | model https pan baidu com s 1ik3gyzb572paocjwierdag |

level 3

| model | backbone | style | hbb map | sbb map | extraction code | download |
|---|---|---|---|---|---|---|
| faster rcnn with fpn | r-50 | pytorch | 0.375 | | 7qmo | model https pan baidu com s 1ljwkd3 khlavvsivseod5q |
| faster rcnn with fpn | r-101 | pytorch | 0.543 | | bmla | model https pan baidu com s 1sqhxti69nukywopqs1nslq |
| mask rcnn with fpn | r-50 | pytorch | 0.545 | 0.450 | a73h | model https pan baidu com s 1rbkbyb2bo ubb5j67puya |
| mask rcnn with fpn | r-101 | pytorch | 0.564 | 0.472 | 7k9i | model https pan baidu com s 1hs7fckr3l9jizg22vsvzgg |
| cascade mask rcnn with fpn | r-50 | pytorch | 0.593 | 0.483 | ebga | model https pan baidu com s 1ejynomggsjsqw1tikktnxg |
| ssd | vgg16 | pytorch | 0.483 | | otu5 | model https pan baidu com s 1fmecagajjnxtba63jw9k9w |
| retinanet with fpn | r-50 | pytorch | 0.326 | | tu5a | model https pan baidu com s 11s8x7w35g7krmzqijpcnpg |
| retinanet with fpn | r-101 | pytorch | 0.483 | | ptv0 | model https pan baidu com s 1kwx7g3bcsagosovmjr36ta |
| foveabox | r-101 | pytorch | 0.459 | | 1acn | model https pan baidu com s 1p5ebaxwajj a4s4hfqhfew |
| fcos with fpn | r-101 | pytorch | 0.498 | | 40a8 | model https pan baidu com s 11tnlbl2agnhp hlgy5yovg |

development kit the shiprsimagenet development kit https github com zzndream shiprsimagenet devkit is based on the dota development kit https github com captain whu dota devkit and provides the following functions load an image and show the bounding boxes on it convert voc format labels to coco format labels

citation if you make use of the shiprsimagenet dataset please cite our following paper z zhang l zhang y wang p feng and r he shiprsimagenet a large scale fine grained dataset for ship detection in high resolution optical remote sensing images in ieee journal of selected topics in applied earth observations and remote sensing vol 14 pp 8458 8472 2021 doi 10.1109/jstars.2021.3104230

contact if you have any problem or feedback when using shiprsimagenet please contact zhengning zhang at 23880666 qq com

license shiprsimagenet is released under the apache 2.0 license please see the license file for more information | ai
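the coco format annotations mentioned in the directory layout above can be browsed with pycocotools; below is a hedged python sketch where the annotation file name and image directory are assumptions based on that layout, so check the downloaded dataset for the actual names:

```python
# Hedged sketch: load ShipRSImageNet's COCO-format annotations and draw the
# horizontal bounding boxes of one image. File names below are assumptions.
from pycocotools.coco import COCO
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as patches

coco = COCO("data/shiprsimagenet/coco_format/train.json")  # assumed name
img_info = coco.loadImgs(coco.getImgIds()[0])[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"]))

fig, ax = plt.subplots()
ax.imshow(Image.open("data/shiprsimagenet/voc_format/jpegimages/"
                     + img_info["file_name"]))  # assumed image directory
for ann in anns:
    x, y, w, h = ann["bbox"]  # COCO boxes are (x, y, width, height)
    ax.add_patch(patches.Rectangle((x, y), w, h, fill=False, edgecolor="red"))
plt.show()
```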
|
wp-component-library | 10up component library deprecated please go to https github com 10up component library a library of barebones front end components built with wordpress and accessibility in mind support level https img shields io badge support active green svg support level mit license https img shields io github license 10up wp component library svg https github com 10up wp component library blob gh pages license md overview at 10up we strive to provide websites that yield a top notch user experience in order to improve both our efficiency and consistency we need to standardize what we use and how we use it standardizing our approach to commonly used front end components allows us to understand better the inner workings of someone else s project and produce better solutions for ourselves and our clients each component in this library is built with simplicity and accessibility in mind tailored to fit the often opinionated nature of wordpress core output these components are intended to be easily adapted to any number of different projects and use cases all components are tested to be wcag 2 1 compliant start browsing https 10up github io wp component library how to use to use a component navigate to the component s detail page to see demos usage examples and installation instructions support level active 10up is actively working on this and we expect to continue work for the foreseeable future including keeping tested up to the most recent version of wordpress bug reports feature requests questions and pull requests are welcome like what you see a href http 10up com contact img src https 10up com uploads 2016 10 10up github banner png a | wordpress front-end accessibility wcag ui-components | front_end |
FirmAFL | firm afl

firm afl is the first high throughput greybox fuzzer for iot firmware firm afl addresses two fundamental problems in iot fuzzing first it addresses compatibility issues by enabling fuzzing for posix compatible firmware that can be emulated in a system emulator second it addresses the performance bottleneck caused by system mode emulation with a novel technique called augmented process emulation by combining system mode emulation and user mode emulation in a novel way augmented process emulation provides compatibility as high as system mode emulation and throughput as high as user mode emulation

publication yaowen zheng ali davanian heng yin chengyu song hongsong zhu limin sun firm afl high throughput greybox fuzzing of iot firmware via augmented process emulation in usenix security symposium 2019

introduction the overview is shown in figure 1

figure 1 overview of augmented process emulation image https github com zyw 200 firmafl raw master image augmented process emulation png

we design and implement firm afl an enhancement of afl for fuzzing iot firmware we keep the workflow of afl intact replace the user mode qemu with augmented process emulation and leave the rest of the components unchanged the new workflow is illustrated in figure 2

figure 2 overview of firm afl image https github com zyw 200 firmafl raw master image overview of firmafl png

setup our system has two parts system mode and user mode we compile them separately for now

user mode
cd user_mode
./configure --target-list=mipsel-linux-user,mips-linux-user,arm-linux-user --static --disable-werror
make

system mode
cd qemu_mode/DECAF_qemu_2.10
./configure --target-list=mipsel-softmmu,mips-softmmu,arm-softmmu --disable-werror
make

usage
1 download the firmadyne repo to the root directory of firm afl then set up firmadyne according to its instructions including importing its dataset https cmu app boxcn net s hnpvf1n72uccnhyfe307rc2nb9rfxmjp into the database
2 replace the scripts makeimage sh with the modified one in the firmadyne modify directory
3 follow the guidance from firmadyne to generate the system running scripts take the dir 815 router firmware as an example

cd firmadyne
sources extractor extractor py b dlink sql 127 0 0 1 np nk firmware dir 815 firmware 1 01 zip images
scripts getarch sh images 9050 tar gz
scripts makeimage sh 9050
scripts infernetwork sh 9050
cd ..
python firmafl_setup.py 9050 mipsel

4 modify the run sh in the image 9050 directory as follows in order to emulate the firmware with our modified qemu and kernel running on the ram file

for mipsel arch mipsel qemu qemu system arch kernel vmlinux arch 3 2 1 image image raw mem file mem file qemu m 256 mem prealloc mem path mem file m qemu machine kernel kernel

for mipseb arch mips qemu qemu system arch kernel vmlinux arch 3 2 1 image image raw mem file mem file qemu m 256 mem prealloc mem path mem file m qemu machine kernel kernel

5 run the fuzzing process after running the start py script firmafl will start the firmware emulation and after the system initialization 120s the fuzzing process will start you may need root privileges to run it

cd image_9050
python start.py 9050

related work our system is built on top of triforceafl decaf afl and firmadyne triforceafl afl qemu fuzzing with full system emulation https github com nccgroup triforceafl decaf make it work make it right make it fast building a platform neutral whole system dynamic binary analysis platform andrew henderson aravind prakash lok kwong yan xunchao hu xujiewen wang rundong zhou and heng yin in the international symposium on software testing and analysis issta 14 san jose ca july 2014 https github com sycurelab decaf afl american fuzzy lop 2 52b http lcamtuf coredump cx afl firmadyne daming d chen maverick woo david brumley and manuel egele towards automated dynamic analysis for linux based embedded firmware in network and distributed system security symposium ndss 16 2016 https github com firmadyne

troubleshooting
1 error static declaration of memfd create follows non static declaration please see https blog csdn net newnewman80 article details 90175033
2 failed to find romfile efi e1000 rom when running run sh use the run sh in firmafl config 9050 instead
3 fork server crashed with signal 11 run the scripts in start py sequentially first run run sh when the testing program starts run python test py and user sh
4 for the id 12978 and 16116 firmware since these firmware images have more than one test case we use different image directory names to distinguish them before firmafl setup first change the image directory name image 12978 to image 129780 then modify firmadyne scratch 12978 to firmadyne scratch 129780 after that run python firmafl setup py 129780 mips if you want to test another case for image 12978 you can use image 129781 instead of image 129780 | server
|
awesome-blockchain-ai | awesome blockchain ai awesome https awesome re badge svg https awesome re a curated list of blockchain projects for artificial intelligence and machine learning this list explores awesome projects that exploit the properties of blockchain technologies decentralization immutability smart contracts etc to build the next generation of ai systems contents recommended reading recommended reading blockchains for ai algorithms blockchains for ai algorithms blockchains for data blockchains for data blockchains for computation blockchains for computation blockchains for ai in finance blockchains for ai in finance blockchains for ai in medicine blockchains for ai in medicine blockchains for ai in supply chains blockchains for ai in supply chains academic research academic research recommended reading wikipedia blockchain https en wikipedia org wiki blockchain a blockchain is a growing list of records called blocks which are linked using cryptography artificial intelligence https en wikipedia org wiki artificial intelligence in the field of computer science artificial intelligence ai sometimes called machine intelligence is intelligence demonstrated by machines in contrast to the natural intelligence displayed by humans and other animals machine learning https en wikipedia org wiki machine learning machine learning ml is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions relying on patterns and inference instead blockchain ai and machine learning decentralizing ai dreamers vs pragmatists https www linkedin com pulse decentralizing ai dreamers vs pragmatists jesus rodriguez jesus rodriguez may 23 2019 how the blockchain could break big tech s hold on a i https www nytimes com 2018 10 20 technology how the blockchain could break big techs hold on ai html new york times october 20 2018 how to actually combine ai and blockchain in one platform https hackernoon com how to actually combine ai and blockchain in one platform ef937e919ec2 hacker noon june 7 2018 blockchain based machine learning marketplaces https medium com fehrsam blockchain based machine learning marketplaces cb2d4dae2c17 fred ehrsam march 13 2018 the convergence of ai and blockchain what s the deal https medium com francesco ai the convergence of ai and blockchain whats the deal 60c618e3accc francesco corea december 1 2017 blockchains for ai algorithms singularitynet https singularitynet io singularitynet is a distributed ai platform on the ethereum blockchain with each blockchain node backing up an ai algorithm intuition fabric https intuitionfabric com the goal of intuition fabric is to democratize access to ai through a network of deep learning models that are stored on the interplanetary file system and accessed through the ethereum blockchain openmined https openmined org openmined is a community focused on building open source technology for the decentralized ownership of data and intelligence with openmined ai can be trained on data that it never has access to raven protocol https www ravenprotocol com raven protocol is a decentralized and distributed deep learning training protocol thought network https thought live thought s blockchain enabled fabric fundamentally changes applications by embedding artificial intelligence into every bit of data making it agile actionable and inherently secure matrix ai https www matrix io the matrix ai network is a public chain that combines ai technology with 
blockchain technology to solve the major challenges currently stifling the development and adoption of blockchain technology matrix is poised to revolutionize and democratize the field of artificial intelligence using a blockchain powered decentralized computing platform cortex labs https www cortexlabs ai cortex labs is a decentralized ai platform with a virtual machine that allows you to execute ai programs on chain fetch ai https fetch ai fetch ai is a decentralized machine learning platform based on a distributed ledger that enables secure sharing connection and transactions based on any data globally oraichain https orai io oraichain is the world s first intelligent and secure solution for emerging web3 scalable dapps and decentralized ai bittensor https bittensor com bittensor is an open source protocol that powers a decentralized blockchain based machine learning network related resources https taostats io links alethea ai https alethea ai a research and development studio building at the intersection of generative ai and blockchain vanna labs https www vannalabs ai an ethereum l2 rollup that supports native seamless and trustless ai ml inferences on chain to empower decentralized applications blockchain projects for ai algorithms danku https github com algorithmiaio danku a blockchain based protocol for evaluating and purchasing ml models on a public blockchain such as ethereum blog post https algorithmia com research ml models on blockchain decentralized collaborative ai on blockchain https github com microsoft 0xdeca10b 0xdeca10b is a framework to host and train publicly available machine learning models in smart contracts with incentive mechanisms to encourage good quality training data while keeping the models free to use for prediction blog post https www microsoft com en us research blog leveraging blockchain to make machine learning models more accessible blockchains for data ocean protocol https oceanprotocol com ocean protocol is a decentralized data exchange protocol that lets people share and monetize data while guaranteeing control auditability transparency and compliance to all actors involved its network handles storing of the metadata i e who owns what links to the data itself and more blockchains for computation truebit https truebit io truebit gives ethereum smart contracts a computational boost deepbrain chain https www deepbrainchain org a decentralized ai computing platform that supplies processing power to companies looking to develop a i technologies nunet https www nunet io a globally decentralized computing framework that combines latent computing power of independently owned compute devices across the globe into a dynamic marketplace of compute resources phala network https phala network a decentralized off chain compute infrastructure for web3 development blockchains for ai in finance numerai https numer ai numerai is a hedge fund powered by a network of anonymous data scientists that build machine learning models to operate on encrypted data and stake cryptocurrency to express confidence in their models cindicator https cindicator com cindicator is a crowd sourced prediction engine for financial and crypto indicators erasure https erasure xxx erasure is a decentralized protocol and data marketplace for financial predictions blockchains for ai in medicine doc ai https doc ai about doc ai aims to decentralize precision medicine on the blockchain by using ai burstiq https www burstiq com healthcare data marketplace with granular ownership and granular 
consent of data by using on chain storage on a custom blockchain burstiq can comply with hipaa gdpr and other regulations blockchains for ai in supply chains academic research coin ai https doi org 10 3390 e21080723 baldominos a saez y 2019 coin ai a proof of useful work scheme for blockchain based distributed deep learning entropy 21 8 723 wekacoin https doi org 10 1109 dappcon 2019 00023 bravo marquez f reeves s ugarte m 2019 april proof of learning a blockchain consensus mechanism based on machine learning competitions in 2019 ieee international conference on decentralized applications and infrastructures dappcon pp 119 124 ieee deep learning based consensus https arxiv org abs 1904 07349 li b chenli c xu x shi y jung t 2019 dlbc a deep learning based consensus in blockchains for deep learning services arxiv preprint arxiv 1904 07349 proof of deep learning https doi org 10 1109 bloc 2019 8751419 chenli c li b shi y jung t 2019 may energy recycling blockchain with proof of deep learning in 2019 ieee international conference on blockchain and cryptocurrency icbc pp 19 23 ieee blockml https doi org 10 1145 3366624 3368156 merlina a 2019 december blockml a useful proof of work system based on machine learning tasks in proceedings of the 20th international middleware conference doctoral symposium pp 6 8 convergence of ai and dlt https doi org 10 1109 access 2020 2981447 pandl k d thiebes s schmidt kraepelin m sunyaev a 2020 on the convergence of artificial intelligence and distributed ledger technology a scoping review and future research agenda ieee access 8 57075 57095 proof of learning https arxiv org abs 2007 15145 lan y liu y li b 2020 proof of learning pole empowering machine learning with consensus building on blockchains arxiv preprint arxiv 2007 15145 decentralized and collaborative ai on blockchain https doi org 10 1109 blockchain 2019 00057 harris j d waggoner b 2019 july decentralized and collaborative ai on blockchain in 2019 ieee international conference on blockchain blockchain pp 368 375 ieee decentralized and collaborative ai on blockchain https doi org 10 1007 978 3 030 59638 5 10 harris j d 2020 september analysis of models for decentralized and collaborative ai on blockchain in international conference on blockchain pp 142 153 springer cham hyperparameter optimization https doi org 10 3389 fbloc 2020 00023 mittal a aggarwal s 2020 hyperparameter optimization using sustainable proof of work in blockchain frontiers in blockchain 3 23 proof of federated learning https doi org 10 1109 tpds 2021 3056773 qu x wang s hu q cheng x 2021 proof of federated learning a novel energy recycling consensus algorithm ieee transactions on parallel and distributed systems 32 8 2074 2085 proof of neural architecture https doi org 10 1109 icbc51069 2021 9461067 li b lu q jiang w jung t shi y 2021 may a mining pool solution for novel proof of neural architecture consensus in 2021 ieee international conference on blockchain and cryptocurrency icbc pp 1 3 ieee license cc0 http mirrors creativecommons org presskit buttons 88x31 svg cc zero svg https creativecommons org publicdomain zero 1 0 to the extent possible under law steven van vaerenbergh https github com steven2358 has waived all copyright and related or neighboring rights to this work | blockchain machine-learning artificial-intelligence awesome-list awesome | blockchain |
ml-art-colabs | ml visual art colabs a list of cool colabs on machine learning imagemaking or other artistic purposes 3d ken burns effect ken burns effect https github com dvschultz ml art colabs blob master 3d ken burns multiple ipynb by manuel romero https github com mrm8488 demo video https youtu be dysfitr fdy by lia coleman 3d photo inpainting 3d photography using context aware layered depth inpainting https github com dvschultz ml art colabs blob master 3d photo inpainting ipynb demo video https youtu be y3noi8fqulo bigbigan bigbigan https colab research google com github tensorflow hub blob master examples colab bigbigan with tf hub ipynb by tensorflow biggan biggan https colab research google com github tensorflow hub blob master examples colab biggan generation with tf hub ipynb by tensorflow colorization image colorizer https colab research google com github jantic deoldify blob master imagecolorizercolab ipynb by deoldify video colorizer https colab research google com github jantic deoldify blob master videocolorizercolab ipynb by deoldify coltran https github com dvschultz ml art colabs blob master coltran ipynb by google brain dcgan tf gan on tpus https colab research google com github tensorflow gan blob master tensorflow gan examples colab notebooks tfgan on tpus ipynb by tensorflow deepdream deepdream https github com dvschultz ml art colabs blob master deepdream ipynb by alex mordvintsev demo video https www youtube com watch v mvoi u0khts list plwuczxqipjs9afmkvp2i9 y 23bcgk8ze index 5 minimal deepdream implementation https colab research google com github tensorflow docs blob master site en tutorials generative deepdream ipynb by tensorflow first order motion model first order motion https colab research google com github aliaksandrsiarohin first order model blob master demo ipynb scrollto ucmfmjv7k ag by aliaksandr siarohin https github com aliaksandrsiarohin funit funit https colab research google com github shaoanlu fewshot face translation gan blob master colab demo ipynb scrollto 2gvoysecoghg by shaoanlu image data processing process wikiart dataset https github com pbaylies stylegan2 blob master process 20wikiart 20dataset ipynb by peter baylies image generators looking glass 1 1 https colab research google com drive 11vds9dpczz2q2efkojcwyax4oob6n40g by bearsharktopus https www patreon com bearsharktopus tutorial https youtu be 37 zjreghw4 image gpt https gist github com jonathanfly eb61f0d31680e1b890f3a53fbaf31384 by jonathan fly https gist github com jonathanfly lucid lucid visualizes the networks of many convolutional neural nets lucid https colab research google com github tensorflow lucid blob master notebooks tutorial ipynb lucent https colab research google com github greentfrapp lucent blob master notebooks tutorial ipynb lucent is a pytorch variation of lucid next frame prediction next frame prediction with pix2pixhd https github com dvschultz ml art colabs blob master pix2pixhd next frame prediction ipynb training demo video https www youtube com watch v gry1j3jhtp0 video generation demo https youtu be pqvklabntki object detection yolo v5 https colab research google com drive 1gdz2xctogr39tggs ez6i3rts16wmzzq by ultralytics object mask generation u square net https github com dvschultz ai blob master u 2 net ipynb by derrick schultz shape matching gan shape matching gan https github com dvschultz shapematchinggan blob master shapematchinggan ipynb by derrick schultz singan singan https github com dvschultz ai blob master singan ipynb by derrick schultz 
demo video https youtu be ukgmnvuyl84 singan distortions https github com dvschultz ml art colabs blob master singan distortions ipynb by duskvirkus https github com duskvirkus inspired by the yuma kishi s studies of collage of paintings for humanity https obake2ai com studies of collage of paintings for humanity stylegan flesh digressions https github com dvschultz ml art colabs blob master flesh digressions ipynb loops of the constant and style layers demo video https www youtube com watch v zrn1kp lby8 ganspace https github com dvschultz make ml art with google colab blob master ganspace s2dd ipynb feature detection using pca demo video https youtu be fci3wx38ong t 1340 barycentric cross network interpolation with different layer interpolation rates https colab research google com drive 1fwoyqtu0kvydwhrddfkbhdkcs0jj zuk usp sharing by arfafax https github com arfafax network bending https github com dvschultz ml art colabs blob master network bending static images ipynb demo video https www youtube com watch v pso alwtn14 network blending https colab research google com drive 1tputbma9eaxs9hl9io21g7xn7jz xrko usp sharing by justin pinkney https github com justinpinkney demo video https youtu be k5tu xhwaao stylegan paintings stylegan1 https colab research google com drive 1cfkk0cbnev2bf8z9bohxepk7e f7ttui stylegan encoder tutorial https github com pbaylies stylegan2 blob master stylegan encoder tutorial ipynb by peter baylies stylegan2 https github com dvschultz ai blob master stylegan2 ipynb by derrick schultz stylegan2 https colab research google com drive 1shgw6wohefqtqs znmna3dzrcvoabkih by mikael christensen swa playground https github com dvschultz ai blob master swa playground ipynb by arfafax https github com arfafax stylegan2 experiments blob master stylegan2 20network 20interpolation ipynb wikiart example generation https github com pbaylies stylegan2 blob master wikiart 20example 20generation ipynb peter baylies stylegan2 activations and pca projection https github com dvschultz ml art colabs blob master stylegan2 activations and pca projection ipynb by duskvirkus look at lower network levels of sg2 generator style transfer lucid 2d style transfer https colab research google com github tensorflow lucid blob master notebooks differentiable parameterizations style transfer 2d ipynb by google neural style tf https github com dvschultz ai blob master neural style tf ipynb by derrick schultz demo video https youtu be yyb5yzbzuc8 t 1183 superresolution esrgan https github com dvschultz esrgan blob master esrgan ipynb by derrick schultz image superresolution https colab research google com github tugstugi dl colab notebooks blob master notebooks isr prediction tutorial ipynb by erdene ochir tuguldur https github com tugstugi srfbn https github com dvschultz srfbn cvpr19 blob master srfbn ipynb by derrick schultz sr zoo https github com dvschultz ai blob master super resolution zoo ipynb ported to colab by derrick schultz slow motion film https colab research google com drive 1sk0uc gjxmdnaxhhyqd2afrknakpdtnz modified by derrick schultz from a notebook by rife https github com dvschultz ml art colabs blob master rife ipynb by derrick schultz modified from towards data science article https towardsdatascience com high quality slow motion videos in 5 minutes with deep learning 1ed526665ef super slomo https colab research google com github tugstugi dl colab notebooks blob master notebooks superslomo ipynb by erdene ochir tuguldur https github com tugstugi text to image generation 
aphantasia https colab research google com github eps696 aphantasia blob master aphantasia ipynb by vadim epstein https github com eps696 tutorial https youtu be friui8mp 8 attn gan https github com dvschultz ml art colabs blob master attn gan ipynb the og text to image generator notebook by derrick schultz big sleep https github com dvschultz ml art colabs blob master the big sleep bigganxclip ipynb biggan controlled by clip by ryan murdock demo video https www youtube com watch v tiqtr8gnjq list plwuczxqipjs9afmkvp2i9 y 23bcgk8ze index 28 disco diffusion 4 1 https colab research google com drive 1shfrn5y0ykyki1k ifusbfrnj8 1sa39 by somnai https twitter com somnai dreams demo tutorial https youtu be dx2g940pao8 illustrip https colab research google com github eps696 aphantasia blob master illustrip3d ipynb by vadim epstein https github com eps696 tutorial https youtu be ktylfdf6lrs quick clip guided diffusion https colab research google com drive 1fuoobqomdjug7rgsmwfqa883a9r4hxeo usp sharing fast clip guided diffusion image generation by katherine crowson https github com crowsonkb daniel russell https github com russelldc et al s2ml art generator https github com justin bennington s2ml art generator blob main s2ml art generator ipynb by justin bennington https github com justin bennington zoetrope 5 5 https colab research google com drive 1gkfhvbnmgmquovwd7ua6dyhm7 viwitp clip vqgan tool by bearsharktopus https www patreon com bearsharktopus texture synthesis neural cellular automata https colab research google com github google research self organising systems blob master notebooks texture nca tf2 ipynb by alex mordvitsev texturize grass demo https github com photogeniq texturize blob master examples demo grass ipynb demo video https youtu be trhhzq46xuu texturize gravel demo https github com photogeniq texturize blob master examples demo gravel ipynb twingan twingan https colab research google com github mrm8488 shared colab notebooks blob master twingan manu ipynb by manuel romero https github com mrm8488 unpaired image to image translation cut https github com dvschultz ml art colabs blob master cut ipynb by derrick schultz cyclegan https colab research google com github tensorflow docs blob master site en tutorials generative cyclegan ipynb scrollto itzuapl56mny by tensorflow munit https github com dvschultz munit blob master munit ipynb by derrick schultz stargan v2 pytorch https github com dvschultz ml art colabs blob master stargan v2 pytorch ipynb ml text colabs gpt 2 gpt 2 https colab research google com drive 1vlg8e7ysewypxu nornhsv5dw4nftgce by martin woolf https minimaxir com ml audio colabs magenta generating piano music with transformer https colab research google com notebooks magenta piano transformer piano transformer ipynb scrollto qi5g x4fozls by magenta jukebox sampling and co composing with prompts https colab research google com github anlexmatos jukebox blob master jukebox interacting with jukebox ipynb by anthony matos https github com anlexmatos music source separation demucs https github com dvschultz ml art colabs blob master demucs ipynb demo video https youtu be thxsqfcx7gw by lia coleman open unmix https colab research google com drive 1mijf0zgwxn kaxtnd0q6hayalrid5feq other helpful repositories dl colab notebooks https github com tugstugi dl colab notebooks by erdene ochir tuguldur https github com tugstugi shared colab notebooks https github com mrm8488 shared colab notebooks by manuel romero https github com mrm8488 | ai |
|
Info30005-BabyCaring | INFO30005 2019 WA. Video presentation link: https youtu be nfaisthye10

Description of core functionalities:
- Function 1: log in to the website. Enter your username and password if you have a registered account. URL: https babycaring herokuapp com login (view: login.ejs; route: /login; controller: controller; model: UserSchema)
- Function 2: sign up for the website. Create your new account. URL: https babycaring herokuapp com register (view: register.ejs; route: /register; controller: controller; model: UserSchema)
- Function 3: edit your profile. You can update your username and password here. URL: https babycaring herokuapp com loggedin profile (view: profile.ejs; route: /loggedin/profile; controller: userController; model: UserSchema)
- Function 4: ask questions about babies. You can post your questions here. URL: https babycaring herokuapp com loggedin createpost (view: newpost.ejs; route: /loggedin/createpost; controller: userPostsController; model: UserPostsSchema)
- Function 5: reply to other users' questions. URL: https babycaring herokuapp com post 5cf0ba6dbf3f6d7d3ac4e166 (view: single.ejs; route: /post/:id; controller: userPostsController; model: UserPostsSchema)
- Function 6: consult a certified expert. URL: https babycaring herokuapp com askexpert (view: askexpert.ejs; route: /askexpert; controller: userPostsController; model: UserPostsSchema)
- Function 7: search for desired questions and answers by keywords (unfinished)
- Function 8: browse contents according to topics (unfinished)
- Function 9: categorize questions into solved and unsolved (unfinished) | server |
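The view/route/controller/model listing above suggests a standard Express + EJS + Mongoose stack. As a minimal sketch of how Function 1's login route could be wired under that assumption (every name below, including the bcrypt-hashed password field, is an illustrative assumption rather than the project's actual code):

```js
// Hypothetical sketch of the /login route described above (Express + Mongoose).
// Model, field, and view names are assumptions based on the README's listing.
const express = require("express");
const mongoose = require("mongoose");
const bcrypt = require("bcrypt");

// Assumed User model; call mongoose.connect(...) elsewhere before use.
const User = mongoose.model("User", new mongoose.Schema({
  username: { type: String, required: true, unique: true },
  password: { type: String, required: true }, // stored as a bcrypt hash
}));

const app = express();
app.set("view engine", "ejs");
app.use(express.urlencoded({ extended: true }));

// GET /login renders the login.ejs view listed for Function 1.
app.get("/login", (req, res) => res.render("login"));

// POST /login checks the submitted credentials against the User model.
app.post("/login", async (req, res) => {
  const user = await User.findOne({ username: req.body.username });
  const ok = user && (await bcrypt.compare(req.body.password, user.password));
  if (!ok) return res.status(401).render("login", { error: "Invalid credentials" });
  res.redirect("/loggedin/profile"); // the profile page from Function 3
});
```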
|
sduis | sduis certificate in information technology | server |
|
Rx-Marble-Design-System | Rx Marble Design System: a library- and framework-agnostic design system to visualize functional reactive programming with ReactiveExtensions (https github com reactivex).

Index:
- Design tokens: unit, font, color, shape, line, size
- Components: time, time progress, consumer, event, notification, complete, error, operator, operator context, operation
- Diagrams: description, legend, diagram, beyond the standard

Links:
- Rx Marble Design System GitHub page: http bit ly rx marble design system github page
- Rx Marble Design System Google Slides: https bit ly rx marble design system slides
- Rx Marble Design System GitHub repo: https bit ly rx marble design system repo
- Rx Marble Design System RxJS GitHub issue: https bit ly rx marble design system issue

Preview: (cover image plus preview images of the design system components and of operator diagrams such as tap, finalize, and the rate-limiting operators) | os |
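The component vocabulary above (time, events, notifications, complete, error, operators) maps directly onto the marble notation RxJS uses for testing, which is one way these diagrams show up as executable code. A minimal sketch using RxJS's real TestScheduler, assuming RxJS 6/7; the operator under test and the marble strings are arbitrary examples, not part of the design system itself:

```js
// Marble-diagram notation as executable RxJS test code.
// "-" is a frame of time, letters are next-notifications, "|" is complete.
const { TestScheduler } = require("rxjs/testing");
const { map } = require("rxjs/operators");

const scheduler = new TestScheduler((actual, expected) => {
  // Structural deep-equality check; swap in your test framework's assertion.
  if (JSON.stringify(actual) !== JSON.stringify(expected)) {
    throw new Error("Marble diagrams did not match");
  }
});

scheduler.run(({ cold, expectObservable }) => {
  const source = cold("-a--b--c-|", { a: 1, b: 2, c: 3 });
  const result = source.pipe(map((x) => x * 10));
  // The expected diagram uses the same timeline grammar the design
  // system visualizes: three event notifications, then a complete.
  expectObservable(result).toBe("-x--y--z-|", { x: 10, y: 20, z: 30 });
});
```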
spirit | spirit n a microframework for web development description we are big fans of cuba for ruby so we wanted to contribute to elixir community with a similar microframework the intention of this project is to learn how elixir works and create a framework for our upcoming projects we know there are many frameworks like phoenix clint sugar and others which we will be watching to learn and contribute but we still want to build a new one it will teach us a lot installation add spirit to deps elixir defp deps do spirit 0 0 1 end and run mix do deps get deps compile usage here s a simple application elixir cat lib sample app ex defmodule sampleapp do use spirit get hello do send resp conn 200 h1 hello world h1 end match do send resp conn 404 not found end end and the config file elixir cat config config exs use mix config config spirit app sampleapp to run it just do mix server and start browsing your application check spirit example to see the full example and step by step guide composition you can compose as many spirit applications as you want using forward this is a recommended practice when you have nested routes or want to group routes based on a criterion elixir defmodule users do use spirit get do send resp conn 200 users index end get id do show the user with id end post do create a new user end match do send resp conn 404 not found end end defmodule mainapp do use spirit get hi name do send resp conn 200 h1 hello name h1 end forward users to users get hello rest do send resp conn 200 matches all routes starting with hello end match do send resp conn 404 not found end end cuba https github com soveran cuba clint https github com lpil clint sugar http sugar framework github io phoenix http phoenixframework org spirit example https github com citrusbyte spirit example | front_end |
|
Web-dev-mini-projects | web dev mini projects the repository contains the list of awesome amp cool web development beginner friendly projects h1 align center web dev mini projects h1 div align center a href https github com topics html img alt html src https img shields io badge html 20 23e34f26 svg style for the badge a a href https github com topics css img alt css src https img shields io badge css 20 23e34f26 svg style for the badge a a href https github com topics javascript img alt javascript src https img shields io badge javascript 20 23e34f26 svg style for the badge logo javascript logocolor white a br br a href https github com ayushparikh code web dev mini projects img src https badges frapsoft com os v1 open source svg v 103 a a href https github com ayushparikh code web dev mini projects img src https img shields io badge built 20by developers 20 3c 2f 3e 0059b3 a a href https github com ayushparikh code web dev mini projects img src https img shields io static v1 svg label contributions message welcome color yellow a a href https github com ayushparikh code img src https img shields io badge maintained 3f yes brightgreen svg v 103 a a href https github com ayushparikh code web dev mini projects blob main license img src https img shields io badge license mit blue svg v 103 a br br a href https github com ayushparikh code web dev mini projects graphs contributors img src https img shields io github contributors ayushparikh code web dev mini projects color brightgreen a a href https github com ayushparikh code web dev mini projects stargazers img src https img shields io github stars ayushparikh code web dev mini projects color 0059b3 a a href https github com ayushparikh code web dev mini projects network members img src https img shields io github forks ayushparikh code web dev mini projects color yellow a a href https github com ayushparikh code web dev mini projects issues img src https img shields io github issues ayushparikh code web dev mini projects color 0059b3 a a href https github com ayushparikh code web dev mini projects issues q is 3aissue is 3aclosed img src https img shields io github issues closed raw ayushparikh code web dev mini projects color yellow a a href https github com ayushparikh code web dev mini projects pulls img src https img shields io github issues pr ayushparikh code web dev mini projects color brightgreen a a href https github com ayushparikh code web dev mini projects pulls q is 3apr is 3aclosed img src https img shields io github issues pr closed raw ayushparikh code web dev mini projects color 0059b3 a div div align center add any web development mini project div br how to contribute 1 fork this https github com ayushparikh code web dev mini projects repository 2 clone the forked repository terminal git clone https github com ayushparikh code web dev mini projects 3 navigate to the project directory terminal cd web dev mini projects 4 make a new folder with your project name inside web dev mini projects add your project files eg index html style css script js inside that folder br 5 also add a readme file in your project folder which consists of description screenshots about your project br 6 create a new branch terminal git checkout b your branch name 7 add commit your changes terminal git add git commit m your commit message 7 push your local branch to the remote repository terminal git push u origin your branch name 8 create a pull request congratulations sit and relax till we review your pr you ve made your contribution to https github 
com ayushparikh code web dev mini projects project br our valuable contributors a href https github com ayushparikh code web dev mini projects graphs contributors img src https contrib rocks image repo ayushparikh code web dev mini projects a br br project maintainers table tr td align center a href https github com ayushparikh code img src https avatars githubusercontent com u 60268067 v 4 width 150px height 150px a br h4 style color red ayush parikh h4 a href https www linkedin com in ayush parikh332 img src https mpng subpng com 20180324 vhe kisspng linkedin computer icons logo social networking ser facebook 5ab6ebfe5f5397 2333748215219374063905 jpg width 32px height 32px a td td align center a href https github com chicken biryani img src https avatars githubusercontent com u 41121520 v 4 width 150px height 150px a br h4 style color red shloka gupta h4 a href https www linkedin com in shloka gupta 45b974157 img src https mpng subpng com 20180324 vhe kisspng linkedin computer icons logo social networking ser facebook 5ab6ebfe5f5397 2333748215219374063905 jpg width 32px height 32px a td td align center a href https github com harshita2216 img src https avatars githubusercontent com u 65803563 v 4 width 150px height 150px a br h4 style color red s harshita h4 a href https www linkedin com in s harshita img src https mpng subpng com 20180324 vhe kisspng linkedin computer icons logo social networking ser facebook 5ab6ebfe5f5397 2333748215219374063905 jpg width 32px height 32px a td tr table br opensource programs this project is a part of following open source program table style width 80 background color white border radius 30px tr td center a href https letsgrowmore in projects img src https letsgrowmore in wp content uploads 2021 05 cropped growmore removebg preview png img a center td tr table hr p align center a href https github com ayushparikh code web dev mini projects title web dev mini projects img src https img shields io badge github 100000 style for the badge logo github logocolor white a p hr hr happy contribution | webdevelopment webdev webdeveloper webdevprojects webdevelopmentprojects html html-css-javascript miniprojects projects-list css javascript hacktoberfest hacktoberfest-accepted | front_end |
radius | radius design system kit this is 1 of 3 repos that is part of an ecosystem of open source tools and libraries that allow you to accelerate your design system this repository contains several branches that enable the boilerplate creation of design system instances for react rangle io radius https rangle io radius figma file https www figma com file rqenxzwazgiewm7coch1sc radius design kit storybook docs https radius ds netlify com chromatic https www chromaticqa com library appid 5e44874935df3b0022b9d890 see also radius angular https github com rangle radius angular radius workspace https github com rangle radius workspace contribution we are currently working to make radius for react more flexible with regards to accelerating the creation of design systems for integrating with vanilla css css modules emotion and styled components respective versions of such design systems can be found in the following branches basic css css modules basic emotion the emotion library has theme basic styled the styled components library has theme please create pull requests against the branches above to contribute what s inside readme md demo dist docs example jest config js package json src tsconfig json are you using radius we would love to hear about how you are using radius or any feedback or feature requests open an issue https github com rangle radius issues new quick start to get started you can just clone the repository tsdx react w storybook user guide congrats you just saved yourself hours of work by bootstrapping this project with tsdx let s get you oriented with what s here and how to use it this tsdx setup is meant for developing react component libraries not apps that can be published to npm if you re looking to build a react based app you should use create react app razzle nextjs gatsby or react static if you re new to typescript and react checkout this handy cheatsheet https github com sw yx react typescript cheatsheet commands tsdx scaffolds your new library inside src and also sets up a parcel based https parceljs org playground for it inside example the recommended workflow is to run tsdx in one terminal bash npm start or yarn start this builds to dist and runs the project in watch mode so any edits you save inside src causes a rebuild to dist then run either storybook or the example playground storybook run inside another terminal bash yarn storybook this loads the stories from stories note stories should reference the components as if using the library similar to the example playground this means importing from the root project directory this has been aliased in the tsconfig and the storybook webpack config as a helper example then run the example inside another bash cd example npm i or yarn to install dependencies npm start or yarn start the default example imports and live reloads whatever is in dist so if you are seeing an out of date component make sure tsdx is running in watch mode like we recommend above no symlinking required we use parcel s aliasing https parceljs org module resolution html aliases to do a one off build use npm run build or yarn build to run tests use npm test or yarn test configuration code quality is set up for you with prettier husky and lint staged adjust the respective fields in package json accordingly jest jest tests are set up to run with npm test or yarn test bundle analysis calculates the real cost of your library using size limit https github com ai size limit with npm run size and visulize it with npm run analyze react testing library we do 
not set up react testing library for you yet we welcome contributions and documentation on this rollup tsdx uses rollup https rollupjs org as a bundler and generates multiple rollup configs for various module formats and build settings see optimizations optimizations for details typescript tsconfig json is set up to interpret dom and esnext types as well as react for jsx adjust according to your needs continuous integration github actions two actions are added by default main which installs deps w cache lints tests and builds on all pushes against a node and os matrix size which comments cost comparison of your library on every pull request using size limit https github com ai size limit optimizations please see the main tsdx optimizations docs https github com palmerhq tsdx optimizations in particular know that you can take advantage of development only optimizations js types index d ts declare var dev boolean inside your code if dev console log foo you can also choose to install and use invariant https github com palmerhq tsdx invariant and warning https github com palmerhq tsdx warning functions module formats cjs esmodules and umd module formats are supported the appropriate paths are configured in package json and dist index js accordingly please report if any issues are found deploying the example playground the playground is just a simple parcel https parceljs org app you can deploy it anywhere you would normally deploy that here are some guidelines for manually deploying with the netlify cli npm i g netlify cli bash cd example if not already in the example folder npm run build builds to dist netlify deploy deploy the dist folder alternatively if you already have a git repo connected you can set up continuous deployment with netlify bash netlify init build command yarn build cd example yarn yarn build directory to deploy example dist pick yes for netlify toml named exports per palmer group guidelines always use named exports https github com palmerhq typescript exports code split inside your react app instead of your react library including styles there are many ways to ship styles including with css in js tsdx has no opinion on this configure how you like for vanilla css you can include it at the root directory and add it to the files section in your package json so that it can be imported separately by your users and run through their bundler s loader publishing to npm we recommend using np https github com sindresorhus np usage with lerna when creating a new package with tsdx within a project set up with lerna you might encounter a cannot resolve dependency error when trying to run the example project to fix that you will need to make changes to the package json file inside the example directory the problem is that due to the nature of how dependencies are installed in lerna projects the aliases in the example project s package json might not point to the right place as those dependencies might have been installed in the root of your lerna project change the alias to point to where those packages are actually installed this depends on the directory structure of your lerna project so the actual path might be different from the diff below diff alias react node modules react react dom node modules react dom react node modules react react dom node modules react dom an alternative to fixing this problem would be to remove aliases altogether and define the dependencies referenced as aliases as dev dependencies instead however that might cause other problems https github com palmerhq 
tsdx issues 64 | os |
|
Book_Hub | Book Hub. The app has been taught and developed through Internshala Trainings' Android App Development training program. This project was made during lockdown as my summer internship project. The app uses Kotlin to build its functionality.

The concepts covered in this app:
- layouts using XML
- Kotlin basics
- activity lifecycle
- intents
- fragments and their lifecycle
- lists using RecyclerView
- connecting the app to the internet: GET and POST requests using Volley
- saving and retrieving data in SQL using Room

About the project: the app allows clients to upload and download an unlimited number of books. Each book is stored on the server, and the top-rated books are displayed first. A client can also bookmark his/her favourite books and read them online.

Screenshot of layout: https github com anmol17agarwal book hub blob master screenshot 20of 20layout png | front_end |
|
BlockchainDevelopmentTutorials | Blockchain development tutorials covering blockchain, Ethereum, Ethereum DApps, and web3.js. For questions, email blockchaintuts gmail com or open an issue. | blockchain |
|
javascript-sdk | The BNB Beacon Chain JavaScript SDK allows browsers and Node.js clients to interact with BNB Beacon Chain. It includes the following core components:

- crypto: core cryptographic functions
- amino: Amino (https github com binance chain docs site blob master docs encoding md), protobuf-like encoding and decoding of transactions
- client: implementations of BNB Beacon Chain transaction types, such as for transfers and trading
- accounts: management of accounts and wallets, including seed and encrypted mnemonic generation
- ledger: Ledger Nano S/X support via HID, U2F, and Web BLE (Bluetooth)
- rpc: node RPC client
- transaction: Transaction class, build and sign

You can find more detailed documentation and examples in our documentation (https github com binance chain javascript sdk blob master docs readme md) pages.

Installation: if you do not need Ledger support with Node.js:

```bash
npm i @binance-chain/javascript-sdk --no-optional
# or
yarn add @binance-chain/javascript-sdk --no-optional
```

If you need Ledger support with Node.js:

```bash
npm i @binance-chain/javascript-sdk
# or
yarn add @binance-chain/javascript-sdk
```

Prerequisites:
- Windows users: please install windows-build-tools (https www npmjs com package windows build tools) first.
- Mac users: make sure Xcode Command Line Tools are installed: `xcode-select --install`.
- Linux users: note that Ubuntu Xenial and newer distributions are recommended, especially when using Travis or other CI systems. You may need some dev packages to be installed on your system for USB support. On Debian-based distributions (like Ubuntu) you should install them with this command:

```bash
sudo apt-get install libudev-dev libusb-dev usbutils
```

Use with webpack: we often see webpack builds failing with the SDK due to the usb dependency, but adding this to your webpack config should fix that:

```js
module.exports = {
  plugins: [new webpack.IgnorePlugin(/^usb$/)],
};
```

Testing: all new code changes should be covered with unit tests. You can run the tests with the following command:

```bash
yarn test
```

Tests for the Ledger hardware wallet integration have their own suite that runs in both Node and in the browser:

```bash
yarn test:ledger
yarn test:ledger-browser
```

Contributing: contributions to the BNB Beacon Chain JavaScript SDK are welcome. Please ensure that you have tested the changes with a local client and have added unit test coverage for your code. | sdk frontend blockchain bnb | blockchain |
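For orientation, here is a rough usage sketch of the client component described above. The BncClient name and method calls follow the SDK's documented pattern as best I recall, but treat them as assumptions and confirm against the documentation linked in the row above:

```js
// Hypothetical sketch; verify names against the SDK docs before use.
const { BncClient } = require("@binance-chain/javascript-sdk");

async function main() {
  // Assumed: the constructor takes the REST endpoint of a node.
  const client = new BncClient("https://dex.binance.org/");
  client.chooseNetwork("mainnet"); // assumed helper; "testnet" for test use
  await client.initChain();

  // Assumed transfer signature: (fromAddress, toAddress, amount, asset, memo),
  // after a key has been set on the client.
  // const result = await client.transfer(from, to, 1.0, "BNB", "a memo");
}

main().catch(console.error);
```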