names (string, 1-98 chars) | readmes (string, 8-608k chars) | topics (string, 0-442 chars) | labels (6 string classes) |
---|---|---|---|
SlimTrainer | This repository is currently in an early experimental stage and should not be considered production-ready. SlimTrainer and Adalite allow for full-parameter 16-bit finetuning of language models up to 7B on a single 24 GB GPU. The optimizer uses the backpropagation-fusing technique from LOMO (https://github.com/OpenLMLab/LOMO), but uses a custom optimizer instead of simple SGD. The small batch size and extreme memory requirements led to extensive exploration of potential optimizer variants, resulting in a custom optimizer, Adalite, based on Adafactor and LAMB. Further development is being pursued, including quantization of embedding and optimizer states, which should allow for larger batch sizes. Note that long sequence lengths require more memory: a sequence length of 512 with flash attention and Adalite for LLaMA 7B just fits at a batch size of 1, and a batch size of 2 is so close that I suspect it could be achieved with some efficiency tweaks; StableLM 3B fits at a batch size of 4. An implementation of sigma reparameterization (https://proceedings.mlr.press/v202/zhai23a/zhai23a.pdf) and a naive version of embedding quantization are also present in this repository. Apple found that sigma reparameterization was sufficient to allow them to pretrain a transformer with SGD momentum; unfortunately, momentum alone takes too much memory, and I have not seen any improvement for finetuning using sigma reparameterization with non-momentum SGD. The techniques used in this repository should also help with training in less constrained settings, but I haven't tested them in such contexts; with a sufficiently large batch size, the OverlapLion optimizer may be usable. | language-modeling llama transformers | ai |
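The "backpropagation fusing" idea above can be sketched with stock PyTorch: apply each parameter's update as soon as its gradient has been accumulated, then drop the gradient, so a full set of gradients never has to live in memory at once. This is only an illustration of the LOMO-style pattern under assumed names (a toy model, a plain SGD update standing in for Adalite), not the actual SlimTrainer code, and it assumes PyTorch 2.1+ for `register_post_accumulate_grad_hook`.

```python
# Minimal sketch of fusing the optimizer step into the backward pass.
# Assumptions: PyTorch >= 2.1, a toy model, and plain SGD in place of Adalite.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512)).bfloat16()
LR = 1e-4

def make_hook(lr):
    @torch.no_grad()
    def hook(param):
        param.add_(param.grad, alpha=-lr)  # update as soon as this grad is ready
        param.grad = None                  # free it immediately instead of storing it
    return hook

for p in model.parameters():
    p.register_post_accumulate_grad_hook(make_hook(LR))

x = torch.randn(4, 512, dtype=torch.bfloat16)  # tiny batch, as in the README
loss = model(x).float().pow(2).mean()
loss.backward()  # parameters are updated and gradients freed during this call
```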
awesome-npm-packages-for-developers | awesome npm packages list a curated list of awesome npm packages https github com codexive zech awesome npm packages for developers assets 56152847 0c972157 ffde 4b50 88d3 4b8a48dcd050 npm packages are javascript modules that can be installed and used in other javascript projects they are stored in the npm registry which is a central repository of javascript code npm packages can be used to simplify and speed up development by providing reusable code for common tasks below are the lists of awesome npm packages developers can use on their projects add to calendar button https www npmjs com package add to calendar button add to calendar button is a javascript web component which lets you reliably create beautiful buttons where people can add events to their calendars supported calendars include apple google microsoft 365 outlook teams yahoo and generic ical it integrates easily with any usual html webpage vanillajs way as well as popular javascript frameworks and libraries like angular react vue svelte and more axios https www npmjs com package axios axios is a promise based http client for javascript it is a popular library that makes it easy to make requests to apis axios is available for both the browser and node js bcrypt https www npmjs com package bcrypt bcrypt is a popular npm package used for hashing and salting passwords in node js applications it provides a secure way to store passwords by applying a one way hashing algorithm with the inclusion of a salt value this helps protect user passwords even if the stored hashes are compromised chakra ui https www npmjs com package chakra ui react chakra ui is a react component library that provides a set of components for building user interfaces it is a popular choice among developers because it offers a simple api and supports many different use cases concurrently https www npmjs com package concurrently concurrently allows you to run backend server and frontend folder concurrently this will save you the stress of running cd backend in the root folder starting the server it then doing cd frontend folder and starting the project with concurrently you can run both frontend and backend with one script in the root folder dotenv https www npmjs com package dotenv dotenv is a popular npm package that enables the loading of environment variables from a env file into node js applications it allows you to store configuration values such as api keys database urls or any other sensitive information separate from your codebase and provides an easy way to manage different environments development staging production without modifying your code expressjs https www npmjs com package express expressjs is a web framework that enables you to design a web application to handle a variety of different http demands formik https www npmjs com package formik for form management of react applications handles user inputs and submitting form inputs best used with yup formspree react https www npmjs com package formspree react the formspree react package is a react component library that makes it easy to integrate formspree forms into your react application it s a service that allows you to create and host forms without having to worry about the backend handlebars https www npmjs com package handlebars handlebars is a popular templating language that allows you to generate dynamic html or other markup templates in javascript applications it provides a simple and intuitive syntax for embedding dynamic content into static template 
files headless ui https www npmjs com package headlessui react headless ui is a set of completely unstyled fully accessible ui components for react vue and alpine js it s a great way to build accessible lightweight and performant user interfaces without having to worry about styling or accessibility helmet https www npmjs com package helmet helmet is a middleware for express js which helps in securing your applications by setting various http headers related to security it helps you set up policies like content security policy and x xss protection among many others jest https www npmjs com package jest jest is a javascript testing framework it was designed and built with a focus on simplicity and support for large web applications it works with projects using babel typescript node js react angular vue js and svelte jsonwebtoken https www npmjs com package jsonwebtoken jsonwebtoken is an npm package commonly used in node js applications for implementing json web tokens jwts json web tokens are a compact and self contained way of transmitting information between parties as a json object they consist of three parts a header a payload and a signature material ui https mui com material ui getting started installation provides a comprehensive set of pre designed and customizable ui components styles and themes to build modern and visually appealing user interfaces mongoose https www npmjs com package mongoose mongoose is an object data modeling odm library for node js that provides a straightforward and schema based solution for interacting with mongodb databases it simplifies the process of working with mongodb by providing a higher level api and built in support for data validation schema definition and query building moment https www npmjs com package moment moment is a popular javascript date library for parsing validating manipulating and formatting dates netlify cli https www npmjs com package netlify cli netlify cli is a command line tool that allows you to deploy your site to netlify it is a popular choice among developers because it offers a simple api and supports many different use cases next themes https www npmjs com package next themes next themes packages is a next js library that makes it easy to add dark mode support to your next js application it provides a simple api for creating and managing themes and it supports both system level dark mode and custom theme nodemon https www npmjs com package nodemon nodemon is a development tool for node js applications that automatically restarts the node js server whenever file changes are detected it is commonly used during the development process to improve the development workflow by eliminating the need to manually restart the server after making code changes numeral https www npmjs com package numeral numeral is a javascript library used for formatting and manipulating numbers it provides an easy way to format numbers according to various patterns apply custom formats and perform mathematical operations on them html pdf https www npmjs com package html pdf html pdf is a popular library used for generating pdf files from html content in node js applications it provides a simple and straightforward way to convert html templates or web pages into pdf documents prettier https www npmjs com package prettier prettier is an opinionated code formatter that enforces a consistent code style across your entire codebase it is a popular tool that is used by many developers to improve the readability and consistency of their code react rc carousel 
https www npmjs com package react rc carousel react rc carousel is a react component that allows you to create a carousel with images videos and text it is easy to use and highly customizable react testing library https www npmjs com package testing library react react testing library is a library for testing react components it provides a set of utilities that make it easy to write tests for your react components react query https www npmjs com package react query react query is a library for managing data in react applications it provides a simple and declarative api that makes it easy to fetch cache and update data from your server or api react beautiful drag drop https www npmjs com package react beautiful dnd react beautiful drag drop is a library that allows you to create drag and drop interfaces in react it is a popular choice among developers because it offers a simple api and supports many different use cases react circular progressbar https www npmjs com package react circular progressbar react circular progress bar is a react component that displays a circular progress indicator it is used to show the progress of an operation or task react hook form https www npmjs com package react hook form react hook form is a library for managing forms in react applications using hooks it offers a lightweight and performant solution for building forms with easy validation and form state management react icons https www npmjs com package react icons react icons is a library that provides a set of icons for react applications it is a popular choice among developers because it offers a simple api and supports many different use cases react joyride https www npmjs com package react joyride react joyride is a react component that allows you to create a guided tour of your application it is easy to use and highly customizable react leaflet https www npmjs com package react leaflet react leaflet is a react component that allows you to create a map with markers polygons and other shapes it is easy to use and highly customizable react loading skeleton https www npmjs com package react loading skeloton react loading skeleton package is a react library that allows you to create loading skeletons for your react application skeletons are a great way to improve the user experience of your app while it is loading react router dom https www npmjs com package react router dom react router dom is a library that provides routing capabilities for react applications it is a popular choice among developers because it offers a simple api and supports many different use cases react select https www npmjs com package react select react select is a library that provides a set of components for building forms in react it is a popular choice for developers because it offers a simple api and supports many different use cases react slick https www npmjs com package react slick react slick is a react component that allows you to create a carousel with images videos and text it is easy to use and highly customizable react spinners https www npmjs com package react spinners react spinners is a library that provides a set of components for building loading spinners in react it is a popular choice among developers because it offers a simple api and supports many different use cases recharts https www npmjs com package recharts recharts is a library that provides a set of components for building charts in react it is a popular choice among developers because it offers a simple api and supports many different use cases react 
toastify https www npmjs com package react toastify react toastify is a library that provides a simple api for creating toast notifications in react applications it is a popular choice among developers because it offers a simple api and supports many different use cases redux https www npmjs com package redux redux is a popular state management library for javascript applications commonly used with frameworks like react including react native and next js it provides a predictable and centralized way to manage the state of an application making it easier to develop and maintain complex application states socket io https www npmjs com package socket io socket io is a javascript library that enables real time bidirectional communication between web clients and servers it provides a simple and efficient way to build real time applications such as chat applications collaborative tools real time gaming and more styled components https www npmjs com package styled components styled components is a package that allows you to write css inside of javascript it is a popular choice for react developers because it makes it easy to create reusable and maintainable styles swiper https www npmjs com package swiper the swiper package is a free and open source javascript library that provides a powerful and flexible slider for web application mobile web application and mobile native hybrid apps it s a modern touch slider which is focused only on the modern apps platforms to bring the best experience and simplicity yup https www npmjs com package yup a javascript library for object schema validation provides a simple api for defining and validating javascript objects including validation of nested objects yup is often used in combination with formik to perform validation on form inputs | front_end |
Auto-Locking-Pet-Door | Auto-locking pet door: embedded systems senior design project. This is an auto-locking pet door to assist the blind. More to come. | | os |
nsw-design-system-react | note this repository has been archived by the nsw design system team on may 9 2023 and is now in a read only state while no further updates or contributions will be accepted the existing content remains available for reference feel free to explore the repository to access historical information and resources while it won t receive any new updates you can still review the codebase and discussions that took place in the past if you have questions related to the repository s content or its context please consult the available documentation or use the issue tracker for assistance this repository s content can serve as a resource for learning and inspiration for future projects thank you for your interest in this repository and we hope it continues to be a useful resource for the development community nsw design system react library install bash npm install nsw ds react nsw design system add the styles separately in your main app js file js import nsw design system dist css main css in your index html document add this line of code inside the head tag or install icon https www npmjs com package material icons and font https www npmjs com package fontsource public sans from npm link href https fonts googleapis com css2 family public sans ital wght 0 400 0 700 1 400 display swap rel stylesheet link href https fonts googleapis com icon family material icons rel stylesheet usage refer to individual components usage in storybook https digitalnsw github io nsw design system react here s how you import the component jsx import react component from react import callout from nsw ds react class example extends component render return callout title title of callout p description of callout p callout attribution the components library is adapted from https github com govau design system components license mit digitalnsw https github com digitalnsw | react ui-components | os |
frontend | buildkite frontend archived for a few years we experimented with developing the buildkite frontend in an open source repository in the beginning when our team was small maintenance of this codebase was easy and it s integration with our main application was minimal over time our team has grown and so has the size and importance of this codebase after many weeks of discussion we decided to stop development in this repoistory and move it back into our main application creating a monolith the biggest reasons we moved it back were day to day development was complicated between our backend application and the frontend code it was difficult to document and communicate to new members of the team why this seperation existed and what the benefits were the idea was eventually the code here become a seperate entity that could run indepently of the backend application but we never got around to it so the code here ended up becoming an akward dependency of our main application that we managed with git submodules which caused great sadness maintaining 2 prs for features was annoying we had a backend and a frontend pr for the same feature keeping them both in sync was mostly busywork for little benefit our deployment testing processes were simplified by unifying the codebases we didn t have to worry about have you got the latest version of the frontend type problems in development we don t need to concern ourselves with potentially disclosing any feature experiements we may be shipping to production that d require frontend changes we re keeping this code public for historical reasons but the repo will be achived and no longer be developed in it s been a few weeks since we made the move and we ve been way more productive with the unifided codebases and our day to day development experience is simpler we re sad to no longer have the code be open source because it brought us joy to have it out in the open but we hope the increased rate of feature development in buildkite makes up for that if you d like to keep in sync with the changes we re making we re posting more and more to our public changelog here http buildkite com changelog license see license txt license txt mit | buildkite frontend react graphql relay es6 javascript basscss | front_end |
BookStore | BookStore: Computer Engineering, Database and Web Programming, Dr. Shams Al Ajrawi. | | server |
hyperledger | You can see a rendered version of this repo here: https://hyperledger.github.io/hyperledger | hyperledger blockchain distributed-ledger | blockchain |
MG100_firmware | note not recommended for new designs use https github com lairdcp pinnacle 100 firmware manifest for canvas firmware for the mg100 see https github com canvasdm ble gateway dm firmware manifest laird connectivity docs images lairdconnnectivitylogo horizontal rgb png https www lairdconnect com mg100 mg100 docs images mg100 starter kit png https www lairdconnect com iot devices iot gateways sentrius mg100 gateway lte mnb iot and bluetooth 5 note not recommended for new designs use https github com lairdcp pinnacle 100 firmware git the sentrius mg100 gateway is an out of the box product allowing the end user to develop a fully featured iot solution with minimum effort with the addition of the optional battery backup it provides uninterrupted reporting of sensor data additionally the sensor data is logged locally on an sd card to ensure data is captured even if the lte connection is interrupted based on laird connectivity s pinnacle 100 modem the sentrius mg100 gateway captures data from bluetooth 5 sensors and sends it to the cloud via a global low power cellular lte m nb iot connection it is based on the innovative integration of nordic semiconductor nrf52840 and the sierra wireless hl7800 module this enables the mg100 hardware to support lte m nb iot supports lte bands 1 2 3 4 5 12 13 20 and 28 as well as bluetooth 5 features like coded phy 2m phy and le advertising extensions warning this product contains a li ion battery there is a risk of fire and burns if the battery pack is handled improperly do not attempt to open or service the battery pack do not disassemble crush puncture short external contacts or circuits dispose of in fire or water or expose a battery pack to temperatures higher than 60 c 140 f the sentrius mg100 gateway was designed to use the supplied battery pack only contact laird connectivity technical support if a replacement is required note this readme file and associated documentation should be viewed on github selecting the desired branch the master branch will always be up to date with the latest features viewing documentation from a release ga branch is recommended to get documentation for the specific feature set of that release the mg100 firmware can operate in two modes lte m and aws lte m and aws nb iot and lwm2m nb iot and lwm2m these two modes are selected at compile time see the following sections for documentation on the demo and how it operates download firmware releases from here https github com lairdcp mg100 firmware releases lte m and aws the default build task vscode tasks json is setup to build the demo source code for lte m and aws operation read here docs readme ltem aws md for details on how the demo operates nb iot and lwm2m the mg100 can be compiled to work with nb iot and lwm2m communication to the cloud with the build lwm2m task in tasks json vscode tasks json for more details on the lwm2m demo read here docs readme nbiot lwm2m md firmware updates if the mg100 is running v2 0 0 firmware or earlier firmware updates must be programmed via swd serial wire debug to do this please consult the mg100 hardware guide section 5 4 4 to learn how to connect a j link debugger to the board mg100 units with version 3 x or greater support firmware updates via uart ble or lte read here docs firmware update md for instructions on how to update firmware with an mg100 running v3 x or later development cloning and building the source this is a zephyr based repository do not git clone this repo to clone and build the project properly please see the 
instructions in the mg100 firmware manifest https github com lairdcp mg100 firmware manifest repository ble profiles details on the ble profiles used to interface with the mobile app can be found here docs ble md development and debug see here docs development md for details on developing and debugging this app | nb-iot lwm2m support-lte zephyr firmware lte sensor-data | os |
KelpNet | kelpnet pure c machine learning framework license https img shields io badge license apache 202 0 blue svg https opensource org licenses apache 2 0 build status https ci appveyor com api projects status a51hnuaat3ldsdmo svg true https ci appveyor com project harujoh kelpnet codecov https codecov io gh harujoh kelpnet branch master graph badge svg https codecov io gh harujoh kelpnet csharp samplecode functionstack float nn new functionstack float new convolution2d float 1 32 5 pad 2 name l1 conv2d new relu float name l1 relu new maxpooling float 2 2 name l1 maxpooling new convolution2d float 32 64 5 pad 2 name l2 conv2d new relu float name l2 relu new maxpooling float 2 2 name l2 maxpooling new linear float 7 7 64 1024 name l3 linear new relu float name l3 relu new dropout float name l3 dropout new linear float 1024 10 name l4 linear samples xor https github com harujoh kelpnet blob master kelpnet sample sample sample01 cs cnn https github com harujoh kelpnet blob master kelpnet sample sample sample06 cs alexnet https github com harujoh kelpnet blob master kelpnet sample sample sample19 cs vgg https github com harujoh kelpnet blob master kelpnet sample sample sample15 cs resnet https github com harujoh kelpnet blob master kelpnet sample sample sample17 cs others https github com harujoh kelpnet tree master kelpnet sample sampledata mnist fashionmnist cifar 10 100 importable caffemodel chainermodel onnxmodel features uses the same define by run approach as pytorch and keras no libraries are used for matrix operations so all algorithms are readable opencl is used for parallel processing so processing can be parallelized not only on gpus but also on cpus fpgas and various other computing devices additional installation of the corresponding driver may be required to use opencl intel cpu or gpu https software intel com en us articles opencl drivers amd cpu or gpu https www amd com en support nvidia gpu https developer nvidia com opencl advantages of being built in c easy to set up a development environment and easy to learn for beginners in programming there are many options for visual representation of processing results such as the net standard form and unity development for various platforms such as pcs mobile devices and embedded devices is possible how to contact us if you have any questions or concerns even minor ones please feel free to use issue if you want to communicate with us easily please contact us via twitter you can also check the current development status on twitter twitter https twitter com harujoh system requirements libraries net standard 2 0 or 2 1 samples net framework 4 6 1 implemented functions connections convolution2d deconvolution2d embedid linear lstm activations elu leakyrelu relu relu6 sigmoid tanh softmax softplus swish mish poolings averagepooling2d maxpooling2d normalize batchnormalization lrn noise dropout stochasticdepth lossfunctions meansquarederror softmaxcrossentropy optimizers adabound adadelta adagrad adam adamw amsbound amsgrad momentumsgd rmsprop sgd | deep-learning opencl gpu machine-learning dotnet onnx neural-network csharp | ai |
martinchavez.dev | personal website | cloud |
Embedded-System-Design | Embedded System Design. This repository is homework from Embedded System Design, 2022 Fall (https://timetable.nycu.edu.tw/?r=main/crsoutline&Acy=111&Sem=1&CrsNo=535603&lang=zh-tw) in NYCU CS. Contents (folder: description): Lab 1 (lab1): Hello World; Lab 2 (lab2): display image with framebuffer; Lab 3 (lab3): video output and crop images from video stream; Lab 4 (lab4): display through HDMI and make a marquee; Lab 5 (lab5): madplay MP3 player on Linux; Final (final): human face recognition. Environment setting: operating system Ubuntu 14.04 or 16.04; development board EmbedSky E9V3; other tools that are needed in this repository are listed in Lab 0 (lab0, 111 Lab 0.pdf). | | os |
node-sails-model-reverser | ![image_squidhome@2x.png](http://i.imgur.com/rivu9.png) node-sails-model-reverser produces static Sails/Waterline models by reverse-engineering existing databases. Installation: to install this adapter, run `npm install --save sails-model-reverser`. Usage: an example follows which uses the Apache Derby adapter (sails-derby) to reverse-engineer and create models for the TESTTABLE table of the TESTDB database running on the standard port of localhost. Each model is written to a single file and is placed in the path specified by options.outputPath ("path/for/output" in the example below). If no output path is specified, the files will be placed in the "generated" sub-directory of this module (so probably node_modules/sails-model-reverser/generated). Each model is generated to use the connection specified by options.connectionName ("myConnection" in the example below). If no connection name is specified, the default connection name "default" will be used. At the time of this writing, sails-derby is the only tested adapter.

```javascript
var Reverser = require('sails-model-reverser');
var adapter = require('sails-derby');
var connection = {
  url: 'jdbc:derby://localhost:1527/testdb',
  minPoolSize: 10,
  maxPoolSize: 100
};
var tables = ['TESTTABLE'];
var options = {
  outputPath: 'path/for/output',
  connectionName: 'myConnection'
};
var reverser = new Reverser(adapter, connection, tables, options);
reverser.reverse();
```

More resources: StackOverflow (http://stackoverflow.com/questions/tagged/sails.js), #sailsjs on Freenode (IRC channel, http://webchat.freenode.net), Twitter (https://twitter.com/sailsjs), professional/enterprise support and tutorials (https://github.com/balderdashy/sails-docs/blob/master/FAQ.md, "Are there professional support options?" and "Where do I get help?"), Sails.js logo linking to http://sailsjs.org. License: MIT License (c) 2016 balderdashy (http://github.com/balderdashy), contributors Mike McNeil (http://michaelmcneil.com), Balderdash (http://balderdash.co) and contributors. Sails (http://sailsjs.org) is free and open source under the MIT license (http://sails.mit-license.org). | | server |
information | Information about different technology. | | server |
jormungandr | this repo is unmaintained please refer to https github com input output hk catalyst core tree main src jormungandr full node just because you call something a blockchain that doesn t mean you aren t subject to normal engineering laws user guide documentation available here docs docs https input output hk github io jormungandr master current build status ci status description circleci circleci https circleci com gh input output hk jormungandr tree master svg style svg https circleci com gh input output hk jormungandr tree master master and prs install from binaries use the latest binaries https github com input output hk jormungandr releases available for many operating systems and architectures install from source prerequisites rust get the rust compiler https www rust lang org tools install latest stable version is recommended minimum required 1 39 sh rustup install stable rustup default stable rustc version if this fails try a new command window or add the path see below dependencies for detecting build dependencies homebrew on macos vcpkg on windows pkg config on other unix like systems c compiler see cc rs https github com alexcrichton cc rs for more details must be available as cc on unix and mingw or as cl exe on windows path win add userprofile cargo bin to the environment variable path lin mac add home cargo bin to your path protobuf the protocol buffers https developers google com protocol buffers version bundled with crate prost build will be used for distribution or container builds in general it s a good practice to install protoc from the official distribution package if available commands check latest release tag on https github com input output hk jormungandr releases latest sh git clone https github com input output hk jormungandr cd jormungandr git checkout tags latest release tag replace this with something like v1 2 3 cargo build skip this if you do not want to run the tests cargo test skip this if you do not want to run the tests cargo install locked path jormungandr features systemd on linux with systemd cargo install locked path jcli this will install 2 tools jormungandr the node part of the blockchain jcli a command line helper tool to help you use and setup the node configuration basics a functional node needs 2 configurations 1 its own node configuration https input output hk github io jormungandr configuration introduction html where to store data network configuration logging 2 the blockchain genesis configuration https input output hk github io jormungandr advanced introduction html which contains the initial trusted setup of the blockchain coin configuration consensus settings initial state in normal use the blockchain genesis configuration is given to you or automatically fetched from the network quick start public mode to start a new node from scratch on a given blockchain you need to know the block0 hash of this blockchain for trust purpose and internet peers to connect to the simplest way to start such a node is jormungandr block0 hash hash trusted peers ips quick start cardano shelly testnet official cardano shelly testnet documentation https testnet iohkdev io cardano shelley for the nightly testnet ask within the cardano stake pool workgroup telegram group https web telegram org im p cardanostakepoolworkgroup quick start private mode follow instructions on installation then to start a private and minimal test setup sh mkdir mynode cd mynode python3 path to source repository scripts bootstrap py options use the following recommended 
bootstrap options sh bootstrap bft bft setup bootstrap genesis praos slot duration 2 genesis praos setup bootstrap help further help the bootstrap script creates a simple setup with a faucet with 10 millions coins a bft leader and a stake pool both scripts can be used to do simple limited operation through the jcli debugging tools documentation documentation is available in the markdown format here doc summary md license this project is licensed under either of the following licenses apache license version 2 0 license apache license apache or http www apache org licenses license 2 0 mit license license mit license mit or http opensource org licenses mit | blockchain |
rails-angularjs-simple-forum | Rails + AngularJS: a simple forum. A simple demonstration between a Rails backend and AngularJS front end to create a simple forum-type application. Demo: you can visit the demo here: http://obscure-bayou-5295.herokuapp.com | | front_end |
Elecalc | Elecalc. Description: Elecalc is designed to be an electronic engineering calculator for iOS/iPadOS (possibly for Mac in the future), written in Swift and SwiftUI. Features: calculations for resistors and capacitors in parallel and series; component calculations (e.g. LED and heatsink calculations); conversions (e.g. decibel to numeric gain). Requirements: an iPhone or iPad running iOS/iPadOS 13.5 or newer. Compatible devices: iPhone 6s or later, iPad 5th generation or newer, iPad mini 4 or newer, all iPad Pro models, and iPod touch 7th generation. You can also use the Xcode simulator to test the app, as long as the simulator is running iOS 13.5 or newer. I have been using Xcode 12 GM to incorporate iOS 14 features, so you may need to update to Xcode 12 to build properly; this requires a Mac running macOS 10.15.6 or later. Installing: the project should build and install like any iOS Xcode project; see the Apple developer documentation ("Running your app in the simulator or on a device", https://developer.apple.com/documentation/xcode) for details. Future additions: I am open to feature requests; if anyone has any, please add them to the Issues tab, prefixing them with "Feature Request". | | os |
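The feature list above maps onto a handful of standard electronics formulas. The short Python sketch below is purely illustrative (the app itself is Swift/SwiftUI, and none of this code comes from the repository); the example component values are made up.

```python
# Standard electronics formulas illustrating the calculator features above.
def series_resistance(resistors):
    """Resistors in series simply add: R_total = R1 + R2 + ..."""
    return sum(resistors)

def parallel_resistance(resistors):
    """Resistors in parallel combine as the reciprocal of summed reciprocals."""
    return 1.0 / sum(1.0 / r for r in resistors)

def db_to_gain(db, power=False):
    """Convert decibels to numeric gain (20 dB per decade for voltage, 10 for power)."""
    return 10 ** (db / (10.0 if power else 20.0))

def led_series_resistor(v_supply, v_forward, i_led):
    """Series resistor needed to drop the excess voltage at the desired LED current."""
    return (v_supply - v_forward) / i_led

print(parallel_resistance([1000, 1000]))    # 500.0 ohms
print(db_to_gain(6))                        # ~2.0 (voltage gain)
print(led_series_resistor(5.0, 2.0, 0.02))  # 150.0 ohms
```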
onnxruntime-iot-edge | page type sample languages python products azure machine learning service azure iot edge azure storage p align center img width 100 src images for readme intro onnxruntime png alt onnx runtime with azure iot edge for acceleration of ai on the edge p this tutorial is a reference implementation for executing onnx models across different device platforms using the onnx runtime inference engine onnx runtime https github com microsoft onnxruntime is an open source inference engine for onnx models onnx runtime execution providers eps enables the execution of any onnx model using a single set of inference apis that provide access to the best hardware acceleration available in simple terms developers no longer need to worry about the nuances of hardware specific custom libraries to accelerate their machine learning models this tutorial demonstrates that by enabling the same code to run on different hw platforms using their respecitive ai acceleration libraries for optimized execution of the onnx model onnx runtime on nvidia jetson platform readme onnxruntime arm64 md is the tutorial example for deploying pre trained onnx models on the nvidia jetson nano using azure iot edge onnx runtime with intel openvino readme onnxruntime openvino md is the tutorial examle for dpeloying pre trained onnx models with onnx runtime using the openvino sdk for acceleration of the model using onnx runtime with azure machine learning azureml openvino readme md is the example using azure machine learning service to deploy the model to an iot edge device contribution this project was created with active contributions from abhinav ayalur https github com abhi12 ayalur angela martin https github com t anma kaden dippe https github com kaden dippe kelly lin https github com kemichi lindsey cleary https github com lindseyc and priscilla lui https github com priscillalui this project welcomes contributions and suggestions most contributions require you to agree to a contributor license agreement cla declaring that you have the right to and actually do grant us the rights to use your contribution for details visit https cla microsoft com when you submit a pull request a cla bot will automatically determine whether you need to provide a cla and decorate the pr appropriately e g label comment simply follow the instructions provided by the bot you will only need to do this once across all repositories using our cla | server |
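The point about a single set of inference APIs across hardware can be sketched with the ONNX Runtime Python API. The tutorials themselves target specific devices and container images, so treat the model path, input shape, and provider list below as placeholder assumptions rather than the tutorial's actual code.

```python
# Minimal ONNX Runtime inference sketch: same code, different execution providers.
import numpy as np
import onnxruntime as ort

# Ask for hardware-accelerated providers first, falling back to CPU.
# (TensorRT/CUDA apply to Jetson builds, OpenVINO to Intel builds; availability
#  depends on how onnxruntime was built for your device.)
preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
             "OpenVINOExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("model.onnx", providers=providers)  # placeholder path

input_meta = session.get_inputs()[0]
# Placeholder input: a single 224x224 RGB image batch in NCHW layout.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(input_meta.name, [o.shape for o in outputs])
```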
feature-selector | Feature selector: simple feature selection in Python. Feature Selector is a tool for dimensionality reduction of machine learning datasets. Methods: there are five methods used to identify features to remove: 1. missing values, 2. single unique values, 3. collinear features, 4. zero importance features, 5. low importance features. Usage: refer to the Feature Selector Usage notebook (https://github.com/WillKoehrsen/feature-selector/blob/master/Feature%20Selector%20Usage.ipynb) for how to use. Visualizations: the FeatureSelector also includes a number of visualization methods to inspect characteristics of a dataset: correlation heatmap (images/example_collinear_heatmap.png) and most important features (images/example_top_feature_importances.png). Requires: Python 3.6, lightgbm 2.1.1, matplotlib 2.1.2, seaborn 0.8.1, numpy 1.22.0, pandas 0.23.1, scikit-learn 0.19.1. Contact: any questions can be directed to wjk68@case.edu. | | ai |
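To give a feel for what the first three of those five checks do, here is a small pandas-only sketch. It is an illustration of the ideas, not the package's own FeatureSelector API, and the thresholds and toy DataFrame are arbitrary assumptions.

```python
# Illustrative re-implementation of three of the five checks described above:
# missing values, single unique values, and collinear features.
import numpy as np
import pandas as pd

def features_to_remove(df, missing_threshold=0.6, correlation_threshold=0.98):
    to_drop = set()

    # 1. Features whose fraction of missing values exceeds the threshold.
    missing_frac = df.isnull().mean()
    to_drop |= set(missing_frac[missing_frac > missing_threshold].index)

    # 2. Features with a single unique value (no information).
    nunique = df.nunique(dropna=False)
    to_drop |= set(nunique[nunique <= 1].index)

    # 3. One of each pair of highly collinear (correlated) features.
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop |= {col for col in upper.columns if (upper[col] > correlation_threshold).any()}

    return sorted(to_drop)

df = pd.DataFrame({
    "a": [1, 2, 3, 4],
    "b": [2, 4, 6, 8],           # perfectly collinear with "a"
    "c": [None, None, None, 1],  # mostly missing
    "d": [7, 7, 7, 7],           # single unique value
})
print(features_to_remove(df))    # ['b', 'c', 'd']
```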
ZTuoExchange_web | version english readme en md btc usdt usdt 1dwwqhw9pv9issqwujc8naygda7xfahaow eth usdt usdt 0x4f1ea0f10aa99f608f31f70b4d3119f6928693ed ltc lxr4tmtdhcspdao98vg2sbvx3uxdvpqvma 1 exchange vue withdrawrecord vue 2 3 webpack require 4 data a vue js project 1 node 8 0 npm npm 2 ieo 3 nginx gzip 4 compression webpack plugin js css gz webpack compression webpack plugin package json build setup bash install dependencies npm install serve with hot reload at localhost 8080 npm run dev build for production with minification npm run build build for production and view the bundle analyzer report npm run build report run unit tests npm run unit run e2e tests npm run e2e run all tests npm test for a detailed explanation on how things work check out the guide http vuejs templates github io webpack and docs for vue loader http vuejs github io vue loader qq qq 735446452 2018 7 27 1 8 2018 7 21 1 bug 2018 7 20 1 bug 2018 7 18 1 2 3 4 5 2018 7 17 1 bug 2 2018 7 16 1 2 2 bug 3 4 bug 5 bug 6 7 8 icon 9 2018 7 13 1 2 buyprice 8 3 2018 7 12 1 bug 2 2018 7 11 1 banner 1 6 2 2018 7 10 1 2018 7 9 1 2m 2018 7 8 1 banner bottom 3px 20px 2 3 4 5 banner 126 6 poptip bug 2018 7 7 1 3 bug 2 js 3 myextension 4 nodata 5 app 6 bug | front_end |
esp-google-iot | ESP Google IoT: this framework enables Google Cloud IoT Core connectivity with ESP32-based platforms using the Google Cloud IoT Device SDK (https://github.com/GoogleCloudPlatform/iot-device-sdk-embedded-c/blob/master/README.md). Supported hardware: ESP32-DevKitC (https://docs.espressif.com/projects/esp-idf/en/latest/hw-reference/modules-and-boards.html#esp32-devkitc-v4), ESP-WROVER-KIT (https://docs.espressif.com/projects/esp-idf/en/latest/hw-reference/modules-and-boards.html#esp-wrover-kit-v4-1), ESP32-PICO-KIT (https://docs.espressif.com/projects/esp-idf/en/latest/hw-reference/modules-and-boards.html#esp32-pico-kit-v4-1). Getting started: please refer to https://docs.espressif.com/projects/esp-idf/en/latest/get-started/index.html for setting up ESP-IDF; ESP-IDF can be downloaded from https://github.com/espressif/esp-idf (ESP-IDF v3.2 and above is the recommended version). Please refer to the example README (examples/smart_outlet/README.md) for setting up the smart-outlet use case, which allows controlling a load connected to a configurable GPIO on the ESP32 using Google Cloud IoT Core. | | server |
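For context on what "Google Cloud IoT Core connectivity" involves at the protocol level, the sketch below shows the general MQTT-bridge pattern (a JWT used as the password, telemetry published to the device's events topic) in Python. This is only a hedged illustration of the documented bridge behaviour, not the ESP32 SDK code; the project/region/registry/device IDs and key path are placeholders, and it assumes the paho-mqtt 1.x client API and PyJWT.

```python
# Hedged sketch of the Cloud IoT Core MQTT bridge pattern (placeholders throughout).
import datetime
import jwt                       # PyJWT
import paho.mqtt.client as mqtt

PROJECT, REGION = "my-project", "us-central1"   # placeholders
REGISTRY, DEVICE = "my-registry", "my-esp32"    # placeholders

def make_jwt(project_id, private_key_path, algorithm="RS256"):
    # The token's audience is the project ID; it is used as the MQTT password.
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": project_id}
    with open(private_key_path) as f:
        return jwt.encode(claims, f.read(), algorithm=algorithm)

client_id = f"projects/{PROJECT}/locations/{REGION}/registries/{REGISTRY}/devices/{DEVICE}"
client = mqtt.Client(client_id=client_id)
client.username_pw_set(username="unused", password=make_jwt(PROJECT, "rsa_private.pem"))
client.tls_set()                                 # server-authenticated TLS
client.connect("mqtt.googleapis.com", 8883)
client.loop_start()
client.publish(f"/devices/{DEVICE}/events", '{"temp": 21.5}', qos=1)
```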
udacity-c2-restapi | udagram rest api udagram is a simple cloud application developed alongside the udacity cloud engineering nanodegree it allows users to register and log into a web client post photos to the feed and process photos using an image filtering microservice the project is split into three parts 1 the simple frontend https github com udacity cloud developer tree master course 02 exercises udacity c2 frontend a basic ionic client web application which consumes the restapi backend 2 the restapi backend https github com udacity cloud developer tree master course 02 exercises udacity c2 restapi a node express server which can be deployed to a cloud service 3 the image filtering microservice https github com udacity cloud developer tree master course 02 project image filter starter code the final project for the course it is a node express application which runs a simple script to process images getting setup installing project dependencies this project uses npm to manage software dependencies npm relies on the package json file located in the root of this repository after cloning open your terminal and run bash npm install tip npm i is shorthand for npm install installing useful tools 1 postbird https github com paxa postbird postbird is a useful client gui graphical user interface to interact with our provisioned postgres database we can establish a remote connection and complete actions like viewing data and changing schema tables columns ect 2 postman https www getpostman com downloads postman is a useful tool to issue and save requests postman can create get put post etc requests complete with bodies it can also be used to test endpoints automatically we ve included a collection udacity c2 restapi postman collection json which contains example requsts running the server locally to run the server locally in developer mode open terminal and run bash npm run dev developer mode runs off the typescript source any saves will reset the server and run the latest version of the codebase | cloud |
machine-learning-nanodegree | Machine Learning Nanodegree by Udacity. About: this repository contains all the code, notes and guides I have gathered while on this programme. Feel free to use them. | | ai |
t89emu | t89 emu a risc v emulator built for embedded and operating system emulation features 32 bit rv32i c c support interrupt handling m mode video device text graphics mode debugging interface disassembler registers memory viewer usage t89 emu uses the risc v gnu compiler toolchain found a href https github com riscv collab riscv gnu toolchain target blank here a a guide outlining the build process can be found a href https mindchasers com dev rv getting started target blank here a most importantly make sure to configure the cross compiler to properly target t89 emu console cd riscv gnu toolchain mkdir build cd build configure prefix opt riscv32 with arch rv32i with abi ilp32 make note the toolchain may take up to an hour to build getting started the firmware directory provides a skeleton template for embedded or operating systems development the code provided demonstrates how to properly interface t89 emu s hardware see details below detailing the hardware specifications the emulator provides a modern graphical user interface using dear imgui with an opengl glfw backend to build the emulator execute the following commands console cd build build sh once the emulator is built navigate to the firmware directory and compiler the firmware using make console cd firmware make the makefile provided compiles and links the source code outputting an elf file targeting the risc v t89 emu architecture after obtaining the binary the emulator is ready to run console cd run sh a window with the application should appear allowing you to run the firmware on the emulator running the sample hello world application the window should look like alt text img helloworld png hello world example hardware documentation memory layout address memory section 0x00000000 0x0001ffff instruction memory 0x10000000 0x100fffff data memory 0x20000000 0x2008ffff video memory 0x30000000 0x30000010 csr memory note the location of instruction data memory can be re configured in the linker script but the changes must also be reflected in the emulator s source code control state registers csrs refer to the risc v privileged spec for a complete description of the control state registers below are details specific to t89 emu s csr implementation memory mapped csrs address csr size bytes 0x30000000 mcycle 8 0x30000008 mtimecmp 8 0x30000010 keyboard 4 mcycle number of cycles since beginning of simulation mtimecmp because the architecture is 32 bit mtimecmp is split into 2 32 bit registers keyboard wip buttons are mapped to a bit in the keyboard register if a key is pressed the corresponding bit becomes high low otherwise mstatus bits 31 13 12 11 10 8 7 6 4 3 2 0 field reserved mpp reserved mpie reserved mie reserved mtvec t89 emu supports vector tables mip bits 31 12 11 10 8 7 6 4 3 2 0 field reserved meip reserved mtip reserved msip reserved mie bits 31 12 11 10 8 7 6 4 3 2 0 field reserved meie reserved mtie reserved msie reserved video memory the video memory device divides into several sections address video segment size bytes 0x20000000 controller 16 0x20000010 text buffer 1344 0x20000550 pixel buffer 589824 video controller address video segment size bytes 0x20000000 video mode 1 0x20000001 0x2000000f unused 15 most bytes of the video controller are unused or reserved for later use however the first byte defines the mode of the video controller by default the video mode byte initializes to 0 the video mode byte can be changed in software by setting the byte to 1 video text mode or 2 video graphics mode a wip video text buffer 
the video text buffer is a character buffer located at physical memory address 0x20000010 when video text mode is enabled the characters stores in the video text buffer will be displayed to the lcd display module the display can print up to 21 lines of characters each line fitting a maximum of 64 characters video graphics mode the video graphics buffer is a contigious array of 32 bit pixel data located at physical memory address 0x20000550 when enabled graphics mode displays the a 512x288 resolution image to the lcd display module this is a work in progress 32 bit pixel byte 3 2 1 0 field a b g r future ideas c c decompiler read firmware directly from elf file first implementation released implement exceptions in vector table soon design a more extensive graphics mode to support tiles palettes add user and supervisor protection levels cleaner ui support port build system to cmake tentative macos windows support design and run t89 s architecture on an fpga posibble extensions to project using llvm to create a backend to target the risc v architecture designed for t89 emu the scope of backend development is vast and likely warrants a separate project of its own using llvm to develop a front end for a custom programming language developer remarks i am the only developer maintaining this project i made the project to better understand my knowledge of computer science related fields | computer-architecture elf-parser embedded-systems operating-systems riscv | os |
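Since the tables above pack a lot of numbers, here is a tiny Python re-encoding of the memory map and the mstatus bit fields as a sanity check. It is illustrative only (the emulator itself is written in C/C++); the helper names are made up, and the values come straight from the tables in this README.

```python
# Re-encoding of the t89-emu memory map and mstatus fields listed above.
MEMORY_MAP = [
    ("instruction memory", 0x00000000, 0x0001FFFF),
    ("data memory",        0x10000000, 0x100FFFFF),
    ("video memory",       0x20000000, 0x2008FFFF),
    ("csr memory",         0x30000000, 0x30000010),  # range as listed in the README
]

def region(addr):
    """Return which memory section a physical address falls in (None if unmapped)."""
    for name, lo, hi in MEMORY_MAP:
        if lo <= addr <= hi:
            return name
    return None

def decode_mstatus(value):
    """Extract the mstatus fields from the table above: MPP[12:11], MPIE[7], MIE[3]."""
    return {"MPP": (value >> 11) & 0b11, "MPIE": (value >> 7) & 1, "MIE": (value >> 3) & 1}

print(region(0x20000550))          # 'video memory' (start of the pixel buffer)
print(decode_mstatus(0x00001888))  # {'MPP': 3, 'MPIE': 1, 'MIE': 1}
```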
Open3D-ML | p align center img src https raw githubusercontent com isl org open3d master docs static open3d logo horizontal png width 320 span style font size 220 b ml b span p ubuntu ci https github com isl org open3d ml workflows ubuntu 20ci badge svg style check https github com isl org open3d ml workflows style 20check badge svg pytorch badge https img shields io badge pytorch supported brightgreen style flat logo pytorch tensorflow badge https img shields io badge tensorflow supported brightgreen style flat logo tensorflow installation installation get started getting started structure repository structure tasks algorithms tasks and algorithms model zoo model zoo md datasets datasets how tos how tos contribute contribute open3d ml is an extension of open3d for 3d machine learning tasks it builds on top of the open3d core library and extends it with machine learning tools for 3d data processing this repo focuses on applications such as semantic point cloud segmentation and provides pretrained models that can be applied to common tasks as well as pipelines for training open3d ml works with tensorflow and pytorch to integrate easily into existing projects and also provides general functionality independent of ml frameworks such as data visualization installation users open3d ml is integrated in the open3d v0 11 python distribution and is compatible with the following versions of ml frameworks pytorch 1 8 2 tensorflow 2 5 2 cuda 10 1 11 on gnu linux x86 64 optional you can install open3d with bash make sure you have the latest pip version pip install upgrade pip install open3d pip install open3d to install a compatible version of pytorch or tensorflow you can use the respective requirements files bash to install a compatible version of tensorflow pip install r requirements tensorflow txt to install a compatible version of pytorch pip install r requirements torch txt to install a compatible version of pytorch with cuda on linux pip install r requirements torch cuda txt to test the installation use bash with pytorch python c import open3d ml torch as ml3d or with tensorflow python c import open3d ml tf as ml3d if you need to use different versions of the ml frameworks or cuda we recommend to build open3d from source http www open3d org docs release compilation html getting started reading a dataset the dataset namespace contains classes for reading common datasets here we read the semantickitti dataset and visualize it python import open3d ml torch as ml3d or open3d ml tf as ml3d construct a dataset by specifying dataset path dataset ml3d datasets semantickitti dataset path path to semantickitti get the all split that combines training validation and test set all split dataset get split all print the attributes of the first datum print all split get attr 0 print the shape of the first point cloud print all split get data 0 point shape show the first 100 frames using the visualizer vis ml3d vis visualizer vis visualize dataset dataset all indices range 100 visualizer gif docs images getting started ml visualizer gif loading a config file configs of models datasets and pipelines are stored in ml3d configs users can also construct their own yaml files to keep record of their customized configurations here is an example of reading a config file and constructing modules from it python import open3d ml as ml3d import open3d ml torch as ml3d or open3d ml tf as ml3d framework torch or tf cfg file ml3d configs randlanet semantickitti yml cfg ml3d utils config load from file cfg file fetch the 
classes by the name pipeline ml3d utils get module pipeline cfg pipeline name framework model ml3d utils get module model cfg model name framework dataset ml3d utils get module dataset cfg dataset name use the arguments in the config file to construct the instances cfg dataset dataset path path to your dataset dataset dataset cfg dataset pop dataset path none cfg dataset model model cfg model pipeline pipeline model dataset cfg pipeline semantic segmentation running a pretrained model for semantic segmentation building on the previous example we can instantiate a pipeline with a pretrained model for semantic segmentation and run it on a point cloud of our dataset see the model zoo model zoo for obtaining the weights of the pretrained model python import os import open3d ml as ml3d import open3d ml torch as ml3d cfg file ml3d configs randlanet semantickitti yml cfg ml3d utils config load from file cfg file model ml3d models randlanet cfg model cfg dataset dataset path path to your dataset dataset ml3d datasets semantickitti cfg dataset pop dataset path none cfg dataset pipeline ml3d pipelines semanticsegmentation model dataset dataset device gpu cfg pipeline download the weights ckpt folder logs os makedirs ckpt folder exist ok true ckpt path ckpt folder randlanet semantickitti 202201071330utc pth randlanet url https storage googleapis com open3d releases model zoo randlanet semantickitti 202201071330utc pth if not os path exists ckpt path cmd wget o format randlanet url ckpt path os system cmd load the parameters pipeline load ckpt ckpt path ckpt path test split dataset get split test data test split get data 0 run inference on a single example returns dict with predict labels and predict scores result pipeline run inference data evaluate performance on the test set this will write logs to logs pipeline run test users can also use predefined scripts readme md using predefined scripts to load pretrained weights and run testing training a model for semantic segmentation similar as for inference pipelines provide an interface for training a model on a dataset python use a cache for storing the results of the preprocessing default path is logs cache dataset ml3d datasets semantickitti dataset path path to semantickitti use cache true create the model with random initialization model randlanet pipeline semanticsegmentation model model dataset dataset max epoch 100 prints training progress in the console pipeline run train for more examples see examples https github com isl org open3d ml tree master examples and the scripts https github com isl org open3d ml tree master scripts directories you can also enable saving training summaries in the config file and visualize ground truth and results with tensorboard see this tutorial docs tensorboard md 3dml models training and inference for details img width 640 src https user images githubusercontent com 41028320 146465032 30696948 54f7 48df bc48 add8d2e38421 jpg 3d object detection running a pretrained model for 3d object detection the 3d object detection model is similar to a semantic segmentation model we can instantiate a pipeline with a pretrained model for object detection and run it on a point cloud of our dataset see the model zoo model zoo for obtaining the weights of the pretrained model python import os import open3d ml as ml3d import open3d ml torch as ml3d cfg file ml3d configs pointpillars kitti yml cfg ml3d utils config load from file cfg file model ml3d models pointpillars cfg model cfg dataset dataset path path to your dataset dataset 
ml3d datasets kitti cfg dataset pop dataset path none cfg dataset pipeline ml3d pipelines objectdetection model dataset dataset device gpu cfg pipeline download the weights ckpt folder logs os makedirs ckpt folder exist ok true ckpt path ckpt folder pointpillars kitti 202012221652utc pth pointpillar url https storage googleapis com open3d releases model zoo pointpillars kitti 202012221652utc pth if not os path exists ckpt path cmd wget o format pointpillar url ckpt path os system cmd load the parameters pipeline load ckpt ckpt path ckpt path test split dataset get split test data test split get data 0 run inference on a single example returns dict with predict labels and predict scores result pipeline run inference data evaluate performance on the test set this will write logs to logs pipeline run test users can also use predefined scripts readme md using predefined scripts to load pretrained weights and run testing training a model for 3d object detection similar as for inference pipelines provide an interface for training a model on a dataset python use a cache for storing the results of the preprocessing default path is logs cache dataset ml3d datasets kitti dataset path path to kitti use cache true create the model with random initialization model pointpillars pipeline objectdetection model model dataset dataset max epoch 100 prints training progress in the console pipeline run train below is an example of visualization using kitti the example shows the use of bounding boxes for the kitti dataset img width 640 src https github com isl org open3d ml blob master docs images visualizer boundingboxes png raw true for more examples see examples https github com isl org open3d ml tree master examples and the scripts https github com isl org open3d ml tree master scripts directories you can also enable saving training summaries in the config file and visualize ground truth and results with tensorboard see this tutorial docs tensorboard md 3dml models training and inference for details img width 640 src https user images githubusercontent com 41028320 146465084 bc397e4c 494a 4464 a73d 525e82a9b6ce jpg using predefined scripts scripts run pipeline py https github com isl org open3d ml blob master scripts run pipeline py provides an easy interface for training and evaluating a model on a dataset it saves the trouble of defining specific model and passing exact configuration python scripts run pipeline py tf torch c path to config pipeline semanticsegmentation objectdetection extra args you can use script for both semantic segmentation and object detection you must specify either semanticsegmentation or objectdetection in the pipeline parameter note that extra args will be prioritized over the same parameter present in the configuration file so instead of changing param in config file you may pass the same as a command line argument while launching the script for eg launch training for randlanet on semantickitti with torch python scripts run pipeline py torch c ml3d configs randlanet semantickitti yml dataset dataset path path to dataset pipeline semanticsegmentation dataset use cache true launch testing for pointpillars on kitti with torch python scripts run pipeline py torch c ml3d configs pointpillars kitti yml split test dataset dataset path path to dataset pipeline objectdetection dataset use cache true for further help run python scripts run pipeline py help repository structure the core part of open3d ml lives in the ml3d subfolder which is integrated into open3d in the ml namespace in 
addition to the core part the directories examples and scripts provide supporting scripts for getting started with setting up a training pipeline or running a network on a dataset docs markdown and rst files for documentation examples place for example scripts and notebooks ml3d package root dir that is integrated in open3d configs model configuration files datasets generic dataset code will be integratede as open3d ml tf torch datasets metrics metrics available for evaluating ml models utils framework independent utilities available as open3d ml tf torch utils vis ml specific visualization functions tf directory for tensorflow specific code same structure as ml3d torch this will be available as open3d ml tf torch directory for pytorch specific code available as open3d ml torch dataloaders framework specific dataset code e g wrappers that can make use of the generic dataset code models code for models modules smaller modules e g metrics and losses pipelines pipelines for tasks like semantic segmentation utils utilities for scripts demo scripts for training and dataset download scripts tasks and algorithms semantic segmentation for the task of semantic segmentation we measure the performance of different methods using the mean intersection over union miou over all classes the table shows the available models and datasets for the segmentation task and the respective scores each score links to the respective weight file model dataset semantickitti toronto 3d s3dis semantic3d paris lille3d scannet randla net tf 53 7 https storage googleapis com open3d releases model zoo randlanet semantickitti 202201071330utc zip 73 7 https storage googleapis com open3d releases model zoo randlanet toronto3d 202201071330utc zip 70 9 https storage googleapis com open3d releases model zoo randlanet s3dis 202201071330utc zip 76 0 https storage googleapis com open3d releases model zoo randlanet semantic3d 202201071330utc zip 70 0 https storage googleapis com open3d releases model zoo randlanet parislille3d 202201071330utc zip randla net torch 52 8 https storage googleapis com open3d releases model zoo randlanet semantickitti 202201071330utc pth 74 0 https storage googleapis com open3d releases model zoo randlanet toronto3d 202201071330utc pth 70 9 https storage googleapis com open3d releases model zoo randlanet s3dis 202201071330utc pth 76 0 https storage googleapis com open3d releases model zoo randlanet semantic3d 202201071330utc pth 70 0 https storage googleapis com open3d releases model zoo randlanet parislille3d 202201071330utc pth kpconv tf 58 7 https storage googleapis com open3d releases model zoo kpconv semantickitti 202010021102utc zip 65 6 https storage googleapis com open3d releases model zoo kpconv toronto3d 202012221551utc zip 65 0 https storage googleapis com open3d releases model zoo kpconv s3dis 202010091238 zip 76 7 https storage googleapis com open3d releases model zoo kpconv parislille3d 202011241550utc zip kpconv torch 58 0 https storage googleapis com open3d releases model zoo kpconv semantickitti 202009090354utc pth 65 6 https storage googleapis com open3d releases model zoo kpconv toronto3d 202012221551utc pth 60 0 https storage googleapis com open3d releases model zoo kpconv s3dis 202010091238 pth 76 7 https storage googleapis com open3d releases model zoo kpconv parislille3d 202011241550utc pth sparseconvunet torch 68 https storage googleapis com open3d releases model zoo sparseconvunet scannet 202105031316utc pth sparseconvunet tf 68 2 https storage googleapis com open3d releases model 
zoo sparseconvunet scannet 202105031316utc zip pointtransformer torch 69 2 https storage googleapis com open3d releases model zoo pointtransformer s3dis 202109241350utc pth pointtransformer tf 69 2 https storage googleapis com open3d releases model zoo pointtransformer s3dis 202109241350utc zip using weights from original author object detection for the task of object detection we measure the performance of different methods using the mean average precision map for bird s eye view bev and 3d the table shows the available models and datasets for the object detection task and the respective scores each score links to the respective weight file for the evaluation the models were evaluated using the validation subset according to kitti s validation criteria the models were trained for three classes car pedestrian and cyclist the calculated values are the mean value over the map of all classes for all difficulty levels for the waymo dataset the models were trained on three classes pedestrian vehicle cyclist model dataset kitti bev 3d 0 70 waymo bev 3d 0 50 pointpillars tf 61 6 55 2 https storage googleapis com open3d releases model zoo pointpillars kitti 202012221652utc zip pointpillars torch 61 2 52 8 https storage googleapis com open3d releases model zoo pointpillars kitti 202012221652utc pth avg 61 01 48 30 best 61 47 57 55 https storage googleapis com open3d releases model zoo pointpillars waymo 202211200158utc seed2 gpu16 pth wpp train pointrcnn tf 78 2 65 9 https storage googleapis com open3d releases model zoo pointrcnn kitti 202105071146utc zip pointrcnn torch 78 2 65 9 https storage googleapis com open3d releases model zoo pointrcnn kitti 202105071146utc pth wpp train the avg metrics are the average of three sets of training runs with 4 8 16 and 32 gpus training was for halted after 30 epochs model checkpoint is available for the best training run training pointrcnn to use ground truth sampling data augmentation for training we can generate the ground truth database as follows python scripts collect bboxes py dataset path path to data root this will generate a database consisting of objects from the train split it is recommended to use this augmentation for dataset like kitti where objects are sparse the two stages of pointrcnn are trained separately to train the proposal generation stage of pointrcnn with pytorch run the following command train rpn for 100 epochs python scripts run pipeline py torch c ml3d configs pointrcnn kitti yml dataset dataset path path to dataset mode rpn epochs 100 after getting a well trained rpn network we can train rcnn network with frozen rpn weights train rcnn for 70 epochs python scripts run pipeline py torch c ml3d configs pointrcnn kitti yml dataset dataset path path to dataset mode rcnn model ckpt path path to checkpoint epochs 100 model zoo for a full list of all weight files see model weights txt https storage googleapis com open3d releases model zoo model weights txt and the md5 checksum file model weights md5 https storage googleapis com open3d releases model zoo integrity txt datasets the following is a list of datasets for which we provide dataset reader classes semantickitti project page http semantic kitti org toronto 3d github https github com weikaitan toronto 3d semantic 3d project page http www semantic3d net s3dis project page http buildingparser stanford edu dataset html paris lille 3d project page https npm3d fr paris lille 3d argoverse project page https www argoverse org kitti project page http www cvlibs net datasets kitti eval 
object php obj benchmark 3d lyft project page https level 5 global data nuscenes project page https www nuscenes org waymo project page https waymo com open scannet project page http www scan net org for downloading these datasets visit the respective webpages and have a look at the scripts in scripts download datasets https github com isl org open3d ml tree master scripts download datasets how tos visualize network predictions docs howtos md visualize network predictions visualize custom data docs howtos md visualize custom data adding a new model docs howtos md adding a new model adding a new dataset docs howtos md adding a new dataset distributed training docs howtos md distributed training visualize and compare input data ground truth and results in tensorboard docs tensorboard md inference with intel openvino docs openvino md contribute there are many ways to contribute to this project you can implement a new model add code for reading a new dataset share parameters and weights for an existing model report problems and bugs please make your pull requests to the dev https github com isl org open3d ml tree dev branch open3d is a community effort we welcome and celebrate contributions from the community if you want to share weights for a model you trained please attach or link the weights file in the pull request for bugs and problems open an issue https github com isl org open3d ml issues please also check out our communication channels to get in contact with the community communication channels github issue https github com isl org open3d issues bug reports feature requests etc forum https github com isl org open3d discussions discussion on the usage of open3d discord chat https discord com invite d35bgvn online chats discussions and collaboration with other users and developers citation please cite our work pdf https arxiv org abs 1801 09847 if you use open3d bib article zhou2018 author qian yi zhou and jaesik park and vladlen koltun title open3d a modern library for 3d data processing journal arxiv 1801 09847 year 2018 | 3d-perception datasets pretrained-models lidar rgbd tensorflow pytorch visualization semantic-segmentation object-detection 3d-object-detection | ai |
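the open3d-ml snippets quoted in the row above were flattened by this dump (dots, underscores and line breaks were stripped), so here is a de-flattened sketch of the semantic segmentation inference flow it describes. module and method names follow the upstream open3d-ml examples (including the `_ml3d` vs `ml3d` aliases whose underscore was lost above); the dataset path and checkpoint location are placeholders.

```python
import os
import open3d.ml as _ml3d
import open3d.ml.torch as ml3d   # or: import open3d.ml.tf as ml3d

# load the RandLA-Net / SemanticKITTI config shipped with Open3D-ML
cfg = _ml3d.utils.Config.load_from_file("ml3d/configs/randlanet_semantickitti.yml")

model = ml3d.models.RandLANet(**cfg.model)
cfg.dataset["dataset_path"] = "/path/to/your/dataset"          # placeholder
dataset = ml3d.datasets.SemanticKITTI(cfg.dataset.pop("dataset_path", None), **cfg.dataset)
pipeline = ml3d.pipelines.SemanticSegmentation(model, dataset=dataset, device="gpu", **cfg.pipeline)

# download the pretrained weights listed in the model zoo table
ckpt_folder = "./logs/"
os.makedirs(ckpt_folder, exist_ok=True)
ckpt_path = ckpt_folder + "randlanet_semantickitti_202201071330utc.pth"
url = "https://storage.googleapis.com/open3d-releases/model-zoo/randlanet_semantickitti_202201071330utc.pth"
if not os.path.exists(ckpt_path):
    os.system("wget {} -O {}".format(url, ckpt_path))
pipeline.load_ckpt(ckpt_path=ckpt_path)

# inference on one frame of the test split, then full evaluation (logs go to ./logs)
test_split = dataset.get_split("test")
result = pipeline.run_inference(test_split.get_data(0))   # dict with 'predict_labels'/'predict_scores'
pipeline.run_test()
```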
DeepLearning-NLP | introduction to deep learning for natural language processing this repo accompanies the introduction to deep learning for natural language processing workshop to explain the core concepts of deep learning with emphasis on classifying text as the application the python data stack is used for the workshop overview the following topics are covered 1 what is deep learning 2 motivation some use cases 3 building blocks of neural networks neuron activation function backpropagation algorithm 4 word embedding 5 word2vec 6 introduction to keras 7 multi layer perceptron 8 convolutional neural network 9 recurrent neural network 10 challenges in deep learning depending on time the following topics might be covered 1 using tensorflow as backend for keras 2 unsupervised learning using autoencoders installation instructions please refer to the installation installation md instructions document that document also has instructions on how to run a script to check if the required packages are installed slides the slides used for the workshop are available here https speakerdeck com bargava introduction to deep learning for natural language processing | ai |
llm-pytorch | llm training scripts docs https github com gpauloski llm pytorch actions workflows docs yml badge svg https github com gpauloski llm pytorch actions tests https github com gpauloski llm pytorch actions workflows tests yml badge svg https github com gpauloski llm pytorch actions pre commit ci status https results pre commit ci badge github gpauloski llm pytorch main svg https results pre commit ci latest github gpauloski llm pytorch main tools and training scripts i have developed for building large language models in pytorch this repository provides data preprocessing scripts training scripts and training guides this repository is the successor to my old training tools bert pytorch https github com gpauloski bert pytorch as the old code had a lot of technical debt and was not well tested compared to the old repository this codebase aims to have better code health and maintainability thanks to tests type checking linters documentation etc install see the installation guide https gregpauloski com llm pytorch main installation getting started see the available guides https gregpauloski com llm pytorch main guides | language-models python pytorch training | ai |
JAXSeq | jaxseq note this is version 2 0 of jaxseq it supports jax v0 4 and there are quite few updates that should make it easier to work with however if you are dependent on the old version i would reccoment pulling from the old version branch or the version 1 0 commit under github versions overview built on top of huggingface https huggingface co s transformers https github com huggingface transformers library jaxseq enables training very large language models in jax https jax readthedocs io en latest currently it supports gpt2 gptj t5 and opt models jaxseq is designed to be light weight and easily extensible with the aim being to demonstrate a workflow for training large language models without with the heft that is typical other existing frameworks thanks to jax s pjit https jax readthedocs io en latest jax experimental pjit html function you can straightforwardly train models with arbitrary model and data parellelism you can trade off these two as you like you can also do model parallelism across multiple hosts support for gradient checkpointing gradient accumulation and bfloat16 training inference is provided as well for memory efficient training if you encounter an error or want to contribute feel free to drop an issue installation 1 pull from github bash git clone https github com sea snell jaxseq git cd jaxseq 2 install dependencies install with conda cpu tpu or gpu install with conda cpu shell conda env create f environment yml conda activate jaxseq python m pip install upgrade pip python m pip install e install with conda gpu shell conda env create f environment yml conda activate jaxseq python m pip install upgrade pip conda install jaxlib cuda jax cuda nvcc c conda forge c nvidia python m pip install e install with conda tpu shell conda env create f environment yml conda activate jaxseq python m pip install upgrade pip pip install jax tpu f https storage googleapis com jax releases libtpu releases html python m pip install e workflow we provide some example scripts for training and evaluating gpt2 gptj llama and t5 models using jaxseq however you should feel free to build your own workflow for training you can find these scripts in the examples directory each training script takes as input a jsonl file for eval and train data each of which should be of shape json in text something out text something else in text something else else out text something else else else the examples all use tyro https github com brentyi tyro to manage commandline args see their documentation https brentyi github io tyro this code was largely tested developed and optimized for use on tpu pods though it should also work well on gpu clusters google cloud buckets to further support tpu workflows the example scripts provide functionality for uploading downloading data and or checkpoints to from google cloud storage buckets this can be achieved by prefixing the path with gcs and depending on the permissions of the bucket you may need to specify the google cloud project and provide an authentication token other excellent references for working with large models in jax easylm https github com young geng easylm maxtext https github com google maxtext dall e mini repo https t co blm8e66utj huggingface model parallel jax demo https t co egscnvtndr gpt j repo https github com kingoflolz mesh transformer jax uses xmap instead of pjit alpa https github com alpa projects alpa jaxformer https github com salesforce jaxformer many components of this repo came from collaboration with easylm https github com young 
geng easylm | gpt2 gpt3 huggingface language-models opt deep-learning flax jax | ai |
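the jaxseq row above says each training script consumes a jsonl file whose lines have the shape in text / out text. below is a tiny sketch of producing such a file, assuming nothing beyond the two keys named in the readme (the file name and example strings are made up):

```python
import json

# hypothetical pairs -- only the "in_text"/"out_text" keys come from the readme
examples = [
    {"in_text": "question: what is jax?", "out_text": "a library for composable transformations"},
    {"in_text": "question: what is pjit?", "out_text": "jax's partitioned jit"},
]

# one json object per line, the shape the jaxseq example scripts expect
with open("train.jsonl", "w") as f:
    for item in examples:
        f.write(json.dumps(item) + "\n")
```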
topdoc | topdoc build status https travis ci org topdoc topdoc svg branch master https travis ci org topdoc topdoc codecov https codecov io gh topdoc topdoc branch master graph badge svg https codecov io gh topdoc topdoc dependency status https david dm org topdoc topdoc svg https david dm org topdoc topdoc npm version https badge fury io js topdoc svg https badge fury io js topdoc a tool for generating usage and styles guides for html components using css block comments quick intro by adding a topdoc block to your css you can describe an html css component and that information can be used to generate a styleguide here s an example component css topdoc name select description a dropdown select markup select name select option value value1 value 1 option option value value2 selected value 2 option option value value3 value 3 option select tags desktop mobile select select all your css junk here why create another css block comment format topdoc was originally created for topcoat http topcoat io and the one feature missing from other generators was support for any and all custom properties topdoc is extremely tolerant of custom properties it just passes them to the template which defines what to do with it the only required properties are name and markup other than that use whatever you need installation install with npm it s meant to be command line tool so you probably want to install it globally with g sh npm install g topdoc you can also use it as a npm script without install it globally super helpful for automating your styleguide building sh npm install save dev topdoc in your package json file use a script to call the topdoc cli too json scripts docs topdoc css main css with it setup you can then run it from the command line using sh npm run docs usage comment format topdoc uses postcss http postcss org to divide asunder your css document and find all the relevant component information below is an example of a topdoc comment css topdoc name button description a simple button modifiers active active state is active simulates an active state on mobile devices disabled disabled state is disabled simulates a disabled state on mobile devices markup a class topcoat button button a a class topcoat button is active button a a class topcoat button is disabled button a example http codepen io tags desktop light mobile button quiet blarg very true topcoat button topcoat button quiet topcoat button large topcoat button large quiet topcoat button cta topcoat button large cta all your css junk here topdoc comments are identified by the topdoc keyword on the first comment line the rest of the data uses a yaml http www yaml org friendly syntax the fields can be in any order but this is a good example for consistency sake the following are recommend and or required fields name required the full name of the component feel free to use spaces punctuation etc name sir button iii esq description something more descriptive then the title alone modifiers these can be pseudo classes or addition rules applied to the component this must be a yaml mapping http yaml4r sourceforge net doc page collections in yaml htm modifier description which becomes a js hash markup required this is the magic it s the html that will be used to display the component in the docs as most markup fields are long make sure to use the for multiline values css topdoc name button markup a class topcoat button button a a class topcoat button is active button a a class topcoat button is disabled button a tags just some obligatory metadata 
blarg since topdoc uses a flexible yaml syntax feel free to add any additional custom data you might need for your template components topdoc assumes everything between two topdoc comments and everything after the last topdoc comment is a component put anything that isn t a component general styles above the first topdoc comment however the idea of css components is pretty loose because it is rare to have all the required styles for a component in one place originally topdoc was designed to split up the css into components to then use that css in the styleguild to show as a snippet but honestly that snippet wasn t enough to make the component by itself so it really is only interesting as reference help the output of the help command sh topdoc help usage topdoc topdoc css file directory default src options generate usage guides for css options h help output usage information d destination directory default docs directory where the usage guides will be written like all the options source can be definied in the config or package json file s stdout outputs the parsed topdoc information as json in the console t template directory npm package name default topdoc default template path to template directory or package name note template argument is resolved using the resolve package p project title default cwd name title for your project c clobber deletes destination directory before running i ignore assets file list of files a file or comma delimeted list of files in the asset directory that should be ignored when copying them over a asset directory path path to directory of assets to copy to destination defaults to template directory set to false to not copy any assets v version output the version number command line source specify a source directory with s or source defaults to src bash topdoc s release css destination specify a destination with d or destination defaults to docs bash topdoc d topdocs template specify a template with t or template a default template is included in topdoc if one is not provided the template can be a single jade https github com visionmedia jade file bash topdoc t template template jade or a directory it will duplicate the whole template directory and look for index jade in the template folder provided bash topdoc t template this includes npm installed templates bash topdoc t node modules topdoc theme project title the project title will be passed through to the jade template file bash topdoc p awesome in the jade file it is project title jade title project title yeilds html title awesome title package json configuration all the options can be configured in the package json file this is super helpful if you are always using the same configuration it will look in the package json file if it exists but can be overridden by the command line options also additional data can be passed through to the jade template below is an example json name topcoat test version 0 4 1 description css for clean and fast web apps main gruntfile js dependencies topdoc 0 0 12 topdoc source release css destination topdocs template node modules topdoc theme templatedata title topcoat subtitle css for clean and fast web apps download url label download version 0 4 homeurl http topcoat io sitenav url http www garthdb com text usage guidelines url http bench topcoat io text benchmarks url http topcoat io blog text blog in the jade template the data is accessible using templatedata jade p templatedata subtitle yeilds html p css for clean and fast web apps p template the jade template has data 
passed through by default document object the document object contains relevant information about just the current document being generated below is an example json title button filename button css source test cases button css template lib template jade url index html components name button slug button details active active state n is active simulates an active state on mobile devices n disabled disabled state n is disabled simulates a disabled state on mobile devices markup a class topcoat button button a n a class topcoat button is active button a n a class topcoat button is disabled button a css topcoat button n topcoat button quiet n topcoat button large n topcoat button large quiet n topcoat button cta n topcoat button large cta n position relative n display inline block n vertical align top n webkit box sizing border box n moz box sizing border box n box sizing border box n webkit background clip padding n moz background clip padding n background clip padding box n padding 0 n margin 0 n font inherit n color inherit n background transparent n border none n cursor default n webkit user select none n moz user select none n ms user select none n user select none n o text overflow ellipsis n text overflow ellipsis n white space nowrap n overflow hidden n padding 0 1 16rem n font size 12px n line height 2rem n letter spacing 1px n color c6c8c8 n text shadow 0 1px rgba 0 0 0 0 69 n vertical align top n background color 595b5b n webkit box shadow inset 0 1px rgba 255 255 255 0 12 n box shadow inset 0 1px rgba 255 255 255 0 12 n border 1px solid rgba 0 0 0 0 36 n webkit border radius 3px n border radius 3px n n topcoat button active n topcoat button is active n topcoat button large active n topcoat button large is active n background color 404141 n webkit box shadow inset 0 1px rgba 0 0 0 0 18 n box shadow inset 0 1px rgba 0 0 0 0 18 n n topcoat button disabled n topcoat button is disabled n opacity 0 3 n cursor default n pointer events none n n name quiet button slug quiet button details active quiet button active state n is active simulates active state for a quiet button on touch interfaces n disabled disabled state n is disabled simulates disabled state markup a class topcoat button quiet button a n a class topcoat button quiet is active button a n a class topcoat button quiet is disabled button a css topcoat button quiet n background transparent n border 1px solid transparent n webkit box shadow none n box shadow none n n topcoat button quiet active n topcoat button quiet is active n topcoat button large quiet active n topcoat button large quiet is active n color c6c8c8 n text shadow 0 1px rgba 0 0 0 0 69 n background color 404141 n border 1px solid rgba 0 0 0 0 36 n webkit box shadow inset 0 1px rgba 0 0 0 0 18 n box shadow inset 0 1px rgba 0 0 0 0 18 n n topcoat button quiet disabled n topcoat button quiet is disabled n opacity 0 3 n cursor default n pointer events none n n nav object the nav object contains names and urls to all the generated html files in the jade template this can utilized to create a navigation to the other pages jade nav site ul each item in nav if item url document url li selected a href item url item text else li a href item url item text project object the project object contains relevant project information currently it only contains the title property passed through the command line p option or through the package json information jade title project title templatedata object as mentioned above additional data can be passed through to the template in the 
package json file this is accessible in the template as the templatedata object see the example above | os |
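to make the flattened topdoc examples above easier to read, here is a minimal topdoc block reconstructed in fenced form. the field names (name, description, modifiers, markup, tags) come from the readme; the multiline indicator the text refers to is the yaml pipe character, which this dump swallowed. class names and property values are illustrative only.

```css
/* topdoc
    name: Button
    description: A simple button
    modifiers:
        :active: Active state
        .is-disabled: Simulates a disabled state on mobile devices
    markup: |
        <a class="topcoat-button">Button</a>
        <a class="topcoat-button is-disabled">Button</a>
    tags:
        - desktop
        - mobile
*/
.topcoat-button {
    /* all your CSS junk here */
    padding: 0 1.16rem;
}
```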
ripplecharts-frontend | this project has been deprecated instead see xrpl explorer https livenet xrpl org source code https github com ripple explorer hr circleci https circleci com gh ripple ripplecharts frontend svg style svg https circleci com gh ripple ripplecharts frontend xrp charts this is the frontend for xrpcharts ripple com data visualization using angular js and d3 installation instructions 1 install node js and npm http nodejs org 2 install bower http bower io sudo npm install g bower 2 install grunt http gruntjs com sudo npm install g grunt cli 3 install the xrp charts frontend git clone https github com ripple ripplecharts frontend cd ripplecharts frontend npm install bower install 4 copy deployment environments json example into deployment environments json 5 copy src example config js into src config js and fill in the options as desired a url for the api is required build 1 run the following command to build the client grunt 2 navigate your browser to the build directory for the development version or to the bin directory for the compiled version | front_end |
STM32-RTOS-USB-HowToFix | stm32 rtos usb howtofix some useful tips to survive on stm32 using rtos and a usb cdc device after a week of wasting time around st generated code for stm32h7 i have solved many problems below is a collection of my experience solving these issues remember that after many fixes if you generate code again you lose all patches and have to start again stm32cubeide and stm32cubemx generate code for usb management that calls malloc inside an interrupt bad practice this can cause heap corruption also some st versions of freertos do not manage memory correctly in a multithread environment 1 whether or not you use usb if you have problems freertos memory management may be the cause try to use heap usenewlib c by dave nadler instead of middlewares third party freertos source portable memmang heap 4 c also take a look at http www nadler com embedded newlibandfreertos html for a very good explanation and also http www nadler com backups 20200111 draft2 stm cube issues and workflow html 2 try to increase heap and stack inside the ld file or via the code generator put heap to 0x400 or more and stack to 0x800 for example but feel free to try what is best for you 3 the usb code uses malloc inside an interrupt the best fix is to replace malloc with a static struct see pdf um1734 stm32cube usb device library at point 6 7 library footprint optimization this is a first step to improve the code 4 on windows 10 there are many problems opening a vcp this is because the driver returns bad vcp configuration parameters so in this situation the problem is not a memory management problem in file usbd cdc if c inside the private variables section declare static uint8 t uartcfg 7 0 0 0 0 0 0 0 then inside function cdc control fs case cdc set line coding uartcfg 0 pbuf 0 uartcfg 1 pbuf 1 uartcfg 2 pbuf 2 uartcfg 3 pbuf 3 uartcfg 4 pbuf 4 uartcfg 5 pbuf 5 uartcfg 6 pbuf 6 break case cdc get line coding pbuf 0 uartcfg 0 pbuf 1 uartcfg 1 pbuf 2 uartcfg 2 pbuf 3 uartcfg 3 pbuf 4 uartcfg 4 pbuf 5 uartcfg 5 pbuf 6 uartcfg 6 break this will prevent windows 10 from raising an error when opening the cdc virtual com port 5 inside sysmem c there is a function caddr t sbrk int incr that should be used for malloc but is never actually called so i think something has changed in memory management this needs to be investigated to see if there are new code improvements | os |
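the c code in fix 4 above was flattened into prose by this dump; below is a condensed sketch of the same change inside the generated usbd_cdc_if.c. it only shows the relevant switch cases, uses a loop instead of the seven individual assignments, and assumes the usual st cdc template symbols (CDC_SET_LINE_CODING, CDC_GET_LINE_CODING, USBD_OK) that are defined in that generated file:

```c
/* private variables section of usbd_cdc_if.c */
static uint8_t uartCfg[7] = { 0, 0, 0, 0, 0, 0, 0 };   /* cached line-coding from the host */

/* body of the generated CDC_Control_FS() handler */
static int8_t CDC_Control_FS(uint8_t cmd, uint8_t *pbuf, uint16_t length)
{
  switch (cmd)
  {
    case CDC_SET_LINE_CODING:
      /* remember whatever baud rate / format the host just configured */
      for (uint8_t i = 0; i < 7; i++) { uartCfg[i] = pbuf[i]; }
      break;

    case CDC_GET_LINE_CODING:
      /* hand the same parameters back, so Windows 10 no longer fails
         when an application opens the virtual COM port */
      for (uint8_t i = 0; i < 7; i++) { pbuf[i] = uartCfg[i]; }
      break;

    default:
      /* all other class requests stay as generated by CubeMX */
      break;
  }
  (void)length;
  return (USBD_OK);
}
```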
moeSS | moess moess is a front end for https github com mengskysama shadowsocks tree manyuser thanks to ss panel https github com orvice ss panel demo https ss qaq moe install https www evernote com shard s42 sh 7a30525d a949 4132 9916 1f4fbdbf4828 6eca7d1ce520e173b1a5ebf9489a766d wiki https github com wzxjohn moess wiki import shadowsocks sql to your database may delete your existing data rename config sample php and database sample php in application config then change the settings use admin admin12345 to login the admin dashboard and change settings in http your domain com admin system config html to prevent spam register users need to click a link to activate the account so you need to set a method to send e mail currently support php mail sendmail smtp and sendgrid web api send test e mail function will be added soon method option value php mail mail sendmail sendmail smtp smtp sendgrid sendgrid only change this is not enough you also need to change other values need by the method you choose invite only is default on you need to generate invite code before anyone can registe because i don t use any encrypt function when post the data you are suggesting to secure your site by using ssl certs many notices and sentences are written directly in the view files so you need to edit the file to change them they may moved to database in the future upgrade if database structure has been changed after update i will upload a sql file in update folder just use that sql file to update database structures license the license under which the moess is released is the gplv3 or later from the free software foundation a copy of the license is included with every copy of moess but you can also read the text of the license here https github com wzxjohn moess blob master license in addition i request anyone who uses this software do not change the copyright information requires this system is using codeigniter 3 0 https github com bcit ci codeigniter to build so you need php 5 4 or newer mysql 5 1 curl to run apache is the best one because it support htaccess file which is needed to rewrite the request uri to index php | front_end |
material-resume | material r sum introduction professional r sum s and curriculum vitae formalities follow conventions from as early as the 1500 s http mashable com 2011 09 04 history of the resume history of the resume material resume is a fresh take on how this old formality can be improved to meet the current recipient s expectations using google s material design http www google com design spec material design introduction html visual language built using modern web development workflow tools such as bower for package management gulp and plugins for web development task management jade as html template language materialize sass as css scaffolding and json for content layer with these simple instruction you ll be able to impress any hiring manager and learn some neat tricks about the latest design and front end development tools preview the result of this workflow at http paiva cc designed in sketch app the sketch template containing the design for this resume along with material design visual elements symbols icons and much more can be found in the design folder https github com mpaiva material resume tree master src design showcase https cloud githubusercontent com assets 781670 6544935 fdec0634 c534 11e4 9aef a0a1e298038f png collaborate and learn 1 fork this repo help improving this template further the experience of using and learning the latest design development workflow with collaborators alike can only help you become a better designer developer click on the fork button on the top of this page https deltacloud apache org assets img git fork png get going with git if you are hanging around github you re already familiar with git if you are not there are simple and quick ways to learn github client for mac https mac github com github client for windows https windows github com try git free course at code school https www codeschool com courses try git 2 clone your fork assuming you already have git installed in your machine go into your project folder and clone your fork repository locally by adding the following command bash git clone https github com your username material resume git go into your new git folder bash cd material resume 3 requirements in order to take the most of the gulp automation included in this template you will need to have the following installed before moving forward node js and npm https nodejs org manages the development dependencies like gulp bower and plugins bower http bower io takes care of production components and libraries used to display the web page such as materialize jquery bootstrap etc livereload http livereload com monitors changes in the file system as soon as you save a file it is preprocessed as needed and the browser is refreshed getting started let s shift gears and install the npm bower modules this will be surprisingly easy as easy as 1 2 3 with only 3 commands you will be editing the code like a wizard 1 install npm modules npm install this could take a few seconds hang tight it will install all the dependencies included in the package json https github com mpaiva material resume blob master package json mostly gulp plugins and they will be added to the node modules folder 2 install bower components bower install this step takes care of the components dependencies included in the bower json https github com mpaiva material resume blob master bower json in our case just materialize jquery comes with it and it will be added to your bower components folder 3 start gulp and behold gulp this simple command you will run the default task 
in the gulpfile js https github com mpaiva material resume blob master gulpfile js which includes the following it will create the builds development folder where your index html resides copy bower components into the lib folder converts sass into css converts jade templates into index html copy and optimize images from assets into images folder copy any assets into downloads folder finally it keeps a watch in your src folder for any future changes livereload instructions 1 be sure to have the livereload browser extension http feedback livereload com knowledgebase articles 86242 how do i install and use the browser extensions installed 2 load the index html in the builds development folder 3 make sure livereload 2 is running 4 click the livereload toolbar button to enable or disable livereload image https cloud githubusercontent com assets 781670 6565922 77094562 c68a 11e4 9c73 dcdf53beb475 png html and jade templates jade http jade lang com is a very cool html template language that brings a number of features to allow front end developers to leverage dry practices http en wikipedia org wiki don 27t repeat yourself with variables data binding mixins includes etc to learn more about jade check out this tuts video https webdesign tutsplus com courses top speed html development with jade by kezz bracey kezzbracey https twitter com kezzbracey after you are familiar with jade take a closer look in the src templates folder and the jade files content layer via json for obvious reasons it is important to keep the content of your resume decoupled from your presentation layer the combination of gulp and jade allows to connect the templates with a json file inside the src templates content folder you ll find the mpaiva json https github com mpaiva material resume blob master src templates content mpaiva json containing the sections and content structure of the material resume template this file is serviced via the gulpfile js https github com mpaiva material resume blob master gulpfile js a variable containing with path to data source javascript 19 json containing the content for jade templates 20 var resumedata require src templates content mpaiva json is passed to the gulp jade plugin https www npmjs com package gulp jade via the locals option see example below javascript var jade require gulp jade gulp task templates function var resumedata require src templates content mpaiva json gulp src lib jade pipe jade locals resumedata pipe gulp dest dist then you can bind any data node from the json file with locals name see example below json node javascript name marcelo paiva title user experience director photo images mpaiva2 jpg jade template portrait card jade https github com mpaiva material resume blob master src templates partials portrait card jade jade card portrait card image img src locals photo portrait wrapper h4 locals name h6 locals title that s it really with these initial instructions you should be able to get going in no time have fun and contribute if you have any questions or suggestions please leave us note on the issues page https github com mpaiva material resume issues new | front_end |
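the gulp wiring quoted above lost its line breaks in this dump; here is a de-flattened sketch of the same gulp-jade task that feeds the json content layer to the templates via locals. the plugin name and the mpaiva json path come from the readme, while the glob and output paths are placeholders:

```javascript
var gulp = require('gulp');
var jade = require('gulp-jade');

// JSON "content layer" for the Jade templates
var resumeData = require('./src/templates/content/mpaiva.json');

gulp.task('templates', function () {
  return gulp.src('./src/templates/*.jade')        // glob is a placeholder
    // expose the JSON to the templates through `locals`,
    // so `locals.name`, `locals.photo`, ... can be bound in Jade
    .pipe(jade({ locals: resumeData }))
    .pipe(gulp.dest('./builds/development/'));     // matches the build folder described above
});
```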
Web-development | web development following dr angela yu react https github com parv3213 web development tree master react all the react related projects are in this folder todo list react https parv3213 github io react todo list a simple todo list skills react html css bootstrap tindog website https parv3213 github io web development tindog index html a beautiful front end similar to tinder skills html css bootstrap dicee chalange https parv3213 github io web development dicee challenge dicee html alternate for a coin toss skills html css javascript drum kit https parv3213 github io web development drum kit index html a simple drum kit skills html css javascript simon game https parv3213 github io web development simon game simon game html simon game skills html css javascript jquery newsletter app https newsletter app parv herokuapp com newsletter subscribe using the mailchimp api skills html css bootstrap javascript express apis heroku daily journal https daily journal parv herokuapp com journal for adding and reading articles skills html css bootstrap javascript express ejs mongoose heroku todolist https todolist parv herokuapp com a to do list application skills html css bootstrap javascript express ejs mongoose atlis heroku css my site https parv3213 github io web development css my site index html about me skills html css html personal sites https parv3213 github io web development practice html personal sites my resume skills html basic css | html css web-application javascript bootstrap mongodb nodejs reactjs passport hashing | front_end |
CSDB-FE | crowdsourced medical image database be project files for the front end of the project crowdsourced medical image database by carson vache and dr lin li sponsored by the seattle university college of science and engineering please note that this project is meant to be a proof of concept and no optimizations have been performed | server |
esp-wolfssl | esp wolfssl licensing important note until march 2021 this repository contained binary distribution of wolfssl libraries which could be used royalty free on all espressif mcu products this royalty free binary distribution is not available anymore this repository now uses upstream wolfssl github pointer as submodule and can still be used as esp idf component please follow licensing requirements per wolfssl licensing https github com wolfssl wolfssl blob master licensing requirements esp idf to run the examples user must have installed esp idf version v4 1 minimum supported from https github com espressif esp idf git the idf path should be set as an environment variable getting started please clone this repository using git clone recursive https github com espressif esp wolfssl please refer to https docs espressif com projects esp idf en latest get started index html for setting esp idf esp idf can be downloaded from https github com espressif esp idf esp idf v4 1 and above is recommended version please refer to example readme examples readme md for more information on setting up examples options debugging and more esp wolfssl esp tls related options can be obtained by choosing ssl library as wolfssl in idf py make menuconfig component config esp tls choose ssl library it shows following options enable small cert verify this is a flag used in wolfssl component and is enabled by default in esp wolfssl enabling this flag allows user to authenticate the server by providing the intermediate ca certificate of the server for a more strict check disable this flag after which you will have to provide the root certificate at top of the hierarchy of certificate chain which will have common name issuer name such a strict check is not compulsary in most cases hence by default the flag is enabled but the option is provided for the user enable debug logs for wolfssl this option prints detailed logs of all the internal operations highly useful when debugging an error esp wolfssl specific options see note are available under idf py make menuconfig component config wolfssl enable alpn application layer protocol negotiation in wolfssl this option is enabled by default for wolfssl and can be disabled if not required note these options are valid for esp tls only if wolfssl is selected as its ssl tls library comparison of wolfssl and mbedtls the following table shows a typical comparison between wolfssl and mbedtls when https request which has server authentication was run with both ssl tls libraries and with all respective configurations set to default mbedtls in content length and out content length were set to 16384 bytes and 4096 bytes respectively property wolfssl mbedtls total heap consumed 19 kb 37 kb task stack used 2 2 kb 3 6 kb bin size 858 kb 736 kb additional pointers in general these are links which will be useful for using both wolfssl as well as networked and secure applications in general furthermore there is a more comprehensive tutorial that can be found in chapter 11 of the official wolfssl manual the examples in the wolfssl package and chapter 11 do appropriate error checking which is worth taking a look at for a more comprehensive api check out chapter 17 of the official manual wolfssl manual https www wolfssl com docs wolfssl manual wolfssl github https github com wolfssl wolfssl | os |
decentralized-voting-system | decentralized voting system a decentralized voting system where a user can walk into a government authorized center ex banks telecom companies etc and cast their vote using the proposed portal link to demo on youtube https www youtube com watch v tcvspcgiodm note all diagrams are made by me and appropriate credits must be given before copying it key advantages no voterid required as a user s validity age 18 is determined dynamically using the aadhaar api secure vote by azure blockchain and biometric authentication using pre existing aadhaar database reduced cost during election process shorter wait times as it is decentralized a vote can be cast from anywhere in the country highly scalable design efficient election system in which the portal can be up for days together in turn increasing voter turnout portal front end can provide useful information on the candidate and can aid in their decision making display promises proposals etc a vote s story a user will walk into a government authorized center and complete his her biometric verification once the verification is complete the user will be taken to a web based portal developed by me where he she will be presented with the voting options the portal then sends the information of the user s vote encrypted to backend developed by me where the data will be decrypted and the vote s transaction from the user to the candidate will take place using the azure blockchain service the candidate with the most votes is elected during each election time the users are that are voted are logged which will make sure only one transaction can be made by the user during the whole election process workflow diagram img src images workflow png alt workflow voting system workflow during the election time the admin will initiate the election when the election is initiated the candidate list is sent to the front end of the portal which is setup at govt authorized locations the front end can display useful information on the candidate and can aid in their decision making display promises proposals etc the encrypted vote along with the user information is sent to the initiate vote method at the backend this initiate vote method calls the account validation method which validates the user using the aadhaar api and make sures that the user has not voted yet if the user validation is successful then the vote cast method is called which sends the vote as a contract to the azure blockchain service note at the end of the election the candidate with the most votes is elected img src images portal workflow png alt portal workflow technologies azure blockchain aadhaar api service for biometric authentication python to communicate with blockchain and for backend and frontend api calls truffle provides tools to create and test smart contracts ganache to create private blockchain network for testing on localhost flask web framework docker deployment of portal on the cloud img src images enabled by png alt enabled by author email nishant aklecha gmail com linkedin https www linkedin com in naklecha codechef https www codechef com users naklecha pypi https pypi org user naklecha github https github com naklecha | vote election azure-blockchain aadhaar-api candidate election-process cast proposals verification codefundo codefundopp microsoft portal blockchain ethereum contracts ethereum-contract ethereum-blockchain | blockchain |
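the backend flow described above (validate the user via aadhaar, cast the vote as a blockchain transaction, log the user so only one vote per election is possible) is prose only in this readme. the following is a purely hypothetical python sketch of that control flow -- every function and data structure here is a stub invented for illustration, not code from the project:

```python
# every name below is a stand-in; the real project talks to the Aadhaar API
# and the Azure Blockchain Service instead of these in-memory structures.
voted_users = set()   # users logged for the current election (one vote each)
tally = {}            # candidate -> votes, standing in for the ledger

def aadhaar_is_eligible(user_id: str) -> bool:
    # placeholder for the biometric verification + dynamic age (>= 18) check
    return True

def cast_vote(user_id: str, candidate: str) -> None:
    # stand-in for sending the vote as a transaction/contract to the blockchain
    tally[candidate] = tally.get(candidate, 0) + 1

def initiate_vote(user_id: str, candidate: str) -> str:
    if user_id in voted_users or not aadhaar_is_eligible(user_id):
        return "rejected"             # already voted or failed validation
    cast_vote(user_id, candidate)
    voted_users.add(user_id)          # guarantees a single transaction per user
    return "accepted"

print(initiate_vote("user-1", "candidate-A"))   # accepted
print(initiate_vote("user-1", "candidate-B"))   # rejected
```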
cs50-seminars | cs50 seminars cs50 web development seminars fall 2015 | front_end |
ArtApp | public art app mobile app to guide people through the public art collection at the campus of the university of houston the code is using ionic framework http ionicframework com and wikitude sdk http www wikitude com products wikitude sdk for augmented reality ar features development prerequisites bower npm install g bower getting started for development 1 run npm install reads package json and installs node packges into node modules 2 run bower install reads bower json and installs local dependencies into the folder www lib run in browser or phonegap developer app 7 run ionic serve this uses ionic xml and will serve as local node server live updates when you make changes to the code this works with phonegap developer app wikitude will not function running on a ios android device 3 set environment variable android build to ant 4 run cordova platform add android 3 7 1 or cordova platform add ios 3 8 0 plugins are automatically installed via script in hooks before platform add note please ignore warnings regarding outdated plugins we are using fixed older versions of plugins to ensure that wikitude functions properly 5 edit plugins com wikitude phonegap wikitudeplugin www wikitudeplugin js and add the plugin key a free trial key can be requested from wikitude for development purposes 6 run ionic resources to generate icons and splash screen assets 7 run cordova build ios or cordova build android and use appropriate cordova commands to test build deploy note android 21 sdk platform must be installed via sdkmanager for a successful android build | front_end |
fuse10 | fuse10 acko net front end 2013 provided as a resource to learn from not to shamelessly copy or rip off images models and design steven wittens code is licensed under mit files under js lib are their respective authors and may be licensed differently use fetch audio sh to download the audio files for the demo portions | front_end |
Odoo-addons | odoo addons addons for odoo by matic madagascar technologies de l information et de la communication | server |
MLNotes | mlnotes notes written for myself to keep ml stats theory ideas organized will hopefully be useful to others to best take advantage of these notes you should download the html files and view them in any browser that supports mathjax eventually i ll organize them more clearly and maybe even make a pdf but for now i m just keeping ideas in a file named based on the central theme of that file compile notes to make sure the notes you re using are up to date it s probably a good idea to compile the notes using the knitr package in r once it s installed you can just run bash compile sh which will generate the appropriate html files | ai |
knowledge_base | this is a knowledge base monorepository of the ukraine open ios engineering guild structure links separated by topics wiki markdown pages with descriptions tutorials small chunks of code which show a single language feature in a single file playground style competency matrix the place where we all agreed on engineering levels and the skills needed for those levels main workflow github prs 1 open an issue in github issues or pick an existing one 2 fork this repo to your own 3 push some code changes 4 make a pr from the fork to this repo 5 add implementing issuenumber to the pr description to connect it to the github issue 6 enjoy friendly discussion | os |
esp8266-aws_iot | some examples using x 509 certificates and tlsv1 2 under arduino ide to communicate with aws iot after axtls update to v2 0 0 the esp8266 can work with tls v1 2 the major restriction for this small device communicates with aws iot natively how can i get aws iot working in my esp8266 with arduino ide first update esp8266 arduino core to last git version here are the instructions https github com esp8266 arduino using git version second creating a thing download and convert aws iot certificates to der format http docs aws amazon com iot latest developerguide create device certificate html converting pem to der format on windowns you should download openssl first br openssl x509 in aaaaaaaaa certificate pem crt txt out cert der outform der br openssl rsa in aaaaaaaaaa private pem key out private der outform der br copy cert der and private der to sketch data folder and upload it to spiffs using arduino esp8266fs plugin https github com esp8266 arduino esp8266fs plugin third uploading a arduino sketch some sketch examples are available in examples folder of this repository | server |
|
Matlab_MBD_tasks | matlab mbd tasks this repository wiil contain all tasks of matlab and model base design course for the embedded system intake 42 iti | os |
webdevbox | 61 6b 69 74 61 6f 6e 72 61 69 6c 73 40 43 6f 64 65 4d 69 6e 65 72 34 32 webdevbox demotext https github com akitaonrails webdevbox raw main helpers intro png https player vimeo com video 805780923 h ddd98118f0 webdevbox demo distrobox webdevbox this is an archlinux based docker image that comes pre installed with everything i think a web developer would need in the 2020s every language every tool every helper take a look at the dockerfile dockerfile to see all packages but in summary zsh of course with several plugins chezmoi to sync your dotfiles lunarvim already configured to work with tmux all major languages ruby nodejs python php kotlin etc asdf so you can install specific old language versions for your projects podman by default and every devops tool k8s skaffold helm terraform etc all the normal debug tools lsof strace etc usage first install podman in your os docker works fine as well for example in arch sudo pacman s podman install distrobox i tried the distrobox git package in aur but distrobox is still in heavy development and i stumbled upon a old bug that was already solved in the main branch so i recommend installing the unstable version manually curl s https raw githubusercontent com 89luca89 distrobox main install sudo sh s next now you have 2 options the easy one is to just pull the image from dockerhub podman pull docker io akitaonrails webdevbox latest or you can customize the dockerfile and build it manually git clone https github com akitaonrails webdevbox git cd webdevbox podman build t akitaonrails webdevbox finally it s normal distrobox usage let s create the box and enter in it mkdir p local share distrobox webdevbox distrobox create i akitaonrails webdevbox n webdevbox demo i distrobox enter webdevbox demo warning distrobox maps its internal home directory directly on top of your real home by default it will automatically create a user with the same username as you are using right now so it should be very seamleass to transition between them but be careful that whatever destructive command you run over your home files will be permanent if you prefer not to expose your home directory directly you can point the internal home to somewhere else like this distrobox create i akitaonrails webdevbox n webdevbox demo i h local share distrobox webdevbox volume home mnt host i prefer to map to a new directory and have a separated home per box then map my home as an external drive in mnt host then we can initialize chezmoi https www chezmoi io i d recommend first forking my dotfiles repository https github com akitaonrails dotfiles but let s use it as example chezmoi init https github com akitaonrails dotfiles chezmoi update it will prompt you for your specific information such as preferred git email i configured tmux to have the key bind ctrl alt n to open a new pane directly to a text file to serve as a shortcut for times when you want to make quick notes or add reminders i usually point to my dropbox synced obsidian directory for example and if you want to run podman inside distrobox yes you can run a new container inside another container podman is already configured to run rootless inside and if you chose to map your home directory directly to your real home then make sure to create the volume for the containers mkdir p home local share containers storage whenever you podman pull inside the box the blobs will be stored there so not to bloat the box itself remember that changes made inside the box are persistent and that s it you can start working faq did you 
configure git aliases yes type git la to have a list of all the built in aliases how can i see tmux shortcuts type ctrl b and to open a short incomplete list but reading the tmux conf file is easier how do i navigate in tmux the navigation key binding were customized the same ctrl hjkl are used to navigate between both tmux panels and lunarvim panels i also configured navigaton in copy mode to be like vim hjkl get used to the vi style of using hjkl instead of arrow keys do read tmux conf https github com akitaonrails dotfiles blob main dot tmux conf tmpl from my dotfiles repo did you customize lunarvim lunarvim is mostly stock i did add a few plugins such as github copilot and chatgpt do read config lua helpers config lua and go to the bottom of the file to see what i changed why didn t you install virtualenv nvm or rvm because the box comes with the much superior asdf https asdf vm com guide getting started html why did you install the pacman language packages if you installed asdf the best practice is to have the native packages in the os do their job only configure custom language versions from asdf per project for example cd my project asdf install ruby 2 6 0 asdf local ruby 2 6 0 now only this project directory responds to the specific obsolete ruby 2 6 why chezmoi to manage dotfiles because it felt simple you should always edit dotfiles in the local share chezmoi directory and add information specific to your machine in config chezmoi chezmoi toml if you created a new dotfile add it to the repository chezmoi add autotemplate fishrc if you changed some file in the local share directory update your real files with chezmoi update when everything is working push to your fork of my dotfiles with chezmoi cd git add git commit m description git push origin main exit read their documentation https www chezmoi io what are the largest installed packages this is why i chose not to install postman 300mb azure cli 600mb google cloud sdk 600mb you can install them inside the box anyway but damn rust is heavy yay ps yay version v11 3 2 total installed packages 558 foreign installed packages 10 explicitly installed packages 83 total size occupied by packages 4 8 gib size of pacman cache var cache pacman pkg 30 0 mib size of yay cache home akitaonrails local share distrobox webdevbox cache yay 0 0 b ten biggest packages rust 526 4 mib insomnia bin 394 9 mib jdk11 openjdk 322 2 mib chromium 277 2 mib go 195 6 mib gcc 171 3 mib jre11 openjdk headless 159 8 mib gcc libs 137 8 mib llvm libs 120 5 mib erlang nox 105 8 mib i m receiving errors from podman or podman compose if you see errors similar to this erro 0000 running usr sbin newuidmap 25137 0 1000 1 1 75537 65535 newuidmap write to uid map failed operation not permitted then run this in the terminal webdevbox podman config the initial welcome script already runs this but for some reason the error comes back until we run this again happy hacking links distrobox https github com 89luca89 distrobox lunarvim https github com lunarvim lunarvim learn vim https github com iggredible learn vim chezmoi https www chezmoi io user guide command overview redhat how to use podman inside of container https www redhat com sysadmin podman inside container rootless podman https github com containers podman blob main docs tutorials rootless tutorial md vim tmux navigator https github com christoomey vim tmux navigator chatgpt in vim https github com jackmort chatgpt nvim writing your tmux config a detailed guide https thevaluable dev tmux config mouseless useful tmux 
configuration examples https dev to iggredible useful tmux configuration examples k3g awesome tmux plugins https github com rothgar awesome tmux plugins jaime s guide to tmux the most awesome tool you didn t know you needed https www barbarianmeetscoding com blog jaimes guide to tmux the most awesome tool you didnt know you needed atuin a powerful alternative for shell history https trendoceans com atuin linux copyright c fabio akita 2023 mit licensed license | front_end |
ida-pro-idb-database | ida pro idb database many of ida pro idb database for game hacking and reverse analysis engineering good luck requirements ida pro version 7 0 or 7 7 python 2 or 3 cpu arch 68k m68000 arm x32 64 others remarks and rips game rom hacking cadillacs and dinosaurs violent storm the king of fighters 97 the punisher final fight others file extension bin game file for memory dump idb ida database file for 32 open use idb32 exe i64 ida database file for 64 open use idb64 exe uefi and reverse analysis video demo uefi readme https github com zengfr ida pro idb database tree main demo uefi article https my oschina net zengfr blog 5606084 uefi video https www bilibili com video bv1hg4y1v7ym ida pro plugin for recommends xrefsext https github com zengfr xrefsext ida all xrefs from viewer https github com zengfr ida all xrefs from viewer plugin for ida pro ida all xrefs to viewer https github com zengfr ida all xrefs to viewer plugin for ida pro winhex diff viewer plugin https github com zengfr winhex diff viewer plugin for ida pro hexrayscodexplorer for ida pro 7 7 https github com zengfr hexrayscodexplorer plugin for ida pro other link game hacking romhack data https github com zengfr romhack game hacking video https space bilibili com 492484080 hacking cps1 https github com zengfr romhack tree master cps1 neogeo https github com zengfr romhack tree master neogeo igs pgm https github com zengfr romhack tree master igs m68000 https github com zengfr romhack tree master m68000 z80 https github com zengfr romhack tree master z80 arm https github com zengfr romhack tree master arm x64 https github com zengfr romhack tree master x64 https github com zengfr arcade game romhacking sourcecode top secret data | ida ida-databases ida-plugin ida-pro ida-python idb hacking hacking-tool game-hacking mame reverse-analysis reverse-engineering rom-hacking ida-database ida-plugins m68000 68k | server |
NewPipe | h3 align center we are planning to i rewrite i large chunks of the codebase to bring about a href https github com teamnewpipe newpipe discussions 10118 a new modern and stable newpipe a h3 h4 align center please do b not b open pull requests for i new features i now only bugfix prs will be accepted h4 p align center a href https newpipe net img src assets new pipe icon 5 png width 150 a p h2 align center b newpipe b h2 h4 align center a libre lightweight streaming front end for android h4 p align center a href https f droid org packages org schabi newpipe img src https fdroid gitlab io artwork badge get it on en svg alt get it on f droid height 80 a p p align center a href https github com teamnewpipe newpipe releases alt github release img src https img shields io github release teamnewpipe newpipe svg a a href https www gnu org licenses gpl 3 0 alt license gplv3 img src https img shields io badge license gpl 20v3 blue svg a a href https github com teamnewpipe newpipe actions alt build status img src https github com teamnewpipe newpipe workflows ci badge svg branch dev event push a a href https hosted weblate org engage newpipe alt translation status img src https hosted weblate org widgets newpipe svg badge svg a a href https web libera chat newpipe alt irc channel newpipe img src https img shields io badge irc 20chat 23newpipe brightgreen svg a a href https www bountysource com teams newpipe alt bountysource bounties img src https img shields io bountysource team newpipe activity svg colorb cd201f a p hr p align center a href screenshots screenshots a bull a href supported services supported services a bull a href description description a bull a href features features a bull a href installation and updates installation and updates a bull a href contribution contribution a bull a href donate donate a bull a href license license a p p align center a href https newpipe net website a bull a href https newpipe net blog blog a bull a href https newpipe net faq faq a bull a href https newpipe net press press a p hr read this document in other languages deutsch doc readme de md english readme md espa ol doc readme es md fran ais doc readme fr md doc readme hi md italiano doc readme it md doc readme ko md portugu s brasil doc readme pt br md polski doc readme pl md doc readme pa md doc readme ja md rom n doc readme ro md soomaali doc readme so md t rk e doc readme tr md doc readme zh tw md doc readme asm md doc readme sr md b warning this app is in beta so you may encounter bugs if you do open an issue in our github repository by filling out the issue template b b putting newpipe or any fork of it into the google play store violates their terms and conditions b screenshots img src fastlane metadata android en us images phonescreenshots 00 png width 160 fastlane metadata android en us images phonescreenshots 00 png img src fastlane metadata android en us images phonescreenshots 01 png width 160 fastlane metadata android en us images phonescreenshots 01 png img src fastlane metadata android en us images phonescreenshots 02 png width 160 fastlane metadata android en us images phonescreenshots 02 png img src fastlane metadata android en us images phonescreenshots 03 png width 160 fastlane metadata android en us images phonescreenshots 03 png img src fastlane metadata android en us images phonescreenshots 04 png width 160 fastlane metadata android en us images phonescreenshots 04 png img src fastlane metadata android en us images phonescreenshots 05 png width 160 fastlane metadata 
android en us images phonescreenshots 05 png img src fastlane metadata android en us images phonescreenshots 06 png width 160 fastlane metadata android en us images phonescreenshots 06 png img src fastlane metadata android en us images phonescreenshots 07 png width 160 fastlane metadata android en us images phonescreenshots 07 png img src fastlane metadata android en us images phonescreenshots 08 png width 160 fastlane metadata android en us images phonescreenshots 08 png br br img src fastlane metadata android en us images teninchscreenshots 09 png width 405 fastlane metadata android en us images teninchscreenshots 09 png img src fastlane metadata android en us images teninchscreenshots 10 png width 405 fastlane metadata android en us images teninchscreenshots 10 png supported services newpipe currently supports these services we link to the service websites separately to avoid people accidentally opening a website they didn t want to youtube website https www youtube com and youtube music website https music youtube com wiki https en wikipedia org wiki youtube peertube website https joinpeertube org and all its instances open the website to know what that means wiki https en wikipedia org wiki peertube bandcamp website https bandcamp com wiki https en wikipedia org wiki bandcamp soundcloud website https soundcloud com wiki https en wikipedia org wiki soundcloud media ccc de website https media ccc de wiki https en wikipedia org wiki chaos computer club as you can see newpipe supports multiple video and audio services though it started off with youtube other people have added more services over the years making newpipe more and more versatile partially due to circumstance and partially due to its popularity youtube is the best supported out of these services if you use or are familiar with any of these other services please help us improve support for them we re looking for maintainers for soundcloud and peertube if you intend to add a new service please get in touch with us first our docs https teamnewpipe github io documentation provide more information on how a new service can be added to the app and to the newpipe extractor https github com teamnewpipe newpipeextractor description newpipe works by fetching the required data from the official api e g peertube of the service you re using if the official api is restricted e g youtube for our purposes or is proprietary the app parses the website or uses an internal api instead this means that you don t need an account on any service to use newpipe also since they are free and open source software neither the app nor the extractor use any proprietary libraries or frameworks such as google play services this means you can use newpipe on devices or custom roms that do not have google apps installed features watch videos at resolutions up to 4k listen to audio in the background only loading the audio stream to save data popup mode floating player aka picture in picture watch live streams show hide subtitles closed captions search videos and audios on youtube you can specify the content language as well enqueue videos and optionally save them as local playlists show hide general information about videos such as description and tags show hide next related videos show hide comments search videos audios channels playlists and albums browse videos and audios within a channel subscribe to channels yes without logging into any account get notifications about new videos from channels you re subscribed to create and edit channel groups for easier 
browsing and management browse video feeds generated from your channel groups view and search your watch history search and watch playlists these are remote playlists which means they re fetched from the service you re browsing create and edit local playlists these are created and saved within the app and have nothing to do with any service download videos audios subtitles closed captions open in kodi watch block age restricted material hidden span to keep old links compatible you should remove this span if you re translating the readme into another language span id updates span installation and updates you can install newpipe using one of the following methods 1 add our custom repo to f droid and install it from there the instructions are here https newpipe net faq tutorials install add fdroid repo 2 download the apk from github releases https github com teamnewpipe newpipe releases and install it 3 update via f droid this is the slowest method of getting updates as f droid must recognize changes build the apk itself sign it and then push the update to users 4 build a debug apk yourself this is the fastest way to get new features on your device but is much more complicated so we recommend using one of the other methods 5 if you re interested in a specific feature or bugfix provided in a pull request in this repo you can also download its apk from within the pr read the pr description for instructions the great thing about pr specific apks is that they re installed side by side the official app so you don t have to worry about losing your data or messing anything up we recommend method 1 for most users apks installed using method 1 or 2 are compatible with each other meaning that if you installed newpipe using either method 1 or 2 you can also update newpipe using the other but not with those installed using method 3 this is due to the same signing key ours being used for 1 and 2 but a different signing key f droid s being used for 3 building a debug apk using method 4 excludes a key entirely signing keys help ensure that a user isn t tricked into installing a malicious update to an app when using method 5 each apk is signed with a different random key supplied by github actions so you cannot even update it you will have to backup and restore the app data each time you wish to use a new apk in the meanwhile if you want to switch sources for some reason e g newpipe s core functionality breaks and f droid doesn t have the latest update yet we recommend following this procedure 1 back up your data via settings content export database so you keep your history subscriptions and playlists 2 uninstall newpipe 3 download the apk from the new source and install it 4 import the data from step 1 via settings content import database b note when you re importing a database into the official app always make sure that it is the one you exported from the official app if you import a database exported from an apk other than the official app it may break things such an action is unsupported and you should only do so when you re absolutely certain you know what you re doing b contribution whether you have ideas translations design changes code cleaning or even major code changes help is always welcome the app gets better and better with each contribution no matter how big or small if you d like to get involved check our contribution notes github contributing md a href https hosted weblate org engage newpipe img src https hosted weblate org widgets newpipe 287x66 grey png alt translation status a donate if 
you like newpipe you re welcome to send a donation we prefer liberapay as it is both open source and non profit for further info on donating to newpipe please visit our website https newpipe net donate table tr td a href https liberapay com teamnewpipe img src https upload wikimedia org wikipedia commons 2 27 liberapay logo v2 white on yellow svg alt liberapay width 80px a td td a href https liberapay com teamnewpipe img src assets liberapay qr code png alt visit newpipe at liberapay com width 100px a td td a href https liberapay com teamnewpipe donate img src assets liberapay donate button svg alt donate via liberapay height 35px a td tr tr td img src https bitcoin org img icons logotop svg alt bitcoin td td img src assets bitcoin qr code png alt bitcoin qr code width 100px td td samp 16a9j59ahmrqklszjhyj33n9j3fmztfxnh samp td tr tr td a href https www bountysource com teams newpipe img src https upload wikimedia org wikipedia commons thumb 2 22 bountysource png 320px bountysource png alt bountysource width 190px a td td a href https www bountysource com teams newpipe img src assets bountysource qr code png alt visit newpipe at bountysource com width 100px a td td a href https www bountysource com teams newpipe issues img src https img shields io bountysource team newpipe activity svg colorb cd201f height 30px alt check out how many bounties you can earn a td tr table privacy policy the newpipe project aims to provide a private anonymous experience for using web based media services therefore the app does not collect any data without your consent newpipe s privacy policy explains in detail what data is sent and stored when you send a crash report or leave a comment in our blog you can find the document here https newpipe net legal privacy license gnu gplv3 image https www gnu org graphics gplv3 127x51 png https www gnu org licenses gpl 3 0 en html newpipe is free software you can use study share and improve it at will specifically you can redistribute and or modify it under the terms of the gnu general public license https www gnu org licenses gpl html as published by the free software foundation either version 3 of the license or at your option any later version | youtube-video video newpipe watch translation download-videos android soundcloud peertube bandcamp 4k | front_end |
fedlearner | fedlearner fedlearner is collaborative machine learning framework that enables joint modeling of data distributed between institutions trademark usage policy fedlearner welcomes everyone to build on or modify fedlearner open source software for your own project the license of the software doesn t grant permission to use trademarks or product names in respect to the licensor however you may use the trademark or product name if you use a wordmark to refer to fedlearner program product or technology you use a wordmark in text to indicate the compatibility of your project with fedlearner project you use a wordmark in text to indicate your project is built based on fedlearner technology if you would like to use the fedlearner trademark to combine a trademark with your own brand trademark product project service or domain name for any other commercial use as a verb or noun rather than only as an adjective followed by the generic name noun in a modified abbreviated or altered form or in the plural or possessive form please feel free to contact us for an express permission if you use a trademark in a way not set forth above or for any illegal purpose with the program the licensor reserves the right in its sole discretion to terminate or modify your permission to display or use a trademark and to take action against any use that does not conform to these terms and conditions or violates applicable law | ai |
chatGPT-Prompt-Engineering-for-Developers | chatgpt prompt engineering for developers https www deeplearning ai short courses chatgpt prompt engineering for developers this repository contains the materials for the course chatgpt prompt engineering for developers offered by deeplearning ai and taught by isa fulford from openai and andrew ng in this course you will learn how to effectively utilize large language models llms to build powerful and innovative applications by leveraging the openai api you can unlock new possibilities and create value in ways that were previously challenging highly technical or even deemed impossible course description chatgpt prompt engineering for developers introduces you to the world of llms and equips you with the knowledge and skills needed to make the most out of them you will gain insights into how llms work learn best practices for prompt engineering and discover the wide range of tasks llm apis can handle some of the key areas covered in this course include summarizing condensing lengthy texts e g user reviews for brevity inferring classifying sentiment and extracting topics from text transforming text performing tasks such as translation spelling and grammar correction expanding automatically generating text such as writing emails throughout the course you will also learn two fundamental principles for crafting effective prompts acquire techniques to systematically engineer optimal prompts and build a custom chatbot the concepts are reinforced through numerous examples and you ll have the opportunity to gain hands on experience by working directly with jupyter notebooks in our interactive environment course contents introduction to large language models understanding prompt engineering summarizing condensing lengthy texts e g user reviews for brevity inferring classifying sentiment and extracting topics from text transforming performing tasks such as translation spelling and grammar correction expanding automatically generating text such as writing emails building custom chatbots conclusion and next steps about the instructors isa fulford is a skilled ai engineer at openai specializing in natural language processing and large language models she has extensive experience in developing applications that harness the power of llms andrew ng is a renowned ai researcher co founder of coursera and the founder of deeplearning ai with a wealth of knowledge and expertise in the field andrew has played a pivotal role in popularizing ai education chatgpt prompt engineering for developers course to enroll in the course or for further information visit deeplearning ai https www deeplearning ai | chatgpt chatgpt-api deeplearning-ai llms openai-api prompt-engineering expanding inferring summarizing transforming | ai |
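To make the summarizing, inferring, transforming and expanding tasks listed in the course description above concrete, here is a minimal sketch of a single-turn chat-completion call. It assumes the pre-1.0 `openai` Python client and an `OPENAI_API_KEY` environment variable; the `get_completion` helper name and the sample review text are illustrative rather than taken from the course notebooks.

```python
import os
import openai  # assumes the pre-1.0 openai client; newer releases expose openai.OpenAI() instead

openai.api_key = os.environ["OPENAI_API_KEY"]

def get_completion(prompt, model="gpt-3.5-turbo"):
    """Send a single user prompt and return the model's text reply."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output is easier to compare while iterating on prompts
    )
    return response.choices[0].message["content"]

# summarizing example: condense a lengthy product review into one sentence
review = "The lamp arrived two days late and the shade was cracked, but support replaced it quickly."
print(get_completion(f"Summarize the product review below in one sentence.\n\nReview: {review}"))
```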
CodeLabs-MobileDevOps | build 2016 workshops mobile devops this repo contains the cross platform mobile development modules delivered at the build 2016 conference it instructs attendees about cross platform mobile development and devops practices for mobile solutions cross platform mobile development 1 xamarin https github com microsoft build 2016 codelabs mobiledevops tree master module1 xamarin learn how you can leverage visual studio and xamarin to develop cross platform mobile applications across ios android and windows in this workshop you will learn best practices from microsoft and xamarin for architecting and testing your apps to increase agility and overall quality you will also create and use a new project in visual studio team services cross platform mobile development 2 continuous integration https github com microsoft build 2016 codelabs mobiledevops tree master module2 ci learn how to create a continuous integration pipeline by automating your xamarin apps builds using visual studio team services including unit integration and ui tests in this workshop you ll write a build definition analyze build output and automatically create work items from errors cross platform mobile development 3 continuous deployment and beta testing https github com microsoft build 2016 codelabs mobiledevops tree master module3 cd learn how to streamline your release pipeline by automatically deploying to hockeyapp for beta testing and submitting to app stores for publishing in this workshop you ll build a continuous deployment pipeline with visual studio team services debug an app crash using hockeyapp s diagnostic reporting and deploy your xamarin app automatically | front_end |
SSRLCV | uga ssrl computer vision university of georgia small satellite research laboratory smallsat uga edu computer vision ssrlcv is a computer vision software library written in c and the nvidia cuda programming language for nvidia gpu socs in space environments the software will be used onboard the moci satellite with our modified tx2i but is also compatible with ubuntu 16 04 ubuntu 18 04 and linux for tegra ssrlcv can also run on the tx2 and the jetson nano the software currently includes sift feature detection sift feature generation sift feature matching point cloud filtering 2 view triangulation n view triangulation and 2 view bundle adjustment ssrlcv is capable of generating point clouds with 15 100 meter accuracy from a 400 km circular orbit and a 6 meter gsd the results are documented in this thesis research http piepieninja github io research papers thesis pdf and several updates are expected in the near future you can begin reading documentation on the ssrlcv github wiki https github com uga ssrl ssrlcv wiki and view code documentation at data calebadams space doxygen documentation html http data calebadams space doxygen documentation html it is also recommended to clone the following repositories sample data highly recommended to use this is maintained as ssrlcv sample data https gitlab smallsat uga edu payload software ssrlcv sample data on gitlab and mirrored on github https github com uga ssrl ssrlcv sample data utilities maintained as ssrlcv utilities https gitlab smallsat uga edu payload software ssrlcv utilities on gitlab and mirrored on github https github com uga ssrl ssrlcv util check out the contributors guide contrib md if you would like to help further develop ssrlcv dependencies libpng dev libtiff dev g gcc nvcc cuda 10 0 compilation when making you should use the sm of your arch you do this by setting the sm variable i also recommend doing a multicore make with the j flag see below where are digits of integers all executables can be generated by simply using make j sm additionally neither the j or the sm variable are necessary however if these are not used then compilation will take much much longer make sfm j sm device recommended sm jetson nano make sfm j4 sm 53 53 tx1 make sfm j2 sm 53 53 tx2 tx2i make sfm j6 sm 62 62 jetson xavier make sfm j6 sm 72 72 ubuntu 16 04 with gtx 1060 1070 make sfm j8 sm 61 61 you can clean back to source files only with make clean compiling and running on sapelo2 uga cluster 1 start an interactive session on a k40 gpu interact p gpu p gres gpu k40 1 mem 16g 1 load appropriate modules ml cuda 10 0 130 ml gcccore 6 4 0 1 compile make sfm j8 sm 35 log level 3 geo orbit 1 change log level to 4 for memory logging set geo orbit to 0 if this is not in geocentric orbit turns off epipolar geometry reliance 4 run bin sfm d work demlab sfm ssrlcv sample data everest1024 2view s work demlab sfm ssrlcv sample data seeds seed spongebob png delta 3 0 change first to relevant image set usage simply use the command sfm d path to images s path to seed png flag command line argument details i or image path to single image absolute or relative d or directory path to directory of images absolute or relative s or seed path to seed image absolute or relative delta float used as buffer in km for epipolar geometry np or noparams n a signify no use of params csv output ssrlcv currently produces ply files in the out folder a future release will allow for better control of output files and allow camera parameters the image rotation encodes which way the 
camera was facing as a rotation of axes https en wikipedia org wiki rotation of axes around the individual x y and z axes in r3 this along with a physical position in r3 should be passed in by the adcs all other parameters should be known the focal length is usually on the order of mm and the dpix is usually on the order of nm data type variable name si unit description float3 cam pos kilometers ecef the x y z camera position float3 cam rot radians ecef the x y z camera rotation as euler x y z float2 fov radians the x and y field of view float foc meters the camera s focal length float2 dpix meters the physical dimensions of a pixel well long long int timestamp unix timestamp a unix timestamp from the time of imaging uint2 size pixels the x and y pixel size of the image file formats the ssrlcv logger ssrlcv includes a logger that produces a comma segmented csv encoded log file at out ssrlcv log ascii camera parameters csv ascii encoded file the ascii encoded files that contain camera parameters should be included in the same directory as the images you wish to run a reconstruction on it is required that the file be named params csv the file consists of the image camera struct parameters mentioned above for ease in order the format is as follows filename x position y position z position x rotation y rotation z rotation x field of view y field of view camera focal length x pixel well size y pixel well size unix timestamp x pixel count y pixel count the files should be listed in a numerical order each camera should be on one line and end with a and example of this is ev01 png 781 417 0 0 4436 30 0 0 0 1745329252 0 0 0 19933754453 0 19933754453 0 16 0 4 0 4 1580766557 1024 1024 ev02 png 0 0 0 0 4500 0 0 0 0 0 0 0 0 19933754453 0 19933754453 0 16 0 4 0 4 1580766557 1024 1024 examples of such parameters can be found at https github com uga ssrl ssrlcv sample data https github com uga ssrl ssrlcv sample data binary camera parameters bcp file type binary camera parameters are not currently defined but will be in a later release documentation online documentation documentation on the use of ssrlcv can be found at the ssrlcv wiki is located at https github com uga ssrl ssrlcv wiki https github com uga ssrl ssrlcv wiki code documentation is located at data calebadams space doxygen documentation html http data calebadams space doxygen documentation html ssrlcv utilities the ssrlcv has various utilities for testing io and data visualization these can be found at the ssrlcv utilities gitlab https gitlab smallsat uga edu payload software ssrlcv utilities repository these additional software packages are beneficial meshlab http www meshlab net critical for viewing the results of ssrlcv cloudcompare https cloudcompare org useful for comparing ground truth models the icp algorithm within cc is great for this manual generation generate doxygen by executing doxygen doc doxygen doxyfile from within the projects root directory an index html file will be available in doc doxygen documentation html you can start there when exploring documentation locally citations upon usage please cite one or more of the following high performance computation with small satellites and small satellite swarms for 3d reconstruction http piepieninja github io research papers thesis pdf mastersthesis calebadamsmsthesis author caleb ashmore adams title high performance computation with small satellites and small satellite swarms for 3d reconstruction school the university of georgia url http piepieninja github io research papers 
thesis pdf year 2020 month may towards an integrated gpu accelerated soc as a flight computer for small satellites https ieeexplore ieee org document 8741765 inproceedings towardsadams2019 doi 10 1109 aero 2019 8741765 url https doi org 10 1109 aero 2019 8741765 year 2019 month mar publisher ieee author caleb adams and allen spain and jackson parker and matthew hevert and james roach and david cotten title towards an integrated gpu accelerated soc as a flight computer for small satellites booktitle 2019 ieee aerospace conference a near real time space based computer vision system for accurate terrain mapping https digitalcommons usu edu cgi viewcontent cgi article 4216 context smallsat inproceedings cvadams2018 title a near real time space based computer vision system for accurate terrain mapping author adams caleb journal 32nd annual aiaa usu conference on small satellites year 2018 publisher aiaa yeet | computer-vision cuda cubesatellite jetson 3d-reconstruction satellite-data cubesat-payload cubesat computervision computer-vision-algorithms jetson-tx2 jetson-tx2i jetson-nano gis gis-application | ai |
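As a quick way to sanity-check the ascii params csv layout documented above before a reconstruction run, the stand-alone parser below reads the 15 fields in the documented order and units. It is only an illustration and not part of SSRLCV or its utilities; it also strips the empty cell left by the trailing comma that the format requires at the end of each line.

```python
import csv
from dataclasses import dataclass

@dataclass
class CameraParams:
    filename: str
    position_km: tuple   # ECEF x, y, z position in kilometers
    rotation_rad: tuple  # Euler x, y, z rotation in radians
    fov_rad: tuple       # x and y field of view in radians
    focal_m: float       # focal length in meters
    dpix_m: tuple        # physical x and y pixel-well size in meters
    timestamp: int       # unix timestamp at the time of imaging
    size_px: tuple       # x and y pixel counts

def load_params(path="params.csv"):
    """Parse the documented 15-field-per-line params.csv into CameraParams records."""
    cameras = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            row = [cell.strip() for cell in row if cell.strip()]  # drop the empty cell from the trailing comma
            if not row:
                continue
            cameras.append(CameraParams(
                filename=row[0],
                position_km=tuple(map(float, row[1:4])),
                rotation_rad=tuple(map(float, row[4:7])),
                fov_rad=tuple(map(float, row[7:9])),
                focal_m=float(row[9]),
                dpix_m=tuple(map(float, row[10:12])),
                timestamp=int(row[12]),
                size_px=tuple(map(int, row[13:15])),
            ))
    return cameras
```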
PicoW-FreeRTOS-Template | raspberry pi pico w freertos starter in c this is a simple blinky starter project for raspberry pi pico w that uses freertos important setup clone pico sdk https github com raspberrypi pico sdk and inside the cloned sdk directory run git submodule update init to init all submodules clone freertos kernel https github com freertos freertos kernel pico sdk should be present in the machine and it s path should be used as an environment variable as pico sdk path pointing to the cloned pico sdk dir freertos kernel should be present in the machine and it s path should be used as an environment variable as freerstos kernel path pointing to the cloned freertos kernel dir these environment variable should be used when calling cmake or defined in vscode recomended using this setup https www youtube com watch v baotbg8mjj4 that uses the cmake tools extension project rename to rename the project simply open the root cmakelists txt and change project pico freertos c cxx asm to project your project name c cxx asm outputs after building your binary will be under build src src uf2 take the src uf2 and push it you pico w with bootsel note the setup video https www youtube com watch v baotbg8mjj4 mentioned before should show you how to build on vs code happy coding tinkering inspired by the learn embedded systems video series https www youtube com watch v jczxstjzga8 list pleb5f4gtnk68ilrijtcj 2cw4dsdmretw index 14 on youtube | c cmake cpp freertos freertos-iot pico-w picow raspberry-pi raspberry-pi-pico vscode | os |
ML2022-Spring | banner https i imgur com f6ocdtq png p h2 align center machine learning 2022 spring by national taiwan university br h2 p this repository contains code and slides of 15 homeworks for machine learning instructed by hung yi lee all the information about this course can be found on the course website https speech ee ntu edu tw hylee ml 2022 spring php 15 homeworks hw1 regression video https youtu be cfiimk ybtg code https github com virginiakm1988 ml2022 spring blob main hw01 hw01 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw01 hw01 pdf hw2 classification video https youtu be fxupf4vjga4 code https github com virginiakm1988 ml2022 spring blob main hw02 hw02 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw02 hw02 pdf hw3 cnn video https youtu be gxlwjq o50g code https github com virginiakm1988 ml2022 spring blob main hw03 hw03 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw03 hw03 pdf hw4 self attention video https youtu be kbd40w9 io code https github com virginiakm1988 ml2022 spring blob main hw04 hw04 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw04 machine 20learning 20hw4 pdf hw5 transformer code https github com virginiakm1988 ml2022 spring blob main hw05 hw05 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw05 hw05 pdf hw6 gan code https github com virginiakm1988 ml2022 spring blob main hw06 hw06 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw06 hw06 pdf hw7 bert code https github com virginiakm1988 ml2022 spring blob main hw07 hw07 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw07 hw07 pdf hw8 autoencoder code https github com virginiakm1988 ml2022 spring blob main hw08 hw08 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw08 hw08 pdf hw9 explainable ai code https github com virginiakm1988 ml2022 spring blob main hw09 hw09 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw09 hw09 pdf hw10 adversarial attack code https github com virginiakm1988 ml2022 spring blob main hw10 hw10 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw10 hw10 pdf hw11 adaptation code https github com virginiakm1988 ml2022 spring blob main hw11 hw11 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw11 hw11 pdf hw12 reinforcement learning code https github com virginiakm1988 ml2022 spring blob main hw12 hw12 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw12 hw12 pdf hw13 network compression code https github com virginiakm1988 ml2022 spring blob main hw13 hw13 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw13 hw13 pdf hw14 life long learning code https github com virginiakm1988 ml2022 spring blob main hw14 hw14 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw14 hw14 pdf hw15 meta learning code https github com virginiakm1988 ml2022 spring blob main hw15 hw15 ipynb slide https github com virginiakm1988 ml2022 spring blob main hw15 hw15 pdf lecture videos the lecture videos are available on hung yi lee s youtube channel https www youtube com channel uc2ggjtuuwvxrhhhiadh1dlq img src https i imgur com sfdpe52 jpg width 500 https www youtube com watch v 7xzr0 4us5s t 18s img src http i imgur com srv0h6f jpg width 500 | machine-learning deep-learning | ai |
littlekernel-lk | lk the lk embedded kernel an smp aware kernel designed for small systems see https github com littlekernel lk for the latest version see https github com littlekernel lk wiki for documentation builds build status https travis ci org littlekernel lk svg branch master https travis ci org littlekernel lk to build and test for arm on linux 1 install or build qemu v2 4 and above is recommended 2 install gcc for embedded arm see note 1 3 run scripts do qemuarm from the lk directory 4 you should see welcome to lk mp this will get you a interactive prompt into lk which is running in qemu arm machine virt emulation type help for commands note 1 for ubuntu sudo apt get install gcc arm none eabi or fetch a prebuilt toolchain from http newos org toolchains arm eabi 5 3 0 linux x86 64 tar xz | os |
mobileBackendSamples | mobilebackendsamples demonstrates dropwizard http www dropwizard io 0 9 1 docs google app engine https cloud google com appengine parse http parse com firebase http https www firebase com ionic http ionicframework com mobile backend http 1 bp blogspot com fnlc9gwaecy veatnoddxli aaaaaaaad90 dj6enctd5sm s1600 conectividade png this is a mobile backend showroom with some popular choices for mobile backend development with lots of samples it is based on my own experience with all these solutions the source code i have provided an ionic app customized for each backend type just de www part is stored so you have to create an ionic app and change the folder let me describe what you ll find here dropbackend contains a dropwizard http www dropwizard io 0 9 1 docs restful service as a mobile backend you can host it on aws gaebackend contains a google app engine https cloud google com appengine backend based on cloud endpoints mobileapp dropbackend an ionic http ionicframework com app that consumes the dropwizard backend mobileapp gaebackend an ionic http ionicframework com app that authenticates users using their google account and get data from google app engine mobileapp parsebackend an ionic http ionicframework com app that authenticate users using username and parse http parse com and gets data from it mobileapp firebasebackend contains an ionic http ionicframework com which authenticate users using email and gets data from firebase https www firebase com setup and compiling the apps to generate the dropbackend server just use maven to compile to test it just run the class newsfeed as java application to generate gaebackend you need to install google app engine sdk and if you want the google eclipse plugin read this doc https cloud google com appengine docs java to generate the ionic apps install ionic http ionicframework com getting started and create some ionic apps then change the www folder to generate the mobileapp gaebackend you will need to install cordova plugin googleplus read this article https ionicthemes com tutorials about google plus login with ionic framework to know how to install it | front_end |
sam-design-system-challenge | design system challenge create an angular application that best reflects the intent of the sam design system and supports the sam gov ecosystem getting started this repository is setup to work with federalist for hosting and demo purposes 1 fork this repository for your team to work in 2 perform the challenge create an application using the design system that best meets the scoring criteria please don t squash your commits 3 prior to the challenge deadline make sure your application builds with aot prod mode 4 open a pull request against your teams branch in this repository 5 confirm that your branch builds and is available in federalist helpful links design system component documentation https cg fa19003e 5296 4960 ac2e caccfeb620ac app cloud gov site gsa sam design system design system component repository https github com gsa sam design system design system css framework documentation https federalist 0ad5a602 ca98 4a7e 8d6e d9ece7bc4cf8 app cloud gov site gsa sam styles design system css repository https github com gsa sam styles design system prototypes https federalist e1ade7fb 52e9 46c3 b273 1ab1fd05e2fa app cloud gov site gsa sam prototypes design system prototypes repository https github com gsa sam prototypes united states web design system https designsystem digital gov | os |
PI-ML-HENRY-DS10 | pi ml henry ds10 data engineering project to create a movies database this project consists of a raw dataset which was cleaned with the python library pandas the data of which is later served through an api using the fastapi framework using a dockerfile the api was deployed on render to make it available for the piblic to use there are 6 main functions in the api peliculas mes takes a month for an argument in spanish and returns the number of movies that were released hitorically in that month peliculas dia takes a weekday for an argument in spanish and returns the number of movies that were released hitorically in that weekday franquicia takes the name of a franchise collection and returns how many movies the franchise has along with the total and mean earnings the franchise has made so far peliculas pais takes a country name for an argument and returns how many movies that country has produced productoras takes a production company name for an argument and returns the total earnings that producer has made and the amount of movies it has produced so far retorno takes a movie name for an argument and returns the investment cost the earnings the movie produced the net return of the movie and the year it was released links render https mod pi henry ds10 onrender com docs demo video https youtu be x9etwfxim | server |
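As a sketch of how one of the six endpoints described above can be wired up with FastAPI and pandas, the snippet below implements a peliculas_mes route. The movies_clean.csv file name and the release_date column are assumptions made for illustration and are not necessarily the project's actual artifacts.

```python
from fastapi import FastAPI
import pandas as pd

app = FastAPI()

# hypothetical output of the pandas cleaning step; file and column names are assumed for illustration
movies = pd.read_csv("movies_clean.csv", parse_dates=["release_date"])

MESES = {"enero": 1, "febrero": 2, "marzo": 3, "abril": 4, "mayo": 5, "junio": 6,
         "julio": 7, "agosto": 8, "septiembre": 9, "octubre": 10, "noviembre": 11, "diciembre": 12}

@app.get("/peliculas_mes/{mes}")
def peliculas_mes(mes: str):
    """Return how many movies were historically released in the given month (Spanish month name)."""
    numero = MESES[mes.strip().lower()]
    cantidad = int((movies["release_date"].dt.month == numero).sum())
    return {"mes": mes, "cantidad": cantidad}
```

The app can then be served locally with uvicorn (for example `uvicorn main:app --reload`, assuming the module is saved as main.py), and FastAPI's interactive docs appear under /docs, much like the deployed Render instance linked above.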
software-engineering-exercise | software engineering exercise overview you are asked to take over development of a new system for tracking quests this system must be capable of handling any story driven quest regardless of genre fantasy historical fiction sports etc the system has several open issues that need to be addressed which can be found in the issues section of this repository instructions fork this repository and make copies of the open issues in this repository make your updates to the existing codebase in your new repository you may complete tasks from the open issues in any order but it may be helpful to work in the order of the issues 1 then 2 etc leave your commit history visible do not squash commits use a separate feature branch pull request for each issue and link it to the corresponding issue in your repository if you choose to implement additional features beyond those described in the issues in this repository please add them as issues in your new repository the user interface should use react redux development environment visual studio 2019 free community edition is available here https visualstudio microsoft com downloads net core 3 1 included in the net core workload in the visual studio 2019 installer choice of database technology is left up to you | cloud |
linfa | img align left src mascot svg width 70px height 70px alt linfa mascot icon linfa crates io https img shields io crates v linfa svg https crates io crates linfa documentation https docs rs linfa badge svg https docs rs linfa documentationlatest https img shields io badge docs latest blue https rust ml github io linfa rustdocs linfa codequality https github com rust ml linfa workflows codequality 20lints badge svg https github com rust ml linfa actions query workflow 3a 22codequality lints 22 run tests https github com rust ml linfa workflows run 20tests badge svg https github com rust ml linfa actions query workflow 3a 22run tests 22 linfa italian sap english the vital circulating fluid of a plant linfa aims to provide a comprehensive toolkit to build machine learning applications with rust kin in spirit to python s scikit learn it focuses on common preprocessing tasks and classical ml algorithms for your everyday ml tasks strong a href https rust ml github io linfa website a a href https rust ml zulipchat com community chat a strong current state where does linfa stand right now are we learning yet http www arewelearningyet com linfa currently provides sub packages with the following algorithms name purpose status category notes clustering algorithms linfa clustering data clustering tested benchmarked unsupervised learning clustering of unlabeled data contains k means gaussian mixture model dbscan and optics kernel algorithms linfa kernel kernel methods for data transformation tested pre processing maps feature vector into higher dimensional space linear algorithms linfa linear linear regression tested partial fit contains ordinary least squares ols generalized linear models glm elasticnet algorithms linfa elasticnet elastic net tested supervised learning linear regression with elastic net constraints logistic algorithms linfa logistic logistic regression tested partial fit builds two class logistic regression models reduction algorithms linfa reduction dimensionality reduction tested pre processing diffusion mapping and principal component analysis pca trees algorithms linfa trees decision trees tested benchmarked supervised learning linear decision trees svm algorithms linfa svm support vector machines tested supervised learning classification or regression analysis of labeled datasets hierarchical algorithms linfa hierarchical agglomerative hierarchical clustering tested unsupervised learning cluster and build hierarchy of clusters bayes algorithms linfa bayes naive bayes tested supervised learning contains gaussian naive bayes ica algorithms linfa ica independent component analysis tested unsupervised learning contains fastica implementation pls algorithms linfa pls partial least squares tested supervised learning contains pls estimators for dimensionality reduction and regression tsne algorithms linfa tsne dimensionality reduction tested unsupervised learning contains exact solution and barnes hut approximation t sne preprocessing algorithms linfa preprocessing normalization vectorization tested benchmarked pre processing contains data normalization whitening and count vectorization tf idf nn algorithms linfa nn nearest neighbours distances tested benchmarked pre processing spatial index structures and distance functions ftrl algorithms linfa ftrl follow the regularized leader proximal tested benchmarked partial fit contains l1 and l2 regularization possible incremental update we believe that only a significant community effort can nurture build and sustain a machine learning 
ecosystem in rust there is no other way forward if this strikes a chord with you please take a look at the roadmap https github com rust ml linfa issues 7 and get involved blas lapack backend some algorithm crates need to use an external library for linear algebra routines by default we use a pure rust implementation however you can also choose an external blas lapack backend library instead by enabling the blas feature and a feature corresponding to your blas backend currently you can choose between the following blas lapack backends openblas netblas or intel mkl backend linux windows macos openblas netlib intel mkl each blas backend has two features available the feature allows you to choose between linking the blas library in your system or statically building the library for example the features for the intel mkl backend are intel mkl static and intel mkl system an example set of cargo flags for enabling the intel mkl backend on an algorithm crate is features blas linfa intel mkl system note that the blas backend features are defined on the linfa crate and should only be specified for the final executable license dual licensed to be compatible with the rust project licensed under the apache license version 2 0 http www apache org licenses license 2 0 or the mit license http opensource org licenses mit at your option this file may not be copied modified or distributed except according to those terms | machine-learning rust algorithms scientific-computing | ai |
Signal_Analyzer_Embedded_System_Evaluation | signal analyzer embedded system evaluation this project is intended to serve as a high level design and analysis methodology for the design evaluation of a signal analyzer based on the design and performance requirements desired by the client simulations and tests were carried out using an evaluation board with the aid of external hardware and software components using these test results which are listed as project deliverables the suitability of the stm32f401re microcontroller for the signal analyzer application was evaluated along with providing recommendations pertaining to full system design and performance pursuant to the request for services rfs the suitability of the stm32f401re microcontroller mcu was evaluated for the design of their signal analyzer product the nucleo f401re evaluation board was used in conjunction with analog and digital peripherals to test hardware performance aspects of this mcu software performance was qualitatively and quantitatively tested based upon the ability of the mcu to execute interrupt driven concurrent and processing intensive tasks the most critical test results and findings pertaining to the design evaluation of the stm32f401re mcu for the signal analyzer application are listed below the mcu was able to be successfully interfaced with a serial terminal which can serve as a natural and easy to use debugging tool further the stm32f401re mcu was found to run at ca 98 dmips which agrees with the specified requirements for the signal analyzer application synthesis of audio waves 20 hz 20 khz was accomplished using the mcu such that the frequency and amplitude could be adjusted by the use of potentiometers further spdt button switches could be used to control the blinking of leds based on wave parameters these are essential functionalities for a signal generator harmonic analyzer and the stm32f401re mcu facilitated their implementation with ease real time performance was qualitatively measured by observing the ability of the stm32f401re mcu to respond to interrupt driven events rtos functionality such as threads were implemented and concurrently executed in a cyclic executive to service peripheral interrupts the mcu was found to execute these tasks without any visible latency or jitter issues floating point samples of an audio wave signal with a predetermined frequency of 1004 hz were provided externally to the stm32f401re mcu an autocorrelation based peak detection algorithm was implemented and executed on the mcu which was able to estimate the input audio frequency with an accuracy of 0 4 | os |
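The autocorrelation-based peak detection mentioned in the last finding can be illustrated off target. The sketch below is a NumPy rendition of the general idea rather than the C code that ran on the MCU, and the 48 kHz sample rate and 4096-sample buffer are assumptions chosen only for the example.

```python
import numpy as np

def estimate_frequency(samples, sample_rate):
    """Estimate a tone's frequency from the lag of the first major autocorrelation peak."""
    x = np.asarray(samples, dtype=float)
    x -= x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]  # keep non-negative lags only
    rising = np.nonzero(np.diff(acf) > 0)[0]            # first rise marks the end of the lag-0 lobe
    if rising.size == 0:
        return 0.0
    lag = int(np.argmax(acf[rising[0]:]) + rising[0])   # strongest later peak sits at the fundamental period
    return sample_rate / lag

# synthetic check against the 1004 Hz test tone described in the report
fs = 48_000
t = np.arange(4096) / fs
print(estimate_frequency(np.sin(2 * np.pi * 1004 * t), fs))  # ~1000 Hz, i.e. within roughly 0.4 percent
```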
ModelTemplate | modeltemplate a basic template for expanding to use in the systems analysis and design class please use this if you are enrolled in my systems analysis and design class i may be adding more classes and interfaces to this template as time goes on you should fork this template and create your own project all changes will be hopefully backward compatible so you can merge in future changes in your projects if needed this is likely not going to be useful for anyone not enrolled in my systems analysis and design course | os |
|
fractal | markdownlint disable md033 md041 p align center a href https fractal build align center img src https d33wubrfki0l68 cloudfront net 5d2e88eb1e2b69f3f8b3a3372b6e4b3b4f095130 2159b hero png alt width 110px a h1 align center fractal h1 p br div align center github actions a href https github com frctl fractal actions title build status img src https img shields io github workflow status frctl fractal test main alt a npm version a href https www npmjs com package frctl fractal title current version img src https img shields io npm v frctl fractal svg alt a discord a href https discord gg vurz4yx title chat with us on discord img src https img shields io badge discord join 7289da alt a npm downloads a href https www npmjs com package frctl fractal title npm monthly downloads img src https img shields io npm dm frctl fractal alt a license a href https github com frctl fractal blob main license title mit license img alt github src https img shields io github license frctl fractal a div br fractal is a tool to help you build and document website component libraries and design systems read the full fractal documentation docs introduction component or pattern libraries are a way of designing and building websites in a modular fashion breaking up the ui into small reusable chunks that can then later be assembled in a variety of ways to build anything from larger components right up to whole pages fractal helps you assemble preview and document website component libraries or even scale up to document entire design systems for your organisation check out the documentation docs for more information requirements you ll need a supported lts version https github com nodejs release of node fractal may work on unsupported versions but there is no active support from fractal and new features may not be backwards compatible with eol versions of node getting started install into your project recommended shell npm install frctl fractal save dev then create your fractal config js file in the project root and configure using the official documentation docs then you can either run npx fractal start to start up the project or create an alias under the scripts section in your package json as a shortcut e g json scripts fractal start fractal start sync fractal build fractal build then shell npm run fractal start installing globally shell npm i g frctl fractal this will also give you global access to the fractal command which you can use to scaffold a new fractal project with fractal new the downside is that it s then difficult to use different fractal versions on different projects this option is not recommended until a global fractal install is capable of offloading to a project specific version examples official demo using nunjucks demo fractal build https demo fractal build repository demo fractal build https github com frctl demo fractal build official examples are available in the examples examples directory although we primarily use them for developing and testing fractal they probably are a great resource for users as well additional public examples can be found on the awesome fractal https github com frctl awesome fractal repo contributing fractal has an active group of contributors but we are always looking for more help if you are interested in contributing then please come and say hi on fractal s discord server https discord gg vurz4yx please note we have a code of conduct github code of conduct md please follow it in all your interactions with the project reporting issues requesting features we 
use github issues to track bugs and feature requests thank your for taking the time to submit your issue in one of our repositories https github com frctl if you rather have a question please ask it on our discord server https discord gg vurz4yx submitting pull requests we will always welcome pull requests on any of the frctl organisation https github com frctl repositories please submit prs against main branch with an explanation of your intention we use conventional commits https www conventionalcommits org which means that every pull request title should conform to the standard development this repository is a monorepo managed by lerna there is only one lockfile in root this means that all packages must be installed in root manually added to the packages package json files and then bootstrapped with lerna to do some work run the following commands in root 1 npm ci 2 npm run bootstrap testing fractal is a project that evolved rapidly and organically from a proof of concept prototype into a more stable mature tool because of this it s currently pretty far behind where it should be in terms of test coverage any contributions on this front would be most welcome existing tests can be run using the npm test command contributors thanks goes to all wonderful people https github com frctl fractal graphs contributors who have helped us out contributions of any kind welcome license mit https github com frctl fractal blob main license docs https fractal build | pattern-library design-systems | os |
Advance-Lane-Finding | advance lane finding implementing advance lane finding using computer vision techniques results video can be found here https www youtube com watch v kemb0ztk9ya introduction and outline to the project for this project the goal was to take camera images from a car undistort the images based on the camera s calculated calibration settings retrieve only pixel values of interest in a binary image then perform a perspective transform on the binary image and use a very robust lane tracking algorithm to find lane curve positions finally all the results are graphically tied together in outputing the orginal image with augmented graphics overlays of the measured lane lines there were two very important steps necessary for getting good lane position results that were stable and accurate the first was using x y gradient and s v color channel thresholding to get a binary image finding the right thresholding values was anything but trival and required alot of experimentation to get right whats more is it turned out thresholding values that worked really well for one video did not work well at all for other videos this meant that tyring to make a system that was general and dynamic for finding good pixel values was difficult there was actually a lot of expeirmentation done trying to dynamically found good thresholding values from exposure measurments the idea was if the image had too little bright pixels to increase the thresholding until the right percent was found likewise if the image had too many pixels to raise the threshold this technique showed some promise but it was thought best to instead focus on creating a very good line class tracker instead which the project emphasised so preset thresholding values were picked depending on the video ideas in the future for producing general binary images could be to even use machine learning to teach a cnn type network to output pixels of importance almost could be bulky enough to base a graduate research project on perhaps heres a great paper that could inspire the work http news engineering utoronto ca new ai algorithm taught humans learns beyond training the second important step then was the line tracking class which the project suggested was the highest importance during the lecture videos it was first talked about using seperate left right window sliding techniques to find lane lines going from bottom to top this was a great idea for a starting point and it worked rather well on the first project video without even saving past frame values if the binary image was good on the challenge video however a line class need to be implemented that did as the project suggested and stored line positions averaging over time and only considering new curves from past curves but also trying to understand when to start over and find a new base line the line class worked to get fairly good results for the challenge video but then the final harder challenge video was attempted the line class uterly failed at trying to grasp the complexity of the final video fist of all it was very difficult to get a clean binary image there was still basic form but a lot of noise the basic window sliding tracker could not handle this noise and would easily get misdirected from both sides leaving lane markings that were not even parallel let alone accurate next the line class tried average results and ignore results it found bad this meant the class had a very hard time keeping up with the constant changing environment using this basic tracking and line class 
structure was concluded to be ineffective at getting good results on the final video so a brand new tracking algorithm was created from scratch using creative techniques aimed to be robust enough to tackle the final video discussing the advance lane tracking algorithm using sliding windows to search for the highest amount of pixel spaces was a good start but the biggest problem was that the left and right sides were not dependent on each other and the shapes they formed did not have to be curved splines to address both of these problems instead of using two seperate sliding windows the windows were attached together by some connecting distance seperator see the illustration below for the template also the the template could also slide in one direction to the left or right this meant it would have to produce nice looking curves and this was the results that we expected to find anyway the curve template used per vertical level left window right window padding padding window width window width left window right window level center dis slide res slide res to get even better results the template didnt have to just search from top to bottom in increments of 1 level instead it could skip levels incase and interpolte incase if going level 1 by 1 it would have gone down a bad path and missed a better one such as if there is alot of noise note that this assumes that lane lines both end at the top of the image and this meant fitting the transformation so that was the case this was the reason why the first video and third video had different outward lengths it should also be noted that the transformed image have it so the two lane lines were nearly parallel to each other if meeting these two conditions along with good binary images the advance lane tracker was showed to achieve good results the tracker its self was not perfect espeacially in the presence of imperfect binary images that contained missing features and extra noise so the lines were collected and averaged over time giving nice smooth transitions even so it could be noticed when the tracker was getting distracted by nosie elements when it would curve towards them away from the target observed path the curving was minimal though and its base points did very well to stay in place after much configuration inside the actual tracker the code for the tacker was made to be very well documented so it should serve as a guide for how the algorithm is functioning on a basic level the end results from the advance lane tracker was producing results that were able to satisfiable for the final harder challenge video as well as get results better if not the same as the previous basic tracker for the other two videos just for fun it was thought that the car s speed could be measured by using reference template matching measuring the vertical displacment between transformed images the idea was completly outside the project requirments but still was a fun experiment that only sort of worked the speed was shown to be inaccurate and flucate alot to get this corrected more debugging would need to be done on it and maybe just doing bare template matching with squared difference minimization is not enough to get accurate vertical sliding displacments though it was entertaining to see the effect kind of work espeacially graphically on the overlays moving horizontal bars in the future more work might be done on this feature since it seems in like something should be able to give good results final loose ends at the beginning of the project work was done finding the 
camera matrix coefficients used to undistort its camera images the work done for that along with the camera values was saved in camera cal it was a rather straightforward process so not much discussion was devoted to it but inside the folder it is cool to see undistorted checkerboard images along with the matching inside corners all of this was important to perform accurate perspective transforms also the project called for working on images initially and outputting their binary images and fitted curves along with road offset and curvature measurements all the results for that were saved in test images and output images showed curves being fit along with their binary images up in the top right hand corner along with the fitted window shape also in this directory were the original outputs that were produced with the original basic tracker discussed in the lecture with its same color scheme the results were not bad in one case output 3 actually looked better for the basic tracker maybe because its binary was seeing more of the right hand lane dashes final loose ends please check out the final results video link included at the top which showcases the performance on all 3 videos the results on all videos were satisfying i would say but as always could use more work and polish especially for creating a dynamic binary output but it is very easy to find more and more things to work on in this kind of open ended project it would also be interesting to explore more on speed tracking which could be a really cool feature if it started getting accurate results the next step to be implemented for this project is vehicle detection coming soon | | ai |
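To make the joined-window idea described above concrete, here is a minimal sketch (not the author's exact tracker) of scoring a left/right window pair that slides together across each vertical level of the warped binary image and then keeping the best-scoring position; the window sizes and the left-right separation are assumed values for illustration.

```python
# Minimal joined-window search over a warped binary image (0/1 numpy array).
# window_width, window_height and the left-right separation are assumptions.
import numpy as np

def find_lane_centers(binary_warped, window_width=50, window_height=80, separation=600):
    h, w = binary_warped.shape
    centers = []  # (left_center_x, right_center_x) per vertical level
    for level in range(h // window_height):
        y0 = h - (level + 1) * window_height
        y1 = h - level * window_height
        col_sums = binary_warped[y0:y1, :].sum(axis=0)  # lane pixels per column in this band
        best_score, best_left = -1, 0
        # Slide the joined template: score = pixels under the left window + right window.
        for left in range(0, w - separation - window_width):
            score = (col_sums[left:left + window_width].sum()
                     + col_sums[left + separation:left + separation + window_width].sum())
            if score > best_score:
                best_score, best_left = score, left
        centers.append((best_left + window_width // 2,
                        best_left + separation + window_width // 2))
    return centers

# Second-order polynomials can then be fit through the recovered centers, e.g.
# ys = h - (np.arange(len(centers)) + 0.5) * window_height
# left_fit = np.polyfit(ys, [c[0] for c in centers], 2)
```

Because both windows move as one template, the two recovered curves stay roughly parallel, which is the property the write-up relies on to resist noise in the binary image.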
Covid-Info-Volunteer-App | description this is an android application built using java and android studio with firebase as the database this app lets users request volunteer access as well as become a volunteer and help others features provides details about volunteers in and around the user s locality gives a list of all the available vaccines simple and user friendly ui | front_end |
embedded_projects | uart driver for stm32f411x platform with included led application this project contains a uart driver written in cpp for the stm32f411x microcontroller platform the driver provides functions to initialize and configure the uart peripheral transmit and receive data using the uart interface and handle interrupts generated by the uart peripheral it also contain a led function for demonstration purposes requirements to use this driver you will need the following stm32f411x development board stm32cubeide or other compatible ide stm32f411x hal library uart cable or usb to uart converter usage clone the repository or download the source code as a zip file import the project into your ide and add the necessary hal libraries include the uart h header file in your application code initialize the uart peripheral using the usart2 init function passing in the desired baud rate and other configuration parameters functions the following functions are provided by the uart driver int usart2 write int usart2 read example the following example demonstrates how to use the uart driver to transmit and receive data include uart h void usart2 init void 1 enable clock access to uart2 rcc apb1enr 0x20000 2 enable clock access to porta rcc ahb1enr 0x01 3 enable pins for alternate fucntions pa2 pa3 gpioa moder 0x00f0 gpioa moder 0x00a0 enable alt function for pa2 pa3 4 configure type of alternate function gpioa afr 0 0xff00 gpioa afr 0 0x7700 sets pa2 and pa3 to alternate function 7 usart2 rx tx configure uart usart2 brr 0x0683 baud rate 9600 usart2 cr1 0x000c enabled tx rx 8 bit data usart2 cr2 0x000 usart2 cr3 0x000 usart2 cr1 0x2000 enable uart license this project is licensed under the mit license | os |
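For quick bring-up testing from a PC, a small host-side script can exercise the board through the USB-to-UART converter listed in the requirements. This is only a companion sketch, not part of the driver: pyserial is assumed to be installed, and the port name and 9600 baud rate (matching the BRR setting above) are assumptions.

```python
# Host-side test for the board's USART2 over a USB-to-UART converter.
# Assumes: pip install pyserial; the device shows up as /dev/ttyUSB0.
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)
port.write(b"hello\n")            # bytes consumed by usart2_read on the MCU side
reply = port.read(64)             # whatever the MCU echoes back via usart2_write
print("received:", reply)
port.close()
```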
IoTDataModels | open interconnect consortium inc iot data models 2016 open interconnect consortium inc all rights reserved redistribution and use in source and binary forms with or without modification are permitted provided that the following conditions are met 1 redistributions of source code must retain the above copyright notice this list of conditions and the following disclaimer 2 redistributions in binary form must reproduce the above copyright notice this list of conditions and the following disclaimer in the documentation and or other materials provided with the distribution this software is provided by the open interconnect consortium inc as is and any express or implied warranties including but not limited to the implied warranties of merchantability and fitness for a particular purpose or warranties of non infringement are disclaimed in no event shall the open interconnect consortium inc or contributors be liable for any direct indirect incidental special exemplary or consequential damages including but not limited to procurement of substitute goods or services loss of use data or profits or business interruption however caused and on any theory of liability whether in contract strict liability or tort including negligence or otherwise arising in any way out of the use of this software even if advised of the possibility of such damage the views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies either expressed or implied by the open interconnect consortium inc | server |
Semantic-Textual-Similarity | semantic textual similarity using natural language processing nlp abstract semantic textual similarity computes the equivalence of two sentences on the basis of its conceptual similarity it is widely used in natural languages processing tasks such as essay scoring machine translation text classification information extraction and question answering this project focuses on one of the applications of semantic textual similarity known as automatic short answer grading asag it assigns a grade to a response provided by a student by comparing with one or more model answers in particular we selected one of the state of the art short answer grading approaches that use stanford corenlp library and we used the same approach with the help of two open source libraries natural language toolkit nltk and spacy for evaluation texas dataset and an in house benchmarking asag dataset based on mathematics for robotics and control mrc course were considered performances among all three libraries were evaluated using pearson correlation coefficient root mean square error rmse and the runtime results based on texas dataset showed that stanford corenlp library has better pearson correlation coefficient 0 66 and lowest rmse 0 85 than nltk and spacy libraries while using mrc dataset all 3 libraries showed the comparative results on evaluated metrics contents of repository this repository contains exercises related to textual similarity using nltk and spacy libraries that can help for short answer grading comparison of spell corrector approaches using spell corrector using ngrams jaccard coefficient and minimum edit distance spell corrector using minimum edit distance med create jupyter notebooks for each student from mohler data set for short questions and answers create instructor version of assignments using nbgrader create student version of assignments using nbgrader wiki contains theoretically concepts https github com rameshjesswani semantic textual similarity wiki word aligner using nltk and spacy libraries asag based sultan et al 2016 approach using nltk and spacy libraries guidelines for monolingual word aligner it can used as individual module for more usage check here word aligner using nltk and spacy https github com rameshjesswani semantic textual similarity tree master monolingualwordaligner install nltk library procedure given below setup stanford parser ner postagger link to setup in nltk given below guidelines for asag details about asag can be found here asag https github com rameshjesswani semantic textual similarity tree master asag installation nltk requires python versions 2 7 3 4 or 3 5 install nltk library sudo pip install u nltk install packages of nltk import nltk nltk download spacy is compatible with 64 bit cpython 2 6 3 3 and runs on unix linux macos os x and windows install spacy code works with version 2 0 12 library pip install u spacy after spacy installation you need to download a language model python m spacy download en nbgrader installation pip install nbgrader if you are using anaconda conda install jupyter conda install c conda forge nbgrader to install nbgrader extensions jupyter nbextension install user prefix py nbgrader overwrite jupyter nbextension enable user prefix py nbgrader jupyter serverextension enable user prefix py nbgrader for more docs about nbgrader http nbgrader readthedocs io en stable user guide installation html to use stanford parser ner postagger in nltk check files https github com rameshjesswani semantic textual similarity 
blob master monolingualwordaligner stanfordparser setup txt https github com rameshjesswani semantic textual similarity blob master monolingualwordaligner stanfordnertagger setup txt https github com rameshjesswani semantic textual similarity blob master monolingualwordaligner stanfordpostagger setup txt mindmap mind map https github com rameshjesswani semantic textual similarity blob master nlp basics naturallanguageprocessing mindmap png general nlp pipeline general nlp pipeline https github com rameshjesswani semantic textual similarity blob master nlp basics generalnlppipeline jpg bibtex unpublished rnd kumar authors ramesh kumar month january note ws17 h brs evaluation of semantic textual similarity approaches for automatic short answer grading ploeger nair supervising title evaluation of semantic textual similarity approaches for automatic short answer grading year 2017 18 | natural-language-processing nltk spacy spelling-correction spellchecker textual-similarity nbgrader short-answer-grading pyenchant stanford-parser stanford-ner stanford-pos-tagger pytest word-aligner english parse-trees constituency-tree spacy-nlp monolingual-word-aligner | ai |
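As a small illustration of the spaCy-based scoring and the evaluation metrics named above (Pearson correlation and RMSE), the sketch below compares two answers with the en_core_web_md vectors and scores toy grades; the sentences, grades and model choice are made up for the example and are not the project's actual data.

```python
# Toy similarity scoring and the two evaluation metrics used in the project.
# Assumes: python -m spacy download en_core_web_md has been run.
import numpy as np
import spacy
from scipy.stats import pearsonr

nlp = spacy.load("en_core_web_md")  # medium model ships with word vectors

student = nlp("The robot estimates its pose from wheel odometry.")
reference = nlp("Pose is estimated using odometry from the wheels.")
print("similarity:", student.similarity(reference))

# Evaluating predicted grades against instructor grades (toy numbers):
predicted = np.array([4.5, 3.0, 5.0, 2.0])
gold = np.array([5.0, 3.5, 4.5, 2.5])
print("pearson:", pearsonr(predicted, gold)[0])
print("rmse:", np.sqrt(np.mean((predicted - gold) ** 2)))
```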
scipy4cv | introduction about these notebooks these notebooks are part of the tutorial scipy and opencv as an interactive computing environment for computer vision http emap fgv br sibgrapi 2014 tutorials html first presented at sibgrapi 2014 http emap fgv br sibgrapi 2014 the tutorial is also available as a full paper in rita http www seer ufrgs br rita article view rita vol22 nr1 154 what are we looking for in a computer environment data exploration and visualization images video sequences point clouds and feature vectors extensive list of tools image processing machine learning optimization statistics linear algebra high performance computing hpc promote reproducible research computer vision practitioners need a computing environment that lets them explore data like images video sequences point clouds and feature vectors such an environment should help on the development and testing of new models and algorithms and on the deployment of the results either as a final software module or a scientific technical publication it should also provides a extensive list of tools routines for image processing machine learning statistical inference linear algebra convex optimization and graph algorithms just to name a few problems involving large sets of images can require high performance computing hpc such that the environment should provide practical ways to parallelize and distribute computation an ideal environment should also combine documentation and computation in a single bundle promoting results reproducibility but preventing the research pipeline to become more cumbersome a python based environment xkcd on python figs python png http xkcd com 353 http xkcd com 353 by randall munroe scipy a scientific computing environment based on python developed in the last 12 years based on the numpy module for n dimensional arrays not just software but a community this community meets at scipy conferences http conference scipy org in the last twelve years a powerful scientific computing environment emerged from the python programming community this language is an attractive option for researchers it is interpreted a wanted property for interactive computing environments dynamically typed and presents a very concise and elegant syntax resembling the pseudo code found in computer science textbooks but the tipping point for python to become a major player in scientific computing was the advent of an efficient module for n dimensional array representation and manipulation the numarray module was created by greenfield et al http adsabs harvard edu abs 2002adass 11 140g to address astronomical data analysis in 2005 numarray successor numpy http www numpy org appeared and became the workhorse of the python scientific computing an active community composed of scientists and engineers flourished around python and numpy represented today by the scipy stack and the scipy conferences http conference scipy org why is an interactive environment important to computer vision in computer vision the practitioner is interested in inferring the world state from images that act as observations the statistical relation between the world state and the observed images is defined by models a particular model is defined by parameters chosen by learning algorithms finally the world state is estimated by inference algorithms this vision on computer vision is properly presented by prince in his book http www computervisionmodels com and translates the state of the art of contemporary research in the field which is deeply 
associated to machine learning nowadays environment should enable tinkering with machine learning techniques visualization and exploration tools to address problems involving generalization overfitting dimensionality reducion optimization the scipy environment provides these capabilities the development of these models and subsequent problems in learning and inference require a computing environment that allows proper tinkering with machine learning techniques visualization and exploration tools are necessary to address problems involving generalization overfitting dimensionality reduction and optimization an interactive computing environment as ipython ython org enriched with tools from the scipy stack and the opencv library can address these needs this tutorial presents interactive computing and hpc with the ipython shell opencv use under python matplotlib for visualization and 2d plotting the scipy scientific library and machine learning with the scikit learn module this tutorial provides a glimpse not a broad and deep coverage examples should provide a starting point for the insterested user the present tutorial provides a short overview on this python based computing environment considering the large set of tools available and the space constraints this tutorial does not pretend to be a complete reference or a broad review it provides just a glimpse of the environment s capabilities to the computer vision community briefly presenting and discussing some problems and code examples these examples can guide the user s first steps and the provided references help on the next ones ipython notebooks presenting the complete code for the examples are available on http nbviewer ipython org github thsant scipy4cv http nbviewer ipython org github thsant scipy4cv and on github https github com thsant scipy4cv tree master https github com thsant scipy4cv tree master | ai |
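A hedged taste of the workflow the tutorial advocates, combining OpenCV for image I/O and processing, matplotlib for exploration and scikit-learn for a toy learning step; the image path is an assumption and the snippet is illustrative rather than taken from the tutorial notebooks.

```python
# OpenCV + matplotlib + scikit-learn in a few lines, in the spirit of the tutorial.
import cv2
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

img = cv2.imread("frame.png")                     # path is an assumption
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)

# Cluster pixel intensities as a toy "learning" step on image data.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(gray.reshape(-1, 1))

plt.subplot(1, 2, 1)
plt.imshow(edges, cmap="gray")
plt.title("edges")
plt.subplot(1, 2, 2)
plt.imshow(labels.reshape(gray.shape))
plt.title("k-means on intensities")
plt.show()
```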
Pewlett-Hackard-Analysis | pewlett hackard analysis silver tsunami overview of the project pewlett hackard is a large company boasting several thousand employees and it s been around for a long time as baby boomers begin to retire at a rapid rate pewlett hackard is looking toward the future in two ways first it s offering a retirement package for those who meet certain criteria second it s starting to think about which positions will need to be filled in the near future this analysis will help future proof pewlett hackard by generating a list of all employees eligible for the retirement package the employee data bobby an hr analyst needs is only available in the form of six csv files because pewlett hackard has been mainly using excel and vba to work with their data but now they have decided to update their methods to use sql a definite upgrade considering the amount of data this analysis will help bobby build an employee database with sql by applying data modeling engineering and analysis skills objective determine the number of retiring employees per title identify employees who are eligible to participate in a mentorship program write a report that summarizes your analysis and helps prepare bobby s manager for the silver tsunami as many current employees reach retirement age results the situation is referred as silver tsunami rightly as a huge percentage of workforce is expected to retire at a given point of time the following query assisted in finding out various retiring titles img width 567 alt retirement titles src https user images githubusercontent com 111387025 195025583 d87bb269 7d92 4730 8a17 93f698abc029 png the entitiy relationship diagram erd was used to understand relationship between different tables which assisted in writing the code for this analysis img width 1001 alt employeedb src https user images githubusercontent com 111387025 195031833 f000a00d fcec 4f86 a4a6 ec78cdbb4635 png the following image gives a glimpse on the count of workforce which is expected to retire from different titles img width 301 alt screen shot 2022 10 11 at 11 01 01 am src https user images githubusercontent com 111387025 195049297 6c11d623 54c7 4e0e ab1b 405c89e387fe png the analysis further indicated that the employees eligible for mentorship are quite few in number in contrast to the number of employees retiring img width 1031 alt mentorship eligibility src https user images githubusercontent com 111387025 195023430 d49e4872 328f 492e 9539 0d3909a15eca png the following image shows the number of employees retiring from both sales and development departments img width 1047 alt sales and development src https user images githubusercontent com 111387025 195023047 d8cb3526 82ce 4195 afd8 d52033e4e47c png summary 1 how many roles will need to be filled as the silver tsunami begins to make an impact 72 458 roles are in urgent need to be filled out as soon as the workforce starts retiring at any given time img width 301 alt screen shot 2022 10 11 at 11 01 01 am src https user images githubusercontent com 111387025 195049014 55bfd156 79b3 4264 b446 4b6e3f013916 png 2 are there enough qualified retirement ready employees in the departments to mentor the next generation of pewlett hackard employees no the analysis indicates that there are not sufficient employees to mentor the next generation of pewlett hackard employees as the number of employees eligible for mentorship are just 1549 img width 1031 alt mentorship eligibility src https user images githubusercontent com 111387025 195039059 7bab9075 
aaa4 404c 8a2b 1fd77b27eafb png the additional tables or queries that might provide more insight the queries can be raised for the various departments to find out specific number of employees getting retired in that particular department and separate tables can be constructed for all of them the query made for finance department will look like the following img width 918 alt screen shot 2022 10 11 at 2 34 00 pm src https user images githubusercontent com 111387025 195047482 bfd3ef5a 6b80 4da8 926c 74e7e10f4cd9 png the above query can be further improved to find out the gender of employees retiring in various departments here we can update the query for the finance department as follows img width 1094 alt screen shot 2022 10 11 at 2 30 43 pm src https user images githubusercontent com 111387025 195046786 1de4b4bb 022a 4b0d 8f8d 2843d8529815 png in addition to this a new table can also be made specifying the above information using into function | sql | server |
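For readers who want to reproduce a count like the "retiring employees per title" result shown in the screenshots, here is a hedged sketch run from Python; the connection string, table and column names and the birth-date window are assumptions based on the ERD description above, not the exact code used in this analysis.

```python
# Hedged sketch of a per-title retirement count; schema details are assumptions.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://localhost:5432/pewlett_hackard")  # assumed DB

query = """
SELECT t.title, COUNT(DISTINCT e.emp_no) AS retiring_count
FROM employees AS e
JOIN titles AS t ON e.emp_no = t.emp_no
WHERE e.birth_date BETWEEN '1952-01-01' AND '1955-12-31'  -- assumed retirement criterion
GROUP BY t.title
ORDER BY retiring_count DESC;
"""

print(pd.read_sql(query, engine))
```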
sql-challenge | sql homework employee database a mystery in two parts sql png sql png background it is a beautiful spring day and it is two weeks since you have been hired as a new data engineer at pewlett hackard your first major task is a research project on employees of the corporation from the 1980s and 1990s all that remain of the database of employees from that period are six csv files in this assignment you will design the tables to hold data in the csvs import the csvs into a sql database and answer questions about the data in other words you will perform 1 data engineering 3 data analysis note you may hear the term data modeling in place of data engineering but they are the same terms data engineering is the more modern wording instead of data modeling before you begin 1 create a new repository for this project called sql challenge do not add this homework to an existing repository 2 clone the new repository to your computer 3 inside your local git repository create a directory for the sql challenge use a folder name to correspond to the challenge employeesql 4 add your files to this folder 5 push the above changes to github instructions data modeling inspect the csvs and sketch out an erd of the tables feel free to use a tool like http www quickdatabasediagrams com http www quickdatabasediagrams com data engineering use the information you have to create a table schema for each of the six csv files remember to specify data types primary keys foreign keys and other constraints for the primary keys check to see if the column is unique otherwise create a composite key https en wikipedia org wiki compound key which takes to primary keys in order to uniquely identify a row be sure to create tables in the correct order to handle foreign keys import each csv file into the corresponding sql table note be sure to import the data in the same order that the tables were created and account for the headers when importing to avoid errors data analysis once you have a complete database do the following 1 list the following details of each employee employee number last name first name sex and salary 2 list first name last name and hire date for employees who were hired in 1986 3 list the manager of each department with the following information department number department name the manager s employee number last name first name 4 list the department of each employee with the following information employee number last name first name and department name 5 list first name last name and sex for employees whose first name is hercules and last names begin with b 6 list all employees in the sales department including their employee number last name first name and department name 7 list all employees in the sales and development departments including their employee number last name first name and department name 8 in descending order list the frequency count of employee last names i e how many employees share each last name bonus optional as you examine the data you are overcome with a creeping suspicion that the dataset is fake you surmise that your boss handed you spurious data in order to test the data engineering skills of a new employee to confirm your hunch you decide to take the following steps to generate a visualization of the data with which you will confront your boss 1 import the sql database into pandas yes you could read the csvs directly in pandas but you are after all trying to prove your technical mettle this step may require some research feel free to use the code below to get started 
be sure to make any necessary modifications for your username password host port and database name sql from sqlalchemy import create engine engine create engine postgresql localhost 5432 your db name connection engine connect consult sqlalchemy documentation https docs sqlalchemy org en latest core engines html postgresql for more information if using a password do not upload your password to your github repository see https www youtube com watch v 2uatpmnvh0i https www youtube com watch v 2uatpmnvh0i and https help github com en github using git ignoring files https help github com en github using git ignoring files for more information 2 create a histogram to visualize the most common salary ranges for employees 3 create a bar chart of average salary by title epilogue evidence in hand you march into your boss s office and present the visualization with a sly grin your boss thanks you for your work on your way out of the office you hear the words search your id number you look down at your badge to see that your employee id number is 499942 submission create an image file of your erd create a sql file of your table schemata create a sql file of your queries optional create a jupyter notebook of the bonus analysis create and upload a repository with the above files to github and post a link on bootcamp spot copyright trilogy education services 2019 all rights reserved | server |
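One possible way to carry out the bonus steps above once the SQLAlchemy connection is made: read the salaries into pandas and plot the histogram of the most common salary ranges. The database name and table/column names are assumptions and may differ from your schema.

```python
# Bonus sketch: salary histogram from the SQL database via pandas.
import pandas as pd
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

engine = create_engine("postgresql://localhost:5432/employee_db")  # assumed DB name
salaries = pd.read_sql("SELECT salary FROM salaries", engine)      # assumed table/column

salaries["salary"].plot(kind="hist", bins=30, title="Most common salary ranges")
plt.xlabel("salary")
plt.show()
```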
public | system design 2022 https www hse ru edu courses 452715820 2022 telegram https t me qktb 0w2n4swnmey telegram https t me ch3c1m rxy0zwiy zoom https us06web zoom us j 88575619011 pwd c3doqml3bwuzrxrpnw93edfhz2xout09 18 10 https docs google com spreadsheets d 1 qqi7kyg 0pjwgnicbqh j6esjegcffzdsii dntny usp sharing o1 3 o2 5 8 o3 10 round 0 7 o1 0 3 min 10 o2 o3 round round 3 5 4 | os |
iota.flash.js | flash channels javascript library it should be noted that this library as it stands right now is an early beta release as such there might be some unexpected results please join the community see links below and post issues to ensure that the developers of the library can improve it join the discussion if you want to get involved in the community need help with getting setup have any issues related with the library or just want to discuss blockchain distributed ledgers and iot with other people feel free to join our slack slack http slack iota org you can also ask questions on our dedicated forum at iota forum https forum iota org installation node js npm install iotaledger iota flash js concepts flash channels use a binary tree topology reduce the number of transactions to be attached usually each transfer would have to be attached to the tangle flash enables realtime streaming of tokens off tangle tree topology binary tree https cdn images 1 medium com max 1600 1 kiedg01op4egkhikukw da png a binary tree approach takes advantage of the fact that a signature can be used up to 2 times while remaining reasonably secure this lets us build a tree where the terminating nodes leaves are the individual transactions that will occur in the flash channel the path from the leaf the root acts to transfer the total value of the root down to the current leaf given we are constructing a tree we must determine the number of transactions in the channel when opening it if we want a 15 transaction channel we must have a tree with the depth of 5 math ceil log2 15 1 as this uses log2 the larger the number of channel transactions the proportionally lower depth we need ie 1000 transactions require a depth of 10 where a 2000 transaction channel require a depth of 11 address reuse a central tenant of iota s winternitz one time signature is that the address is not reused in the case of flash channels addresses can be reused one time this degrades the security however given the realtime nature of the channel there is a reasonable expectation that the channel will have progressed before an attack could take place please see a quick calculation relating to security here https public tangle works winternitz pdf a common practice would be to chain flash channels in order to keep the channel open this is feesible given the lack of cost and the time required to open channels transferring data in flash channels there are two main type of data that is transferred this is a proposed bundles and signatures related to that bundle this the only that should be transferred during the channel s operation this allows a user to maintain their own flash object outlined below and only mutates the state via the flash library preserving the integrity of the channel from their perspective integrity checking each proposed transfer replicates and then advances the whole state of the channel it is imperative that each participant in the channel does a diff to check that the proposed transfer is not malicious and only contains the correct changes this functionality is offered in the getdiff function within the library example a malicious user could craft a transfer that carries the correct values but change the remainder address thus allowing them to take a large portion of the funds early in the channel s life multiparty channels flash at its core is a set of rules governing how parties interact with the multi signature features of iota given this there is no limit to the number of users in a flash channel this means you can 
have a consensus channel all users agree or a m of n channel some users agree consensus would be good for logistically complex interactions with a large number of interrelated parties m of n would work well for interactions where an arbiter could be present to help resolve disputes in the channel staking the channel when users enter the channel they deposit funds into the newly generated deposit address root of the binary tree the amount they deposit is the amount of tokens they are able to spend in the channel this amount also relates to trust as the amount they have deposit into this address is now under the control of all the parties that have signed the address it is common practice to deposit equal amounts into a channel 100 100 as both users now have an equal amount of tokens at stake therefore any misbehaviour or disagreements would result in both users being in an equally bad position this would be useful for transacting with unknown users or channels with bidirectional payments for situations with reputation involved a more one sided channel can be created 100 0 this means that there is little incentive for the user with zero stake in the channel to be honest apart for a hit to their reputation this is useful for one directional user 100 to machine 0 services where a business would otherwise have to hold large amounts of collateral in each of the flash channels open ie ev charging stations an instant payment broker or a streaming service api flash object for the flash channel to operate a state object should be constructed to manage it below describes the flash object that is recommended for use each user has a copy of this object however they should never transfer the object to each other each time a transfer occurs in the channel this object is updated by the applytransfers function which updates all the required values to the latest channel state example flash object javascript signerscount 2 number of signers in a channel balance 2000 total channel balance deposits 1000 1000 individual user deposits settlementaddresses adkhakxiw maoodhqna user s output addresses depositaddress ajdgajdjs address at index 1 w checksum remainderaddress index 0 of the multisig addresses generated root index 1 of the multisig addresses generated outputs transfers history of transfers within the channel 1 signerscount int number of people partaking in the channel 2 balance int index of the private key 3 deposits array an array of the deposits of each channel user 4 settlementaddresses array array of user s settlement addresses for use when closing the channel 5 depositaddress string tryte encoded address string this is the address at index 1 of the channel addresses 6 remainderaddress object multisig object at address 7 root object multisig object at index 1 8 outputs object channel s outputs history is appended to this object 9 transfers array channel s transfer history is appended to this array multisig getdigest generates the digest value of a key input javascript multisig getdigest seed index security 1 seed string tryte encoded seed 2 index int index of the private key 3 security int security level to be used for the private key returns 1 string digest represented in trytes composeaddress wraps the compose address related functions of iota lib js returns a 81 tryte address from an array of digests note the order of the digests in this function must be noted as this order is required when signing the bundles input javascript multisig composeaddress digests 1 digests array array of digests 
represented in trytes returns 1 string 81 tryte multisig address updateleaftoroot taking the root of the tree this function walks from the leaf up to the root and finds the first node from the leaf which is able to be used to sign a new transaction from if an address can t be reused it increments a counter which is used to generate new addresses from input javascript multisig updateleaftoroot root 1 root object representation of the current state of the flash tree returns 1 object an object containing the modified multisigs object to be used in the transactions and the number of new addresses generate required to complete the transaction transfer prepare this function checks for sufficient funds and return a transfer array that will correctly transfer the right amount of iota in relation to channel stake into the users settlement address input javascript transfer prepare flash settlementaddresses flash deposit index transfers 1 settlementaddresses array the settlement addresses for each user 2 deposit array the amount each user can still spend 3 index int the index of the user used as an input 4 transfers array the value address destination of the output bundle excluding remainder return 1 array transfer object compose this method takes the transfers object from the transfer prepare function and takes the flash object then constructs the required bundle for the next state in the channel input javascript transfer compose flash balance flash deposit flash outputs multisig flash remainderaddress flash transfers newtansfers close 1 balance int the total amount of iotas in the channel 2 deposit array the amount of iotas still available to each user to spend from 3 outputs string the accrued outputs through the channel 4 multisig object history the leaf bundles 5 remainderaddress string the remainder address of the flash channel 6 history array transfer history of the channel 7 newtransfers array transfers the array of outputs for the transfer 8 close boolean whether to use the minimum tree or not return array array of constructed bundle representing the latest state of the flash channel these bundles do not have signatures sign this takes the constructed bundles from compose and then generates an array of ordered signatures to be applied to the bundle input javascript transfer sign multisig seed bundles 1 multisig object history the leaf bundles 2 seed string tryte encoded seed 3 bundles array array of bundles that require signatures return 1 array an ordered array of signatures to be applied to the transfer bundle array appliedsignatures this takes the bundles that have been generated by compose and the signatures of one user and then applies them to the bundle use this function to apply all the signatures to the bundle the signatures must be applied in the order the address was generated otherwise the signatures will be invalid note you use the output of this function to apply the next set of signatures input javascript transfer appliedsignatures bundles signatures 1 bundles array array of bundles that require signatures 2 signatures array ordered set of one users signatures to be applied to the proposed bundle return 1 array an ordered array of bundles that have had the user s signatures applied getdiff this takes the channel history and latest bundles and runs checks to see where the changes are in the transfer note this is already called in the applytransfers function input javascript transfer getdiff root remainder history bundles 1 root array multisig object starting at the root of the 
tree 2 remainder object multisig object of the remainder address 3 history array transfer history of the channel 4 bundles array array of bundles for the proposed transfer return 1 array an array of diffs per address applytransfers input javascript transfer applytransfers flash root flash deposit flash outputs flash remainderaddress flash transfers signedbundles 1 root object representation of the current state of the flash tree 2 deposit array the amount of iotas still available to each user to spend from 3 outputs string the accrued outputs through the channel 4 remainderaddress string the remainder address of the flash channel 5 history array transfer history of the channel 6 signedbundles array signed bundle return this function mutates the flash objects that are passed to it if there is an error in applying the transfers ie signatures aren t valid the function will throw an error close used in place of the prepare function when closing the channel this is used to generate the closing transfers to each settlement address it does this by correctly dividing the remaining channel balance amongst the channel s users input javascript transfer close flash settlementaddresses flash deposit 1 settlementaddresses array the settlement addresses for each user 2 deposit array the amount each user can still spend return 1 array closing transfer object | server |
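The tree-depth arithmetic mentioned earlier (a 15-transaction channel needing a depth-5 tree) can be checked with a couple of lines. Note that the 1000/2000-transaction figures quoted in the same paragraph follow a convention without the +1, so treat the exact off-by-one as implementation-defined; this snippet is only an illustration of the log2 scaling.

```python
# Depth needed for an n-transaction flash channel, matching the 15 -> 5 example above.
import math

def levels_for(transfers):
    return math.ceil(math.log2(transfers)) + 1

print(levels_for(15))   # 5, as in the text
print(levels_for(100))  # grows only logarithmically with channel size
```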
drone-core | crates io https img shields io crates v drone core svg https crates io crates drone core maintenance https img shields io badge maintenance actively developed brightgreen svg drone core cargo rdme start the core crate for drone an embedded operating system documentation drone book https book drone os com api documentation https api drone os com drone core 0 15 usage add the crate to your cargo toml dependencies toml dependencies drone core version 0 15 0 add or extend host feature as follows toml features host drone core host cargo rdme end license licensed under either of apache license version 2 0 license apache license apache or http www apache org licenses license 2 0 mit license license mit license mit or http opensource org licenses mit at your option contribution unless you explicitly state otherwise any contribution intentionally submitted for inclusion in the work by you as defined in the apache 2 0 license shall be dual licensed as above without any additional terms or conditions | embedded asynchronous concurrency no-std os async real-time bare-metal rtos firmware rust | os |
SDE_TEMPSENSOR | sde agenda embedded digital systems project development of an algorithm for reading multiple ds18b20 sensors using an esp8266 students guilherme ramos trigueiro do nascimento 18104945 gustavo de oliveira guaresi 19102189 helder henrique da silva 20250326 | os |
cs224n | cs224n natural language processing with deep learning winter 2019 this is a repo containing my solutions to all 5 assignments of stanford university s cs224n winter 2019 course http web stanford edu class cs224n index html schedule | ai |
NL2SQL-LLM | text2sql llm br leveraging in context learning using a synthetic dataset for text to sql models running pip install r requirements txt python3 src main py run through 1 enter the table name of the table you want to query plot images stage1 png p style text align center p you can find the table names in data csv files 2 select if you want to enter a custom natural language question or perform evaluation on the test questions of the data example queries test set plot images stage2 png p style text align center p 3 enter the natural language question plot images stage3 png p style text align center p 4 retrieving the top 5 most similar questions from the synthetic dataset plot images stage4 png p style text align center p 4 generating the sql query using zero shot prompting plot images stage5a png p style text align center p 5 generating the sql query using chain of thought cot prompting plot images stage5b png p style text align center p methodology tl dr we used synthetic data to craft prompts to leverage in context learning for text to sql models the methodology consists of the following steps ol li synthetic data generation use chatgpt to generate synthetic data of natural language questions and corresponding sql queries for each table li cosine similarity calculation for the test selection select the top 5 most similar questions to the test question from the synthetic dataset li chain of thought cot prompting form a prompt using the top 5 most similar questions and feed it to the model to generate the sql query ol br synthetic data generation we used chatgpt web api to generate synthetic dataset of natural language questions for each table using the following template prompt 1 give sql query for the following question sample question table schema id remote id table name table name some rows in the table looks like this row1 row2 row3 prompt 2 can you give me 10 different natural language questions and their corresponding sql questions specific to this dataset in the format of a combined csv file in the following format index question sql query the delimiter should be instead of comma as a txt file prompt 3 give me x more different natural language questions and the corresponding sql queries in the same format starting from index 11 in a txt file the synthetic dataset generated for each table is present in data example queries the complete set folder contains the all the synthetic data br note x is the number of synthetic questions to be generated i used 40 synthetic questions for each table for checking the syntax of the sql queries i ran all the queries over the table once see src check retr data py cosine similarity calculation we used the sentence transformers library to calculate the cosine similarity between the test question and the synthetic questions generated for each table the top k most similar questions are selected as prompts for the next step br chain of thought cot prompting we crafted a prompt using the top 5 most similar questions as prompts to generate the sql query the prompt also contained information about the table schema and the table name the prompt is then fed to the model to generate the sql query for the test question br models tested i tested the framework with the following models juierror flan t5 text2sql with schema dawei756 text to sql t5 spider fine tuned gpt2 observations b cot prompting helps the model output than zero shot prompting b empirically i have observed that the model produces much better queries with retrieval based cot prompting 
than zero shot prompting for instance for the question i how many distinct types of earnings are there i for the table earning for zero shot prompting the model outputs the following query which does not work plot images stage5a png p style text align center p for cot prompting the model outputs the following query which works well and answers the question correctly plot images stage5b png p style text align center p b fine tuned models outperform pre trained models b the models fine tuned for text to sql task performed much better than pre trained models the fine tuned models i used are juierror flan t5 text2sql with schema dawei756 text to sql t5 spider fine tuned b providing tables schemas is not enough b some of the initial experiments i conducted involved only providing the table schemas in the prompt in such the case the model does not have access to the values of the columns in the table to remedy this i also tried providing some sample rows from the table in the prompt while this did improve the results it was not enough to generate the correct sql query to further improve the results i started providing some example queries in the prompt this helped the model generate the correct sql query b best results b the open source model juierror flan t5 text2sql with schema produced the best results among the models i used for api calls however i believe the model s performance could be further improved by incorporating openai apis specifically by leveraging the chain of thought prompts and testing them on the openai playground this combination which includes retrieval based cot prompting schema information samples and openai api yielded the best sql query generation results future work ol li i benchmarking the results i benchmarking and comparing zero shot and cot prompting results on a dataset would offer stronger evidence for the effectiveness of retrieval based cot prompting complementing the observed improvements in query formulation through empirical observations li i extending the framework for multiple table i the framework currently supports single table queries but extending it to accommodate multiple table queries would enhance its functionality li i breaking down the query formation process into smaller steps i inspired by the work of a href https github com mohammadrezapourreza few shot nl2sql with prompting few shot nl2sql with prompting a we can break down the query formation process into smaller steps and use the intermediate results to guide the model to generate the final sql query for example we can prompt the model to estmate the tables that would be used in the query instead of asking the user ol contributing pull requests are welcome for major changes please open an issue first to discuss what you would like to change for any detailed clarifications issues please email to ndiwan2 at illinois dot edu dot | ai |
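A compact, hedged version of the retrieval-then-prompt step described above using sentence-transformers: embed the synthetic questions, pick the top-5 most similar to the test question, and build a few-shot prompt. The model name, file path, delimiter and column names are assumptions and may differ from the repository's actual code.

```python
# Retrieval-based few-shot prompt construction (assumed paths/columns/model).
import pandas as pd
from sentence_transformers import SentenceTransformer, util

synthetic = pd.read_csv("data/example_queries/earning.txt", sep="|")  # index|question|sql query
model = SentenceTransformer("all-MiniLM-L6-v2")

corpus_emb = model.encode(synthetic["question"].tolist(), convert_to_tensor=True)
test_question = "How many distinct types of earnings are there?"
query_emb = model.encode(test_question, convert_to_tensor=True)

top5 = util.cos_sim(query_emb, corpus_emb)[0].topk(5).indices.tolist()

prompt = "\n".join(
    f"Q: {synthetic['question'][i]}\nSQL: {synthetic['sql query'][i]}" for i in top5
) + f"\nQ: {test_question}\nSQL:"
print(prompt)
```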
pipelang | pipelang in progress proof of concept pipelines for large language models langchain wrapper for extensibility versatility and transparency and without the complexity using functional components ever try out langchain and love the features and how everything works out of the box but find it a bit difficult to extend ever wonder how to modify the prompts for a map reduce qa pipeline that s why i built pipelang installation doesn t work yet install with python pip install pipelang quickstart write a quick map reduce qa pipeline with clarity and transparency currently broken will fix when i get the chance python from src llms import openaipipeline from src prompts import prompt from src utils joiners import simplejoinerpipeline from src utils splitters import recursivetextsplitter text requests get https raw githubusercontent com hwchase17 langchain master docs modules state of the union txt text question what did the president say about justice breyer mapper prompt prompt prompt from file src prompts map reduce qa map langchain txt reducer prompt prompt prompt from file src prompts map reduce qa reduce langchain txt mapper prompt pipeline mapper prompt fill question question fill pipeline context reducer prompt pipeline reducer prompt fill question question fill pipeline summaries map reduce pipeline recursivetextsplitter mapper prompt pipeline openaipipeline simplejoinerpipeline reducer prompt pipeline openaipipeline print map reduce pipeline text or simply use the built in mapreduceqapipeline python from src chains import mapreduceqapipeline map reduce pipeline mapreduceqapipeline question features base pipeline system llms only openai so far prompts with all the langchain prompts chains and splitters all the ones from langchain future thorough type checking asynchronous calls memory flowchart generation computation graph pipeline factories tracing attributions langchain which taught me a lot on modern techniques in working with llm s such as chains and agents as well as the framework react which taught me fp and what a simple powerful and extensible api looks like | ai |
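The functional-composition idea behind the README is easy to sketch generically. Note this is not pipelang's actual API, just an illustration of chaining callable components the way the quickstart above chains a splitter, prompt, LLM call and joiner.

```python
# Generic pipeline-of-callables illustration (not pipelang's API).
from functools import reduce

def pipeline(*steps):
    """Return a callable that applies each step, in order, to the previous output."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

split = lambda text: [p for p in text.split("\n\n") if p.strip()]
summarize_each = lambda chunks: [c[:40] for c in chunks]   # stand-in for an LLM call per chunk
join = lambda chunks: " ".join(chunks)

map_reduce = pipeline(split, summarize_each, join)
print(map_reduce("first paragraph of a long document...\n\nsecond paragraph..."))
```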
MachineLearning | machine learning coursera taught by andrew ng done linear regression logistic regression multi class classification and neural networks neural network learning regularized linear regression and bias variance support vector machines k means clustering and pca anomaly detection and recommender systems todo nothing how to contact me via email thomasrieder at aon dot at via twitter my profile https twitter com thomasrieder class machine learning https class coursera org ml machine learning | ai |
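For reference, a tiny numpy sketch in the spirit of the first completed exercise above (linear regression trained with batch gradient descent); the data here is synthetic and the hyper-parameters are arbitrary, so it is only a refresher rather than the course's Octave/Matlab solution.

```python
# Batch gradient descent for linear regression on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.uniform(0, 10, 100)]   # bias column + one feature
y = 3.0 + 2.0 * X[:, 1] + rng.normal(0, 1, 100)    # true parameters: intercept 3, slope 2

theta = np.zeros(2)
alpha, m = 0.01, len(y)
for _ in range(5000):
    theta -= alpha / m * X.T @ (X @ theta - y)     # gradient of mean squared error

print("learned parameters:", theta)                # approximately [3, 2]
```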
CS224n-solutions | cs224n natural language processing with deep learning i have spent several weeks on the cs224n course from stanford university here are my solutions to the assignments these solutions are for the 2017 version of the course installation 1 install anaconda https www continuum io downloads anaconda official website 2 go to assignmentx where x is either 1 2 3 using a terminal sh cd path to assignment1 3 create a python 2 7 environment using sh conda create n cs224n python 2 7 anaconda 4 activate your environment using add source before activate if you re working with linux mac sh activate cs224n 5 install the dependencies using requirements txt sh pip install r requirements txt 6 don t forget to deactivate your environment when you re done add source before deactivate if you re on linux mac sh deactivate cs224n issues if you re working on windows you won t be able to do assignment 2 because the code is for python 2 7 and tensorflow is only available with python 3 5 on windows at the time i m writing these lines so you will need to create another environment for tensorflow using 1 create a python 3 5 environment using sh conda create n tensorflow python 3 5 anaconda 2 activate your environment using add source before activate if you re working with linux mac sh activate tensorflow 3 install the dependencies using requirements txt sh pip install r requirements txt 4 install tensorflow you can also install tensorflow gpu for gpu support sh pip install tensorflow then you need to convert all the python files from python 2 to python 3 to do so you can simply use 2to3 which is a script included in anaconda to convert python files from version 2 to version 3 automatically simply do assignment 3 if you re using my files you will need to use python 3 on windows tensorflow is not compatible with python 2 yet if you want to convert the files from assignment 3 to python 3 besides using 2to3 you will need to replace line 105 of data util py with python with open os path join path features pkl wb as f just add a b to avoid the write argument must be str not bytes error also replace line 113 by python with open os path join path features pkl rb as f you will also need to replace tf nn rnn cell rnncell with tf contrib rnn core rnn cell rnncell in files q2 gru cell py and q2 rnn cell py sh 2to3 output dir python3 version assignment2 w n assignment2 note if you cloned my repository you won t need to transform the code from python 2 7 to python 3 5 as i have already done it more information i wrote several blog posts accessible from my website https twice22 github io twice22 s website if you want to understand in detail how the code works | cs224n stanford-university solutions natural-language-processing | ai |
MAD | 2017 1 18 1 2 3 4 5 6 7 8 d302 c304 b201 8 11 b202 1 4 b203 7 10 b202 9 10 android qq 648809915 648381150 android http hukai me android training course in chinese index html br android https developer android com index html google google developers https developers google cn android https developer android google cn index html firebase https firebase google cn teaching staff instructor http sdcs sysu edu cn node 2495 email liuning2 mail sysu edu cn http edin sysu edu cn zhgf email zhenggf mail sysu edu cn tas e mail kenneth shang foxmail com e mail 1350600484 qq com e mail 405203818 qq com e mail 578949351 qq com e mail 490664702 qq com e mail 1158341873 qq com e mail 522371814 qq com e mail 594171146 qq com homework requirements submit address ftp edin sysu edu cn deadline 23 59 name sid name labx zip eg 1530000 xx lab1 zip hand in source code and lab report pdf required format pre 15331111 huashen lab1 lab1 pdf lab1 code pre https github com gfzheng mad blob master materials reporttemplate doc letures week 1 lectures a href https github com gfzheng mad blob master keynotes 01 pdf target blank 01 pdf a week 2 lectures a href https github com gfzheng mad blob master keynotes 02 pdf target blank 02 android pdf a lab a href https github com gfzheng mad blob master labs pdf target blank pdf a week 3 lectures a href https github com gfzheng mad blob master keynotes 03 pdf target blank 03 pdf a lab a href https github com gfzheng mad blob master labs github pdf target blank github pdf a week 4 lectures a href https github com gfzheng mad blob master keynotes 04 pdf target blank 04 pdf a lab a href https github com gfzheng mad blob master labs rar target blank rar a week 5 week 6 lectures a href https github com gfzheng mad blob master keynotes 06 pdf target blank 06 pdf a lab a href https github com gfzheng mad blob master labs rar target blank rar a week 7 lectures a href https github com gfzheng mad blob master keynotes 07 pdf target blank 07 pdf a lab a href https github com gfzheng mad blob master labs zip target blank zip a week 8 lectures a href https github com gfzheng mad blob master keynotes 08 pdf target blank 08 pdf a lab a href https github com gfzheng mad blob master labs zip target blank zip a week 9 widget lectures a href https github com gfzheng mad blob master keynotes 09 widget pdf target blank 09 widget pdf a code a href https github com gfzheng mad blob master keynotes 09 widget code zip target blank 09 widget code zip a lab a href https github com gfzheng mad blob master labs zip target blank zip a widgets widget week 10 week 11 lectures a href https github com gfzheng mad blob master keynotes 11 e8 a1 a5 e7 a7 bb e5 8a a8 e5 ba 94 e7 94 a8 e5 bc 80 e5 8f 91 e5 89 8d e6 b2 bf pdf target blank 11 pdf a week 11 12 lectures a href https github com gfzheng mad blob master keynotes 11 12 e6 9c 8d e5 8a a1 e4 b8 8e e5 a4 9a e7 ba bf e7 a8 8b pdf target blank 11 12 pdf a code a href https github com gfzheng mad blob master keynotes 11 12classcode progressupdate 7z target blank 11 12classcode progressupdate 7z a a href https github com gfzheng mad blob master keynotes 11 12classcode service 7z target blank 11 12classcode service 7z a lab a href https github com gfzheng mad blob master labs e5 ae 9e e9 aa 8c e5 85 ad zip target blank zip a week 13 lectures a href https github com gfzheng mad blob master keynotes 13 pdf target blank 13 pdf a code a href https github com gfzheng mad blob master keynotes 13 demo zip target blank 13 demo zip a lab a href https github com gfzheng 
mad blob master labs zip target blank zip a week 14 lab a href https github com gfzheng mad blob master labs zip target blank zip a week 15 16 web lectures 15 16 web keynotes 14 pdf code 15 16 web demo keynotes webdemo zip lab zip labs zip week 17 ndk lectures 17 ndk pdf keynotes 15 ndk pdf code locationdemo zip keynotes locationdemo zip getsensordemo zip keynotes getsensordemo zip lab zip labs zip week 18 lectures 18 keynotes 16 pdf code 18 hellomap rar keynotes hellomap rar lab zip labs zip week 12 week 13 week 14 week 15 web week 16 ndk lectures 12 pdf 13 pptx 13 lab 10 1 zip 10 2 zip ppt pdf lol 10 1 10 2 10 1 10 2 17 12 23 12 project a href https github com gfzheng mad blob master keynotes pdf target blank pdf a 4 https github com gfzheng mad blob master keynotes android final xlsx https github com gfzheng mad blob master keynotes xlsx arbowebforestusermanual pdf https github com gfzheng mad blob master keynotes guidebook arbowebforestusermanual pdf mobile app user s guide acronis pdf https github com gfzheng mad blob master keynotes guidebook genesys care mobile application user guide pdf genesys care mobile application user guide pdf https github com gfzheng mad blob master keynotes guidebook mobile app user s guide acronis pdf how to create a user manual 12 steps http www wikihow com create a user manual how to write user manuals http www wikihow com write user manuals pdf https github com gfzheng mad blob master keynotes guidebook e9 a1 b9 e7 9b ae e8 af b4 e6 98 8e e6 96 87 e6 a1 a3 e6 a8 a1 e6 9d bf pdf pdf https github com gfzheng mad blob master keynotes guidebook e4 b8 ad e5 b1 b1 e5 a4 a7 e5 ad a6 e6 9c ac e7 a7 91 e7 94 9f e6 af 95 e4 b8 9a e8 ae ba e6 96 87 ef bc 88 e8 ae be e8 ae a1 ef bc 89 e5 86 99 e4 bd 9c e4 b8 8e e5 8d b0 e5 88 b6 e8 a7 84 e8 8c 83 pdf shoppingbuddy pdf https github com gfzheng mad blob master keynotes guidebook shoppingbuddy pdf https github com gfzheng mad blob master keynotes pdf 11 26 12 20 1 14 24 0 tips how to ask questions ta ta ta what s your question br as search your question on internet br as google https www google com hk stack overflow https stackoverflow com terry sysuv6 dns https github com bazingaterry sysuv6 dns d host https laod cn hosts 2016 google hosts html search or ask question in qq group br ask tas br ta ta specify your question br as record and understand your solution br d github flavored markdown https guides github com features mastering markdown | android lab tas | front_end |
openchaos | build status https travis ci org openmessaging openchaos svg branch master https travis ci org github openmessaging openchaos maven central https maven badges herokuapp com maven central io openmessaging chaos messaging chaos badge svg http search maven org search 7cga 7c1 7copenmessaging chaos license https img shields io badge license apache 202 4eb1ba svg https www apache org licenses license 2 0 html fossa status https app fossa com api projects git 2bgithub com 2fopenmessaging 2fopenmessaging chaos svg type shield https app fossa com projects git 2bgithub com 2fopenmessaging 2fopenmessaging chaos ref badge shield goals the framework proposals a unified api for vendors to provide solutions to various aspects of performing the principles of chaos engineering in a cloud native environment its built in modules will heavily testify reliability availability and resilience for distriuted system currently the community supported the following platforms apache rocketmq https rocketmq apache org apache kafka https kafka apache org dledger https github com openmessaging openmessaging storage dledger redis https redis io zookeeper https zookeeper apache org etcd https etcd io nacos https nacos io experimental usage take rocketmq for example 1 prepare one control node and some cluster nodes and ensure that the control node can use ssh to log into a bunch of cluster nodes note you must set secret free style does not support passwords 2 edit driver rocketmq rocketmq yaml to set the host name of cluster nodes client config broker config 3 install openchaos in control node mvn clean install 4 run the test in the control node bin chaos sh driver driver rocketmq rocketmq yaml install 5 after the test you will get yyyy mm dd hh mm ss driver chaos result file and yyyy mm dd hh mm ss driver latency point graph png gnuplot must be installed quick start docker in one shell we start the some cluster nodes and the controller using docker compose shell cd docker up sh dev in another shell use docker exec it chaos control bash to enter the controller then shell mvn clean install bin chaos sh driver driver rocketmq rocketmq yaml install restart option usage messaging chaos options options agent run program as a http agent default false c concurrency the number of clients eg 5 default 4 d driver driver eg driver rocketmq rocketmq yaml f fault fault type to be injected eg noop minor kill major kill random kill fixed kill random partition fixed partition partition majorities ring bridge random loss minor suspend major suspend random suspend fixed suspend leader kill leader suspend default noop i fault interval fault injection interval eg 30 default 30 n fault nodes the nodes need to be fault injection the nodes are separated by semicolons eg n1 n2 n3 note this parameter must be used with fixed xxx faults such as fixed kill fixed partition fixed suspend h help help message install whether to install program it will download the installation package on each cluster node when you first use openchaos to test a distributed system it should be true default false restart whether to restart program if you want the nodes to be restarted and shut down after the experiment it should be true default false t limit time chaos execution time in seconds excluding check time and recovery time eg 60 default 60 m model test model currently queue model and kv model are supported default queue output dir the directory of history files and the output files p port the listening port of http agent default 8080 pull driver use 
pull consumer default is push consumer just for queue model default false r rate approximate number of requests per second eg 20 default 20 recovery calculate failure recovery time default false rto calculate failure recovery time in fault default false u username user name for ssh remote login eg admin default root password user password for ssh remote login eg admin default null fault type the following fault types are currently supported random partition fixed partition isolates random fixed nodes from the rest of the network random loss randomly selected nodes lose network packets random kill minor kill major kill fixed kill kill random minor major fixed processes and restart them random suspend minor suspend major suspend fixed suspend pause random minor major fixed nodes with sigstop sigcont bridge a grudge which cuts the network in half but preserves a node in the middle which has uninterrupted bidirectional connectivity to both components note number of nodes must be greater than 3 partition majorities ring every node can see a majority but no node sees the same majority as any other randomly orders nodes into a ring note number of nodes must be equal to 5 images fault type png license fossa status https app fossa com api projects git 2bgithub com 2fopenmessaging 2fopenchaos svg type large https app fossa com projects git 2bgithub com 2fopenmessaging 2fopenchaos ref badge large | chaos-engineering messaging eventing cache kafka | cloud |
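If you drive OpenChaos runs from a Node-based CI script rather than by hand, the sketch below shows one way to wrap the bin/chaos.sh entry point described in the openchaos entry above. The flag spellings (--driver, -f, -t, -r, --install) and the fault name random-kill are reconstructed from the flattened usage text, where dashes have been stripped, so verify them against the tool's help output; the wrapper itself is an illustration, not part of the OpenChaos project.

```ts
// Hypothetical Node wrapper around bin/chaos.sh for scripted runs.
// Flag names below are reconstructed from the usage text and may need adjusting.
import { spawn } from "node:child_process";

interface ChaosRun {
  driver: string;       // e.g. "driver-rocketmq/rocketmq.yaml"
  fault: string;        // e.g. "random-kill"
  limitTimeSec: number; // -t: chaos execution time in seconds
  ratePerSec: number;   // -r: approximate requests per second
  install?: boolean;    // --install on the first run against fresh nodes
}

function runChaos(cfg: ChaosRun): Promise<number> {
  const args = [
    "--driver", cfg.driver,
    "-f", cfg.fault,
    "-t", String(cfg.limitTimeSec),
    "-r", String(cfg.ratePerSec),
  ];
  if (cfg.install) args.push("--install");
  return new Promise((resolve, reject) => {
    const child = spawn("bin/chaos.sh", args, { stdio: "inherit" });
    child.on("error", reject);
    child.on("close", (code) => resolve(code ?? 1));
  });
}

// Example: a short RocketMQ run that randomly kills broker processes.
runChaos({
  driver: "driver-rocketmq/rocketmq.yaml",
  fault: "random-kill",
  limitTimeSec: 60,
  ratePerSec: 20,
  install: true,
}).then((code) => process.exit(code));
```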
CS275-CODA | cs275 coda mobile app development | front_end |
|
NLPrep | p align center br img src https raw githubusercontent com voidful nlprep master docs img nlprep png width 400 br p p align center a href https pypi org project nlprep img alt pypi src https img shields io pypi v nlprep a a href https github com voidful nlprep img alt download src https img shields io pypi dm nlprep a a href https github com voidful nlprep img alt build src https img shields io github workflow status voidful nlprep python package a a href https github com voidful nlprep img alt last commit src https img shields io github last commit voidful nlprep a p feature handle over 100 dataset generate statistic report about processed dataset support many pre processing ways provide a panel for entering your parameters at runtime easy to adapt your own dataset and pre processing utility online explorer https voidful github io nlprep datasets https voidful github io nlprep datasets documentation learn more from the docs https voidful github io nlprep quick start installing via pip bash pip install nlprep get one of the dataset bash nlprep dataset clas udicstm outdir sentiment you can also try nlprep in google colab google colab https colab research google com assets colab badge svg nlprep https colab research google com drive 1efvxa0o1gttz1xeapdyvxmnyjchxo7jk usp sharing overview nlprep arguments dataset which dataset to use outdir processed result output directory optional arguments h help show this help message and exit util data preprocessing utility multiple utility are supported cachedir dir for caching raw dataset infile local dataset path report generate a html statistics report contributing thanks for your interest there are many ways to contribute to this project get started here https github com voidful nlprep blob master contributing md license pypi license https img shields io github license voidful nlprep license https github com voidful nlprep blob master license icons reference icons modify from a href https www flaticon com authors darius dan title darius dan darius dan a from a href https www flaticon com title flaticon www flaticon com a icons modify from a href https www flaticon com authors freepik title freepik freepik a from a href https www flaticon com title flaticon www flaticon com a | nlp dataset prepare pytorch tfkit | ai |
Houseplant-Monitor-System | houseplant monitor system senior design in computer architecture and embedded systems ucr the houseplant monitor system is a device which allows a user to remotely care for and protect their small houseplants the android folder contains source code for the app used by the user the arducam folder has the source code for controlling the different components of this system arducamone controls motion sensor and fingerprint scanner fingerprint scanner was not integrated into this version arducamtwo controls led light strip lock style solenoid water pump and soil moisture sensor arducamthree controls temperature sensor and keypad brief overview https youtu be kx4eclsc h8 | os |
|
TheSurvey | thesurvey this is our very special project to create a zero to mastery developer survey web app by the community for the community the zero to mastery survey web app will collect feedback from the entire zero to mastery dev community and incorporate those changes and suggestions to improve this awesome course currently in beta https zero to mastery github io thesurvey contribute fork this repository clone the forked repository using git clone forked repo url change directory to the project root cd thesurvey check out the development branch git checkout development back end change directory to the server cd server install npm dependencies npm install you will need to create a env file and put mongo db mongodb localhost 27017 the survey port 3005 in it for this to work then run the command npm run front end in the root directory of the project install npm dependencies npm install then run the command npm run important notes always make prs against the development branch and not the master branch stable branch master https github com zero to mastery thesurvey tree master development branch development https github com zero to mastery thesurvey tree development | front_end |
|
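To make the TheSurvey back-end setup above concrete, here is a minimal sketch of a server entry point that reads the two values the readme asks you to put in .env. The variable names MONGO_DB and PORT and the connection-string shape are reconstructed from the flattened text, and the use of express and mongoose is an assumption about the stack, so adjust it to match the actual project.

```ts
// Sketch of a server entry point wiring up the .env values mentioned above.
// MONGO_DB / PORT names and the mongoose usage are assumptions about the project.
import "dotenv/config";
import express from "express";
import mongoose from "mongoose";

const mongoUri = process.env.MONGO_DB ?? "mongodb://localhost:27017/the-survey";
const port = Number(process.env.PORT ?? 3005);

async function main() {
  await mongoose.connect(mongoUri); // fail fast if the database is unreachable
  const app = express();
  app.get("/health", (_req, res) => {
    res.json({ ok: true });
  });
  app.listen(port, () => console.log(`api listening on port ${port}`));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```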
full-stack-dev | full stack dev website this website is configured to auto build whenever a push is made to the master branch the static website can be accessed at https gi60s github io full stack dev | front_end |
|
electronicinformationtechnology | electronicinformationtechnology electronic information technology | server |
|
COMP3204 | comp3204 interactive demos slides and handouts this github repository stores the lecture materials for part 2 of the comp3204 https secure ecs soton ac uk module comp3204 computer vision module at the university of southampton http www soton ac uk the lectures presentations themselves are interactive slidesets created using openimaj http www openimaj org from this page you can get the source code for the presentations which you can build yourself following the instructions below if you just want to run the presentations for yourself you can download the latest version of the pre compiled runnable jar from here http jenkins ecs soton ac uk job comp3204 lastsuccessfulbuild artifact app target comp3204 1 0 snapshot jar with dependencies jar more details as well as pdf versions of the handouts and slides are available here http comp3204 ecs soton ac uk part2 html operating the presentations the launcher program that opens when you run the jar is self explanatory but once you ve opened a presentation or demo you can make it full screen by pressing f press again to exit for the presentations you can use the left and right arrow keys to navigate note that on some of the interactive slides you might need to click on the slide background for the arrow keys to work if you clicked on any controls other than buttons building running the code you need to have apache maven http maven apache org installed to build the code it should work with maven 2 or maven 3 fork or clone the repository or download a source tarball https github com jonhare comp3204 tarball master zipball https github com jonhare comp3204 zipball master then from the command line navigate to the app directory within the source tree run mvn install assembly assembly to build the presentation and use java jar target comp3204 1 0 snapshot jar with dependencies jar to launch the main interface | ai |
|
pxp | pxp support a name support a youtube chanel https www youtube com channel ucsk4ifcr6swjyu3zpoeiguw interface maestro detalle https www youtube com watch v uuvevozydy4 support forum request an invitation to rensi kplian com or jaime kplian com http foro kplian com demo a name demo a http gema kplian com sis seguridad vista adm index php user admin password admin installation a name installation a https github com kplian instalador framework pxp for centos 6 x and 7 x how to install https www youtube com watch v fiqbmxl5jdg next if you want another instance a name new system a for create a new instance a name new system a 1 create folder for your project example mypro 2 inside the folder of your project clone this repository this will create the pxp folder git clone https github com kplian pxp git 3 create a empty database for your project 3 1 create database user for connection example create role db conexion noinherit login password db conexion 4 you must create soft links inside your project root folder to lib ln s pxp lib lib execute inside your project root folder index php ln s pxp index php index php execute inside your project root folder sis seguridad ln s pxp sis seguridad sis seguridad execute inside your project root folder sis generador ln s pxp sis generador sis generador execute inside your project root folder sis parametros ln s pxp sis parametros sis parametros execute inside your project root folder sis organigrama ln s pxp sis organigrama sis organigrama execute inside your project root folder sis workflow ln s pxp sis workflow sis workflow execute inside your project root folder all these folders and files are inside pxp 5 create two folders one named reportes generados and other uploaded files inside your project root folder with write access for apache user 6 create a file named datosgenerales php inside pxp lib this file could be a copy of datosgenerales sample php wich already exists in the same folder it s necesary to do some configurations in that file according to the database ejm create user dbweb conexion in your database with superuser privileges session base datos dbweb session usuario conexion conexion session contrasena conexion pass user web web conexion session folder mypro 7 as postgres user execute pxp utilidades restaurar bd restaurar todo py this will generate the database postgres user needs execution access to restaurar todo py restaurar todo py ejm create user conexion in your database session usuario conexion conexion session contrasena conexion dbweb conexion 7 1 configure postgres file pg hba conf in direccion var lib pgsql 9 1 data add next line local all postgres dbweb conexion trust 7 2 restart postgres service etc init d postgresql 9 1 restart or service postgresql 9 1 restart 8 you can use the framework now user admin password admin to create a new system a name new system a 1 create a folder for the system inside it create this structure vista control modelo base funciones schema sql name of database schema for the system data000001 sql scripts with initial data dependencies000001 sql scripts to create objects with dependency create view add foreing key constraints etc patch000001 sql scripts to create objects with no dependency create table add columns etc test data sql test data for the system the folder funciones must contain one file for every function in the system 2 create or update a file named sistemas txt inside your project root folder wich contains the path for every system of your project eg sis mantenimiento one path 
should be one line in the file to clone an existing system a name existing system a for this yo should have an instance already running and follow these steps 1 go to your project root folder 2 clone your system git repository if you have more than one system you have to clone them all git clone yourrepository git 3 create or update a file named sistemas txt inside your project root folder wich contains the path for every system of your project eg sis mantenimiento one path should be one line in the file 4 execute the script restaurar todo py it whould be in your project folder and pxp utilidades restaurar bd execute it as postgres user 5 if you have data to loose in the system select option 2 if there is nothing to loose select option number 1 to update the database on pull or merge a name update db a after pull the code is updated but database scripts are not executed yet it s possible to execute folowing these steps 1 go to pxp utilidades restaurar bd folder 2 change user to postgres su postgres 3 execute the script restaurar todo py 4 now we have a menu with 4 options option 1 drops all tables and functions an restore them from scripts you lose information here option 2 keeps tables with data drop functions and restore them from scripts and execute new script you don t lose information here option 3 generate a backup from the current database option 4 exit the application code generator a name code generator a this video explains to use code generator for pxp https www youtube com watch v uuvevozydy4 license see the license license txt file for license rights and limitations gplv3 | front_end |
|
Awesome-LLM4Tool | awesome llm4tool awesome https cdn rawgit com sindresorhus awesome d7305f38d29fed78fa85652e3a63e154dd8e8829 media badge svg https github com sindresorhus awesome awesome llm4tool https img shields io badge awesome llm4tool blue https github com topics awesome awesome llm4tool is a curated list of papers repositories tutorials and anything related to the large language models for tools papers tptu task planning and tool usage of large language model based ai agents https arxiv org abs 2308 03427 br jingqing ruan yihong chen bin zhang zhiwei xu tianpeng bao guoqing du shiwei shi hangyu mao xingyu zeng rui zhao br arxiv https arxiv org abs 2308 03427 br aug 7 2023 tool documentation enables zero shot tool usage with large language models https arxiv org abs 2308 00675 br cheng yu hsieh si an chen chun liang li yasuhisa fujii alexander ratner chen yu lee ranjay krishna tomas pfister br arxiv https arxiv org abs 2308 00675 br aug 1 2023 star https img shields io github stars openbmb toolbench svg style social label star https github com openbmb toolbench br toolllm facilitating large language models to master 16000 real world apis https arxiv org abs 2307 16789 br yujia qin shihao liang yining ye kunlun zhu lan yan yaxi lu yankai lin xin cong xiangru tang bill qian sihan zhao runchu tian ruobing xie jie zhou mark gerstein dahai li zhiyuan liu maosong sun br arxiv https arxiv org abs 2307 16789 code https github com openbmb toolbench br july 31 2023 star https img shields io github stars night chen toolqa svg style social label star https github com night chen toolqa br toolqa a dataset for llm question answering with external tools https arxiv org abs 2306 13304 br yuchen zhuang yue yu kuan wang haotian sun chao zhang br arxiv https arxiv org abs 2306 13304 code https github com night chen toolqa br june 23 2023 star https img shields io github stars sambanova toolbench svg style social label star https github com sambanova toolbench br on the tool manipulation capability of open source large language models https arxiv org abs 2305 16504 br qiantong xu fenglu hong bo li changran hu zhengyu chen jian zhang br arxiv https arxiv org abs 2305 16504 code https github com sambanova toolbench br may 25 2023 star https img shields io github stars zjunlp trice svg style social label star https github com zjunlp trice br making language models better tool learners with execution feedback https arxiv org abs 2305 13068 br shuofei qiao honghao gui huajun chen ningyu zhang br arxiv https arxiv org abs 2305 13068 code https github com zjunlp trice br may 22 2023 star https img shields io github stars shishirpatil gorilla svg style social label star https github com shishirpatil gorilla br gorilla large language model connected with massive apis https arxiv org abs 2305 15334 br shishir g patil tianjun zhang xin wang joseph e gonzalez br arxiv https arxiv org abs 2305 15334 project page https shishirpatil github io gorilla code https github com shishirpatil gorilla br may 24 2023 star https img shields io github stars ber666 toolkengpt svg style social label star https github com ber666 toolkengpt br toolkengpt augmenting frozen language models with massive tools via tool embeddings https arxiv org abs 2305 11554 br shibo hao tianyang liu zhen wang zhiting hu br arxiv https arxiv org abs 2305 11554 code https github com ber666 toolkengpt br may 19 2023 star https img shields io github stars opengvlab interngpt svg style social label star https github com opengvlab interngpt br interngpt 
solving vision centric tasks by interacting with chatgpt beyond language https arxiv org abs 2305 05662 br zhaoyang liu yinan he wenhai wang weiyun wang yi wang shoufa chen qinglong zhang zeqiang lai yang yang qingyun li jiashuo yu kunchang li zhe chen xue yang xizhou zhu yali wang limin wang ping luo jifeng dai yu qiao br arxiv https arxiv org abs 2305 05662 project page https igpt opengvlab com code https github com opengvlab interngpt br may 9 2023 star https img shields io github stars stevengrove gpt4tools svg style social label star https github com stevengrove gpt4tools br gpt4tools teaching llm to use tools via self instruction http arxiv org abs 2305 18752 br lin song yanwei li rui yang sijie zhao yixiao ge ying shan br arxiv http arxiv org abs 2305 18752 huggingface https c60eb7e9400930f31b gradio live code https github com stevengrove gpt4tools br april 24 2023 star https img shields io github stars haotian liu llava svg style social label star https github com haotian liu llava br llava large language and vision assistant https arxiv org abs 2304 08485 br haotian liu chunyuan li qingyang wu yong jae lee br arxiv https arxiv org abs 2304 08485 project page https llava hliu cc code https github com haotian liu llava br april 18 2023 star https img shields io github stars openbmb bmtools svg style social label star https github com openbmb bmtools br tool learning with foundation models https arxiv org abs 2304 08354 br yujia qin shengding hu yankai lin weize chen ning ding ganqu cui zheni zeng yufei huang chaojun xiao chi han yi ren fung yusheng su huadong wang cheng qian runchu tian kunlun zhu shihao liang xingyu shen bokai xu zhen zhang yining ye bowen li ziwei tang jing yi yuzhang zhu zhenning dai lan yan xin cong yaxi lu weilin zhao yuxiang huang junxi yan xu han xian sun dahai li jason phang cheng yang tongshuang wu heng ji zhiyuan liu maosong sun br arxiv https arxiv org abs 2304 08354 code https github com openbmb bmtools br april 17 2023 api bank a benchmark for tool augmented llms https arxiv org abs 2304 08244 br minghao li feifan song bowen yu haiyang yu zhoujun li fei huang yongbin li br arxiv https arxiv org abs 2304 08244 br april 14 2023 star https img shields io github stars microsoft jarvis svg style social label star https github com microsoft jarvis br hugginggpt solving ai tasks with chatgpt and its friends in huggingface http arxiv org abs 2303 17580 br yongliang shen kaitao song xu tan dongsheng li weiming lu and yueting zhuang br arxiv http arxiv org abs 2303 17580 huggingface https huggingface co spaces microsoft hugginggpt code https github com microsoft jarvis br mar 30 2023 star https img shields io github stars microsoft taskmatrix svg style social label star https github com microsoft taskmatrix br visual chatgpt talking drawing and editing with visual foundation models https arxiv org abs 2303 04671 br chenfei wu shengming yin weizhen qi xiaodong wang zecheng tang nan duan br arxiv https arxiv org abs 2303 04671 huggingface https huggingface co spaces microsoft visual chatgpt code https github com microsoft taskmatrix br mar 8 2023 star https img shields io github stars lucidrains toolformer pytorch svg style social label star https github com lucidrains toolformer pytorch br toolformer language models can teach themselves to use tools https arxiv org abs 2302 04761 br timo schick jane dwivedi yu roberto dess roberta raileanu maria lomeli luke zettlemoyer nicola cancedda thomas scialom br arxiv https arxiv org abs 2302 04761 code https github com 
lucidrains toolformer pytorch br 9 feb 2023 star https img shields io github stars allenai visprog svg style social label star https github com allenai visprog br visual programming compositional visual reasoning without training https arxiv org abs 2211 11559 br tanmay gupta aniruddha kembhavi br arxiv https arxiv org abs 2211 11559 code https github com allenai visprog br nov 18 2022 talm tool augmented language models https arxiv org abs 2205 12255 br aaron parisi yao zhao noah fiedel br arxiv https arxiv org abs 2205 12255 br may 24 2022 mrkl systems a modular neuro symbolic architecture that combines large language models external knowledge sources and discrete reasoning https arxiv org abs 2205 00445 br ehud karpas omri abend yonatan belinkov barak lenz opher lieber nir ratner yoav shoham hofit bata yoav levine kevin leyton brown dor muhlgay noam rozen erez schwartz gal shachaf shai shalev shwartz amnon shashua moshe tenenholtz br arxiv https arxiv org abs 2205 00445 code https github com hwchase17 langchain tree master langchain agents mrkl br may 1 2022 projects auto gpt https github com significant gravitas auto gpt semantic kernel https github com microsoft semantic kernel embedchain https github com embedchain embedchain star https img shields io github stars embedchain embedchain svg style social https embedchain ai blogs llm powered autonomous agents https lilianweng github io posts 2023 06 23 agent | ai |
|
vuebnb | p align center img src https laravel com assets img components logo laravel svg p p align center a href https travis ci org laravel framework img src https travis ci org laravel framework svg alt build status a a href https packagist org packages laravel framework img src https poser pugx org laravel framework d total svg alt total downloads a a href https packagist org packages laravel framework img src https poser pugx org laravel framework v stable svg alt latest stable version a a href https packagist org packages laravel framework img src https poser pugx org laravel framework license svg alt license a p about laravel laravel is a web application framework with expressive elegant syntax we believe development must be an enjoyable and creative experience to be truly fulfilling laravel attempts to take the pain out of development by easing common tasks used in the majority of web projects such as simple fast routing engine https laravel com docs routing powerful dependency injection container https laravel com docs container multiple back ends for session https laravel com docs session and cache https laravel com docs cache storage expressive intuitive database orm https laravel com docs eloquent database agnostic schema migrations https laravel com docs migrations robust background job processing https laravel com docs queues real time event broadcasting https laravel com docs broadcasting laravel is accessible yet powerful providing tools needed for large robust applications a superb combination of simplicity elegance and innovation gives you a complete toolset required to build any application with which you are tasked learning laravel laravel has the most extensive and thorough documentation and video tutorial library of any modern web application framework the laravel documentation https laravel com docs is in depth and complete making it a breeze to get started learning the framework if you re not in the mood to read laracasts https laracasts com contains over 1100 video tutorials on a range of topics including laravel modern php unit testing javascript and more boost the skill level of yourself and your entire team by digging into our comprehensive video library laravel sponsors we would like to extend our thanks to the following sponsors for helping fund on going laravel development if you are interested in becoming a sponsor please visit the laravel patreon page http patreon com taylorotwell vehikl http vehikl com tighten co https tighten co british software development https www britishsoftware co fragrantica https www fragrantica com softonsofa https softonsofa com user10 https user10 com soumettre fr https soumettre fr codebrisk https codebrisk com 1forge https 1forge com tecpresso https tecpresso co jp pulse storm https www fragrantica com contributing thank you for considering contributing to the laravel framework the contribution guide can be found in the laravel documentation http laravel com docs contributions security vulnerabilities if you discover a security vulnerability within laravel please send an e mail to taylor otwell via taylor laravel com mailto taylor laravel com all security vulnerabilities will be promptly addressed license the laravel framework is open sourced software licensed under the mit license http opensource org licenses mit | front_end |
|
pattern-library-skeleton | h1 align center zapatillasfrommars br br welcome to pattern library skeleton br br h1 h4 align center an awesome design system for your products and experiences h4 p align center a href https pattern library skeleton netlify com img src https api netlify com api v1 badges 222e8120 908e 40fe 9f3a c59e694ed4b8 deploy status alt netlify status a p p align center a href fire overview overview a a href rocket getting started getting started a a href triangular ruler architecture architecture a a href nail care guidelines guidelines a a href pray testing the application testing a p p align center img src https klaufel com tokens img figma file tiny png alt design tokens figma file p fire overview we use the best tools to improve our workflow allowing us to create an awesome library of components reactjs https facebook github io react v16 type checking with proptypes https www npmjs com package prop types styled components https styled components com for styling components and application compiling of modern javascript with babel https github com babel babel and bundling with webpack https webpack js org jest https jestjs io and testing library https testing library com for unit ui testing automated git hooks with husky https github com typicode husky code linting using eslint https github com eslint eslint code formatter using prettier https prettier io developing isolated ui components with storybook https storybook js org rocket getting started to get started you need to meet the prerequisites and then follow the installation instructions figma design tokens example figma file https www figma com file igr2xoqczx91cu7cdr4zsi https www figma com file igr2xoqczx91cu7cdr4zsi for more info on configuring your design tokens file visit how to configure design tokens with figma api https pattern library skeleton netlify com path docs overview intro page installing you can clone our git repository git clone git github com klaufel pattern library skeleton git wiring up your development environment hooking it up is as easy as running npm run install this command will install all the required dependencies please note that npm install is only required on your first start or in case of updated dependencies initializing storybook npm run storybook generate design tokens as variables npm run tokens triangular ruler architecture based on the atomic design https bradfrost com blog post atomic web design principles a methodology for creating design systems there are five distinct levels of components atomic design component structure atoms molecules organism templates pages when we use a ui library the highest abstraction of components that we expose would be an organism the rest of the template and page components are built within the application that imports the library source project structure src components atoms molecules organism pages docs figma tokens styles index js entry point src the place where we put our application source code components add your components here this folder is divided from atomic design https bradfrost com blog post atomic web design principles docs our documentation as stories for the design system figma tokens directory containing functions to generate figma design tokens with api styles directory to add global styles and theme to build components index js entry point import all components and export to generate package to use in project as a dependency example of component structure mycomponent stories mycomponent stories js mdx tests snapshots 
mycomponent test js snap mycomponent test js mycomponent styles js index js mycomponent directory containing our component stories directory containing the stories for storybook mycomponent stories js file containing the component stories tests directory containing the tests for jest snapshots directory containing the autogenerated jest snapshots mycomponent test js snap autogenerated snapshot file mycomponent test js file containing the component tests mycomponent styles js file containing the component styles styled components css in js index js file containing the react component html or other imports from ui library nail care guidelines linting npm run lint find problems in your code js formatter npm run prettier check find format problems in your code npm run prettier write fix format problems in your code pray testing the application jest https jestjs io a delightful javascript testing framework and testing library https testing library com build on top of dom testing library by adding apis for working with react components running your tests npm run test will perform your unit testing npm run test update will perform your unit testing and update snapshots npm run test watch will perform your unit testing and watchers tests npm run test coverage will perform your unit testing and show coverage npm run test coverage web will perform your unit testing show coverage and open the report in your default browser | os |
|
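As a concrete instance of the atom layer described in the pattern-library-skeleton entry above, the sketch below shows a small presentational button written with styled-components in a .tsx file that would sit inside the MyComponent-style folder layout (an index module plus a styles module plus stories and tests). The component name, props, and colour values are invented for the example.

```tsx
// Illustrative atom (e.g. src/components/atoms/Button/index.tsx).
// Names and colours are made up; only the styled-components pattern is the point.
import React from "react";
import styled from "styled-components";

const StyledButton = styled.button<{ variant?: "primary" | "ghost" }>`
  padding: 0.5rem 1rem;
  border: 1px solid currentColor;
  border-radius: 4px;
  cursor: pointer;
  background: ${({ variant }) => (variant === "ghost" ? "transparent" : "#0b5fff")};
  color: ${({ variant }) => (variant === "ghost" ? "#0b5fff" : "#ffffff")};
`;

export interface ButtonProps extends React.ButtonHTMLAttributes<HTMLButtonElement> {
  variant?: "primary" | "ghost";
}

// Atoms stay purely presentational: no data fetching or business logic here.
export const Button: React.FC<ButtonProps> = ({ variant = "primary", ...rest }) => (
  <StyledButton variant={variant} {...rest} />
);

export default Button;
```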
web-development-with-node-and-express | web development with node and express this is the companion repository to web development with node and express 1st edition http shop oreilly com product 0636920032977 do with this repository you can follow along with any of the code samples in the book as well as see additional material that wasn t appropriate for the book format important new structure the first version of the book attempted to treat the repo as a linear development effort that is it attempted to mirror the progress a reader might make as he or she went through the book or the progress a real development effort might take this approach was well intentioned but turned out to cause more problems than it solved after struggling with those problems i realized that compromises had to be made to reflect the reality of this repo including the amount of time i have to maintain this repo instead of a linear commit history with tags for each chapter each chapter now lives in its own directory including an alternate ending chapter ch 08 jquery file upload tags now represent versions the nice thing about versions is that the version of the book you re reading can be correlated to the version of the repository for example if your book says to use version 1 5 you can checkout tag 1 5 and feel confident that the code will mirror what you re reading benefits of this approach code samples match what s in the book see if you re following along by using the official repository in chapter 4 for the version number reduced maintenance for me this is good for you because it allows me to focus on making meaningful updates to the book easier to accept community contributions this makes it a lot easier for me to accept community contributions i can see quickly and easily what chapter s you re correcting and correct the book in parallel code in the book not working after checking for typos try the following steps select the folder that corresponds to the book chapter e g chapter 5 and compare your package json to the one in this repo to rule out problems related to package versions try reinstalling the packages to match the specific version specified in the most current package json to do so run npm install with the specific package version e g npm install save express handlebars 0 5 0 still within the chapter folder click on the text latest commit on the right side of the window to see the newest changes to the code in this chapter a red background means that the code was deleted and you should delete it too a green background means that this is new code that was added and likewise you should add it to reiterate the previous section beyond the specific version tag that matches your copy of the book 1 5 1 for example you may see code changes that don t match contributing i am happy to accept prs for this repository for changes big and small please keep in mind however that changes to the repository have to be kept in sync with changes in the book any work you can do in your pr to make it clear to me what changes need to be made in the book is very helpful to me before sending a pr please consider the following if you re just correcting typos or minor things i prefer one big pr with lots of typo corrections to a bunch of small prs with corrections if you re suggesting major changes to code that appears in the book i recommend you discuss it with me first i would hate for you to do a lot of work that i am unable or unwilling to sync with the book while not every bit of code is carried from chapter to chapter 
most of it is if you make code changes in ch05 make sure you make the same changes in ch06 to ch19 this is one of the first things i ll check in your pr this repo doesn t contain everything you need many of the questions i receive have to do with the chapter sample code not working out of the box most of these are because the reader hasn t taken note that you have to create your own credentials js file the sample project relies on a lot of third party services twitter google mongolabs weather underground etc not only do i not wish to share my personal access tokens for these sites it would be against the terms of service for those sites the book has instructions for creating your own credentials js and attaching to the appropriate account s you ll need this important file doesn t show up until ch09 so if you just want to get something up and running without any work try one of the early chapters starting with ch09 you ve got to do a little work yourself to get the site running what happened to the original code the original master branch that i developed for the first version of the book has been saved as the legacy branch please do not do any development on this branch it is only there for reference i will not consider any prs from this branch | front_end |
|
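The web-development-with-node-and-express entry above stresses that the sample site only runs once you create your own credentials.js holding tokens for the third-party services it integrates. The book defines the file's real shape (it first appears in ch09), so the sketch below is only a hypothetical, TypeScript-flavoured stand-in with placeholder field names; the book's own code is plain JavaScript.

```ts
// Hypothetical credentials module — placeholder fields only, not the book's schema.
// Keep the real file out of version control (.gitignore it).
export default {
  cookieSecret: "replace-with-a-long-random-string",
  mongo: {
    development: { connectionString: "mongodb://localhost:27017/dev-db" },
    production: { connectionString: "mongodb://user:pass@host:27017/prod-db" },
  },
  twitter: { consumerKey: "your-key", consumerSecret: "your-secret" },
  weatherUnderground: { apiKey: "your-api-key" },
};
```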
cryptozombies-lesson-code | cryptozombies lesson code cryptozombies https user images githubusercontent com 13703497 69648502 c8f3db80 10ae 11ea 9d52 ce4d4bbc426a jpeg overview this repository contains the source code for cryptozombies https cryptozombies io en lessons the source code is divided into course and chapter folders and each chapter contains solidity sample code for its lesson note lessons 7 8 and 9 have been removed and currently no source code is available for those lessons how to use you can simply clone the project to your local storage with the following command git clone https github com loomnetwork cryptozombies lesson code git or fork it to modify the sample code for your own study if you find any errors in the code while you study feel free to submit pull requests or issues warning please note that this is just sample code so do not use it in production license license shields https img shields io badge license gpl 20v3 blue https www gnu org licenses gpl 3 0 this project is licensed under the gnu general public license v3 0 see the license https github com loomnetwork cryptozombies lesson code blob master license file for details | cryptozombies blockchain solidity tutorial loomnetwork libra | blockchain |
Pewlett-Hackard-Analysis | pewlett hackard analysis build employee database with postgresql and perform sql queries to explore and analysis data by applying skills of data modeling data engineer and data analysis challenge object create a list of candidates for the mentorship program 1 the erd demonstrates relationships between original 6 tables employeedb png employeedb png 2 determining the number of individuals retiring sql for all retirement eligibility retirement info csv data retirement info csv queries select emp no birth date first name last name genger as gender hire date into retirement info from employees where birth date between 1952 01 01 and 1955 12 31 and hire date between 1985 01 01 and 1988 12 31 in conclusion there are 41 380 records of individuals ready to retirement 3 determining the number of individuals being hired current retirement eligibility queries select r emp no r first name r last name d dep no d to date into current emp from retirement info as r left join dept emp as d on r emp no d emp no where d to date 9999 01 01 in conclusion there are 33 118 records of current retirement eligibility current retirement eligibility with title and salary information challenge emp info csv data challenge emp info csv queries select ce emp no as employee number ce first name ce last name t title as title t from date s salary as salary into challenge emp info from current emp as ce inner join titles as t on ce emp no t emp no inner join salaries as s on ce emp no s emp no 4 each employee only display the most recent title by using partition by and row number function queries select employee number first name last name title from date salary into current title info from select row number over partition by cei employee number cei first name cei last name order by cei from date desc as emp row number from challenge emp info as cei as unique employee where emp row number 1 5 the frequency count of employee titles challenge title info csv data challenge title info csv queries select count ct employee number over partition by ct title order by ct from date desc as emp count into challenge title info from current title info as ct a summary count of employees for each title challenge title count info csv data challenge title count info csv queries select count employee number title from challenge title info group by title conclusion in the 33118 records of current retirement eligibility there are 251 assistant engineers 2711 engineers two managers 2022 staffs 12872 senior staffs and 1609 technique leaders 6 determining the number of individuals available for mentorship role sql for eligible for mentor program challenge mentor info csv data challenge mentor info csv queries select em emp no em first name em last name t title as title t from date t to date into challenge mentor info from employees as em inner join titles as t on em emp no t emp no where em birth date between 1965 01 01 and 1965 12 31 and t to date 9999 01 01 in conclusion there are 1549 active employees eligible for mentor plan limitation and suggestion 1 this project assumed retirement years between 1952 and 1955 we need to narrow down period into 3 single year for more accurate estimate and better analysis of potential job opening 2 more detail information and analysis are needed for potential mentor table to compare with the title table of current ready to retirement and get a better estimate of outside hiring request | postgresql pgadmin4 sql datamodel dataengineering data-analysis | server |
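The SQL in the Pewlett-Hackard-Analysis entry above has lost its punctuation in this dump, so as an illustration the snippet below re-punctuates the first retirement-eligibility query and runs it from Node with the pg client. Table and column names follow the readme text (its "genger" is assumed to be a typo for gender); check the reconstruction against the original repository before relying on it.

```ts
// Re-punctuated version of the first query above, executed with the pg client.
// SQL reconstructed from the flattened readme text — verify against the repo.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

const retirementEligibilitySql = `
  SELECT emp_no, birth_date, first_name, last_name, gender, hire_date
  INTO retirement_info
  FROM employees
  WHERE birth_date BETWEEN '1952-01-01' AND '1955-12-31'
    AND hire_date  BETWEEN '1985-01-01' AND '1988-12-31';
`; // the readme writes "genger as gender", assumed here to mean the gender column

async function main() {
  await pool.query("DROP TABLE IF EXISTS retirement_info"); // keep the sketch re-runnable
  await pool.query(retirementEligibilitySql);
  const { rows } = await pool.query("SELECT COUNT(*) AS eligible FROM retirement_info");
  console.log(rows[0]); // the readme reports 41,380 eligible employees
  await pool.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```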
pancakeswap-frontend-hardhat-testnet | a hardhat testnet environment base on pancake frontend node environment recommend nvm use 12 install sh install sh serve sh serve sh deploy sh deploy sh start sh start sh note make sure the connected wallet address on pancakeswap frontend page matches your deployed address default deployed address 0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266 running in the serve if you plan to push your local project to the server configure src config index ts base url to address of the server cd pancake frontend npm run build local serve serve s build p 3000 example image text https raw githubusercontent com chobynleo img main pancake swap frontend hardhat testnet wechatimg30 png image text https raw githubusercontent com chobynleo img main pancake swap frontend hardhat testnet wechatimg31 png image text https raw githubusercontent com chobynleo img main pancake swap frontend hardhat testnet wechatimg32 png configuration if you want to try out the process of configuring pancake frontend for yourself and build your own testnet environment the following will show you which files need to be modified preparing source clone pancake swap core git clone git github com pancakeswap pancake swap core git yarn install yarn compile clone pancake swap periphery git clone git github com pancakeswap pancake swap periphery git yarn install yarn compile clone pancake frontend git clone git github com pancakeswap pancake frontend git yarn install tip if you got a compilation error about import uniswap v2 core contracts interfaces ipancakepair sol please refer to upchain https learnblockchain cn question 2055 setup the source code for the contract address https bscscan com address your address code in the pancake swap core directory install hardhat and edit pancakefactory sol npm install save dev hardhat npx hardhat choose create an empty hardhat config js deploy tab select pancakefactory fill your address as feetosetter in constructor deploy creat scripts deploy js npx hardhat run scripts deploy js network dev remember to save init code pair hash in the pancake swap periphery directory install hardhat and edit pancakerouter sol file npm install save dev hardhat npx hardhat choose create an empty hardhat config js in the pancakelibrary sol to find pairfor function read init code pair hash copy this hash without prefix 0x creat scripts deploy js npx hardhat run scripts deploy js network dev if you got an error about error max code size exceeded set solidity optimizer runs to 200 in the pancake frontend directory all the files that need to be modified env development env production src config constants networks ts src config constants index ts src config constants tokens ts local testnet e hardhat src config constants lists ts public pancake default tokenlist json other testnet src config constants lists ts src config constants tokenlists pancake default tokenlist json src config constants contracts ts scr config constants farms ts src config constants pools ts src config constants ifo ts src config constants pricehelperlps ts src config constants types ts src config index ts src config constants nftscollections index ts switch to the pancake swap sdk directory and modify the file cd pancake swap sdk modify the file src constants ts and run pancake swap sdk npm run build cp r dist local pancakeswap libs sdk cp r local pancakeswap libs pancake frontend node modules the following is the specific content to be modified ethers providers staticjsonrpcprovider rpc url is in the src utils 
providers ts that parameter is in the env development of react app node production network url is in the pancake frontend src config constants networks ts testnet tokens configuration is in the src config constants tokens ts tokenlist in the src config constants tokenlists pancake default tokenlist json in order to facilitate local access i copied it to public you can do the same pancake extended pancake top100 is in the src config constants lists ts the configuration of the wallet connection network is in the src utils wallet ts router address is in the src config constants index ts masterchef lotteryv2 multicall all these contract address are in the src config constants contracts ts the configuration of the abi and address are in the config abi utils addresshelpers the reference in src utils contracthelpers ts src hooks usecontract ts the configuration of the pricehelperlps are in the src config constants pricehelperlps ts in the src config constants types ts about address need to be modified in the src state farms hooks ts about usefarmfrompid 251 need to be modified src utils gettokenlist ts about getting tokenlist i modified into http to access if you want to build project to serve you may modify url into https to get tokenlist detailed see annotation that 42 line on src utils gettokenlist ts in the src config index ts about base bsc scan urls base url base bsc scan url need to be modified in the src config constants nftscollections index ts about address need to be modified farm pools ifo pricehelper contracts all these about chainid need to be modified which are in the src config constants farm ts src config constants pools ts src config constants ifo ts src config constants pricehelper ts src config constants contract ts the configuration of factory address json and init code hash json are in the node modules local pancakeswap libs sdk dist sdk cjs development js see pancakeswap frontend hardhat testnet depoly sh cp r local pancakeswap libs pancake frontend node modules ps 1 if you want to change it set these parameters about factory address json init code hash json chainid in pancakeswap frontend hardhat testnet pancake swap sdk src constants ts and then pancakeswap frontend hardhat testnet pancake swap sdk npm run build cp r local pancakeswap libs pancake frontend src node modules 2 if you want to build frontend project and deploy on your server factory address json and init code hash json should be hard coded in pancakeswap frontend hardhat testnet pancake swap sdk src constants ts | front-end hardhat pancakeswap | front_end |
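The pancakeswap-frontend-hardhat-testnet entry above lists the per-chain constants files (networks.ts, contracts.ts, index.ts) you must edit to point the frontend at a local chain. The sketch below only illustrates the general shape of such a constants module; it is not the real pancake-frontend source, and the 31337 chain id (Hardhat's default) plus every name and placeholder address are assumptions.

```ts
// Illustration of a per-chain constants module; NOT the real pancake-frontend files.
// Chain id 31337 is Hardhat's default; names and addresses here are placeholders.
export enum ChainId {
  BSC = 56,
  LOCAL_HARDHAT = 31337,
}

// RPC endpoint per chain — the local entry points at `npx hardhat node`.
export const RPC_URLS: Record<ChainId, string> = {
  [ChainId.BSC]: "https://bsc-dataseed.binance.org",
  [ChainId.LOCAL_HARDHAT]: "http://127.0.0.1:8545",
};

// Addresses printed by your own deploy scripts go here.
export const ROUTER_ADDRESS: Record<ChainId, string> = {
  [ChainId.BSC]: "0xREPLACE_WITH_UPSTREAM_BSC_ROUTER",
  [ChainId.LOCAL_HARDHAT]: "0xREPLACE_WITH_YOUR_LOCAL_ROUTER",
};
```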
WeCode_MA | wecode ma list of dear developers wecode mobile application development bootcamp 2021 2022 hooshyar mohammed rasol github https github com hooshyar linkedin https www linkedin com in hooshyar stack overflow https stackoverflow com users 10622449 hooshyar maryam salah jubrail github https github com maryyamsalah linkedin http linkedin com in maryam salah 29b692139 stack overflow https stackoverflow com users 17595130 maryyam salah abdullah hussein hamad ahmad helal muhammedsaied amanj azad ameen amozhgar saadi baper areen saber ali ashna salam mhammad avin fateh rasul basira tahir ahmed azad khorsheed rasheed github https github com azadlinavay linkedin https www linkedin com in azad linavay 6b291520b stack overflow https stackoverflow com users 10904019 azad linavay bawer farhad hussein delman ali github https github com delmanali linkedin https www linkedin com in delman ali 84a994159 stack overflow https stackoverflow com users 17595273 delman ali dosty dilshad abdulhameed darwaza farhad sabir github https github com darwaza2021 linkedin https www linkedin com in darwaza farhad 50a67b225 stack overflow https stackoverflow com users 17322287 darwaza farhad huda hamid said github https github com hudahamid linkedin https www linkedin com in huda hamid 7524a6159 stack overflow https stackoverflow com users 17595301 huda hamid kawan idrees mawlood rafaat khalil abubakr nbsp x 1 github https github com rafaatxalil365 linkedin https www linkedin com in rafaat abubakir a929b3213 stack overflow https stackoverflow com users 17352516 rafaat xalil miran ali rashid nbsp x 1 github https github com miranalirashid linkedin https www linkedin com in miran ali 82a748178 stack overflow https stackoverflow com users 17595118 miran bawar khalid aziz nbsp x 1 github https github com bawarx linkedin https www linkedin com in bawar khalid 265b4b227 stack overflow https stackoverflow com users 14960532 bawar khalid mohammed mansour github https github com hooshyar linkedin https github com mohammedmansur stack overflow https stackoverflow com shanya hushyar nbsp x 1 github https github com shanyahushyar stack overflow https stackoverflow com users 17595162 shanya hushyar rashed sadraddin rashed rebar salam mhammad roudan chirkoh haj hussein roza taha mustafa salih yaseen rajab salwa fikri malla shang masood abdullah shokhan osman sima azad farooq srwa omar abdullah taman moayed latif viyan najmadin nasradin yahia hasan baiz yahya adnan ghadhban miran ali rasheed bawar khaled azeez mohammed mansour github https github com mohammedmansur linkedin https www linkedin com in mohammed mansur 568a65231 stack overflow https stackoverflow com users 15901905 mohammed mansur shanya hushyar sipal salam mostafa majeed sipal salam nbsp x 1 github https github com sipal00 linkedin https www linkedin com in sipal salam 7b7602218 stack overflow https stackoverflow com users 17595226 sipal mostafa majeed nbsp x 1 github https github com mstafamajid linkedin https www linkedin com in mustafa majid 166327224 stack overflow https stackoverflow com users 17595137 mustafa majid karwan msto ali farhad github https github com 1 ali 1 linkedin https www linkedin com in ali farhad 90b4b8198 stack overflow https stackoverflow com users 14529397 alifarhad ali ahmed naman aland abdulmajeed shad khalid nbsp x 1 github https github com shad khalid linkedin https www linkedin com in shad khalid 944545227 stack overflow https stackoverflow com users 17622725 shad khalid sako ranj nbsp x 1 github https github com sako ranj linkedin 
https www linkedin com in sako ranj 570031213 stack overflow https stackoverflow com users 15195981 sako ranj salar khalid nbsp x 1 github https github com salarpro linkedin https www linkedin com in salar pro 13b970120 stack overflow https stackoverflow com users 5862126 salar pro omer mukhtar nbsp x 1 github https github com omerrmukhtarr linkedin https www linkedin com in omer mukhtar 950b951b7 stack overflow https stackoverflow com users 17595096 omer mukhtar tab profile aso arshad nbsp x 1 karwan khdr github https github com karwan01 linkedin https www linkedin com in karwan khdhr 590b5a1a8 stack overflow https stackoverflow com users 17595109 karwan rasul sara bakir github https github com sarahbakr stack overflow https stackoverflow com users 17628902 sarah bakr yassin hussein github https github com yassin h rassul linkedin https www linkedin com in yassin rassul stack overflow https stackoverflow com users 13059311 yassin h rassul amad bashir github https github com amad a96 linkedin https www linkedin com in amad bashir 615026227 stack overflow https stackoverflow com users 17595120 amad bashir zaynab azad khdir hekar azwar mohammed salih github https github com hekaramohammad stack overflow https stackoverflow com users 13974543 hekar azwar mohemmad salih linkedin https www linkedin com in hekar azwar mohammed salih 579a601a6 ahmed aziz hasan github https github com ahmedaziz0 stack overflow https stackoverflow com users 12643186 ahmed aziz rasan diyar tayeb github https github com titan ui linkedin https stackoverflow com users 17604539 titan ui rasty azad qadir nbsp x 1 github https github com rastyit97 stack overflow https stackoverflow com users 16274767 rasty azad wrya mhamad hassan nbsp x 1 github https github com wrya mhamad linkedin https www linkedin com in wrya mhamad 31024b185 stack overflow https stackoverflow com users 13229231 wrya mhamad tahir awal ghafur github https github com tatosoll linkedin https www linkedin com in tahir awal 490651201 stack overflow https stackoverflow com users 17595960 tahir awal tab profile mohammed ahmed salim nbsp x 1 github https github com mohamed199898 linkedin https www linkedin com in mohamad amedy 078467165 stack overflow https stackoverflow com users 17595148 mohammed ahmed salim shene wali khalid github https github com shenekhalid stack overflow https stackoverflow com users 17595197 shene wali linkedin https www linkedin com mwlite in shene wali 189450228 muhamad tahsin karem github https github com muhamad3 linkedin https www linkedin com in muhamad tahsin 29b80a1a9 stack overflow https stackoverflow com users 14649300 muhamad tahsin ayman abd saeed nbsp x 1 github https github com aymanabd9 linkedin https www linkedin com in ayman abd 60838a228 stack overflow https stackoverflow com users 17595097 ayman abd milad mirkhan majeed github https github com miladmirkhan linkedin https www linkedin com in milad mirkhan 63537521a stack overflow https stackoverflow com users 16825719 milad mirkhan omar falah hasan github https github com omarfalah99 linkedin https www linkedin com in omar falah 3531381ba stack overflow https stackoverflow com users 17595189 omar falah ranj kamal kanabi github https github com ranj kamal linkedin https www linkedin com in ranj kamal 020755154 stack overflow https stackoverflow com users 17595159 ranj kamal dwarozh kakamad noori github https github com dwarozh 177 stack overflow https stackoverflow com users 17595098 dwarozh k noori | front_end |
|
cica-back | awesome project built with typeorm steps to run this project 1 run the npm i command 2 set up database settings inside the data source ts file 3 run npm run migrate to set up the database on first start 4 run the npm start command | server |
|
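Step 2 of the cica-back entry above asks you to fill in database settings in a data source ts file. The sketch below shows a standard TypeORM 0.3 DataSource definition for that purpose; the postgres driver, environment-variable names, and entity/migration globs are placeholders, since the real project's choices are not visible in the readme.

```ts
// Sketch of data-source.ts for step 2 — TypeORM 0.3 DataSource with placeholder settings.
import "reflect-metadata";
import { DataSource } from "typeorm";

export const AppDataSource = new DataSource({
  type: "postgres",                       // assumption: the project may use another driver
  host: process.env.DB_HOST ?? "localhost",
  port: Number(process.env.DB_PORT ?? 5432),
  username: process.env.DB_USER ?? "postgres",
  password: process.env.DB_PASSWORD ?? "postgres",
  database: process.env.DB_NAME ?? "cica",
  entities: ["src/entity/**/*.ts"],       // adjust to the project's entity folder
  migrations: ["src/migration/**/*.ts"],  // consumed by the `npm run migrate` step
  synchronize: false,                     // rely on migrations rather than auto-sync
});

// Typically initialized once at startup, before `npm start` begins serving requests.
AppDataSource.initialize().catch((err) => {
  console.error("data source initialization failed", err);
  process.exit(1);
});
```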
SCA-Cloud-School-Application | sca cloud school application cloud engineering dockerhub link https hub docker com repository docker aijay001 sca cloud assessment | cloud |
|
IOT-Espressif-Android | user manual iot espressif android source code is used to control esp8266 device by android pad or phone it support local and online more info about how to use the android apk please follow the step 1 install the apk by source code 2 register an espressif account or skip register automatically 3 tap the question mark button on the top right corner it will tell you how to use it step by step 4 tap the setting button at the bottom you could set your own preference and see change logs 5 enjoy yourself v0 9 2 1 esptouch demo version is v0 2 1 the indepedent esptouch demo please refer to https github com espressifapp esptouchforandroid 2 how to use python and log4j to turn on off log 2 1 cd project dir python project dir means the directory of the project where contains project 2 2 python xml file search py generate log4j xml 2 3 vi log4j xml and modify it as you like our log4j support off fatal error warn info debug trace all besides we support ignore which means its level is depended upon its parent s level 2 4 python xml parse py it will change initlogger java according to log4j xml automatically 3 db we use greendao as our orm database more info about greendao please refer to https github com greenrobot greendao and http greendao orm com 4 the framework of the source code the layer of our source code are seperated as ui interface type model action command base open db db gen util log esptouch 4 1 ui ui layer is just the ui it shouldn t contain complex business logical 4 2 interface interface layer contains the interfaces it defines the interface to be implemented 4 3 type type layer contains the property type including device s type device s state device s status device s timer and etc 4 4 model model layer contains various models the differece between type and model is that type just stores the property but model contains business logical 4 5 action action layer contains various actions the action is based upon command action commands business logical 4 6 command command layer contains various commands the command is the basis of action command is pure command without any logical 4 7 base base layer is the basic layer which is the base of the iot espressif apk 4 8 open open layer is used to store open api and source code such as zxing from others 4 9 db db layer is used to make the intermediate layer between db gen and upper layer 4 10 db gen db gen layer is generated by greendao to implement orm 4 11 util util layer contains various utilities classes 4 12 log log layer is the layer about logging log4j is used at present 4 13 esptouch esptouch layer contains the source code of esptouch esptouch layer is an indepent layer at present only ui layer depends upon it and it is just used to display a esptouch demo | server |