| Column | Type | Values |
|---|---|---|
| names | string | lengths 1–98 |
| readmes | string | lengths 8–608k |
| topics | string | lengths 0–442 |
| labels | class label | 6 values |
RITOS
RITOS (Real Internet Time Operating System): an ESP8266 RTOS library for the ESP8266 core for the Arduino IDE.
esp8266 rtos
os
AirBnB_clone
<p align="center"><img src="download.png" alt="ALX logo"></p>

AirBnB clone (console): a project completed during the ALX Software Engineering course. It served as an introduction, or first steps, towards building a full web application. As the title suggests, the endgame of the AirBnB clone was to create a website with some of the core functionalities of the popular [AirBnB website](https://www.airbnb.com) for property rentals. However, being introductory, this project focused only on the backend, leveraging Python and JSON to build a data storage and retrieval system. Python was used to implement a command interpreter that manages objects (creates, modifies, deletes, stores, and retrieves them), while JSON was used for serializing/deserializing created objects for storage/retrieval from a basic file storage system that was also implemented in Python.

**Languages and technologies used:** Python, JSON, Bash.

**Project files** (written in Python, tested on Ubuntu and Windows 11):

| Filename | Description |
|---|---|
| console.py | A command interpreter used to perform object operations: create, update, delete, store, retrieve, etc. |

**How to start/use it:**

1. Clone this repository: `git clone https://...`
2. Run it with Python 3. Windows: `python console.py`; Linux: `./console.py`

It displays a simple prompt and waits for commands:

```
$ ./console.py
ALX Software Engineering command line. Type help or ? to list commands.
(hbnb) help

Documented commands (type help <topic>):
========================================
EOF  help  quit

(hbnb) quit
```

To learn more about commands, just type `help <command>`. Commands available in the shell include `help` (to learn more about commands) and `quit` (to exit the program).

**Authors:** [Donald Ajaps](https://github.com/adobki), [Opeoluwa Muritala](https://github.com/opeoluwa-muritala)
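A console like the one described above is the kind of interpreter Python's standard `cmd` module provides. Below is a minimal sketch in that style; the class and command set are illustrative, not the project's actual `console.py`:

```python
import cmd


class HBNBCommand(cmd.Cmd):
    """Minimal command interpreter in the style of console.py."""

    prompt = "(hbnb) "

    def do_quit(self, line):
        """Quit command to exit the program."""
        return True  # returning True ends cmdloop()

    def do_EOF(self, line):
        """Exit on end-of-file (Ctrl+D)."""
        return True

    def emptyline(self):
        # Do nothing on an empty input line instead of repeating the last command.
        pass
```

Running `HBNBCommand().cmdloop()` starts the interactive loop, printing the `(hbnb)` prompt and dispatching each input line to the matching `do_*` method; `help` and `?` come for free from `cmd.Cmd`.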
server
Web-Development-Bootcamp
<h1 align="center">Web Development Bootcamp</h1>
<p align="center"><a href="https://www.youtube.com/watch?v=fbh9entlimy"><img src="https://img.youtube.com/vi/fbh9entlimy/0.jpg" alt="Web Development Bootcamp in Hindi"></a></p>

Contribution is fun :green_heart: In order to make a hassle-free environment, I implore you all, while contributing, to follow the instructions mentioned below. Happy submissions :slightly_smiling_face:

## Contribution Guidelines

Are we missing any of your favorite features which you think you can add to it? We invite you to contribute to this project and make it better. To start contributing, follow the guidelines below:

1. Fork this repository.
2. Clone your forked copy of the project: `git clone https://github.com/<your-user-name>/web-development-bootcamp.git`
3. Navigate to the project directory :file_folder: : `cd web-development-bootcamp`
4. Add a reference (remote) to the original repository: `git remote add upstream https://github.com/frontendfreaks/web-development-bootcamp.git`
5. Check the remotes for this repository: `git remote -v`
6. Always take a pull from the upstream repository to your master branch to keep it at par with the main project (updated repository): `git pull upstream main`
7. Create a new branch: `git checkout -b <your-branch-name>`
8. Perform your desired changes to the code base.
9. Track your changes :heavy_check_mark: : `git add .`
10. Commit your changes: `git commit -m "<relevant message>"`
11. Push the committed changes in your feature branch to your remote repo: `git push -u origin <your-branch-name>`
12. To create a pull request, click on "Compare & pull request".
13. Add an appropriate title and description to your pull request explaining your changes and efforts.
14. Click on "Create pull request".
15. Voilà :exclamation: You have made a PR to Web Development Bootcamp :boom: Wait for your submission to be accepted and your PR to be merged.

## Introduction of Mentor

<br>
<div align="center"><img alt="" src="https://avatars.githubusercontent.com/u/59874304?s=400&u=a90ce890d0e3d04ef84d5ae09b143dcb2ecc5d1b&v=4" width="200px"></div>
<br>
<div align="center">
<a href="https://www.youtube.com/c/vishalrajput_1"><img src="https://img.shields.io/badge/YouTube-white.svg?style=for-the-badge&logo=YouTube&logoColor=red" alt="YouTube"></a>
<a href="https://www.linkedin.com/in/vishalraj1"><img src="https://img.shields.io/badge/LinkedIn-%230077B5.svg?style=for-the-badge&logo=linkedin&logoColor=white" alt="LinkedIn"></a>
<a href="https://twitter.com/vishalraj_1"><img src="https://img.shields.io/badge/Twitter-white.svg?style=for-the-badge&logo=twitter&logoColor=3a2f2f" alt="Twitter"></a>
<a href="mailto:rajputvishal33786@gmail.com"><img alt="Gmail" src="https://img.shields.io/badge/Gmail-D14836?style=for-the-badge&logo=gmail&logoColor=white"></a>
</div>
<br><br>

Hello everyone! I'm Vishal Rajput, a frontend developer from India. I have 3 years of experience in frontend development. Over the past two years I have worked with 6 startups and received 20+ internship and job offers. I'm here to share my experience and guide you on how to learn and practice frontend development while building amazing projects. Join our community and let's learn and grow together. You can connect with me on my LinkedIn, Twitter, YouTube, and GitHub profiles.

## Our Valuable Contributors

<div align="center"><a href="https://github.com/frontendfreaks/web-development-bootcamp/graphs/contributors"><img src="https://contributors-img.web.app/image?repo=frontendfreaks/web-development-bootcamp"></a></div>
hacktoberfest css frontend-development html html-css-javascript javascript
front_end
paloma
![Logo](assets/paloma-black.png)

[![Continuous Integration](https://github.com/palomachain/paloma/actions/workflows/ci-test.yml/badge.svg?branch=master)](https://github.com/palomachain/paloma/actions/workflows/ci-test.yml)
[![Project Status: WIP — Initial development is in progress, but there has not yet been a stable, usable release suitable for the public.](https://img.shields.io/badge/repo%20status-WIP-yellow.svg?style=flat-square)](https://www.repostatus.org/#wip)
![GitHub go.mod Go version](https://img.shields.io/github/go-mod/go-version/palomachain/paloma?logo=paloma)
[![License: Apache-2.0](https://img.shields.io/github/license/umee-network/umee.svg?style=flat-square)](https://github.com/palomachain/paloma/blob/main/LICENSE)
![Lines of Code](https://img.shields.io/tokei/lines/github/palomachain/paloma)

A Golang implementation of the Paloma Chain, a decentralized automation network for smart contracts deployed in the Cosmos, EVM, Solana, and Polkadot networks.

Paloma is the fastest secure crosschain communications blockchain for crosschain software engineers who want simultaneous control of multiple cross-chain-deployed smart contracts. Paloma offers decentralized and consensus-driven message delivery, fast state awareness, low-cost state computation, and a powerful attestation system. The Paloma blockchain enables scalable crosschain smart contract execution with any data source.

## Table of Contents
- [Talk to us](#talk-to-us)
- [International community](#international-community)
- [Releases](#releases)
- [Active networks](#active-networks)
- [Join an active network](#join-an-active-network)
- [Contributing](CONTRIBUTING.md)

## Talk to us
We have active, helpful communities on Twitter and Telegram.
- [Twitter](https://twitter.com/paloma_chain)
- [Telegram](https://t.me/palomachain)
- [Discord](https://discord.gg/htuvgxvh5n)
- [Forum](https://forum.palomachain.com)

## International community
- [Bengali README](docs/welcome-bengali.md)
- [Chinese README](docs/welcome-chinese.md)
- [Indonesian README](docs/welcome-indonesian.md)
- [Japanese README](docs/welcome-japanese.md)
- [Persian README](docs/welcome-persian.md)
- [Polish README](docs/welcome-polish.md)
- [Portuguese README](docs/welcome-portuguese.md)
- [Romanian README](docs/welcome-romanian.md)
- [Russian README](docs/welcome-russian.md)
- [Spanish README](docs/welcome-spanish.md)
- [Thai README](docs/welcome-thai.md)
- [Turkish README](docs/welcome-turkish.md)
- [Ukrainian README](docs/welcome-ukrainian.md)
- [Vietnamese README](docs/welcome-vietnamese.md)

## Releases
See the [release procedure](CONTRIBUTING.md#release-procedure) for more information about the release model.

## Active networks
- Testnet: paloma-testnet-15 (January 20, 2023)
- Mainnet: messenger (February 2, 2023)

## Join an active network

Note: some users have seen errors caused by glibc version differences with the downloaded binaries. This is caused by a difference between the libraries of the host that built the binary and the host running the binary. If you experience these errors, please pull down the code and build it rather than downloading the prebuilt binary.

Install the correct version of libwasm. The currently required version is 1.3.0. If you're upgrading from a prior version, it is recommended to remove the cache to avoid errors. If you already have palomad running, you will need to stop it before doing these steps.

```shell
wget https://github.com/CosmWasm/wasmvm/releases/download/v1.3.0/libwasmvm.x86_64.so
sudo mv libwasmvm.x86_64.so /usr/lib/
rm -r ~/.paloma/data/wasm/cache
```

To get the latest prebuilt palomad binary:

```shell
wget -O - https://github.com/palomachain/paloma/releases/download/v1.9.2/paloma_Linux_x86_64.tar.gz | \
  sudo tar -C /usr/local/bin -xvzf - palomad
sudo chmod +x /usr/local/bin/palomad
```

To build palomad from the latest release:

```shell
git clone https://github.com/palomachain/paloma.git
cd paloma
git checkout v1.9.2
make build
sudo mv build/palomad /usr/local/bin/palomad
```

If you're upgrading to the most recent version, you will need to stop palomad before copying the new binary into place.

### Connecting to an existing network

Download and install the latest release of palomad. Initialize the configuration (this will populate a `~/.paloma` directory):

```shell
MONIKER="$(hostname)"
palomad init "$MONIKER"
```

For testnet: `CHAIN_ID="paloma-testnet-15"`. For mainnet: `CHAIN_ID="messenger"`.

Copy the configs of the network we wish to connect to.

Testnet:

```shell
wget -O ~/.paloma/config/genesis.json https://raw.githubusercontent.com/palomachain/testnet/master/paloma-testnet-15/genesis.json
wget -O ~/.paloma/config/addrbook.json https://raw.githubusercontent.com/palomachain/testnet/master/paloma-testnet-15/addrbook.json
```

Mainnet:

```shell
wget -O ~/.paloma/config/genesis.json https://raw.githubusercontent.com/palomachain/mainnet/master/messenger/genesis.json
wget -O ~/.paloma/config/addrbook.json https://raw.githubusercontent.com/palomachain/mainnet/master/messenger/addrbook.json
```

Next, you can generate a new set of keys on the new machine, or reuse an existing key:

```shell
VALIDATOR="<choose a name>"
palomad keys add "$VALIDATOR"
# or, if you have a mnemonic already, recover the keys with:
palomad keys add "$VALIDATOR" --recover
```

Head over to https://faucet.palomaswap.com and get some funds. Verify the new funds have been deposited:

```shell
palomad query bank balances --node tcp://testnet.palomaswap.com:26656 "$ADDRESS"
```

And start the node:

```shell
palomad start
```

If desired, we can stake our funds and create a validator:

```shell
MONIKER="$(hostname)"
VALIDATOR="$(palomad keys list --list-names | head -n1)"
STAKE_AMOUNT=1000000ugrain
PUBKEY="$(palomad tendermint show-validator)"
palomad tx staking create-validator \
  --fees=1000000ugrain \
  --from="$VALIDATOR" \
  --amount="$STAKE_AMOUNT" \
  --pubkey="$PUBKEY" \
  --moniker="$MONIKER" \
  --website=https://www.example.com \
  --details="<enter a description>" \
  --chain-id="$CHAIN_ID" \
  --commission-rate=0.1 \
  --commission-max-rate=0.2 \
  --commission-max-change-rate=0.05 \
  --min-self-delegation=100 \
  --yes \
  --broadcast-mode=block
```

You may receive an `account sequence mismatch` error; if so, you will need to wait until your local paloma catches up with the rest of the chain.

### Running with systemd

First configure the service:

```shell
cat <<EOT > /etc/systemd/system/palomad.service
[Unit]
Description=Paloma Blockchain
After=network.target
ConditionPathExists=/usr/local/bin/palomad

[Service]
Type=simple
LimitNOFILE=65535
Restart=always
RestartSec=5
WorkingDirectory=~
ExecStartPre=
ExecStart=/usr/local/bin/palomad start
Environment=PIGEON_HEALTHCHECK_PORT=5757
ExecReload=

[Install]
WantedBy=multi-user.target
EOT
```

Then reload the systemd configuration and start the service:

```shell
service palomad start
```

Check that it's running successfully:

```shell
service palomad status
```

Or watch the logs:

```shell
journalctl -u palomad.service -f
```

### Uploading a local contract

```shell
CONTRACT=contract.wasm
VALIDATOR="$(palomad keys list --list-names | head -n1)"
palomad tx wasm store "$CONTRACT" --from "$VALIDATOR" --broadcast-mode block -y --gas auto --fees 3000000000ugrain
```
blockchain ethereum cosmos-sdk solana binance-smart-chain arbitrum cometbft kava optimism
blockchain
AndroidBaseProjectKotlin
# AndroidBaseProjectKotlin

A base project to fast-track mobile development with the MVVM architecture. It contains a basic setup for network operations, DI, and Architecture Components. The project also includes various utility classes and extensions.

## Code formatting

Code formatting is done with the Gradle Spotless plugin. Other settings for the plugin can be configured in [spotless.gradle](spotless.gradle). To format code, run:

```
./gradlew spotlessApply
```

A sample usage of the classes from the base project can be found [here](https://github.com/cottacush/AndroidBaseProjectKotlin/tree/master/app/src/main/java/com/cottacush/android/androidbaseprojectkt/sample).

## License

Copyright (c) 2019 CottaCush Limited.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
architecture-components dependency-injection androidx template mvvm-architecture android room
front_end
mercado-livre-crawler
# Mercado Livre Crawler

This is a beginner-level project in the data engineering field. The goal is to extract data from product offers on the [Mercado Livre](https://www.mercadolivre.com.br) website and store them in a [PostgreSQL](https://www.postgresql.org) database under a single schema.

The [docker-version branch](https://github.com/lucasboscatti/mercado-livre-crawler/tree/docker-version) implements the code to run locally in a [Docker](https://www.docker.com) container, with the orchestration of the crawler and the database handled by [Docker Compose](https://docs.docker.com/compose). The main branch contains the code to run and host on the [Heroku](https://www.heroku.com) platform; the database is also hosted on this platform, with a limit of 10,000 records.

## Article

I wrote a tutorial on [Medium](https://medium.com/@lucasboscatti/agendando-a-execu%C3%A7%C3%A3o-de-um-crawler-no-heroku-de-forma-gratuita-337561c87d92) with a step-by-step guide to deploying a Scrapy crawler on Heroku.

## Data analysis

In [this repository](https://github.com/lucasboscatti/mercado-livre-data-analysis) I cleaned and analyzed the data collected by this crawler.
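A Scrapy crawler like this one typically hands each scraped offer to an item pipeline before it is written to PostgreSQL. The sketch below shows what such a pipeline might look like; the class, item fields, and price format are illustrative assumptions, not this project's actual code, and the database write is omitted so the cleaning logic stands alone:

```python
class OfferCleaningPipeline:
    """Normalizes raw offer data before it is written to the database.

    In a real Scrapy project this class would be registered in
    ITEM_PIPELINES and process_item would be called once per scraped item.
    """

    def process_item(self, item, spider=None):
        # Brazilian price strings look like "R$ 1.234,56"; convert to float.
        raw_price = item["price"]
        cleaned = raw_price.replace("R$", "").strip()
        cleaned = cleaned.replace(".", "").replace(",", ".")
        item["price"] = float(cleaned)
        # Collapse stray whitespace in titles.
        item["title"] = " ".join(item["title"].split())
        return item
```

A SQLAlchemy-backed pipeline would then take the cleaned item and insert it into the offers table inside `process_item`.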
postgresql scrapy docker docker-compose crawler mercado-livre python heroku sqlalchemy
server
frontend
## Overview

The frontend is the web portal for users of Nano. It hosts HTML5 web pages based on the REST API of the Core module, and also provides authentication and user management for the web portal. You can modify pages in the `resource` path to meet your needs, or even write a whole new module using the REST API to replace this one.

- Binary release: found [here](https://github.com/project-nano/releases/releases)
- See more detail in the [quick guide](https://nanocloud.readthedocs.io/projects/guide/en/latest/concept.html)
- Official site: https://nanos.cloud/en-us/
- REST API: https://nanoen.docs.apiary.io

## Build

```
git clone https://github.com/project-nano/frontend.git
cd frontend
go build -o frontend -i -ldflags="-w -s"
```

## Command line

All Nano modules provide a command-line interface and are called like `<module name> [start | stop | status | halt]`:

- `start`: starts the module service; outputs an error message when starting fails, or version information otherwise.
- `stop`: stops the service gracefully, releases allocated resources, and notifies any related modules.
- `status`: checks whether the module is running.
- `halt`: terminates the service immediately.

You can call the frontend module with either an absolute or a relative path:

```
cd /opt/nano/frontend
./frontend start
```

or

```
/opt/nano/frontend/frontend start
```

Please check the log file `log/frontend.log` when encountering errors.

## Configure

Configuration is stored in the file `config/frontend.cfg`.

| Parameter | Description |
|---|---|
| address | listening address of the web portal, IPv4 format like 192.168.100 |
| port | listening port of the web portal, as an integer; 5870 by default |
| service_host | listening address of the backend service (the Core module), IPv4 format like 192.168.100 |
| service_port | listening port of the backend service (the Core module), as an integer; 5850 by default |
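The parameter table above can be read with any INI-style parser. A minimal sketch in Python follows; the key names come from the table, but the exact on-disk layout of `config/frontend.cfg` (section name, sample values) is an assumption for illustration:

```python
import configparser
import io

# Hypothetical contents of config/frontend.cfg, following the parameter table.
SAMPLE_CFG = """
[default]
address = 0.0.0.0
port = 5870
service_host = 127.0.0.1
service_port = 5850
"""


def load_frontend_config(text):
    """Parse the portal and backend-service endpoints from an INI-style config."""
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(text))
    section = parser["default"]
    return {
        "address": section.get("address"),
        "port": section.getint("port", fallback=5870),
        "service_host": section.get("service_host"),
        "service_port": section.getint("service_port", fallback=5850),
    }
```

The `fallback` values mirror the defaults in the table (5870 for the portal, 5850 for the Core module), so a config file that omits them still yields a complete endpoint map.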
front_end
Advanced-Natural-Language-Processing-with-TensorFlow-2
# Advanced Natural Language Processing with TensorFlow 2

2019 was a watershed moment for NLP, with transformer- and attention-based networks. This was as transformational for NLP as AlexNet was for computer vision in 2012. Tremendous advances in natural language processing have been made in the last couple of years, and we are now moving from research labs into applications. These advances span the domains of natural language understanding (NLU), natural language generation (NLG), and natural language interaction (NLI). With so much research in all of these domains, it can be a daunting task to understand the exciting developments in the various areas inside NLP.

This book is focused on cutting-edge applications in the fields of natural language processing, language generation, and dialogue systems. It provides the concepts of pre-processing text using techniques such as tokenization, parts-of-speech tagging, lemmatization, and named entity recognition (NER), using popular libraries such as Stanford NLP and spaCy. Taking a very practical, application-focused perspective, the book covers key emerging areas such as generating text for use in sentence completion and text summarization, bridging images and text by generating captions for images and answering common-sense questions about them, and managing the dialogue aspects of chatbots. It covers one of the most important reasons behind recent advances in NLP: transfer learning and fine-tuning. Unlabeled textual data is easily available; however, labeling this data is costly. This book covers practical techniques that can simplify the labeling of textual data.

By the end of the book, the reader will have an advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems. The book covers encoder-decoder networks, LSTMs and BiLSTMs, CRFs, BERT, Transformers, and other key technology pieces, using TensorFlow. Readers will have working code that can be adapted to their own use cases. We hope that readers will even be able to do novel state-of-the-art research using the skills gained.

The book uses TensorFlow 2.3 and Keras extensively. Several advanced TensorFlow techniques are also covered, such as:

- custom learning rate schedules
- custom loss functions
- custom layers
- custom training loops
- subword encoding for embeddings
- the TensorFlow Datasets package for downloading and managing datasets
- `tf.data.Dataset` usage and performance optimization
- model checkpointing

The book is organized in the following chapters, with links to code:

1. Essentials of NLP ([chapter1-nlp-essentials](chapter1-nlp-essentials))
2. Understanding Sentiment in Natural Language with BiLSTMs ([chapter2-nlu-sentiment-analysis-bilstm](chapter2-nlu-sentiment-analysis-bilstm))
3. Named Entity Recognition (NER) with BiLSTMs, CRFs, and Viterbi Decoding ([chapter3-ner-with-lstm-crf](chapter3-ner-with-lstm-crf))
4. Transfer Learning with BERT ([chapter4-xfer-learning-bert](chapter4-xfer-learning-bert))
5. Generating Text with RNNs and GPT-2 ([chapter5-nlg-with-transformer-gpt](chapter5-nlg-with-transformer-gpt))
6. Text Summarization with seq2seq Attention and Transformer Networks ([chapter6-textsum-seq2seq-attention-transformer](chapter6-textsum-seq2seq-attention-transformer))
7. Multi-Modal Networks and Image Captioning with ResNets and Transformer ([chapter7-image-cap-multimodal-transformers](chapter7-image-cap-multimodal-transformers))
8. Weakly Supervised Learning for Classification with Snorkel ([chapter8-weak-supervision-snorkel](chapter8-weak-supervision-snorkel))
9. Building Conversational AI Applications with Deep Learning ([chapter9-conversational-agents](chapter9-conversational-agents))

Seminal papers relevant to each chapter, or referenced in it, can also be found in the appropriate chapter directory.

## Installation

[INSTALL.md](INSTALL.md) contains all the installation instructions for running the code. The book has been written and tested using Ubuntu 18.04 LTS with an NVIDIA RTX 3070 GPU, running Python 3.7 and TensorFlow 2.3, which was the latest version as of writing.

## Author

Ashish Bansal — [linkedin.com/in/bansalashish](https://www.linkedin.com/in/bansalashish)
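Among the advanced techniques listed above is the custom learning-rate schedule. A common example of such a schedule is the warmup-then-decay rule from the original Transformer paper, sketched here in plain Python (the function is a generic illustration, not code from the book; in TensorFlow this would subclass `tf.keras.optimizers.schedules.LearningRateSchedule`):

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """Learning rate from 'Attention Is All You Need':

        lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)

    The rate rises linearly for `warmup_steps` steps, then decays
    proportionally to 1/sqrt(step).
    """
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```

The schedule peaks exactly at `warmup_steps`, where the two terms inside `min` are equal; before that point the linear-warmup term is smaller, after it the inverse-square-root term takes over.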
ai
academia-os
<p align="center"><img src="src/favicon.png" alt="AcademiaOS logo" width="50"></p>
<h1 align="center">AcademiaOS</h1>

Welcome to AcademiaOS, your one-stop solution for academic information retrieval and reasoning. We've built this on a robust large-language-model platform, equipped with a bouquet of features dedicated to providing the best assistance for researchers possible.

<p align="center"><img src="public/overview.gif" alt="Demo" width="400"></p>

[Live demo](https://academia-os.org) · [Join the Slack community](https://join.slack.com/t/academiaos/shared_invite/zt-23730lsp0-qlkv-0bs3hgmy2fgtc-hnq)

## Features

- **Find academic papers**: building on the Semantic Scholar corpus and OpenAI embeddings, AcademiaOS finds and ranks papers relevant to your search queries.
- **Upload PDFs**: if you have curated papers or other qualitative documents, such as interview transcripts, as PDFs, you can upload them for downstream tasks. Text PDFs are handled in-browser, while scanned PDFs are OCR'd using the Adobe PDF Extract API.
- **Mass information extraction**: structurally extract information from papers at scale, such as a paper's sentiment towards your thesis or the count of study participants.
- **Automated literature review**: navigate with a clean and intuitive interface.
- **Coding of qualitative literature**: let AI code your interviews, social media posts, or other qualitative literature.
- **Automated theory construction**: get a theoretical model explaining your qualitative data in just a few steps.

## Tech stack

- ReactJS
- AntDesign component library
- LangChainJS — composability with large language models
- semanticscholarjs — interaction with Semantic Scholar

## Getting started

To get started with AcademiaOS, you require [Node.js](https://nodejs.org/en/download) installed on your machine.

1. Use `git clone` to clone this repository.
2. Run `npm install`.

**Development mode**: `npm start` initiates the application in development mode. Use http://localhost:3000 to view the application in your browser. The application reloads automatically if any edits are made; any lint errors are visible in the console. `npm test` initiates the test runner in interactive watch mode; visit the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for further information.

**Production build**: `npm run build` compiles the application for production into the `build` folder. It efficiently bundles React in production mode and optimizes the build to deliver optimum performance; the build is minified and the filenames include hashes. Your application is then ready for deployment; visit the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for an in-depth understanding.

## Contributing

We eagerly look forward to your valuable contributions to the AcademiaOS project. Feel free to brainstorm ideas, recommend suggestions, or report bugs. You're always invited to open an issue or submit a pull request.

## License

This endeavor is under the aegis of an open-source license. Refer to the [LICENSE](LICENSE) file for detailed information.

Crafted with passion and commitment by Thomas Übellacker. Happy coding!
artificial-intelligence informationretrieval llms largelanguagemodels
ai
finanser-backend
Finanser backend: development of the backend for Finanser, a Brazilian app focused on financial health and personal finance management.
server
Custom-Crops
![2022-08-15 02:51:28](https://user-images.githubusercontent.com/70987828/184551011-7da1dca5-faab-473c-b6a5-d2489b135ca9.png)

# CustomCrops

A Stardew Valley-like farming system.

## How to buy

- https://afdian.net/@xiaomomi
- https://polymart.org/resource/customcrops.2625

## How to compile

Run `./gradlew build` and get the jar in the `target` folder.

## API guide

Access:

```java
public class YourClass {

    private CustomCropsAPI api;

    public YourClass() {
        api = CustomCropsAPI.getInstance();
    }

    public void yourMethod() {
        api.xxx();
    }
}
```

Events:

CropBreakEvent, CropInteractEvent, CropPlantEvent, FertilizerUseEvent, GreenhouseGlassBreakEvent, GreenhouseGlassPlaceEvent, PotBreakEvent, PotInfoEvent, PotInteractEvent, PotPlaceEvent, PotWaterEvent, ScarecrowBreakEvent, ScarecrowPlaceEvent, SprinklerFillEvent, SprinklerPlaceEvent, SprinklerInteractEvent, SprinklerBreakEvent, SeasonChangeEvent
os
PointLLM
br p align center h1 align center img src assets icon png align center width 6 5 strong pointllm empowering large language models to understand point clouds strong h1 p align center a href https runsenxu com target blank runsen xu a emsp a href https guanfang12 github io target blank xiaolong wang a emsp a href https tai wang github io target blank tai wang a emsp a href http yilunchen com about target blank yilun chen a emsp a href https oceanpang github io target blank jiangmiao pang a emsp a href http dahua site target blank dahua lin a emsp br the chinese university of hong kong emsp shanghai ai laboratory emsp zhejiang university p p p align center a href http arxiv org abs 2308 16911 target blank img src https img shields io badge arxiv 2308 16911 blue a a href https arxiv org pdf 2308 16911 pdf target blank img src https img shields io badge paper blue a a href https runsenxu com projects pointllm target blank img src https img shields io badge project x1f680 blue a a href http 101 230 144 196 target blank img src https img shields io badge demo x1f917 blue a a href target blank img src https visitor badge laobi icu badge page id openrobotlab pointllm left color gray right color blue a a href https openxlab org cn apps detail openxlab app pointllm target blank img src https cdn static openxlab org cn app center openxlab app svg a p about teaser assets teaser jpg div style text align center img src assets teaser jpg alt dialogue teaser width 100 div we introduce b pointllm a multi modal large language model capable of understanding colored point clouds of objects b it perceives object types geometric structures and appearance without concerns for ambiguous depth occlusion or viewpoint dependency b we collect a novel dataset comprising 660k simple and 70k complex point text instruction pairs b to enable a two stage training strategy to rigorously evaluate our model s perceptual abilities and its generalization capabilities b we establish two benchmarks 
generative 3d object classification and 3d object captioning assessed through three different evaluation methods b news 2023 10 18 we release our instruction following data including both the simple description and complex instructions download here https huggingface co datasets runsenxu pointllm 2023 09 26 we release the inferencing codes with checkpoints as well as the objaverse colored point cloud files we use you can chat with pointllm with your own machines 2023 08 31 we release the paper http arxiv org abs 2308 16911 of pointllm and an online gradio demo http 101 230 144 196 try it x1f389 contents with emoji contents online demo online demo dialogue examples dialogue examples overview overview instruction following data instruction following data inferencing inferencing todo list todo list citation citation license license related work related work acknowledgements acknowledgements online demo b pointllm is online try it at http 101 230 144 196 http 101 230 144 196 or at openxlab pointllm https openxlab org cn apps detail openxlab app pointllm b you can chat with pointllm about the models of the objaverse https objaverse allenai org dataset or about your own point clouds please do not hesitate to tell us if you have any feedback dialogue examples dialogue 1 dialogue 2 dialogue 3 dialogue 3 img width 100 src assets dialogue 1 jpg img width 100 src assets dialogue 2 jpg img width 100 src assets dialogue 3 jpg img width 100 src assets dialogue 1 jpg overview model p align center img src assets model jpg align center width 100 p the point encoder extracts features from the input point cloud and projects them to the latent space of the llm backbone the llm backbone processes sequences of point tokens and text tokens and generates the predicted tokens as the output experiment results qualitative comparisons with 2d models p align center img src assets qualitative comparisons jpg align center width 100 p instruction following data our instruction following data 
including both the simple description and complex instructions can be downloaded here https huggingface co datasets runsenxu pointllm the simple description data has 660k samples and the complex instructions have 70k samples both training data are based on the objaverse dataset the complex instructions are generated with gpt 4 inferencing installation we test our codes under the following environment ubuntu 20 04 nvidia driver 515 65 01 cuda 11 7 python 3 10 13 pytorch 2 0 1 transformers 4 28 0 dev transformers git cae78c46 to start 1 clone this repository bash git clone git github com openrobotlab pointllm git cd pointllm 2 install packages bash conda create n pointllm python 3 10 y conda activate pointllm pip install upgrade pip enable pep 660 support pip install e data preparation 1 download the two compressed files of 660k objaverse colored point clouds here https huggingface co datasets runsenxu pointllm tree main they require about 77gb of storage space 2 run the following command to merge the two files into one and uncompress it this will produce a folder named 8192 npy containing 660k point cloud files named objaverse id 8192 npy each file is a numpy array with dimensions 8192 6 bash cat objaverse 660k 8192 npy split a objaverse 660k 8192 npy tar gz tar xvf objaverse 660k 8192 npy tar gz 3 in pointllm folder create a soft link to the uncompressed file in the directory bash cd pointllm ln s path to 8192 npy objaverse data chatting 1 the model checkpoints are available at pointllm 7b v1 1 https huggingface co runsenxu pointllm 7b v1 1 tree main and pointllm 13b v1 1 https huggingface co runsenxu pointllm 13b v1 1 tree main 2 run the following command to launch a chatbot using the torch float32 data type for chatting about 3d models of objaverse the model checkpoints will be downloaded automatically you can also manually download the model checkpoints and specify their paths bash cd pointllm python pointllm eval pointllm chat py model path runsenxu pointllm 7b 
v1 1 data path objaverse data torch dtype float32 3 you can also easily modify the codes for using point clouds other than those from objaverse as long as the point clouds input to the model have dimensions n 6 where the first three dimensions are xyz and the last three dimensions are rgb you may sample the point clouds to have 8192 points as our model is trained on such point clouds 4 the following table shows gpu requirements for different models and data types we recommend using torch bfloat16 if applicable which is used in the experiments in our paper model data type gpu memory pointllm 7b torch float16 14gb pointllm 7b torch float32 28gb pointllm 13b torch float16 26gb pointllm 13b torch float32 52gb todo list x add inferencing codes with checkpoints x release instruction following data add training codes add evaluation codes add data generation codes citation if you find our work helpful please cite bibtex article xu2023pointllm title pointllm empowering large language models to understand point clouds author xu runsen and wang xiaolong and wang tai and chen yilun and pang jiangmiao and lin dahua journal arxiv preprint arxiv 2308 16911 year 2023 license a rel license href http creativecommons org licenses by nc sa 4 0 img alt creative commons license style border width 0 src https i creativecommons org l by nc sa 4 0 80x15 png a br this work is under the a rel license href http creativecommons org licenses by nc sa 4 0 creative commons attribution noncommercial sharealike 4 0 international license a related work together let s make llm for 3d great point bind point llm https arxiv org abs 2309 00615 aligns point clouds with image bind and leverages imagebind llm to reason multi modality input without 3d instruction data training 3d llm https arxiv org abs 2307 12981 employs 2d foundation models to encode multi view images of 3d point clouds acknowledgements llava https github com haotian liu llava our codebase is built upon llava vicuna https github com lm 
sys fastchat we use the vicuna 7b and vicuna 13b checkpoints objaverse https objaverse allenai org we use models of the objaverse dataset for training and evaluation cap3d https github com crockwell cap3d we use the cap3d captioning data for our data generation ulip 2 https github com salesforce ulip we use ulip 2 for pre training our point cloud encoder
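the chatting notes above say point clouds fed to the model must have dimensions n 6 with xyz in the first three columns and rgb in the last three and should be sampled to 8192 points a minimal sketch of that sampling step assuming numpy and a synthetic cloud standing in for a real objaverse 8192 npy file

```python
# sketch: preparing an arbitrary point cloud for pointllm-style input.
# assumption: the model expects an (n, 6) array, xyz in columns 0-2 and
# rgb in columns 3-5, sampled to 8192 points (as the readme describes);
# the synthetic cloud below stands in for a real objaverse .npy file.
import numpy as np

def sample_point_cloud(points: np.ndarray, n_points: int = 8192) -> np.ndarray:
    """randomly sample (or pad by repetition) a point cloud to n_points rows."""
    assert points.ndim == 2 and points.shape[1] == 6, "expected (n, 6) xyz+rgb"
    n = points.shape[0]
    replace = n < n_points  # allow repeated picks only if the cloud is small
    idx = np.random.choice(n, n_points, replace=replace)
    return points[idx]

# synthetic stand-in: 20000 random points, xyz in [-1, 1], rgb in [0, 1]
cloud = np.concatenate(
    [np.random.uniform(-1, 1, (20000, 3)), np.random.uniform(0, 1, (20000, 3))],
    axis=1,
)
sampled = sample_point_cloud(cloud)
print(sampled.shape)  # (8192, 6)
```

the same helper also upsamples clouds with fewer than 8192 points by sampling with replacement which is one common convention the readme itself does not specify how small clouds should be handled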
3d chatbot foundation-models gpt-4 large-language-models llama multimodal objaverse point-cloud representation-learning vision-and-language pointllm
ai
Verilog_code
verilog code exercises from digital design embedded systems classes instructions each folder committed contains a design module and a testbench module these files can be run on https www edaplayground com with the following settings image https user images githubusercontent com 55563360 183222995 d4ea1a90 7ca1 40dd aab1 bc6be8f93e56 png
os
tf-transformers
copyright 2021 the legacyai team all rights reserved licensed under the apache license version 2 0 the license you may not use this file except in compliance with the license you may obtain a copy of the license at http www apache org licenses license 2 0 unless required by applicable law or agreed to in writing software distributed under the license is distributed on an as is basis without warranties or conditions of any kind either express or implied see the license for the specific language governing permissions and limitations under the license p align center br img src src logo2 png width 400 br p p align center a href https github com legacyai tf transformers actions workflow tests img alt tests src https github com legacyai tf transformers workflows tests badge svg a a href https codecov io gh legacyai tf transformers img alt coverage src https codecov io gh legacyai tf transformers branch main graph badge svg token 9tz10g9gl6 a a href https opensource org licenses apache 2 0 img alt license src https img shields io badge license apache 202 0 blue svg a p h3 align center p tf transformers faster and easier state of the art nlp in tensorflow 2 0 h3 tf transformers is designed to harness the full power of tensorflow 2 to make it much faster and simpler compared to existing tensorflow based nlp architectures on average there is an 80 improvement over existing tensorflow based libraries on text generation and other tasks you can find more details in the benchmarks section almost all nlp downstream tasks can be integrated into transformer based models with much ease all the models can be trained using model fit which supports gpu multi gpu tpu unique features faster autoregressive decoding using tensorflow2 faster than pytorch in most experiments v100 gpu 80 faster compared to existing tf based libraries relative difference refer benchmark code tests notebooks benchmarks complete tflite support for bert roberta t5 albert mt5 for all down stream tasks
except text generation faster sentence piece alignment no more lcs overhead variable batch text generation for encoder only models like gpt2 no more hassle of writing long codes for tfrecords minimal and simple off the shelf support for auto batching tf data dataset or tf ragged tensors pass dictionary outputs directly to loss functions inside tf keras model fit using model compile2 refer examples src tf transformers notebooks tutorials or blog https legacyai org medium com tf transformers f7722536ba61 multiple mask modes like causal user defined prefix by changing one argument refer examples src tf transformers notebooks tutorials or blog https legacyai org medium com tf transformers f7722536ba61 performance benchmarks evaluating performance benchmarks is trickier i evaluated tf transformers primarily on text generation tasks with gpt2 small and t5 small with the amazing huggingface as it is the ready to go library for nlp right now text generation tasks require efficient caching to make use of past key and value pairs on average tf transformers is 80 faster than the huggingface tensorflow implementation and in most cases it is comparable or faster than pytorch 1 gpt2 benchmark the evaluation is based on an average of 5 runs with different batch size beams sequence length etc so there is quite a large combination when it comes to beam and top k 8 decoding the figures are 10 randomly taken samples but you can see the full code and figures in the repo gpt2 greedy p align left br img src tests notebooks benchmarks gpt2 gpt2 greedy sample png width 900 br p gpt2 beam p align left br img src tests notebooks benchmarks gpt2 gpt2 beam sample png width 900 br p gpt2 top k top p p align left br img src tests notebooks benchmarks gpt2 gpt2 topk top p sample png width 900 br p gpt2 greedy histogram p align left br img src tests notebooks benchmarks gpt2 greedy png width 500 br p codes to reproduce gpt2 benchmark experiments tests notebooks benchmarks gpt2 codes to reproduce t5
benchmark experiments tests notebooks benchmarks t5 quickstart i am providing some basic tutorials here which cover the basics of tf transformers and how we can use it for other downstream tasks almost all tutorials have the following structure introduction about the problem prepare training data load model and associated downstream tasks define optimizer loss train using keras and customtrainer evaluate using dev data the production section defines how we can use tf saved model in production pipelines production ready tutorials start by converting huggingface models base models only to tf transformers models here are a few examples jupyter notebooks basics of tf transformers src tf transformers notebooks tutorials basics ipynb convert huggingface models bert albert roberta gpt2 t5 mt5 to tf transformers checkpoints src tf transformers notebooks conversion scripts named entity recognition albert tflite joint loss pipeline src tf transformers notebooks tutorials ner albert ipynb squad v1 1 roberta tflite pipeline src tf transformers notebooks tutorials squad roberta ipynb roberta2roberta encoder decoder xsum summarisation src tf transformers notebooks tutorials seq2seq summarization ipynb squad v1 1 t5 text generation src tf transformers notebooks tutorials t5 squad as generation ipynb squad v1 1 t5 span selection tflite pipeline src tf transformers notebooks tutorials t5 squad span selection ipynb albert glue joint loss glue score 81 0 on 14 m parameter 5 layers src tf transformers notebooks tutorials joint loss experiments albert squad joint loss em f1 78 1 87 0 on 14 m parameter 5 layers src tf transformers notebooks tutorials joint loss experiments squad ipynb squad v1 1 gpt2 causal masking em f1 37 36 50 20 coming soon squad v1 1 gpt2 prefix masking em f1 47 52 63 20 coming soon bert sts b regression coming soon bert cola text classification tflite pipeline src tf transformers notebooks tutorials why should i use tf transformers 1 use state of the art models in production
with less than 10 lines of code high performance models better than all official tensorflow based models very simple classes for all downstream tasks complete tflite support for all tasks except text generation 2 make industry based experience available to students and the community with clear tutorials 3 train any model on gpu multi gpu tpu with the amazing tf keras model fit train state of the art models in a few lines of code all models are completely serializable 4 customize any models or pipelines with minimal or no code change do we really need to distill joint loss is all we need 1 glue we have conducted a few experiments to squeeze the power of albert base models the concept is applicable to any models and in tf transformers it is out of the box the idea is to minimize the loss for the specified task in each layer of your model and check predictions at each layer as per our experiments we are able to get the best smaller model thanks to albert and from layer 4 onwards we beat all the smaller models in the glue benchmark by layer 6 we got a glue score of 81 0 which is 4 points ahead of distillbert with a glue score of 77 and mobilebert glue score of 78 the albert model has 14 million parameters and by using layer 6 we were able to speed up the computation by 50 the concept is applicable to all the models codes to reproduce glue joint loss experiments src tf transformers notebooks tutorials joint loss experiments glue p align left br benchmark results img src src tf transformers notebooks tutorials joint loss experiments glue glue benchmark png width 700 br p glue score not including wnli 2 squad v1 1 we have trained squad v1 1 with joint loss at layer 6 we were able to achieve the same performance as distillbert em 78 1 and f1 86 2 but slightly worse than mobilebert benchmark results p align left br img src src tf transformers notebooks tutorials joint loss experiments squad benchmark png width 200 br p codes to reproduce squad v1 1 joint loss experiments src tf transformers notebooks
tutorials joint loss experiments squad ipynb note we have a new model in pipeline installation with pip this repository is tested on python 3 7 and tensorflow 2 4 0 recommended to use a virtual environment https docs python org 3 library venv html assuming tensorflow 2 0 is installed bash pip install tf transformers from github assuming poetry is installed if not pip install poetry git clone https github com legacyai tf transformers git cd tf transformers poetry install pipeline pipeline in tf transformers is different from huggingface here a pipeline for specific tasks expects a model and tokenizer fn because in an ideal scenario no one will be able to understand what kind of pre processing we want to do to our inputs please refer to the above tutorial notebooks for examples token classification pipeline ner python from tf transformers pipeline import token classification pipeline def tokenizer fn feature feature tokenized text tokenizer tokenize result result input ids tokenizer convert tokens to ids tokenizer cls token feature input ids tokenizer bos token result input mask 1 len result input ids result input type ids 0 len result input ids return result load keras serialized model model ner load model slot map reverse dictionary index entity mapping pipeline token classification pipeline model model ner tokenizer tokenizer tokenizer fn tokenizer fn special piece spiece underline label map slot map reverse max seq length 128 batch size 32 sentences i would love to listen to carnatic music by yesudas play carnatic fusion by various artists please book 2 tickets from bangalore to kerala result pipeline sentences span selection pipeline qa python from tf transformers pipeline import span extraction pipeline def tokenizer fn features features dict of tokenized text convert them into ids result input ids tokenizer convert tokens to ids features input ids input type ids tf zeros like input ids numpy tolist input mask tf ones like input ids numpy tolist result input ids
input ids result input type ids input type ids result input mask input mask return result model load keras saved model span extraction pipeline pipeline span extraction pipeline model model tokenizer tokenizer tokenizer fn tokenizer fn special piece roberta special piece n best size 20 n best 5 max answer length 30 max seq length 384 max query length 64 doc stride 20 questions when was kerala formed contexts kerala english k r l malayalam ke m about this soundlisten help info is a state on the southwestern malabar coast of india it was formed on 1 november 1956 following the passage of the states reorganisation act by combining malayalam speaking regions of the erstwhile states of travancore cochin and madras spread over 38 863 km2 15 005 sq mi kerala is the twenty first largest indian state by area it is bordered by karnataka to the north and northeast tamil nadu to the east and south and the lakshadweep sea 14 to the west with 33 387 677 inhabitants as per the 2011 census kerala is the thirteenth largest indian state by population it is divided into 14 districts with the capital being thiruvananthapuram malayalam is the most widely spoken language and is also the official language of the state 15 result pipeline questions questions contexts contexts classification model pipeline python from tf transformers pipeline import classification pipeline from tf transformers data import pad dataset normal tokenizer berttokenizer from pretrained bert base uncased max seq length 128 pad dataset normal def tokenizer fn texts feature tokenized text tokenizer tokenize pad dataset normal will automatically pad it input ids input type ids input mask for text in texts input ids ex tokenizer cls token tokenizer tokenize text max seq length 2 tokenizer sep token 2 to add cls and sep input ids ex tokenizer convert tokens to ids input ids ex input mask ex 1 len input ids ex input type ids ex 0 len input ids ex input ids append input ids ex input type ids append input type ids ex input
mask append input mask ex result result input ids input ids result input type ids input type ids result input mask input mask return result model load keras saved model label map reverse 0 unacceptable 1 acceptable pipeline classification pipeline model model tokenizer fn tokenizer fn label map label map reverse batch size 32 sentences in which way is sandy very anxious to see if the students will be able to solve the homework problem the book was written by john play carnatic fusion by various artists she voted herself result pipeline sentences supported models architectures tf transformers currently provides the following architectures 1 albert https huggingface co transformers model doc albert html from google research and the toyota technological institute at chicago released with the paper albert a lite bert for self supervised learning of language representations https arxiv org abs 1909 11942 by zhenzhong lan mingda chen sebastian goodman kevin gimpel piyush sharma radu soricut 2 bert https huggingface co transformers model doc bert html from google released with the paper bert pre training of deep bidirectional transformers for language understanding https arxiv org abs 1810 04805 by jacob devlin ming wei chang kenton lee and kristina toutanova 3 bert for sequence generation https huggingface co transformers model doc bertgeneration html from google released with the paper leveraging pre trained checkpoints for sequence generation tasks https arxiv org abs 1907 12461 by sascha rothe shashi narayan aliaksei severyn 4 electra https huggingface co transformers model doc electra html from google research stanford university released with the paper electra pre training text encoders as discriminators rather than generators https arxiv org abs 2003 10555 by kevin clark minh thang luong quoc v le christopher d manning 5 gpt 2 https huggingface co transformers model doc gpt2 html from openai released with the paper language models are unsupervised multitask 
learners https blog openai com better language models by alec radford jeffrey wu rewon child david luan dario amodei and ilya sutskever 6 mt5 https huggingface co transformers model doc mt5 html from google ai released with the paper mt5 a massively multilingual pre trained text to text transformer https arxiv org abs 2010 11934 by linting xue noah constant adam roberts mihir kale rami al rfou aditya siddhant aditya barua colin raffel 7 roberta https huggingface co transformers model doc roberta html from facebook released together with the paper a robustly optimized bert pretraining approach https arxiv org abs 1907 11692 by yinhan liu myle ott naman goyal jingfei du mandar joshi danqi chen omer levy mike lewis luke zettlemoyer veselin stoyanov 8 t5 https huggingface co transformers model doc t5 html from google ai released with the paper exploring the limits of transfer learning with a unified text to text transformer https arxiv org abs 1910 10683 by colin raffel and noam shazeer and adam roberts and katherine lee and sharan narang and michael matena and yanqi zhou and wei li and peter j liu note tf transformers is a personal project this has nothing to do with any organization so i might not be able to host equivalent checkpoints of all base models as a result there is a conversion src tf transformers notebooks conversion scripts notebooks to convert above mentioned architectures from huggingface to tf transformers credits i want to give credits to tensorflow nlp official repository https github com tensorflow models tree master official nlp i used november 2019 version of master branch where tf keras network was used for models i have modified that by large extend now apart from that i have used many common scripts from many open repos i might not be able to recall everything as it is but still credit goes to them too citation
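every pipeline above token classification span extraction classification receives a user supplied tokenizer fn that returns a dict with input ids input mask and input type ids padded to max seq length a minimal self contained sketch of that contract assuming a toy whitespace vocabulary in place of a real berttokenizer and 0 as the pad id

```python
# sketch of the tokenizer_fn contract used by the tf-transformers pipelines:
# text in, dict of input_ids / input_mask / input_type_ids out.
# assumption: a toy whitespace vocabulary stands in for a real BertTokenizer;
# 101/102/0 mimic the usual [CLS]/[SEP]/[PAD] ids.
CLS, SEP, PAD, UNK = 101, 102, 0, 7
vocab = {"play": 1, "carnatic": 2, "fusion": 3, "by": 4, "various": 5, "artists": 6}

def tokenizer_fn(texts, max_seq_length=16):
    input_ids, input_mask, input_type_ids = [], [], []
    for text in texts:
        # truncate to leave room for [CLS] and [SEP], then pad to fixed length
        ids = [CLS] + [vocab.get(w, UNK) for w in text.split()][: max_seq_length - 2] + [SEP]
        pad = max_seq_length - len(ids)
        input_ids.append(ids + [PAD] * pad)
        input_mask.append([1] * len(ids) + [0] * pad)
        input_type_ids.append([0] * max_seq_length)
    return {"input_ids": input_ids,
            "input_mask": input_mask,
            "input_type_ids": input_type_ids}

batch = tokenizer_fn(["play carnatic fusion by various artists"])
```

the real pipelines pass this dict straight to the keras saved model which is why the library leaves pre processing entirely to the caller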
ai
xradio-skylark-sdk
xradio skylark sdk xradio skylark sdk supports xr872 xr808 series wireless mcus configuration edit gcc mk to define gcc path to your own path eg cc dir tools gcc arm none eabi 4 9 2015q2 bin building building commands cd prj gcc path eg cd project demo wlan demo gcc make config run configure sh to select sdk configuration make config clean remove files generated by make config make lib build libraries and copy them to lib make lib clean remove files in src generated by make lib make lib install clean remove libraries in lib generated by make lib make build the executable binary make clean remove files generated by make make image create the image file make image clean remove files generated by make image make objdump generate the disassembled file make build same as make lib make make image make build clean same as make image clean clean lib clean lib install clean links sdk https github com xradiotech xradio skylark sdk git wiki https github com xradiotech xradiotech wiki git doc https docs xradiotech com
os
WebFundamentals
web fundamentals doi https zenodo org badge 49000057 svg https zenodo org badge latestdoi 49000057 this repository contains the slides for the web fundamentals http rubenverborgh github io webfundamentals module of the ghent university course web development http studiegids ugent be 2016 en studiefiches c003779 pdf br view the slides online http rubenverborgh github io webfundamentals questions feedback and suggestions welcome do you have a question on one of topics please create an issue https github com rubenverborgh webfundamentals issues new do you have feedback on contents or form please create an issue https github com rubenverborgh webfundamentals issues new do you have a suggestion to improve the slides please create a pull request https github com rubenverborgh webfundamentals pulls please read and accept the contributor agreement https github com rubenverborgh webfundamentals blob gh pages contributing md before contributing finding your way around this repository contains 1 introductory slidedeck index html https github com rubenverborgh webfundamentals blob gh pages index html in the root folder https github com rubenverborgh webfundamentals 5 lecture slidedecks index html files in subfolders such as architecture https github com rubenverborgh webfundamentals tree gh pages architecture shared images https github com rubenverborgh webfundamentals tree gh pages shared images styles https github com rubenverborgh webfundamentals tree gh pages shared styles fonts https github com rubenverborgh webfundamentals tree gh pages shared fonts and scripts https github com rubenverborgh webfundamentals tree gh pages shared scripts images per lecture images folders in subfolders such as architecture https github com rubenverborgh webfundamentals tree gh pages architecture auxiliary files in the root folder https github com rubenverborgh webfundamentals how to start a typical starting point would be to open up any index html file either in the root folder or any of 
the subfolders this allows you to edit the contents of the corresponding slidedeck the slides themselves are regular html files brought to life with the shower https github com shower shower presentation engine they use the clear https github com rubenverborgh shower clear template with a few customizations in shared styles web fundamentals css https github com rubenverborgh webfundamentals blob gh pages shared styles web fundamentals css you can just open the slides in your browser from the local filesystem while editing alternatively you can install gulp http gulpjs com and run the gulp command in the root folder which will autorefresh your browser upon changes license except where otherwise noted the content of these slides is licensed under a creative commons attribution 4 0 international license http creativecommons org licenses by 4 0
ugent open-webslides web slides course
front_end
front-end-project-week
front end project week this week you will be building a minimum viable product mvp for a note taking app called lambda notes you are to treat this week as if you are working at a company and the instructor is your client the project managers will be your main support throughout the week the main objective of this week is to develop the mvp feature set listed below using react and any other technologies you have learned here at lambda school there are design files in this repository you should use as a creative guide git commits you are required to showcase progress with at least 1 commit a day this will let your project manager know where you are and if you need help this also allows the client to get progress reports from the company in a real world setting trello set up create a trello account create a new board called lambda notes your name create lists titled backlog to do in progress and done fill in the to do list with the mvp features listed below fill in the backlog list with all the extra features listed below share your board with the project manager that has been assigned to you if you have not been assigned yet reach out to your lead pm for guidance add your trello url to your project s readme md file commit the change push it to your repository submit a pull request mvp features display a list of notes create a note with a title and content view an existing note edit an existing note delete an existing note wire up your static react app to our notes api we want to work with some data that will be persistent across a server we have built a notes api notes md for you you ll find information about this server in the notes md file upon your first commit please submit a pull request and add both the trello set up and mvp features task lists to your first pull request comment markdown trello set up create a trello account create a new board called lambda notes your name create lists titled backlog to do in progress and done fill in the to do list with the 
mvp features listed below fill in the backlog list with all the extra features listed below share your board with the project manager that has been assigned to you if you have not been assigned yet reach out to your lead pm for guidance add your trello url to your project s readme md file commit the change push it to your repository submit a pull request mvp features display a list of notes create a note with a title and content view an existing note edit an existing note delete an existing note wire up your static react app to our notes api now that you ve completed the mvp for this project we want to work with some data that will be persistent across a server we have built a notes api notes md for you you ll find information about this server in the notes md file once you have completed the minimum viable product requirements direct message your project manager for approval if approved you may continue working on the extra features once your mvp has been approved you have been given a feature list that the client would love to have completed your goal would be to finish mvp as soon as you can and get working the list of features extra features re factor your code to include redux for your state management search functionality markdown support in notes sorting options in the list view create and display tags that can be added to notes drag sorting in the list view add the ability to have checklists within the note view export all notes to a csv create a login system around the mvp you will notice that this repository does not have any starter code this is on purpose you are to start from scratch using any files you have built throughout your time here at lambda school
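the mvp features above boil down to crud over notes with a title and content the real project is a react app backed by the provided notes api but the required operations can be sketched language agnostically with a hypothetical in memory store

```python
# illustrative sketch only: the lambda notes mvp operations (list, create,
# view, edit, delete) as a hypothetical in-memory store. the actual project
# is a react app talking to the provided notes api, not this class.
import itertools

class NoteStore:
    def __init__(self):
        self._notes = {}
        self._ids = itertools.count(1)  # auto-incrementing note ids

    def create(self, title, content):
        note_id = next(self._ids)
        self._notes[note_id] = {"id": note_id, "title": title, "content": content}
        return note_id

    def list(self):
        return list(self._notes.values())

    def view(self, note_id):
        return self._notes[note_id]

    def edit(self, note_id, title=None, content=None):
        note = self._notes[note_id]
        if title is not None:
            note["title"] = title
        if content is not None:
            note["content"] = content
        return note

    def delete(self, note_id):
        del self._notes[note_id]

store = NoteStore()
nid = store.create("groceries", "milk eggs")
store.edit(nid, content="milk eggs bread")
```

wiring the same five operations to the notes api described in notes md then becomes a matter of replacing each method body with an http call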
front_end
mantis
mantis ethereum like blockchain scala client built by iohk for the ethereum classic etc network status not maintained the latest etc hard fork supported by the client is the magneto hard fork https ecips ethereumclassic org ecips ecip 1103 you can check the latest build results of the current branch by clicking the status icon in the header of the github file browser download the client the latest release can be downloaded from here https github com input output hk mantis releases command line version in the bin directory you can find the generic launcher to connect to a pre configured network just pass the network name as a parameter example bin mantis launcher etc for joining ethereum classic network possible networks etc eth mordor testnet internal command line interface cli is a tool that can be used to generate a new private key bin mantis cli generate private key derive an address from private key bin mantis cli derive address 00b11c32957057651d56cd83085ef3b259319057e0e887bd0fdaee657e6f75d0 generate genesis allocs using private keys and or addresses bin mantis cli generate allocs balance 42 address 8b196738d90cf3d9fc299e0ec28e15ebdcbb0bdcb281d9d5084182c9c66d5d12 key 00b11c32957057651d56cd83085ef3b259319057e0e887bd0fdaee657e6f75d1 generate multiple key pairs the following example generates 5 key pairs bin mantis cli generate key pairs 5 encrypt private key default passphrase is empty string bin mantis cli encrypt key passphrase pass 00b11c32957057651d56cd83085ef3b259319057e0e887bd0fdaee657e6f75d0 the command output uses the same format as the keystore so it could be used ex to set up a private faucet ex id 3038d914 c4cd 43b7 9e91 3391ea443f95 address c28e15ebdcbb0bdcb281d9d5084182c9c66d5d12 version 3 crypto cipher aes 128 ctr ciphertext 6ecdb74b2a33dc3c016b460dccc96843d9d050aea3df27a3ae5348e85b3adc3e cipherparams iv 096b6490fe29e42e68e2db902920cad6 kdf scrypt kdfparams salt cdcc875e116e2824ab02f387210c2f4ad7fd6fa1a4fc791cc92b981e3062a23e n 262144 r 8 p 1 dklen 32 mac
8388ae431198d31d57e4c17f44335c2f15959b0d08d1145234d82f0d253fa593 building the client as an alternative to downloading the client build the client from source with sbt prerequisites to build jdk 1 8 download from java com http www java com sbt download sbt http www scala sbt org download html python 2 7 15 download from python org https www python org downloads build the client in the root of the project git submodule update recursive init sbt dist this updates all submodules and creates a distribution zip in target universal note building in dev mode allows faster and incremental compilation for this set environment variable mantis dev to true or use the system property dmantisdev true with nix in the root of the project build the client nix build on a mac this project uses nix for ci deployment and optionally local development some of the dependencies are not available for darwin macos however to work with nix on a mac you can instead use docker via the nix in docker run script which will start a nix shell with the same environment as ci update sbt nix dependencies when updating project dependencies the nix fixed output derivation will need to be updated so that it includes the new dependency state to do so please run update nix sh git add nix overlay nix git commit m update nix sbt sha for this command to work you ll need the flakes https nixos wiki wiki flakes feature enabled in your nix environment note this should only be necessary when updating dependencies for example edits to build sbt or project plugins sbt will likely need to be regenerated monitoring locally build run monitoring client a docker compose setup using prometheus and grafana and a preconfigured dashboard is available as a precondition you need to have docker and sbt installed before running the script you need to enable metrics by editing the file metrics conf and setting mantis metrics enabled true to build the monitoring run the following script at docker mantis build sh this script builds 
a docker image of mantis using the local sources and starts the docker compose grafana will be available at http localhost 3000 using user and password admin and admin with a dashboard called mantis tls setup both the json rpc on the node and faucet can be additionally protected using tls the development environment it already properly configured with a development certificate generating a new certificate if a new certificate is required create a new keystore with a certificate by running tls gen cert sh configuring the node 1 configure the certificate and password file to be used at mantis network rpc http certificate key on the application conf file keystore path path to the keystore storing the certificates if generated through our script they are by default located in tls mantisca p12 keystore type type of certificate keystore being used if generated through our script use pkcs12 password file path to the file with the password used for accessing the certificate keystore if generated through our script they are by default located in tls password 2 enable tls in specific config for json rpc mantis network rpc http mode https configuring the faucet 1 configure the certificate and password file to be used at mantis network rpc http certificate key on the faucet conf file keystore path path to the keystore storing the certificates if generated through our script they are by default located in tls mantisca p12 keystore type type of certificate keystore being used if generated through our script use pkcs12 password file path to the file with the password used for accessing the certificate keystore if generated through our script they are by default located in tls password 2 enable tls in specific config for json rpc mantis network rpc http mode https 3 configure the certificate used from rpcclient to connect with the node necessary if the node uses http secure this certificate and password file to be used at faucet rpc client certificate key on the faucet conf file 
keystore path path to the keystore storing the certificates keystore type type of certificate keystore being used if generated through our script use pkcs12 password file path to the file with the password used for accessing the certificate keystore faucet setup and testing 1 first start a client node using the docker compose by running the script found at docker mantis build sh modify the script before running it by adding the volumes and command sections to mantis configuration mantis image mantis latest ports 8546 8546 13798 13798 9095 9095 networks mantis net volumes home mantis home demiourgos728 mantis command dconfig file conf sagano conf 2 create a wallet address run the following curl command replacing password by a password of your choice curl request post url http 127 0 0 1 8546 header cache control no cache header content type application json data jsonrpc 2 0 method personal newaccount params password id 1 you will receive a response like this jsonrpc 2 0 result address id 1 3 modify src universal conf faucet conf file configure your account address created in the previous step with the password chosen by you wallet address address wallet password password 4 now check the keystore folder in mantis testnet internal nomad keystore inside you will find a key generated with the curl request sent in step 2 copy that file to mantis faucet keystore cp utc date key mantis faucet keystore 5 start the faucet from the command line sbt dconfig file src universal conf faucet conf run faucet 6 run the following curl command to send tokens from your faucet to a wallet address curl request post url http 127 0 0 1 8099 header content type application json data jsonrpc 2 0 method faucet sendfunds params address id 1 happy transfer note in order for the transfer transaction to be persisted a faucet needs sufficient funds in its account and in this test case a new faucet without etc tokens is being created
forum ethereumclassic org known issues there is a list of known issues in the release file located in the root of the installation
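The JSON-RPC request bodies used in steps 2 and 6 can be sketched in python. The ports (8546 for the node, 8099 for the faucet) come from the curl examples above; the flattened text strips underscores, so the camel-cased method strings `personal_newAccount` and `faucet_sendFunds` are an assumption, and the helper name `jsonrpc_payload` is hypothetical.

```python
import json

def jsonrpc_payload(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request body like the curl examples above.

    The exact method strings (e.g. "personal_newAccount") are an
    assumption recovered from the flattened readme text.
    """
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": request_id}

# step 2: create a wallet address on the node (port 8546)
new_account = jsonrpc_payload("personal_newAccount", ["password"])

# step 6: ask the faucet (port 8099) to send funds to an address
send_funds = jsonrpc_payload("faucet_sendFunds", ["<address>"])

print(json.dumps(new_account))
```

Posting these bodies with curl as shown above (or any other http client) should return a result object containing the new address or the transfer result.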
scala etc ethereum
blockchain
Android-Projects
ar apps in android studio android studio projects using augmented reality libraries here are simple apps adding different objects to the scene animations of 3d objects games with shooting to the objects face detection and masks playing video on scenes and many other simple projects requirements for ar apps add to the android section in build gradle following code after buildtypes java compileoptions sourcecompatibility javaversion version 1 8 targetcompatibility javaversion version 1 8 add to the dependencies section in build gradle following code after all implementations java provides arcore session and related resources implementation com google ar core 1 11 0 provides arfragment and other ux resources implementation com google ar sceneform ux sceneform ux 1 11 0 alternatively use arsceneview without the ux dependency implementation com google ar sceneform core 1 11 0 add to the manifest section in androidmanifest xml permissions before application section and add meta data inside of application java uses permission android name android permission camera uses feature android name android hardware camera ar android required true application meta data android name com google ar core android value required application augmentedr 1 placing simple 3d object loaded from googles poly https poly google com website to the plane follow requirements section load obj file from googles poly https poly google com create sample data directory copy and paste your obj and mtl files there click import sceneform asset on your obj file and your object will appear in assets folder create fragment element in your activity layout some magic i mean code detect the plane and add your model don t forget to make it transformable for resizing and moving in your activity class and it s ready first app capture https github com mrcrambo android studio ar blob master samples ar 1 gif augmentedr 2 placing simple 3d objects like cube sphere and cylinder to the plane follow 
requirements section create fragment element in your activity layout with buttons for each object add on tap plane listener and create the selected object there don t forget to make it transformable for moving and resizing second app capture https github com mrcrambo android studio ar blob master samples ar 2 gif augmentedr 3 viewpager with images on the plane follow requirements section create fragment element in your activity layout and create layout with viewpager add on tap plane listener then create viewpager with adapter and add them to the scene make it not transformable for easy switching between images third app capture https github com mrcrambo android studio ar blob master samples ar 3 gif augmentedr 4 adding fox mask to the face follow requirements section load some mask for face with fbx and import it create fragment element in your activity layout create custom fragment which extends arfragment render it and add to the scene no gifs for this app augmentedr 5 game kill the flying president follow requirements section load some 3d object file from googles poly https poly google com create fragment element in your activity layout add buttons and ui elements add elements to the scene in different points add bullets and write shooting mechanism detect collisions and remove the objects add sound for bullets fifth app capture https github com mrcrambo android studio ar blob master samples ar 5 gif area 51 ar as a final project i developed augmented reality shooter and published it in play market https play google com store apps details id com drakosha zone51ar feel free to play it and leave your comments here are the instructions how to build your own shooter follow requirements section load some 3d objects from googles poly https poly google com or other sites create fragment element in your activity layout add buttons and ui elements add elements to the scene in different points in front of you and add thread for updating their positions add bullets and write 
shooting mechanism detect collisions and remove the objects add a few more levels and bosses with abilities add analytics ads music and promo codes publish it screenshot 1 screenshot 2 https lh3 googleusercontent com dp g 2 qwewmqrfgq nkvfs ioncnzigf6ijljpxuelgiknxid1xqofxs65wizsjex52 w3604 h2708 https lh3 googleusercontent com pfooi8wdlk84i396rg98eloxapxjk6xncfd3z13po9r zdyvofvadwzi1tjjp32dp5m w3604 h2708 solveit simple project for solving math expressions using text detection implement com google android gms play services vision latest version detect text using camera add math expressions solver from string add view for entering text if the user has trouble with detecting the expression img src https github com mrcrambo android projects blob master samples solve it jpeg alt drawing width 400
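The core of the solveit idea above, a "math expressions solver from string", can be sketched as follows. The app itself is an android project; this is a language-neutral python sketch using the standard `ast` module, and the function name `solve` is hypothetical.

```python
import ast
import operator

# supported binary operators for the expression solver
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def solve(expression: str) -> float:
    """Safely evaluate an arithmetic expression such as '2+3*4'.

    Parses the string into an AST and walks only number, binary-op
    and unary-minus nodes, so arbitrary code is never executed.
    """
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval").body)

print(solve("2+3*4"))  # 14
```

In the app, the string fed to such a solver would come from the text recognized by the play services vision api, with the manual-entry view as a fallback.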
android augmented-reality 3d-objects computer-vision text-recognition
ai
perfect-php-vagrant
my vagrant development box gitter http img shields io badge gitter join 20chat 20 e2 86 92 brightgreen svg style flat square https gitter im ovr perfect php vagrant utm source badge utm medium badge utm campaign pr badge utm content badge author http img shields io badge author ovr blue svg style flat square https twitter com ovrweb software license https img shields io badge license mit brightgreen svg style flat square license md start your development on virtual machine by vagrant technology img src http dmtry me img logos my perfect php machine jpg software php 5 6 default json memcache fpm curl nginx http nginx org web server zephir language https github com phalcon zephir compiled high level language aimed to the creation of c extensions for php phalcon 2 https github com phalcon cphalcon tree 2 0 0 framework for php on zephir lynx https github com lynx lynx orm dbal for php on zephir composer https getcomposer org dependency manager for php frontend global env nodejs https nodejs org bower http bower io gulp http gulpjs com grunt cli https github com gruntjs grunt cli default vm parameters yaml name phalcon2 dev hostname vm local box ubuntu trusty64 provider virtualbox gui false ram 512 cpus 1 ip 10 10 10 150 projects folder projects pre installed projects phalcon full skeleton edition https github com ovr phalcon module skeleton on http phalcon module local http phalcon module local angular skeleton https github com ovr angular skeleton on http angular skeleton local http angular skeleton local getting started 1 download and install virtualbox https www virtualbox org 2 download and install vagrant http www vagrantup com 3 install project don t forget to install vagrant host manager plugin bash vagrant plugin install vagrant hostmanager and vagrant cachier to cache shared packages installation bash vagrant plugin install vagrant cachier installation bash git clone https github com ovr perfect php vagrant git cd perfect php vagrant nano config yaml vagrant 
up wait until the installation is finished and then open http servername to see info about the server troubleshooting if after vagrant up you are getting something like this sh default warning connection timeout retrying default warning connection timeout retrying default warning connection timeout retrying you need to diagnose the error by setting gui to true in config yaml yaml gui true vagrant to stop and reinstall the machine please run bash vagrant halt vagrant destroy f vagrant up license this project is open sourced software licensed under the mit license see the license file for more information
front_end
Udagram-image-filtering-microservice-master
ranjitpatnaik udagram image filtering microservice udagram image filtering microservice github repository https github com ranjit patnaik udagram image filtering microservice master this is the second assignment for the udacity cloud developer nanodegree https www udacity com course cloud developer nanodegree nd9990 course it is a node express application which runs a simple script to process images and is deployed using aws elastic beanstalk you ll need to create a new node server open a new terminal within the project directory and run initialize a new project npm i run the development server with npm run dev create a new endpoint in the server ts file the starter code has a task for you to complete an endpoint in src server ts which uses a query parameter to download an image from a public url filter the image and return the result a few helper functions are included to handle some of the concepts and they are imported for you at the top of the src server ts file deploying your system follow the process described in the terminal to eb init a new application and eb create the new environment to deploy your image filter service do not forget you can use eb deploy to push changes for deploying commands used in the terminal npm run build eb init eb create eb deploy usage base url http localhost 8082 filtering an image path parameter description filteredimage image url required the image url to filter example http localhost 8082 filteredimage image url https www gstatic com webp gallery3 1 png note all api requests require authorization headers click here http localhost 8082 token to generate a token aws elastic beanstalk deployed application dashboard depcruise generated graph https github com ranjitpatnaik udagram image filtering microservice master blob master deployment screenshots elastic 20beanstalk png postman collection file is here https github com ranjitpatnaik udagram image filtering microservice master 
blob master cloud cdnd c2 final postman collection json
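The required image url query parameter described above needs a validation step before the image is downloaded and filtered. The project itself is node express; this is a python sketch of that check only, the helper name `validate_image_url` is hypothetical, and the specific status codes (400 for a missing parameter, 422 for a malformed one) are assumptions.

```python
from urllib.parse import urlparse

def validate_image_url(image_url):
    """Return an (http_status, message) pair for the /filteredimage
    image_url query parameter; a sketch of the required-parameter
    check the endpoint performs before downloading the image."""
    if not image_url:
        return 400, "image_url is required"
    parsed = urlparse(image_url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return 422, "image_url must be a valid public http(s) url"
    return 200, "ok"

print(validate_image_url("https://www.gstatic.com/webp/gallery3/1.png"))  # (200, 'ok')
```

Only on a 200 result would the endpoint go on to call the download/filter helper and delete the local temp file afterwards.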
cloud
mfaraz
mfaraz mobile development
front_end
onepanel
img width 200px src img logo png build https img shields io github workflow status onepanelio onepanel publish 20dev 20docker 20image master color 01579b code https img shields io codacy grade d060fc4d1ac64b85b78f85c691ead86a color 01579b release https img shields io github v release onepanelio core color 01579b https github com onepanelio core releases sdk https img shields io pypi v onepanel sdk color 01579b label sdk https pypi org project onepanel sdk docs https img shields io github v release onepanelio core color 01579b label docs https docs onepanel io issues https img shields io github issues raw onepanelio core color 01579b label issues https github com onepanelio core issues lfai https img shields io badge link lfai 01579b https landscape lfai foundation selected onepanel license https img shields io github license onepanelio core color 01579b https opensource org licenses apache 2 0 end to end computer vision platform label build train tune deploy and automate in a unified platform that runs on any cloud and on premises https user images githubusercontent com 1211823 116489376 afc60000 a849 11eb 8e8b b0c64c07c144 mp4 why onepanel img width 100 src img features png quick start see quick start guide https docs onepanel ai docs getting started quickstart to get started community to submit a feature request report a bug or documentation issue please open a github pull request https github com onepanelio core pulls or issue https github com onepanelio core issues contributing onepanel is modular and consists of multiple repositories https docs onepanel ai docs getting started contributing project repositories see contribution guide https docs onepanel ai docs getting started contributing and contributing md in each repository for additional contribution guidelines acknowledgments onepanel seamlessly integrates the following open source projects under the hood argo https github com argoproj argo workflows couler https github com couler proj couler cvat https 
github com opencv cvat jupyterlab https github com jupyterlab jupyterlab nni https github com microsoft nni we are grateful for the support these communities provide and do our best to contribute back as much as possible license onepanel is licensed under apache 2 0 https github com onepanelio core blob master license
deeplearning pipelines machinelearning jupyterlab tensorflow pytorch tensorboard computer-vision mlops workflows aiops annotation hyperparameter-tuning inference training labeling ai etl
ai
ev3rt-hrp3
ev3rt hrp3 rtos for mindstorms ev3 w toppers hrp3 kernel usage bash git clone https github com ev3rt git ev3rt hrp3 cd ev3rt hrp3 git submodule init git submodule update build loader uimage cd ev3rt hrp3 sdk firmware make img loader build helloev3 app cd ev3rt hrp3 sdk workspace make app helloev3 build status status platform build status https travis ci org ev3rt git ev3rt hrp3 svg branch master https travis ci org ev3rt git ev3rt hrp3 ubuntu 14 04
os
iot-coap
build status https api travis ci org phodal iot coap png https travis ci org phodal iot coap version http img shields io npm v iot coap svg http http img shields io npm v iot coap svg code climate https codeclimate com github phodal iot coap badges gpa svg https codeclimate com github phodal iot coap test coverage https codeclimate com github phodal iot coap badges coverage svg https codeclimate com github phodal iot coap dependencies https david dm org phodal freerice svg style flat https david dm org phodal iot coap svg style flat0 npm https nodei co npm iot coap png https nodei co npm iot coap npm https nodei co npm dl iot coap png https nodei co npm iot coap tested on node 0 10 32 and 0 11 13 coap iot framework mini internet of things system with coap protocol http protocol with restful to https github com phodal iot iot mqtt coap websocket online test for iot protocol http mqtt phodal com bug https github com phodal diaonan https github com phodal diaonan thanks to restify https github com mcavage node restify node coap https github com mcollina node coap node sqlite3 https github com mapbox node sqlite3 mongodb https github com mongodb node mongodb native install 1 install npm install iot coap 2 create index js var iotcoap require iot coap iotcoap run iotcoap rest run db mongodb sqlite3 you can choose the db in iot js with sqlite or mongodb iot js create iot js exports config db name iot db mongodb name iot mongodb documents iot db mongodb table name basic keys id value sensors1 sensors2 db table id integer primary key value text sensors1 float sensors2 float mongodb init id 1 value is id 1 sensors1 19 sensors2 20 id 2 value is id 2 sensors1 20 sensors2 21 init table insert or replace into basic id value sensors1 sensors2 values 1 is id 1 19 20 insert or replace into basic id value sensors1 sensors2 values 2 is id 2 20 21 query table select from basic rest url id id rest post url rest port 8848 run node index js test firefox 1 copper https addons mozilla org en 
us firefox addon copper 270430 https addons mozilla org en us firefox addon copper 270430 install copper plugins 2 debug control choose debug control 3 accept content format application json node get node method test get js http post curl h content type application json d id 3 value dream sensors1 12 sensors2 13 http localhost 8848 https github com phodal collection iot setup dev 1 clone git github com phodal iot coap git 2 install dependencies npm install jslint nodejs jslint https www phodal com blog nodejs add jslint with pre commit 1 nodejs os 2 clone npm install qq 348100589 coap basic coap hello world hello coap json returnjson coap xml returnxml iot coap block iotblock coap sqlite nodejs querydb coap sqlite nodejs db ide jetbrains http www jetbrains com webstorm license license 2014 phodal huang http www phodal com this code is distributed under the mit license iot https github com phodal iot basic http www phodal com blog use constrained application protocol in internet of things hello http www phodal com blog use node coap create a coap server returnjson http www phodal com blog use coap build internet of things return json querydb http www phodal com blog use node coap sqlite create a coap server get response db http www phodal com blog use coap nodejs sqlite build iot returnxml http www phodal com blog use jstoxml convert iot coap return json iotblock http www phodal com blog use coap block send data on iot coap
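The sqlite storage described above (the basic table schema with the two init rows, queried by id) can be sketched as follows. The project itself uses node sqlite3; this python sketch reuses the schema and init statements from the readme, but creates the database in memory rather than in iot.db.

```python
import sqlite3

# schema from the readme: id integer primary key, value text,
# sensors1 float, sensors2 float
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE basic (id INTEGER PRIMARY KEY, value TEXT, "
    "sensors1 FLOAT, sensors2 FLOAT)"
)

# init rows from the readme
conn.execute(
    "INSERT OR REPLACE INTO basic (id, value, sensors1, sensors2) "
    "VALUES (1, 'is id 1', 19, 20)"
)
conn.execute(
    "INSERT OR REPLACE INTO basic (id, value, sensors1, sensors2) "
    "VALUES (2, 'is id 2', 20, 21)"
)

# query table: select * from basic for a given id, as the rest url does
row = conn.execute("SELECT * FROM basic WHERE id = 1").fetchone()
print(row)  # (1, 'is id 1', 19.0, 20.0)
```

The rest url /id/:id and the coap endpoints then serialize such a row as json before returning it to the client.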
server
StudyOfTime
studyoftime industrial engineering tool for ios
os
openhab-core
openhab core github actions build status https github com openhab openhab core actions workflows ci build yml badge svg branch main https github com openhab openhab core actions workflows ci build yml jenkins build status https ci openhab org job openhab core badge icon https ci openhab org job openhab core epl 2 0 https img shields io badge license epl 202 green svg https opensource org licenses epl 2 0 crowdin https badges crowdin net openhab core localized svg https crowdin com project openhab core bountysource https www bountysource com badge tracker tracker id 28054058 https www bountysource com teams openhab issues tracker ids 28054058 this project contains core bundles of the openhab runtime building and running the project is fairly easy if you follow the steps detailed below please note that openhab core is not a product itself but a framework to build solutions on top it is picked up by the main openhab distribution https github com openhab openhab distro this means that what you build is primarily an artifact repository of osgi bundles that can be used within smart home products 1 prerequisites the build infrastructure is based on maven if you know maven already then there won t be any surprises for you if you have not worked with maven yet just follow the instructions and everything will miraculously work what you need before you start java se development kit 17 maven 3 from https maven apache org download html make sure that the mvn command is available on your path 2 checkout checkout the source code from github e g by running git clone https github com openhab openhab core git 3 building with maven to build this project from the sources maven takes care of everything set maven opts to xms512m xmx1024m change into the openhab core directory cd openhab core run mvn clean install to compile and package all sources if there are tests that are failing occasionally on your local build run mvn dskiptests true clean install instead to skip them how to 
contribute if you want to become a contributor to the project please read about contributing https www openhab org docs developer contributing html and check our guidelines https www openhab org docs developer guidelines html first
openhab java home-automation internet-of-things iot
server
Cloud-Project
cloud project azubi cloud engineering projects
cloud
project-walkthroughs
overview this repository contains files notebooks and data used for live project walkthroughs on dataquest you can watch the project walkthroughs on youtube https www youtube com channel uc lepy0lm0e2 ikyuwpi5a these walkthroughs help you build complete end to end projects that can go into your portfolio prerequisites to complete these projects you ll need to have a good understanding of python syntax including functions if statements and data structures data cleaning pandas syntax using jupyter notebook the basics of machine learning please make sure you ve completed these dataquest courses or know the material before trying these projects python introduction https www dataquest io course introduction to python for loops and if statements https www dataquest io course for loops and conditional statements in python dictionaries in python https www dataquest io course dictionaries frequency tables and functions in python functions and jupyter notebook https www dataquest io course python functions and jupyter notebook python intermediate https www dataquest io course python for data science intermediate pandas and numpy fundamentals https www dataquest io course pandas fundamentals data cleaning https www dataquest io course python datacleaning machine learning fundamentals https www dataquest io course machine learning fundamentals
data-science machine-learning pandas python
ai
defichain-app
github release https img shields io github v release defich app https github com defich app releases a href https github com defich app releases img alt github release latest by date src https img shields io github downloads defich app latest total a a href https github com defich app graphs contributors img alt github contributors src https img shields io github contributors defich app a github license https img shields io github license naereen strapdown js svg https github com defich app blob main license a href https twitter com defichain img alt twitter follow src https img shields io twitter follow defichain style social a a href https www reddit com r defiblockchain img alt subreddit subscribers src https img shields io reddit subreddit subscribers defiblockchain style social a defi desktop wallet use defi desktop wallet to interact with defichain it is a wallet for dfi wrapped btc eth usdt liquidity mine use the dex create masternodes and more image https defichain com img app liquidity 2x png documentation getting started getting started development development about us https defichain com getting started download the desktop app https defichain com downloads or check releases https github com defich app releases for latest downloadable installers for windows mac and linux development initial setup install all dependencies for both electron and webapp bash npm run init setup the required node to connect to the node you need to setup the node run the command below that matches your operating system operating system command windows npm run pre build win mac npm run pre build mac linux npm run pre build linux running apps electron and webapp to run both apps in dev mode bash npm run start dev to run webapp only bash npm run start react to run electron only note this is used to test a compiled build dev or prod of react app you need to have a compiled react app for this command to work bash npm run start electron building apps to build the app using native 
platform bash npm run build to build the app for all platforms bash npm run build all licenses disclaimer by using defi desktop app this repo you the user agree to be bound by the terms of this license license qr scanner shutter audio webapp src assets audio shutter mp3 is licensed by soundsnap https www soundsnap com commercial redistribution of the audio is prohibited for full soundsnap license visit https www soundsnap com licence https www soundsnap com licence
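The per-operating-system node setup table above can be expressed as a small lookup. This python helper is hypothetical, and the colon-separated npm script names are an assumption: the flattened text above reads "npm run pre build win" etc., so the exact separators in package.json may differ.

```python
import platform

# os -> setup command, from the table in the readme; script-name
# separators (the colons) are assumed, not confirmed by the source
PRE_BUILD_SCRIPTS = {
    "Windows": "npm run pre:build:win",
    "Darwin": "npm run pre:build:mac",
    "Linux": "npm run pre:build:linux",
}

def pre_build_command(system=None):
    """Return the node-setup command matching the host os."""
    system = system or platform.system()
    try:
        return PRE_BUILD_SCRIPTS[system]
    except KeyError:
        raise ValueError(f"unsupported platform: {system}")
```

Run the returned command once after npm run init and before npm run start:dev, as the setup steps above describe.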
electron webapp
blockchain
blockchain-devkit
page type sample languages csharp java javascript html products azure azure blockchain service description this sample contains content and samples in a number of areas including connecting data producers and consumers to or from the blockchain azure blockchain devkit mit license badge https img shields io badge license mit green svg azure blockchain development kit this repository contains content and samples in a number of areas including connect https github com azure samples blockchain devkit tree master connect connect various data producers and consumers to or from the blockchain integrate https github com azure samples blockchain devkit tree master integrate integrate legacy tools systems and protocols accelerators https github com azure samples blockchain devkit tree master accelerators deep dive into end to end examples or solutions to common patterns devops for smart contracts https github com azure samples blockchain devkit tree master devops bring traditional devops practices into a distributed application environment repo contents file folder description accelerators samples showing common end to end application patterns and business scenarios connect samples showing how to produce or consume information sending to and reading from the blockchain through outside sources devops a series of patterns and whitepapers on how to integrate traditional software devops into multi party distributed application environments integrate samples showing how to connect to traditional enterprise systems such as sql databases ftp storage or cloud file and email services gitignore define what to ignore at commit time changelog md list of changes to the sample contributing md guidelines for contributing to the azure blockchain devkit readme md this readme file license the license for the azure development kit samples sample index looking for a sample based on a particular service or technology e g using sql with blockchain the index below will give you pointers to 
samples based on the technology service featured these are a select subset of the total samples available technology or service samples media blockchain service svg azure blockchain service https azure microsoft com en us services blockchain service azure blockchain and azure cognitive search https github com azure samples blockchain devkit blob master integrate data azure search ethereumlogicappazuresearch md br azure blockchain and sql https github com azure samples blockchain devkit tree master integrate data sql media blockchainworkbenchcolor 48x48 svg azure blockchain workbench https azure microsoft com en us features blockchain workbench azure blockchain workbench and mysql https github com azure samples blockchain devkit tree master integrate data mysql br azure blockchain workbench and cosmos db https github com azure samples blockchain devkit tree master integrate data cosmosdb br azure blockchain workbench with power bi br deliver data from iot hub to azure blockchain workbench https github com azure samples blockchain devkit blob master connect iot iot hub blockchain workbench configureiotdemo md br azure iot central and azure blockchain workbench integration https github com azure samples blockchain devkit tree master connect iot iot central blockchain workbench br azure blockchain workbench and mobile apps with xamarin https github com azure samples blockchain devkit tree master connect mobile blockchain workbench workbench client br media botservice svg azure bot service https azure microsoft com en us services bot service azure blockchain workbench and azure bot service https github com azure samples blockchain devkit tree master connect bots and assistants web botservice media cosmosdb svg azure cosmos db https azure microsoft com en us services cosmos db azure blockchain workbench and cosmos db https github com azure samples blockchain devkit tree master integrate data cosmosdb media azure functions color large 50x50 png azure functions https azure 
microsoft com en us services functions attestable documents and media using azure functions to hash file contents and metadata https github com azure samples blockchain devkit blob master accelerators attestable documents and media blockchain workbench adobecreativecloud readme md media azureiotcentral svg azure iot central https azure microsoft com en us services iot central azure iot central and azure blockchain workbench integration https github com azure samples blockchain devkit tree master connect iot iot central blockchain workbench media azure io t hub 50x50 png azure iot hub https azure microsoft com en us services iot hub deliver data from iot hub to azure blockchain workbench https github com azure samples blockchain devkit blob master connect iot iot hub blockchain workbench configureiotdemo md media azure logic apps color 50x50 png azure logic apps https azure microsoft com en us services logic apps egress using ethereum logic app connector to capture events from the blockchain https github com azure samples blockchain devkit tree master integrate data postgresql br ingress using ethereum logic app connector to create updates contracts on the blockchain https github com azure samples blockchain devkit tree master integrate data sql media azure mysql cleardb database color 50x50 png azure database for mysql https azure microsoft com en us services mysql azure blockchain workbench and mysql https github com azure samples blockchain devkit tree master integrate data mysql img src media postgres service svg style zoom 125 azure database for postgresql https azure microsoft com en us services postgresql blockchain and postgresql https github com azure samples blockchain devkit tree master integrate data postgresql media azure search color 50x50 png azure cognitive search https azure microsoft com en us services search azure blockchain and azure cognitive search https github com azure samples blockchain devkit blob master integrate data azure search 
ethereumlogicappazuresearch md media azure sql database generic color 50x50 png azure sql database https azure microsoft com en us services sql database azure blockchain and sql https github com azure samples blockchain devkit tree master integrate data sql media powerbi svg power bi https powerbi microsoft com en us azure blockchain workbench and power bi https github com azure samples blockchain devkit tree master integrate data powerbi media xamarin svg xamarin https dotnet microsoft com apps xamarin azure blockchain workbench and mobile apps with xamarin https github com azure samples blockchain devkit tree master connect mobile blockchain workbench workbench client prerequisites where noted some of the samples in this development kit may need the following azure blockchain workbench https docs microsoft com en us azure blockchain workbench deploy deploy blockchain workbench samples using the ethereum logic app connector available on the azure marketplace require an ethereum network with a public rpc endpoint if you wish to use the azure blockchain workbench with the ethereum logic app connectors you will need a public rpc endpoint you may use an existing one or create a new one you may create a new endpoint in azure here https portal azure com pub source email pub status success create microsoft azure blockchain azure blockchain ethereumethereum poa consortium once your endpoint is ready copy the rpc address from the deployment output and deploy azure blockchain workbench to your subscription in the azure deployment blade enter the rpc endpoint in the blade as shown below media wbdeployment png jpg feedback for general product feedback please visit our forum https techcommunity microsoft com t5 blockchain bd p azureblockchain data 02 to request additional features or samples visit our uservoice site https feedback azure com forums 586780 blockchain data 02
blockchain
iron-frontend-v1
src assets img iron logo png raw true iron finance a multi chain partially collateralized stablecoin styled with prettier https img shields io badge styled with prettier ff69b4 svg https github com prettier prettier how to run install dependencies with npm install and start the project with the npm start command configuration configure the project in src config ts with the template src iron bank config ts configure farm info in src farms ts links documentations https docs iron finance telegram https t me ironfinance medium https ironfinance medium com github https github com ironfinance
front_end
magical-eyes
magical eyes sources of the final project in ece2160 embedded system design
os
EEN646_Embedded
een646 embedded specified repository for engineering course embedded system design
os
coding-standards
![cxpartners](https://www.cxpartners.co.uk/ui/img/logo.png)

# Front-end Coding Standards and Best Practices

a.k.a. "How to do webdev at cxpartners"

## Overview

These guidelines, principles and standards allow us to:

- produce code of a consistent quality across all projects we undertake
- work concurrently with multiple devs on the same codebase, at the same time, in the same way
- produce code that is less prone to bugs and regressions, and is easier to understand and debug
- write code that supports re-use

It is required reading for all devs (internal and external) working on a cx web development project.

## Contributing

When considering coding standards we're more interested in consistency than developer freedom. However, this should be considered a living document: it will evolve to reflect the changing needs of the cx dev team and the work we do.

How to contribute: those with access to the repo should create a proposal/new-guideline branch where the name reflects the addition or update; otherwise, fork the repo and, when ready, create a pull request.

## Global Development Guidelines

### Performance

Page load times (both real and perceived) are a key consideration for users of all browsers and device types. There are some general things we can do in front-end development:

- send the fewest bytes possible down the wire
- avoid unnecessary use of `display: none`
- keep CSS selectors concise (be wary of Sass nesting)
- minimise HTTP requests
- minimise blocking: content should be readable before client-side processing
- lazy-load supplementary content, especially images

### Testing and QA should find no problems

Code is a craft: make it your responsibility to ensure it is the best it can be, that it's tested, bug-free, and adheres to these guidelines. Testing and QA folks aren't responsible for quality; developers are. Don't use testers as bug catchers: testers should find no problems after you have committed your code. When testers find bugs, tickets need to be opened, developers assigned, and fixes scheduled, which lengthens the time it takes from identifying to resolving a bug.

Make sure, as much as possible, that you have tested your code on a reasonable number of devices, so you can catch problems before you commit to the repo.

### You are producing source code

Due to the size of most webdev projects that cxpartners undertakes, and the processes and methodologies we adhere to, there will always be a build process that takes source code and generates built artefacts. For example:

- SCSS will be compiled to minified and concatenated CSS
- template files will be rendered to HTML
- HTML will be beautified, stripped of comments, and references to CSS and JS changed to minified and concatenated versions
- JavaScript will be minified and concatenated

This means:

- don't check generated files into the repo (e.g. CSS files when using Sass)
- always check in configuration files for the tools that you use (e.g. `config.rb` for Compass, `mixture.json` for the Mixture.io app)
- when including CSS and JS, reference the non-minified versions

### Don't Repeat Yourself (DRY)

If you repeat anything that has already been defined in code, refactor it so that it only ever has one representation in the codebase. If you stick to this principle, you will ensure that you will only ever need to change one implementation of a feature, without worrying about needing to change any other part of the code.

### Separation of concerns

Separate structure from presentation from behaviour, to aid maintainability and understanding:

- keep CSS (presentation), JS (behaviour) and HTML (structure) in separate files
- avoid writing inline CSS or JavaScript in HTML
- avoid writing CSS or HTML in JavaScript
- don't choose HTML elements to imply style
- where appropriate, use CSS rather than JavaScript for animations and transitions
- try to use templates when defining markup in JavaScript

### Write code to be read

> Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. — Brian Kernighan

Follow the principles of [Keep It Simple, Stupid (KISS)](http://en.wikipedia.org/wiki/KISS_principle); hard-to-read or obfuscated code is difficult to maintain and debug. Don't be too clever; write code to be read.

### Commenting

Explain design or architectural decisions that cannot be conveyed in code alone by adding comments to your code. Be sure that, in conjunction with writing code that adheres to these guidelines, someone can pick up your code and immediately understand it.

Be verbose with your comments, but ensure:

- your comments add something to the code; they don't just repeat what is there
- they are kept up to date: if you change something that has been commented, update the comment as well
- if code needs extensive commenting, ask whether it can be refactored to make it less complex and easier to understand
- you focus on *why* rather than *how*; unless your code is too complex, it should be self-documenting

Don't leave commented-out chunks of code in the codebase. It makes the code look unfinished and can be confusing for other developers.

### File naming

- don't use whitespace in file names: it can lead to problems (including text escaping) and is harder to read when encoded in URLs
- use hyphens for word separators

### Identify technical debt

Use code-comment annotations to mark parts of your code that require further work. This will allow the measurement and management of technical debt.

| Tag | Use |
| --- | --- |
| `TODO` | document tasks to be completed |
| `OPTIMISE` | mark something that is working but could be refactored |

Don't use `FIXME`, which defines things that are broken: you shouldn't be committing broken code to the repo.

## Styling with CSS/Sass

Our approach to CSS is influenced by Nicole Sullivan's [OOCSS](http://oocss.org) ideas and Jonathan Snook's [Scalable and Modular Architecture for CSS (SMACSS)](http://smacss.com), both of which advocate a general separation of concerns to promote re-usability and prevent code bloat.

### General guidelines

- lint your SCSS according to the scss-lint configuration file found in the root of all cx projects
- promote scalable and modular CSS architecture using the principles defined in the SMACSS style guide
- utilise [BEM's](http://coding.smashingmagazine.com/2012/04/16/a-new-front-end-methodology-bem/) "block, element, modifier" methodology
- use classes rather than element selectors to decouple CSS from HTML semantics and ensure that your code doesn't impact how it may be integrated into backend systems
- make layouts fluid, using variable units of measurement
- use ID selectors only when explicitly required: they prohibit re-use and may need to be rewritten during systems integration
- use short hex values where applicable, e.g. `#fff` instead of `#ffffff`
- consider using `@warn` and `@debug` directives to aid development, especially within mixins
- use SCSS variables appropriately to ensure your code is kept DRY:

```scss
// No
h3 {
  color: white;
}

// Yes
$white: #fff;

h3 {
  color: $white;
}
```

- each selector and style declaration should be on its own line, to help with Git diffs and error reporting:

```scss
// Good
h3,
.gamma,
%gamma {
  @include font-size($h3-font-size);
  line-height: $heading-line-height;
}

// Not so good
h3, .gamma, %gamma {
  @include font-size($h3-font-size); line-height: $heading-line-height;
}
```

- don't specify units for zero values, e.g. `margin: 0` instead of `margin: 0px`
- use `0` instead of `none`, e.g. `border: 0` rather than `border: none`
- if you use experimental properties that will require prefixing, it is recommended to use Autoprefixer to post-process the CSS. Autoprefixer can be combined with usage data from [caniuse.com](http://caniuse.com) to only output relevant prefixes (e.g. unless you're supporting really early versions of Chrome, you don't need `-webkit-border-radius`), which takes a lot of the work out of manual prefixing and is more intelligent than mixins and libraries
- wherever possible, specific page-level styling should be avoided in favour of layout or component modifiers
- avoid inline CSS
- use shorthand properties:

```scss
// No
margin: 1px 1px 1px 1px;

// Yes
margin: 1px;

// No
margin: 0 0 20px 0;

// Yes
margin: 0 0 20px;
```

  It is worth noting that whilst the CSS generated by Sass can be output with optimisations applied, we cannot make assumptions about how the code we deliver will be used, and as such creating optimal source code is preferred to relying on processors.

- write colours in lowercase:

```scss
// No
color: #1AB2C0;

// Yes
color: #1ab2c0;
```

- omit protocols from external resources, to prevent unintended security warnings through accidentally mixing protocols and for small file-size savings:

```scss
// Nope
.main-navigation {
  background: url(http://d111111abcdef8.cloudfront.net/images/image.jpg);
}

// Yes
.main-navigation {
  background: url(//d111111abcdef8.cloudfront.net/images/image.jpg);
}
```

- use whitespace to aid readability. Anything which modifies the context, such as a selector or a media query, should be preceded by a blank line:

```scss
// Bad
.foo {
  color: blue;
  @media screen and (min-width: 1000px) {
    color: yellow;
  }
}
.bar {
  color: red;
  @media screen and (min-width: 1000px) {
    color: green;
  }
}

// Good
.foo {
  color: blue;

  @media screen and (min-width: 1000px) {
    color: yellow;
  }
}

.bar {
  color: red;

  @media screen and (min-width: 1000px) {
    color: green;
  }
}
```

### Depth of applicability

Don't over-specify CSS selectors. Overly specified selectors are difficult to understand and lead to subsequent selectors needing to be of an even higher specificity. (See SMACSS on [depth of applicability](http://smacss.com/book/applicability).)

```scss
// Nope
#sidebar div ul > li {
  margin-bottom: 5px;
}
```

The above example is tightly coupled to the HTML structure, which prevents re-use and is brittle: if the HTML needs changing, then the style will break.

### Over-qualification

Don't qualify IDs or classes with tag names:

```scss
// Over-qualified ID selector
ul#main-navigation {
}

// Over-qualified class selector
table.results {
}
```

As above, you will be binding site structure with presentation, making the site harder to maintain and inhibiting re-use. (While we are at it, there is also a case for re-using patterns of naming for things.)

### Commenting

Use comments to explain design or architectural decisions, and to make notes so that any developers modifying, extending or debugging the code can do so understanding your original decisions. Divide up groups of declarations using standard block and single-line comment formats. If you have too many major sections in your partial, perhaps it should be more than one partial.

```scss
/* ==========================================================================
   This is a block comment carrying a bit more weight.
   Maybe it has multiple lines.
   It often announces a new section of some kind.
   ========================================================================== */

/**
 * This is a block comment carrying a bit more weight.
 * Maybe it has multiple lines. It often announces a new section,
 * perhaps a function. This format can be used in JS to denote a JSDoc
 * block (see http://usejsdoc.org/about-getting-started.html for adding
 * documentation comments to your code).
 */

// Single-line explanation of something

/* Use multiline comments if you want the comment to be preserved in the
   compiled CSS. This comment is several lines long; since it uses the
   CSS comment syntax, it will appear in the CSS output. */
body {
  color: black;
}
```

### Unit sizing

- use rems to size fonts: these will take into account the user's font-size setting ([this research from 2006](http://archive.oreilly.com/pub/post/more_statistics_on_user_clicks.html) suggests around 10% of people have changed it)
- avoid the `font-size: 62.5%` hack (used to ensure that a rem/em unit equates to 10px); never change the font size of the root element from its default of 100% (16px). Far more elements are 16px than 10px, so this way we have to specify the sizes of fewer things: this is where the cascade excels
- use unitless line-heights
- if not using flexbox or CSS columns for grids, use percentages for fluid layouts and components
- use ems or pixels to specify `margin`, `padding`, `top`, `left`, `bottom` and `right` (unless percentages make sense, but as above, exercise good judgement)

```scss
// This makes sense
.dropdown-toggle:hover .dropdown-menu {
  display: block;
  top: 100%; // display just below .dropdown-toggle
}

// This doesn't. Magic number alert!
// (http://csswizardry.com/2012/11/code-smells-in-css/)
.dropdown-toggle:hover .dropdown-menu {
  display: block;
  top: 62px; // height of .dropdown-toggle at present
}

// This makes sense: the inherited margin will take
// the font size into account to ensure it looks right
.box {
  margin-bottom: 3em;
}

// This doesn't
.box {
  margin-bottom: 6.33%; // converted from pixels on the mockup, based on .box height
}
```

- use `box-sizing: border-box` globally; this makes dealing with multiple units in this fashion a lot nicer

## HTML Markup

### General guidelines

- use well-structured, semantic markup
- use double quotes on all attributes (when using React.js for generating markup, we rely on single quotes)
- use soft tabs, 2-space indents
- ensure you write valid HTML; check using tools such as the [W3C Markup Validation Service](http://validator.w3.org)
- [do not omit optional tags](http://www.whatwg.org/specs/web-apps/current-work/multipage/syntax.html#syntax-tag-omission): it may be unclear whether a tag has been deliberately omitted or if it has been left out accidentally
- although unquoted attributes are supported, always quote attribute values:

```html
<!-- Uh oh -->
<input type=text class=form-field form-field-string>

<!-- Better -->
<input type="text" class="form-field form-field-string">
```

- omit protocols from external resources, to prevent unintended security warnings through accidentally mixing protocols:

```html
<!-- Don't do -->
<script src="http://www.google.com/js/gweb/analytics/autotrack.js"></script>

<!-- Do -->
<script src="//www.google.com/js/gweb/analytics/autotrack.js"></script>
```

### Doctype

Use the HTML5 doctype to indicate that you are serving HTML5 content. Using this doctype ensures your browser's layout engine uses "standards" (or "no quirks") mode, rather than "quirks" or "almost standards" modes. Any browsers that don't currently support HTML5 will enter this mode and interpret non-HTML5 content in a compliant way, whilst ignoring new, unsupported features.

```html
<!DOCTYPE html>
```

### `html` tag

Add a `lang` attribute to set the default language. The `lang` attribute takes an ISO language code as its value. Typically this is a two-letter code such as `en` for English, but it can also be an extended code such as `en-gb` for British English.

```html
<html lang="en">
```

### Write semantic markup

Ensure the markup you write is relevant and has meaning in the context of the content it is being applied to. Don't use markup to infer style.

```html
<!-- Bad -->
<div class="mainHeading">My Blog</div>
<br /><br />
...content...
<br /><br />

<!-- Bad -->
<h1>I want to draw attention to this as I am important!</h1>
<h1>And so am I!</h1>

<!-- Good -->
<h1>My Blog</h1>
<p>...content...</p>
```
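As an illustrative aside (these class names are my own, not taken from the guidelines), semantic, class-based markup combined with the BEM "block, element, modifier" naming and `is-`/`has-` state rules recommended in the styling guidelines might look like this:

```html
<!-- .site-nav is the block; __list and __item are elements;
     --condensed is a modifier; .is-active is a state class
     that JavaScript can toggle instead of applying styles -->
<nav class="site-nav site-nav--condensed">
  <ul class="site-nav__list">
    <li class="site-nav__item is-active"><a href="/">Home</a></li>
    <li class="site-nav__item"><a href="/blog">Blog</a></li>
  </ul>
</nav>
```

Because every hook is a class, the CSS stays decoupled from the element types and document structure, in line with the depth-of-applicability guidance above.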
### Input labels

Use `label` tags for `input` fields, so that the input element acquires focus when the label is clicked. When using labels, try to use the wrapping pattern rather than the `for` attribute, so we can avoid using IDs which may interfere with integration with backend systems:

```html
<!-- Preferred -->
<label>Address <input type="text" name="address" /></label>

<!-- Avoid -->
<label for="address">Address</label>
<input type="text" id="address" name="name" />
```

### Boolean attributes

Use the single-word syntax for boolean attribute values, to aid readability, reduce clutter and prevent unnecessary bytes going down the wire. The presence of the attribute itself implies that the value is "true"; an absence implies a value of "false":

```html
<!-- Old hat -->
<option selected="selected" value="1">...</option>

<!-- Cutting edge -->
<option selected value="1">...</option>
```

## Behaviour (JS)

### General guidelines

- we use React.js (jQuery has been superseded); we also prefer to make universal ("isomorphic") React applications
- use ES6; Babel will transpile it
- use soft tabs with a two-space indent
- never use `eval`
- all projects will contain a `.jshintrc` file in the root; this will define the expected coding standards for the project, enforced by JSHint. We are tending towards using the Airbnb preconfigured linting (`npm install --save-dev eslint-config-airbnb`) and extending your `.eslintrc` file:

```json
{
  "parser": "babel-eslint",
  "env": {
    "browser": true,
    "node": true
  },
  "extends": "airbnb",
  "rules": {
    "indent": [2, "tab"]
  }
}
```

- don't use CoffeeScript: it's an abstraction too far. JavaScript is far from a perfect language, but learn how to deal with the issues of JavaScript using recognised and accepted patterns, and adhere to best-practice guidelines
- TypeScript? Totally open to being persuaded; lots of people are drinking the Kool-Aid. Present a business case.
- avoid applying styles with JavaScript; preferably add or remove classes. Keep style in CSS to make it easier to maintain and debug. Use `is-` or `has-` state rules to apply state to elements, for example `class="is-visually-hidden"`, `class="has-icon"`:

```html
<!-- No good -->
<span style="display: none;" name="address-1-icon">label text</span>

<!-- All good -->
<span class="is-visually-hidden has-icon">label text</span>
```

- opt in to using a restricted variant of JavaScript, using `'use strict'`, so that errors that should be thrown are
- avoid inline JavaScript in HTML markup
- don't recreate functionality that may already be present in a utility library that is already in use
- send data down the wire and theme on the client, rather than sending HTML, when making AJAX requests
- always use braces in blocks, to aid readability:

```javascript
// Good
while (true) {
  shuffle()
}

// Not good
while (true) shuffle()

// Also not good
while (true)
  shuffle()
```

- use the strict equality (`===`) comparison operator, to avoid having to deal with type-coercion complications
- use quotation marks consistently, e.g. only use single or double quotes throughout your code; don't mix, unless there is a valid reason not to. Prefer single quotes, such that HTML contained therein does not need to be escaped. For example:

```javascript
// Good
const foo = 'just a normal string'

// Good
const foo = '<a href="/bar">nice clean html string</a>'

// Not good
const foo = "<a href=\"/bar\">html string with escaped double quotes</a>"
```

- use `event.preventDefault()` instead of `return false` to prevent default event actions. Returning a boolean value is semantically incorrect when considering that the context is an event:

```javascript
// Returning false doesn't make sense in this context
htmlElement.addEventListener('click', function (e) {
  // do stuff
  return false
})

// Use preventDefault
htmlElement.addEventListener('click', function (e) {
  e.preventDefault()
  // do stuff
})
```

- don't use `var` any more; we have `const` and `let`, which Babel will compile as necessary
- favour `const` over `let` in ES6. In JavaScript, `const` means that the identifier can't be reassigned; `let` is a signal that the variable may be reassigned, and it also signals that the variable will be used only in the block it's defined in:

```javascript
// Single let pattern
let myLet1 = 1,
    anotherLet2 = 'test',
    andAnotherLet = true

// Is this cool?
let myLet1 = 1,
    anotherLet2 = 'test'  // oops, automatic semicolon insertion just popped a ; on this
    andAnotherLet = true  // ...is now global!

// Better?
let a = 1, b = 2, sum = a + b, myObject = {}, i, j

// Good old dependable multiple let pattern
let myLet = 1
let anotherLet2 = 'test'
let andAnotherLet = true
```

- don't use semicolons; JavaScript doesn't require them
- avoid excessive function arguments. If a function requires more than three arguments, consider refactoring to use a configuration object:

```javascript
// Bad
let myFunction1 = function (arg1, arg2, arg3, arg4) {}

myFunction1('firstArgument', argument2, true, 'my third argument')

// Good
let myFunction2 = function (config) {}

myFunction2({
  arg1: 'firstArgument',
  arg2: argument2,
  arg3: true,
  arg4: 'my third argument'
})
```

  Configuration objects:

  - allow for optional parameters
  - are easier to read
  - are easier to maintain

### Naming conventions

- use descriptive yet concise variable names for long-lived variables. Although we can infer that `_oA` is a private object, we know nothing about what value it will contain
- avoid using what might be considered reserved variable names out of context. For example, `e` in JavaScript is considered an event object; don't use it for anything else
- for temporary variables, i.e. those which are used to store short-lived values (e.g. variables used for iteration), it is OK to use non-descriptive names, e.g. `i`, `j`, `k`

### Commenting

Comment your code. There are two reasons why you should comment. Inline comments explain design or architectural decisions that cannot be conveyed in code alone. These are mainly for the benefit of other developers modifying, extending or debugging the code, but also for you: you may return to the code a week later and wonder how it works.

```javascript
/**
 * I am a nicely formatted comment
 */

// I am a nicely formatted brief one-line comment

for (let i = 0; i < 10; i++) {} // uh oh, don't put comments at the end of lines

/* uh oh, don't make up your own comment format */

// this is a long comment that shouldn't really be using the one-line comment syntax
```

For API documentation, i.e. for users who want to use your libraries, plugins etc. without reading the code, use JSDoc standards to comment files and functions:

```javascript
/**
 * @file A jQuery UI widget to build product swatches
 * @author Joel Mitchell
 * @name $.cx.productSwatch
 * @dependencies: jQuery, jQuery UI (widget factory), mockjax (for testing)
 */

/**
 * Event handler to close the menu
 * @param {event} e jQuery event object
 */
var myFunction = function (e) {}
```

### Break code into separate lines

Where applicable, ensure code is written on separate lines, to aid Git diffs and error reporting:

```javascript
// Good
page.setViewport({
  width: 1280,
  height: 1024
})

// Not so good
page.setViewport({ width: 1280, height: 1024 })
```
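To tie a few of these JavaScript guidelines together (`const` over `let`, a destructured configuration-style object instead of a long argument list, JSDoc comments, no semicolons), here is a small sketch; the function and class names are hypothetical, not part of the guidelines:

```javascript
/**
 * Builds a class attribute value from a BEM block name plus
 * optional modifiers and is-/has- state classes.
 * @param {Object} config - configuration object
 * @param {string} config.block - BEM block name
 * @param {string[]} [config.modifiers] - BEM modifier names
 * @param {string[]} [config.states] - state classes, e.g. 'is-open'
 * @returns {string} space-separated class attribute value
 */
const buildClassName = ({ block, modifiers = [], states = [] }) => {
  // each modifier becomes block--modifier
  const modifierClasses = modifiers.map((modifier) => `${block}--${modifier}`)
  return [block, ...modifierClasses, ...states].join(' ')
}

console.log(buildClassName({ block: 'dropdown-menu', modifiers: ['wide'], states: ['is-open'] }))
// → 'dropdown-menu dropdown-menu--wide is-open'
```

Because all options travel in one object, callers can omit any of the optional parameters without positional `undefined` placeholders.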
front_end
Iris-Recognition-Registration-Database-System
# Iris Recognition Registration Database System

An iris recognition/registration system that uses computer vision to capture an individual's iris and store their information in a database for biometric authentication. The dlib library is used to detect and recognise the face structure and pinpoint the eyes, which are then captured through OpenCV's video-capture functions. The captured iris is stored as an image, along with input from the user. During verification, the user enters their associated ID; the iris is captured through the video feed and sent to a pattern/feature matching system set up using ORB's detector. The detector pattern-matches the user's live iris from the feed against the stored image of the user's iris. From the keypoints, the program checks for a match using a matching-ratio value assigned by the program.

<img src="screenshots/screen1.png" height="400">

---

**Python editor used:** IDLE

**Platform tested:** Windows

**Equipment:**

- laptop/desktop running the program
- high-definition camera, or a mobile phone device running its camera through an RTSP server

---

## Installation: setting up Python 3.x with OpenCV and external libraries

1. Download Python 3 (in this case it was Python 3.6.4 amd64 for Windows): https://www.python.org/downloads/release/python-364/
2. To install Python libraries on Windows:
   - open up the installation directory and run `cmd` on that directory
   - `cd /d <Python3>\Scripts`
   - `pip install <libraryname>`
3. Install the following libraries:
   - OpenCV: `pip3 install opencv-python`
   - NumPy: `pip install numpy`
   - Pillow: `pip install pillow`
   - PyQt5: `pip install pyqt5`
   - CMake: `pip install cmake` (required to install dlib)
   - dlib: `pip install dlib`
   - datetime: `pip install datetime`
   - ~~SIFT: `pip install opencv-contrib-python`~~ **Removed:** the library is patented, so it's no longer used for iris matching, but opencv-contrib-python has other useful libraries, so it's worth installing.

To generate the GUI files as Python, navigate to the folder of the GUI file, open `cmd`, and type the following:

```
pyuic5 -x guiirrdn.ui -o guiirrd.py
pyuic5 -x guiipopup.ui -o guiipopup.py
```

---

### Registering the iris

Once the Register button is pressed, the program will halt and open a prompt asking for user input. The input is stored along with the image. The captured iris portfolios are stored under the database folder.

<img src="screenshots/screen2.png" height="400"> <img src="screenshots/screen3.png" height="400">

---

### Viewing registered users in the database

All users registered with the system can be viewed using the "Show Registered Users" button. This will open up a table of all registered users, with each cell corresponding to the data entered by the individual users. The iris registered by each user can also be viewed by pressing the "Info" button configured in the cell of the table.

<img src="screenshots/screen4.png" height="400"> <img src="screenshots/screen5.png" height="400">

---

### User verification with associated ID

User verification is done by first scanning the user's facial features using dlib's landmark detector and pinpointing where the eyes are located. The camera then captures the eyes and hones in on the iris. When verifying, a prompt asking for the user ID will show. Once the user's ID is entered, the captured iris is scanned and pattern-matched with the registered iris that has the same ID.

<img src="screenshots/screen7.png" height="400"> <img src="screenshots/screen8.png" height="400"> <img src="screenshots/screen9.png" height="400">

The verification process involves using ORB's image-matching library. The image of the user's iris captured live during the verification screening is fed through the method `iris match res`, along with the registered captured iris matching the ID the user entered. If there is a match using ORB's brute-force matching, then the entry for the registered user and their information will be shown in a dialog. The match-rate conditional statement will have to be adjusted based on what video-capture system is used: for stronger cameras, a match rate greater than 50 will work; for weaker camera devices, such as a wireless webcam, a lower value such as 35 should be considered.

<img src="screenshots/screen10.png" height="400"> <img src="screenshots/screen11.png" height="400">

The main components have been broken up and placed into the `individualprogs` folder, to show the basic form of the core features that make up this program.

---

A couple of things to note:

- the camera must take high-definition photos for the ORB detector to find keypoints for iris matching. If the camera image isn't high enough quality, the message "image quality is low definition, unable to verify, please use a stronger camera" will show up in the console when doing user verification
- ID association is done in this project as it saves on computational power. An ID number during verification is not strictly needed for matching: the image can be iteratively compared with all eyes in the filesystem until a match is found using ORB detection, if the code is modified
- an ID associated with the captured iris is beneficial with a database comprising over 1,000 registered people, which is why that route was taken for this project. The code can be easily modified to ignore ID input and just pattern-match against all registered users' irises
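As a rough sketch of how the ORB-based verification step described above might look (function names, the descriptor-distance threshold, and the structure are illustrative, not taken from the project's source; the ratio values mirror the README's 50-vs-35 advice as fractions):

```python
def match_decision(distances, match_ratio=0.5, max_distance=64):
    """Pure decision step: does the fraction of 'close' ORB matches
    (Hamming distance below max_distance) exceed the match ratio?"""
    if not distances:
        return False
    good = [d for d in distances if d < max_distance]
    return len(good) / len(distances) > match_ratio

def iris_match(live_img_path, stored_img_path, match_ratio=0.5):
    """Brute-force-match ORB descriptors of a live iris capture
    against the stored iris image registered under the user's ID."""
    import cv2  # imported lazily so match_decision is usable without OpenCV

    live = cv2.imread(live_img_path, cv2.IMREAD_GRAYSCALE)
    stored = cv2.imread(stored_img_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()
    _, des_live = orb.detectAndCompute(live, None)
    _, des_stored = orb.detectAndCompute(stored, None)
    if des_live is None or des_stored is None:
        # mirrors the project's "image quality is low definition" case
        return False

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    distances = [m.distance for m in matcher.match(des_live, des_stored)]
    return match_decision(distances, match_ratio)
```

For a weaker camera such as a wireless webcam, `match_ratio` would be lowered (e.g. to 0.35), in line with the note above.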
server
Sub-IoT-Stack
[![Gitter](https://badges.gitter.im/Sub-IoT/community.svg)](https://gitter.im/Sub-IoT/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)

## Welcome

Sub-IoT houses an implementation of the [DASH7 Alliance](http://www.dash7-alliance.org/) protocol, which can optionally be combined with a 3rd-party implementation ([LoRaMac-node](https://github.com/Lora-net/LoRaMac-node)) of the [LoRaWAN](https://lora-alliance.org/) specification, for ultra-low-power communication.

The aim of the project is to provide an implementation of the sub-GHz stacks which allows for fast development and prototyping. This implementation focusses on completeness, correctness, and ease of use and understanding; performance and code size are less important aspects. For clarity, a clear separation between the ISO layers is maintained in the code.

For more information, visit the [Sub-IoT site](https://sub-iot.github.io/Sub-IoT-Stack/).

## Getting Started

Please refer to the [documentation section](https://sub-iot.github.io/Sub-IoT-Stack/docs/home/) on our site.

## Related repositories

- [pyd7a](https://github.com/Sub-IoT/pyd7a) provides a collection of Python modules supporting the DASH7 Alliance protocol in general, and Sub-IoT in particular.

## Community

The developers can be reached on the [dash7-ap-oss Google group](https://groups.google.com/forum/#!forum/dash7-ap-oss).

## License

Licensed under the Apache License, Version 2.0 (the "License"): you may not use these files except in compliance with the License. You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

© Copyright 2015-2021 University of Antwerp, Aloxy NV and others.
dash7 iot embedded communication-stack firmware
server
dive-into-machine-learning
> [supportukrainenow.org](https://supportukrainenow.org/): real ways you can help Ukraine

## Initiatives: before we dive in

Before we "dive into machine learning," here are some notable projects and initiatives that might interest you as well, related to machine learning:

- [AlgorithmWatch](https://algorithmwatch.org/en/) ([newsletter](https://algorithmwatch.org/en/newsletter/)): a non-profit research and advocacy organization that is committed to watch, unpack and analyze automated decision-making (ADM) systems and their impact on society
- [daviddao/awful-ai](https://github.com/daviddao/awful-ai): a curated list to track current scary usages of AI, hoping to raise awareness
- [humanetech-community/awesome-humane-tech](https://github.com/humanetech-community/awesome-humane-tech): promoting solutions that improve wellbeing, freedom and society

Code against climate change:

- [projectdrawdown/solutions](https://github.com/projectdrawdown/solutions): [Project Drawdown](https://www.drawdown.org/) entered the climate conversation with the publication of the 2017 book; with The Drawdown Review in 2020, the project continues its mission to inspire and communicate solutions (Python and Jupyter notebooks)
- [philsturgeon/awesome-earth](https://github.com/philsturgeon/awesome-earth)
- [daviddao/code-against-climate-change](https://github.com/daviddao/code-against-climate-change)
- [protontypes/open-sustainable-technology](https://github.com/protontypes/open-sustainable-technology)

![Dive into Machine Learning](dive-into-machine-learning.banner.png)

# Dive into Machine Learning

Hi there! You might find this resource helpful if:

- you know Python, or [you're](https://github.com/alexmojaki/futurecoder) [learning](https://nbviewer.org/github/jakevdp/WhirlwindTourOfPython/blob/master/Index.ipynb) [it](https://github.com/vinta/awesome-python) ([🐍](https://github.com/ossu/computer-science#introduction-to-programming))
- you're new to [machine learning](https://en.wikipedia.org/wiki/Machine_learning)
- you care about [the ethics of ML](https://github.com/EthicalML/awesome-artificial-intelligence-guidelines): see [The 8 Responsible Machine Learning Principles](https://ethical.institute/principles.html) and the [Open Ethics Canvas](https://openethics.ai/canvas/)
- you learn by doing

For some great alternatives, jump to the end ("More ways to dive into machine learning") or check out Nam Vu's guide, [Machine Learning for Software Engineers](https://github.com/ZuzooVn/machine-learning-for-software-engineers).

Of course, there is no easy path to expertise. Also, I'm not an expert! I just want to connect you with some great resources from experts. Applications of ML are all around us; I think it's in the public interest for more people to learn more about ML, especially hands-on, because there are many different ways to learn. Whatever motivates you to dive into machine learning: if you know a bit of Python, these days you can get hands-on with a machine learning "Hello World!" in minutes.

## Tools you'll need

### If you prefer local installation

- [Python](https://www.python.org/). Python 3 is the best option.
- [Jupyter Notebook](https://jupyter.org/) (formerly known as IPython Notebook)
- Some scientific computing packages: numpy, pandas, scikit-learn, matplotlib

You can install Python 3 and all of these packages in a few clicks with the [Anaconda Python distribution](https://www.anaconda.com/download/). Anaconda is popular in data science and machine learning communities. Use whichever tool works for you. If you're unsure, or need more context about using conda/virtualenv/poetry/pipenv, here's a [very helpful guide](https://web.archive.org/web/20211226071314/https://brainsteam.co.uk/2021/04/01/opinionated-guide-to-virtualenvs/).

### Cloud-based options

Some options you can use from your browser:

- [Binder](https://mybinder.org/) is Jupyter Notebook's official choice to [try JupyterLab](https://jupyter.org/try)
- [Deepnote](https://deepnote.com/) allows for real-time collaboration
- [Google Colab](https://colab.research.google.com/) provides free GPUs

For other options, see [markusschanta/awesome-jupyter, "Hosted Notebook Solutions"](https://github.com/markusschanta/awesome-jupyter#hosted-notebook-solutions) and [ml-tooling/best-of-jupyter, "Notebook Environments"](https://github.com/ml-tooling/best-of-jupyter).

## Let's go!

[Learn how to use Jupyter Notebook](http://opentechschool.github.io/python-data-intro/core/notebook.html) (5-10 minutes). (You can [learn by screencast](https://www.youtube.com/watch?v=qb7FT68tcA8) instead.)

Now, follow along with this brief exercise: [An introduction to machine learning with scikit-learn](http://scikit-learn.org/stable/tutorial/basic/tutorial.html). Do it in `ipython` or a Jupyter Notebook, coding along and executing the code in a notebook. I'll wait.

[![Do it](https://user-images.githubusercontent.com/2420688/29441281-00eff0c4-837f-11e7-9666-d653a1cd2372.jpeg)](http://scikit-learn.org/stable/tutorial/basic/tutorial.html)

What just happened? You just classified some hand-written digits using [scikit-learn](http://scikit-learn.org/stable/index.html). Neat, huh?

## Dive in

### A Visual Introduction to Machine Learning

Let's learn a bit more about machine learning, and a couple of common ideas and concerns. Read ["A Visual Introduction to Machine Learning, Part 1"](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) by [Stephanie Yee](https://twitter.com/stephaniejyee) and [Tony Chu](https://twitter.com/tonyhschu).

[![A Visual Introduction to Machine Learning, Part 1](https://user-images.githubusercontent.com/2420688/29441234-a2028c98-837e-11e7-88f2-1ca5a94684f6.gif)](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/)

It won't take long; it's a beautiful introduction. Try not to drool too much!

### "A Few Useful Things to Know about Machine Learning"

OK, let's dive deeper. Read ["A Few Useful Things to Know about Machine Learning"](http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf) by [Prof. Pedro Domingos](https://homes.cs.washington.edu/~pedrod/). It's densely packed with valuable information, but not opaque. Don't worry if you don't understand it all yet; take some time with this one.

Jargon note: ["What is the difference between Data Analytics, Data Analysis, Data Mining, Data Science, Machine Learning, and Big Data?"](http://www.quora.com/What-is-the-difference-between-Data-Analytics-Data-Analysis-Data-Mining-Data-Science-Machine-Learning-and-Big-Data-1) Another handy term: ["data engineering"](https://www.coursera.org/articles/what-does-a-data-engineer-do-and-how-do-i-become-one). ["MLOps"](https://ml-ops.org/) overlaps with data eng, and there's an introductory MLOps section later in this guide ("Production, Deployment, MLOps").

## Explore another notebook

Next, code along with one or more of these notebooks.

- Series of notebooks (2022): [rasbt/machine-learning-book](https://github.com/rasbt/machine-learning-book), notebooks from [Machine Learning with PyTorch and Scikit-Learn by Sebastian Raschka, Yuxi (Hayden) Liu, and Vahid Mirjalili](https://sebastianraschka.com/blog/2022/ml-pytorch-book.html)
- [Dr. Randal Olson's Example Machine Learning notebook](https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projects/blob/master/example-data-science-notebook/Example%20Machine%20Learning%20Notebook.ipynb): "Let's pretend we're working for a startup that just got funded to create a smartphone app that automatically identifies species of flowers from pictures taken on the smartphone. We've been tasked by our head of data science to create a demo machine learning model that takes four measurements from the flowers (sepal length, sepal width, petal length, and petal width) and identifies the species based on those measurements alone."
  - [Launch in Binder, no installation steps required](https://mybinder.org/v2/gh/rhiever/Data-Analysis-and-Machine-Learning-Projects/master?filepath=example-data-science-notebook%2FExample%20Machine%20Learning%20Notebook.ipynb)
- Various topical notebooks: [trekhleb/machine-learning-experiments](https://github.com/trekhleb/machine-learning-experiments), [trekhleb/homemade-machine-learning](https://github.com/trekhleb/homemade-machine-learning)

Find more great Jupyter notebooks when you're ready: Jupyter's official [Gallery of Interesting Jupyter Notebooks: Statistics, Machine Learning and Data Science](https://github.com/jupyter/jupyter/wiki) ([permalink](https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks/ae03c01ed25024aa06a4479ea600895d59b38bc4)).

## Immerse yourself

Pick one of the courses below and start on your way.

### [Prof. Andrew Ng's Machine Learning on Coursera](https://www.coursera.org/learn/machine-learning)

[Prof. Andrew Ng's](https://hai.stanford.edu/people/andrew-ng) [Machine Learning](https://www.coursera.org/learn/machine-learning) is a popular and esteemed free online course. I've seen it [recommended](https://www.quora.com/How-do-I-learn-machine-learning-1/answer/Cory-Hicks-1) [often](https://www.quora.com/How-do-I-learn-machine-learning-1/answer/Xavier-Amatriain), [and emphatically](https://www.forbes.com/sites/anthonykosner/2013/12/29/why-is-machine-learning-cs-229-the-most-popular-course-at-stanford/).

It's recommended to grab a textbook to use as an in-depth reference. The two I saw recommended most often were [Understanding Machine Learning](https://web.archive.org/web/20210717194345/http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/copy.html) and [Elements of Statistical Learning](https://web.stanford.edu/~hastie/Papers/ESLII.pdf). You only need to use one of the two options as your main reference; [here's some context/comparison to help you pick which one is right for you](https://github.com/hangtwenty/dive-into-machine-learning/issues/29).

#### Public datasets and pet projects

You might like to have a pet project to play with, on the side. When you are ready for that, you could explore one of these: [Awesome Public Datasets](https://github.com/caesar0301/awesome-public-datasets), [paperswithcode.com/datasets](https://paperswithcode.com/datasets), [datasetlist.com](https://www.datasetlist.com/), [kkulma/climate-change-data](https://github.com/kkulma/climate-change-data).

#### Tips for this course

- [Study tips for Prof. Andrew Ng's course, by Ray Li](https://rayli.net/blog/data/coursera-machine-learning-review/)
- If you're wondering "is it still a relevant course?" or trying to figure out if it fits for you personally, check out these reviews:
  - [Review: Andrew Ng's Machine Learning Course](https://towardsdatascience.com/review-andrew-ngs-machine-learning-course-b905aafdb7d9)
  - [The user reviews on Coursera](https://www.coursera.org/learn/machine-learning/reviews)

#### Tips for studying on a busy schedule

It's hard to make time available every week, so you can try to study more effectively within the time you have available. Here are some ways to do that:

- ["Learning How to Learn" by Barbara Oakley](https://www.coursera.org/learn/learning-how-to-learn), a free video course on Coursera
- Prefer a book or audiobook? These are great options:
  - Barbara Oakley's book [A Mind for Numbers: How to Excel at Math and Science](https://barbaraoakley.com/books/a-mind-for-numbers/) ([reviews](https://www.goodreads.com/book/show/18693655-a-mind-for-numbers)): "We all have what it takes to excel in areas that don't seem to come naturally to us at first."
  - [Make It Stick: The Science of Successful Learning](https://www.retrievalpractice.org/make-it-stick) ([reviews](https://www.goodreads.com/book/show/18770267-make-it-stick))

Take my tips with a grain of salt: I am not a machine learning expert, I'm just a software developer, and these resources/tips were useful to me as I learned some ML on the side.

### Other courses

- Data science courses as Jupyter notebooks:
  - [Practical Data Science](http://radimrehurek.com/data_science_python/)
  - [Python Data Science Handbook, as Jupyter notebooks](https://jakevdp.github.io/PythonDataScienceHandbook/)
- Microsoft's [Data Science for Beginners](https://github.com/microsoft/Data-Science-For-Beginners) ([added in 2021](https://dev.to/azure/free-data-science-for-beginners-curriculum-on-github-1hme)): "10-week, 20-lesson curriculum all about data science. Each lesson includes pre-lesson and post-lesson quizzes, written instructions to complete the lesson, a solution, and an assignment. Our project-based pedagogy allows you to learn while building, a proven way for new skills to 'stick'." See also microsoft/ML-For-Beginners: https://github.com/microsoft
ml for beginners details summary more free online courses i ve seen recommended machine learning data science and related topics summary coursera s data science specialization https www coursera org specializations jhu data science prof pedro domingos s introductory video series https www youtube com playlist list pltpqex 31jxgtdac6 3hxwcp7fq4n8ygr prof pedro domingos https homes cs washington edu pedrod wrote the paper a few useful things to know about machine learning https homes cs washington edu pedrod papers cacm12 pdf which you may remember from earlier in the guide ossu data science https github com ossu data science see also ossu computer science https github com ossu computer science stanford cs229 machine learning https github com afshinea stanford cs 229 machine learning harvard cs109 data science https cs109 github io 2015 advanced statistical computing vanderbilt bios8366 http stronginference com bios8366 lectures html interactive kevin markham s video series intro to machine learning with scikit learn http blog kaggle com 2015 04 08 new video series introduction to machine learning with scikit learn starts with what we ve already covered then continues on at a comfortable place uc berkeley s data 8 the foundations of data science http data8 org course and the textbook computational and inferential thinking https www inferentialthinking com teaches critical concepts in data science prof mark a girolami s machine learning module github mirror https github com josephmisiti machine learning module good for people with a strong mathematics background an epic quora thread how can i become a data scientist https www quora com how can i become a data scientist redirected qid 59455 ujjwalkarn machine learning tutorials https github com ujjwalkarn machine learning tutorials there are more alternatives linked at the bottom of this guide more ways to dive into machine learning details getting help questions answers chats start with the support forums and chats 
related to the course s you re taking check out datascience stackexchange com https datascience stackexchange com and stats stackexchange com such as the tag machine learning https stats stackexchange com questions tagged machine learning there are some subreddits like r learningmachinelearning https www reddit com r learningmachinelearning and r machinelearning https www reddit com r machinelearning don t forget about meetups also look for chat invitations on project pages and so on some communities to know about r learnmachinelearning https www reddit com r learnmachinelearning r machinelearning https reddit com r machinelearning r dataisbeautiful https reddit com r dataisbeautiful r datascience https reddit com r datascience cross validated stats stackexchange com https stats stackexchange com ossu data science has a discord server and newsletter https github com ossu data science text discord 20server supplement learning pandas well details summary you ll want to get more familiar with pandas summary essential things in pandas i wish i d had known earlier http nbviewer jupyter org github rasbt python reference blob master tutorials things in pandas ipynb as a jupyter notebook essential 10 minutes to pandas http pandas pydata org pandas docs stable 10min html another helpful tutorial real world data cleanup with python and pandas https trendct org 2016 08 05 real world data cleanup with python and pandas video series from data school about pandas https www youtube com playlist list pl5 da3qgb5iccsgw1mxlz0hq8ll5u3u9y reference guide to 30 common pandas tasks plus 6 hours of supporting video here are some docs i found especially helpful as i continued learning cookbook http pandas pydata org pandas docs stable cookbook html data structures http pandas pydata org pandas docs stable dsintro html esp dataframe http pandas pydata org pandas docs stable dsintro html dataframe section reshaping by pivoting dataframes https pandas pydata org pandas docs stable user guide 
reshaping html computational tools http pandas pydata org pandas docs stable computation html and stackexchange thread what is covariance in plain language https stats stackexchange com questions 29713 what is covariance in plain language group by split apply and combine dataframes http pandas pydata org pandas docs stable groupby html visualizing your dataframes https pandas pydata org pandas docs stable user guide visualization html bookmarks for scaling pandas and alternatives dask https dask org a pandas like interface but for larger than memory data and under the hood parallelism vaex https vaex io out of core hybrid apache arrow numpy dataframe for python ml visualize and explore big tabular data at a billion rows per second details supplement troubleshooting these debugging tools can be used inside or outside a jupyter notebook birdseye https birdseye readthedocs io en latest integrations html jupyter ipython notebooks snoop https github com alexmojaki snoop pandas log https github com eyaltrabelsi pandas log git there are many more tools than that but those might get you started or might be especially useful while you re learning beyond learning troubleshooting is more than just logs or debuggers of course there s also some mlops links later in this guide production deployment mlops assorted tips and resources risks some starting points machine learning systems automatically learn programs from data pedro domingos in a few useful things to know about machine learning http homes cs washington edu pedrod papers cacm12 pdf the programs you generate will require maintenance like any way of creating programs faster you can rack up technical debt https en wikipedia org wiki technical debt here is the abstract of machine learning the high interest credit card of technical debt https research google pubs pub43146 machine learning offers a fantastically powerful toolkit for building complex systems quickly this paper argues that it is dangerous to think of these 
quick wins as coming for free using the framework of technical debt we note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning the goal of this paper is to highlight several machine learning specific risk factors and design patterns to be avoided or refactored where possible these include boundary erosion entanglement hidden feedback loops undeclared consumers data dependencies changes in the external world and a variety of system level anti patterns if you re reading this guide you should read that paper you can also listen to a podcast episode interviewing one of the authors of this paper https softwareengineeringdaily com 2015 11 17 machine learning and technical debt with d sculley awesome production machine learning https github com ethicalml awesome production machine learning a curated list of awesome open source libraries to deploy monitor version and scale your machine learning it includes a section about privacy preserving ml https github com ethicalml awesome production machine learning privacy preserving machine learning by the way rules of machine learning best practices for reliable ml engineering http martin zinkevich org rules of ml rules of ml pdf by martin zinkevich regarding ml engineering practices the high cost of maintaining machine learning systems http www kdnuggets com 2015 01 high cost machine learning technical debt html overfitting vs underfitting a conceptual explanation https towardsdatascience com overfitting vs underfitting a conceptual explanation d94ee20ca7f9 11 clever methods of overfitting and how to avoid them http hunch net p 22 so you want to build an ethical algorithm an interactive tool to prompt discussions https cdt info ddtool source https github com numfocus algorithm ethics that s not a comprehensive list of course they are just some gateways and starting points know some other resources please share them pull requests are welcome peer review openreview
net https openreview net about aims to promote openness in scientific communication particularly the peer review process details summary em more about openreview net em summary open peer review we provide a configurable platform for peer review that generalizes over many subtle gradations of openness allowing conference organizers journals and other reviewing entities to configure the specific policy of their choice we intend to act as a testbed for different policies to help scientific communities experiment with open scholarship while addressing legitimate concerns regarding confidentiality attribution and bias open publishing track submissions coordinate the efforts of editors reviewers and authors and host sharded and distributed for speed and reliability open access free access to papers for all free paper submissions no fees open discussion hosting of accepted papers with their reviews comments continued discussion forum associated with the paper post acceptance publication venue chairs editors can control structure of review comment forms read write access and its timing open directory collection of people with conflict of interest information including institutions and relations such as co authors co pis co workers advisors advisees and family connections open recommendations models of scientific topics and expertise directory of people includes scientific expertise reviewer paper matching for conferences with thousands of submissions incorporating expertise bidding constraints and reviewer balancing of various sorts paper recommendation to users open api we provide a simple rest api open source we are committed to open source many parts of openreview are already in the openreview organization on github https github com openreview some further releases are pending a professional security review of the codebase openreview net https openreview net is created by andrew mccallum s information extraction and synthesis laboratory in the college of information and 
computer sciences at university of massachusetts amherst openreview net https openreview net is built over an earlier version described in the paper open scholarship and peer review a time for experimentation https openreview net forum id xf0zsbd2iufmg published in the icml 2013 peer review workshop https openreview net group id icml cc 2013 peerreview openreview is a long term project to advance science through improved peer review with legal nonprofit status through code for science society we gratefully acknowledge the support of the great diversity of openreview sponsors https openreview net sponsors scientific peer review is sacrosanct and should not be owned by any one sponsor details production deployment mlops https ml ops org if you are learning about mlops but find it overwhelming these resources might help you get your bearings mlops stack template https valohai com blog the mlops stack by henrik skogstr m lessons on ml platforms from netflix doordash spotify and more https towardsdatascience com lessons on ml platforms from netflix doordash spotify and more f455400115c7 by ernest chan in towards data science mlops stack canvas https ml ops org content mlops stack canvas at ml ops org https ml ops org recommended awesomelists to save star watch ethicalml awesome artificial intelligence guidelines https github com ethicalml awesome artificial intelligence guidelines ethicalml awesome production machine learning https github com ethicalml awesome production machine learning privacy preserving machine learning visenger awesome ml model governance https github com visenger awesome ml model governance visenger awesome mlops https github com visenger awesome mlops eugeneyan applied ml https github com eugeneyan applied ml easier sharing of deep learning models and demos replicate https replicate com makes it easy to share a running machine learning model easily try out deep learning models from your browser the demos link to papers code on github if you want 
to dig in and see how something works the models run in containers built by cog https github com replicate cog containers for machine learning it s an open source tool for putting models into reproducible docker containers you can put models in containers with just python and yaml there s an api for replicate to run predictions for you deep learning take note some experts warn us not to get too far ahead of ourselves and encourage learning ml fundamentals before moving onto deep learning that s paraphrasing from some of the linked coursework in this guide for example prof andrew ng encourages building foundations in ml before studying dl perhaps you re ready for that now or perhaps you d like to get started soon and learn some dl in parallel to your other ml learnings when you re ready to dive into deep learning here are some helpful resources dive into deep learning https d2l ai an interactive book about deep learning view on github https github com d2l ai d2l en quickstart run this book locally using jupyter notebooks https d2l ai chapter installation index html run this book in your browser using google colab https d2l ai chapter appendix tools for deep learning colab html the entire book is drafted in jupyter notebooks seamlessly integrating exposition figures math and interactive examples with self contained code you can modify the code and tune hyperparameters to get instant feedback to accumulate practical experiences in deep learning details summary more deep learning links summary prof andrew ng s https scholar google com citations user mg4immeaaaaj hl en courses on deep learning https www coursera org specializations deep learning there are five courses as part of the deep learning specialization on coursera https www coursera org specializations deep learning these courses are part of his new venture deeplearning ai https www deeplearning ai some course notes about it ashishpatel26 andrew ng notes https github com ashishpatel26 andrew ng notes deep learning
https www deeplearningbook org a free book published mit press by ian goodfellow yoshua bengio and aaron courville a notable testimonial for it is here what are the best ways to pick up deep learning skills as an engineer https www quora com what are the best ways to pick up deep learning skills as an engineer fastai fastbook https github com fastai fastbook by jeremy howard and sylvain gugger an introduction to deep learning fastai and pytorch explosion thinc https github com explosion thinc is an interesting library that wraps pytorch tensorflow and mxnet models concise functional programming approach to model definition using composition rather than inheritance integrated config system to describe trees of objects and hyperparameters paperswithcode com https paperswithcode com the mission of papers with code is to create a free and open resource with machine learning papers code datasets methods and evaluation tables labmlai annotated deep learning paper implementations https github com labmlai annotated deep learning paper implementations implementations tutorials of deep learning papers with side by side notes 50 of them really nicely annotated and explained distill pub https distill pub about publishes explorable explanations definitely worth exploring and following details collaborate with domain experts machine learning can be powerful but it is not magic whenever you apply machine learning to solve a problem you are going to be working in some specific problem domain to get good results you or your team will need substantive expertise domain knowledge learn what you can for yourself but you should also collaborate with experts you ll have better results if you collaborate with subject matter experts and domain experts https en wikipedia org wiki subject matter expert domain expert software machine learning and user experience ux i couldn t say it better machine learning won t figure out what problems to solve if you aren t aligned with a human need you re 
just going to build a very powerful system to address a very small or perhaps nonexistent problem that quote is from the ux of ai by josh lovejoy https design google library ux ai in other words you are not the user https www nngroup com articles false consensus suggested reading martin zinkevich s rules of ml engineering rule 23 you are not a typical end user https developers google com machine learning guides rules of ml human analysis of the system skilling up what are some ways one can practice details summary strong one way strong competitions and challenges summary you need practice on hacker news user olympus commented to say you could use competitions to practice and evaluate yourself https news ycombinator com item id 10508565 kaggle https www kaggle com competitions and chalearn http www chalearn org are hubs for machine learning competitions you can find more competitions here https github com paperswithcode releasing research code results leaderboards or here https towardsdatascience com 12 data science ai competitions to advance your skills in 2021 32e3fcb95d8c you also need understanding you should review what kaggle competition winners say about their solutions for example the no free hunch blog http blog kaggle com these might be over your head at first but once you re starting to understand and appreciate these you know you re getting somewhere competitions and challenges are just one way to practice machine learning isn t just about kaggle competitions https jvns ca blog 2014 06 19 machine learning isnt kaggle competitions details details summary strong another way strong try doing some practice studies summary here s a complementary way to practice do practice studies 1 ask a question start exploring some data the most important thing in data science is the question https github com datasciencespecialization courses blob master 01 datascientisttoolbox 03 02 whatisdata index rmd the data is the second most important thing dr jeff t leek https 
github com jtleek so start with a question then find real data https github com caesar0301 awesome public datasets analyze it then 2 communicate results when you think you have a novel finding ask for review when you re still learning ask in informal communities some are linked below some communities to know about 3 learn from feedback consider learning in public https www swyx io learn in public it works great for some folks don t pressure yourself yet though everybody is different and it s good to know your learning style how can you come up with interesting questions here s one way pick a day each week to look for public datasets https github com caesar0301 awesome public datasets and write down some questions that come to mind also sign up for data is plural https tinyletter com data is plural a newsletter of interesting datasets when a question inspires you try exploring it with the skills you re learning this advice to do practice studies and learn from review is based on a conversation https github com hangtwenty dive into machine learning issues 11 issuecomment 153934120 with dr randal s olson http www randalolson com here s more advice from olson quoted with permission https github com hangtwenty dive into machine learning issues 11 issuecomment 154135498 i think the best advice is to tell people to always present their methods clearly and to avoid over interpreting their results part of being an expert is knowing that there s rarely a clear answer especially when you re working with real data as you repeat this process your practice studies will become more scientific interesting and focused also here s a video about the scientific method in data science https 101 datascience community 2012 06 27 the data scientific method details details summary more machine learning career related links summary advice on building a machine learning career and reading research papers by prof andrew ng https www kdnuggets com 2019 09 advice building machine learning 
career research papers andrew ng html some links for finding following interesting papers code papers with code https paperswithcode com is a popular site to follow and it can lead you to other resources github com paperswithcode https github com paperswithcode mit papers code https mitibmwatsonailab mit edu research papers code peer review is the lifeblood of scientific validation and a guardrail against runaway hype in ai our commitment to publishing in the top venues reflects our grounding in what is real reproducible and truly innovative papers labml ai papers weekly https papers labml ai papers weekly monthly https papers labml ai papers monthly pull requests welcome details more data science materials here are some additional data science resources python data science handbook as jupyter notebooks https jakevdp github io pythondatasciencehandbook r0f1 datascience https github com r0f1 datascience a curated list of awesome resources for practicing data science using python including not only libraries but also links to tutorials code snippets blog posts and talks aside bayesian statistics and machine learning from the bayesian machine learning overview on metacademy https metacademy org roadmaps rgrosse bayesian machine learning bayesian ideas have had a big impact in machine learning in the past 20 years or so because of the flexibility they provide in building structured models of real world phenomena algorithmic advances and increasing computational resources have made it possible to fit rich highly structured models which were previously considered intractable details summary here are some awesome resources for learning bayesian methods summary the free book probabilistic programming and bayesian methods for hackers http camdavidsonpilon github io probabilistic programming and bayesian methods for hackers made with a computation understanding first mathematics second point of view uses pymc https github com pymc devs pymc it s available in print too like 
learning by playing me too try 19 questions https github com fulldecent 19 questions a machine learning game which asks you questions and guesses an object you are thinking about and explains which bayesian statistics techniques it s using time series forecasting with bayesian modeling by michael grogan https www manning com liveprojectseries time series forecasting with bayesian modeling a 5 project series paid but the first project is free bayesian modelling in python https github com markdregan bayesian modelling in python uses pymc https github com pymc devs pymc as well details back to top dive into machine learning more ways to dive into machine learning here are some other guides to learning machine learning machine learning for software engineers by nam vu https github com zuzoovn machine learning for software engineers in their words it s a top down and results first approach designed for software engineers definitely bookmark and use it it can answer many questions and connect you with great resources ujjwalkarn machine learning tutorials https github com ujjwalkarn machine learning tutorials josephmisiti awesome machine learning https github com josephmisiti awesome machine learning courses by cloud vendors these are usually high quality content but steer you heavily to use vendor specific tools services to avoid getting locked into vendor specifics you can make sure you re learning from other resources as well microsoft ml for beginners https github com microsoft ml for beginners microsoft data science for beginners https github com microsoft data science for beginners machine learning crash course from google https developers google com machine learning crash course more of their options https cloud google com training machinelearning ai amazon aws https aws amazon com machine learning mlu more of their options https aws amazon com machine learning learn 2022 machine learning with pytorch and scikit learn by sebastian raschka yuxi hayden liu and vahid 
mirjalili https github com rasbt machine learning book back to top dive into machine learning
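the guide above centers on scikit-learn notebooks, but the core classification loop it teaches (fit on labeled training examples, predict the label of the closest known point) can be sketched without any dependencies. below is a minimal 1-nearest-neighbor classifier in pure python; the toy petal measurements loosely mimic the iris-flower exercise and are made up for illustration:

```python
import math

def nearest_neighbor(train, labels, query):
    # train: list of feature tuples; labels: parallel list of class labels
    # returns the label of the training point closest to query (euclidean)
    best = min(train, key=lambda point: math.dist(point, query))
    return labels[train.index(best)]

# toy "iris-like" data: (petal length, petal width) -> species
X = [(1.4, 0.2), (1.3, 0.2), (4.7, 1.4), (4.5, 1.5)]
y = ["setosa", "setosa", "versicolor", "versicolor"]

print(nearest_neighbor(X, y, (1.5, 0.3)))   # a short petal -> setosa
print(nearest_neighbor(X, y, (4.6, 1.4)))   # a long petal -> versicolor
```

this is the same interface idea as scikit-learn's fit/predict, just collapsed into one function so the distance-based reasoning is visible.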
ai
lwc-builder
lwc builder vscode extension to build salesforce lightning web component lwc on the ui installation now available on vscode marketplace https marketplace visualstudio com items itemname ninoish lwc builder to open lwc builder open the command palette and find open lwc builder command right click on the lwc folder of your sfdx project and click open lwc builder screenshot https i imgur com 9nv0kk0 jpg demo video demo video https img youtube com vi prhvttgvmeu 0 jpg https youtu be prhvttgvmeu t 573 how we built lwc builder vscode extension sample screenshot full screenshot https imgur com dnl3w4s jpg license this extension is distributed under the bsd 3 https opensource org licenses bsd 3 clause license contribution anyone is welcome to contribute please follow contributing md https github com developerforce lwc builder blob main contributing md see also lwc builder ui https github com developerforce lwc builder ui lwc builder is a vscode extension that launches lwc builder ui in vscode context
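for context on what a tool like this scaffolds: a lightning web component bundle is a folder holding an html template, a javascript class, and a js-meta.xml descriptor. the sketch below is not the extension's actual code (lwc builder is a typescript vscode extension); it is a hypothetical stdlib-python illustration of generating that three-file bundle, with simplified file contents and an illustrative apiVersion:

```python
from pathlib import Path

def scaffold_lwc(root, name):
    # create the three-file bundle that makes up a minimal lwc component
    bundle = Path(root) / name
    bundle.mkdir(parents=True, exist_ok=True)
    (bundle / f"{name}.html").write_text("<template>\n</template>\n")
    (bundle / f"{name}.js").write_text(
        "import { LightningElement } from 'lwc';\n"
        f"export default class {name[0].upper() + name[1:]} extends LightningElement {{}}\n"
    )
    (bundle / f"{name}.js-meta.xml").write_text(
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata">\n'
        "    <apiVersion>57.0</apiVersion>\n"
        "    <isExposed>false</isExposed>\n"
        "</LightningComponentBundle>\n"
    )
    return sorted(p.name for p in bundle.iterdir())
```

calling scaffold_lwc("force-app/main/default/lwc", "myCard") would produce myCard.html, myCard.js, and myCard.js-meta.xml, which is the file set the extension's ui lets you configure visually.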
vscode vscode-extension lwc lightning-web-components
front_end
dashboard
netbeast dashboard build status https travis ci org netbeast dashboard svg https travis ci org netbeast dashboard windows build status https ci appveyor com api projects status l67h46kbdxtvy43p svg true https ci appveyor com project jsdario dashboard img src https avatars2 githubusercontent com u 7111340 v 3 s 400 height 24px width auto http bit ly 1vyfqdh img src https www iconexperience com img i collection png 512x512 plain graph png height 24px width auto http npm anvaka com view 2d netbeast cli important notice netbeast dashboard project is moving on developing is hard being disruptive is harder industry won t stop trying to impose new protocols and standards indies won t stop creating open source projects that everyone should adopt this repo was our own bet but it is really hard to take off and critical adoption rate is really difficult to achieve so we changed our focus firstly we developed yeti smart home https getyeti co it is a mobile app with effortless installation and a thoroughly worked ui that is the perfect platform to build the next generation tools for iot development that is actually usable by non technical people we also did it in an open way sharing our know how with the community and releasing a number of packages you can read more about it here developing beyond the screen https medium com react native development developing beyond the screen 9af812b96724 now we are going to take a second step we just started the bigfoot project we took everything we learnt from building the first netbeast dashboard and the ux ui experience of yeti developing a third beast instead of developing a new platform or protocol we are releasing a series of documentation guides and wrappers of already standard tools to make them work with each other so the bigfoot project is a collection of already existing tools that work together out of the box and will help you develop your next connected thing as soon as possible please join us here https github com netbeast bigfoot we
want to make compatible all the pieces of our ecosystem to best serve users and developers equally as our mission is to make things work for people instead of making people work for things netbeast team connect everything regardless of its brand or technology one api unlimited products and hacks netbeast middleware translates messages from different iot protocols and device interfaces so they work as one have no more hubs work across devices not brands

    var netbeast = require('netbeast')
    netbeast.find().then(function () {
      netbeast('lights').set({ power: 1 }) // will turn on all lights in your home
    })

contents installation installation basic installation basic raspberry beagle bone pine64 or your choice of board installation board using docker installation docker overview overview documentation documentation create iot with node js iot with node apps apps connect devices connect devices community community contribute contribute license md https github com netbeast dashboard blob master license txt a name installation a installation a name installation basic a basic make sure you have installed git https git scm com book en v2 getting started installing git and nodejs https nodejs org en bash npm install g netbeast cli netbeast start find it live at http localhost 8000 or run it as netbeast start port port pro tip to get started developing you will find it handy to have it installed in a folder of your choice git clone https github com dashboard cd dashboard npm install production npm start dashboard live gif public img dashboard demo gif a name installation board a raspberry beagle bone pine64 or your choice of board make sure again you have installed git https git scm com book en v2 getting started installing git and nodejs https nodejs org en it can be tricky depending on your os architecture if any doubts please reach forum http bit ly 23uyvmr or open an issue https github com netbeast dashboard issues 1 apply the basic installation from above preferably using git git clone https github com
netbeast dashboard clone in this folder npm i production no front end or test dependencies 1 keep it running 24h 7 days a week to use it as smart home hub you can use utilities such as forever https www npmjs com package forever or pm2 https www npmjs com package pm2 npm i g pm2 sudo pm2 start index js port 80 3 soon learn how to attach a dhcp name to your netbeast as https home netbeast and how to deal with wireless configuration in linux from our blog https blog netbeast co a name installation docker a using docker whale make sure you already have docker https docs docker com engine installation installed 1 run our docker image if it s the first time it ll be downloaded from the docker hub https hub docker com r netbeast netbeast docker run p 49160 8000 d netbeast netbeast this will run netbeast dashboard on port 49160 of the host running the container you can now play with it access the dashboard on http localhost 49160 http localhost 49160 et voil a name overview a overview find inspiration think about new projects connect your new hardware netbeast apps are html5 user interfaces that enable controlling iot or visualizing their data netbeast plugins are apps that translate from the netbeast iot unified scheme to each particular implementation of an iot device explore existing apps and plugins of our public registry https dashboard netbeast co explore control devices regardless of their brand and technology take a look on our unified api on action in this demo on youtube under a netbeast app that creates new scenes image alt text here https img youtube com vi ynerwjdykuq 0 jpg https www youtube com watch v ynerwjdykuq https www youtube com watch v ynerwjdykuq measure all your data use the netbeast api https github com netbeast api along with the dashboard to publish data through mqtt or reuse it in your apps read more http docs netbeast co chapters api reference index html dashboard live gif public img history gif write iot apps without spending on hardware or 
suffering expensive deployments take advance of netbeast iot middleware to test your apps with software that mocks the hardware interface virtual plugins https docs netbeast co img bulb padjs gif find tutorials in the docs http bit ly 1vyfqdh read a blog post about it on toptal https www toptal com nodejs programming visually with node red or join the forum http bit ly 23uyvmr to ask how to do it a name documentation a documentation we publish a gitbook https www gitbook com book netbeast docs details with fresh documentation on https docs netbeast co http bit ly 1vyfqdh if you want to open an issue contribute or edit it find your way on its github repo https github com netbeast docs a name iot with node a create iot with node js in netbeast we care about education openness and interoperability we have created a series of workshops to teach developers to better use http mqtt in combination with the dashboard to create data bindings and incredible apps use your favorite boards and platforms as arduino pi zero pine64 belkin wemo homekit and a infinite list connected a name apps a apps a netbeast app allows you to run the dashboard unique api in the browser or backend equally just expose some user interface in your apps root in the following snippet we serve in the root all files inside public folder var express require express var app express netbeast apps need to accept the port to be launched by parameters var argv require minimist process argv slice 2 app use express static public var server app listen argv port 31416 function var host server address address var port server address port console log example app listening at http s s host port learn how to create new scenes and user interfaces as bots speech recognition smart triggers learn how to develop netbeast apps debug and publish them on the documentation https docs netbeast co chapters developing apps write your first app html a name connect devices a connect devices a plugin is an app that enables your 
dashboard to communicate with a different protocol or proprietary device it s like if you that want to learn chinese could speak chinese by installing an app luis cofounder of netbeast a basic plugin must implement at least a discovery primitive to declare itself on netbeast s database fill the gaps to create your first hardware integration into netbeast var netbeast require netbeast var express require express var cmd require commander reads port from command line netbeast tells you in which port to run your plugin endpoint cmd option p port n port to start the http server parseint parse process argv var app express discover your resources scan the network and declare your routes into the api app get discover function todo implement discovery for each device netbeast topic create app my first plugin hook device id end of for or register all device together and delete the resources no longer available netbeast topic udatedb app my first plugin hook device1 id device2 id device3 id device4 id create here your api routes app get app post app put app delete app get device id function req res id of the device the dashboard wants req params device id dashboard will do get on this route when netbeast topic get todo return device values from req query res json your plugin data app post device id function req res id of the device the dashboard wants req params device id dashboard will do post on this route when netbeast topic set todo change device values from req body res json your plugin data var server app listen cmd port 4000 function console log netbeast plugin started on s s server address address server address port learn how to launch it debug it and publish it on the documentation https docs netbeast co chapters developing plugins write your first plugin html a name community a community img src https pbs twimg com profile images 3264780953 6c9a2cd7bb2efcb4c53d32900e52c8ac 400x400 png height 24px width auto join us in our forum http forum netbeast co c otros 
desarrolladores img src https slack com img slack hash 128 v1442100037 png height 24px width auto ask for an invitation to join our slack team here https netbeastco typeform com to vglexg project website https netbeast co developer a name contribute a contribute take a look to our contributing md https github com netbeast dashboard blob master contributing md file in order to see how can you be part of this project or take a look on netbeast s discourse forum http bit ly 23uyvmr to find for inspiration projects and help tl dr make a pull request if your pr is eventually merged don t forget to write down your name on the authors txt https github com netbeast dashboard blob master authors file img src https github com netbeast docs blob master img open source png raw true height 140px width auto img src https github com netbeast docs blob master img open hw png raw true height 140px width auto nbsp nbsp nbsp img src https camo githubusercontent com 41830215b4097f57cd7780ad127fb0917fc8f818 68747470733a2f2f63646e2e7261776769742e636f6d2f6665726f73732f7374616e646172642f6d61737465722f737469636b65722e737667 height 140px width auto
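the one line api shown at the top netbeast lights set power 1 can be illustrated with a self contained mock — this is only a sketch of the call shape, not the real netbeast npm package, and makeNetbeast is a hypothetical helper:

```javascript
// hypothetical in-memory stand-in for the netbeast client, mimicking
// the netbeast('lights').set({ power: 1 }) call shape from the readme
function makeNetbeast(state) {
  return function (topic) {
    return {
      // merge the given values into the devices under this topic
      set: function (values) {
        state[topic] = Object.assign(state[topic] || {}, values);
        return Promise.resolve(state[topic]);
      },
      // read back the current values for the topic
      get: function () {
        return Promise.resolve(state[topic] || {});
      },
    };
  };
}

const home = { lights: { power: 0 } };
const netbeast = makeNetbeast(home);
netbeast('lights').set({ power: 1 }); // "turns on" the lights in the mock
```

in the real middleware the same call fans out to every light plugin regardless of brand, which is the point of the unified api.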
javascript iot dashboard smart-home react mqtt netbeast
server
picpay-desafio-frontend
picpay front end challenge first of all thank you for your interest in working at the best payments platform in the world below you will find all the information needed to start your test notices before starting to submit the test you must have two factor enabled on your github account you can follow the step by step in this tutorial https help github com pt github authenticating to github configuring two factor authentication with two factor enabled you need to configure an ssh key https help github com pt github authenticating to github adding a new ssh key to your github account to start the test create a branch from master following this naming pattern dd mm yy firstname lastname for example 30 04 20 meu nome you may consult google stackoverflow or a private project on your machine feel free to ask the recruiters any questions stay calm breathe just like you we have also been through this stage good luck project setup angular cli 8 3 18 node 10 15 3 angular 8 2 14 how to run install the dependencies using the command npm install at the root of the repository run the command ng serve to start the development server the application will be available at http localhost 4200 how to submit commit your changes in an organized way open a pull request from your branch to master with the naming firstname lastname dd mm yy objective the objective is to build an application that simulates sending money to another person via credit card screen flow the first screen shows a list of users where the person can click on a user in the list to make a payment when a user is clicked a payment modal opens containing the information of the destination user the option to select a credit card and a pay button the user must then type the amount choose the card and click pay to complete the payment a payment endpoint must be called which will approve or
decline the transaction and then the payment success modal or the error modal must be shown on screen screenshots user list img src screenshots lista usuarios png alt lista de usu rios style width 100 max width 500px payment modal and card list img src screenshots modal pagamento png alt modal de pagamento style width 100 max width 400px payment completed successfully modal img src screenshots modal sucesso png alt modal de pagamento com sucesso style width 100 max width 400px payment error modal img src screenshots modal falha png alt modal de erro no pagamento style width 100 max width 400px cards to display the valid card will approve the transaction on the backend javascript let cards valid card card number 1111111111111111 cvv 789 expiry date 01 18 invalid card card number 4111111111111234 cvv 123 expiry date 01 20 transaction endpoint https run mocky io v3 533cd5d7 63d3 4488 bf8d 4bb8c751c989 method post typescript payload interface transactionpayload card info card number string cvv number expiry date string destination user id destination user id number value of the transaction value number note since this is a mock the endpoint will always return the same payload payment success regardless of the card used users endpoint https www mocky io v2 5d531c4f2e0000620081ddce method get typescript payload interface user id number name string img string username string differentiators unit and e2e tests improvements to the application styling form validation and input masks code organization we are always looking to improve so if you have any suggestions feel free to share them with us once again good luck green heart
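the transaction payload described above can be sketched in typescript as follows — the interface fields come from the challenge description, while buildPayload and isKnownValidCard are hypothetical helpers added for illustration, not part of the challenge api:

```typescript
// payload shape for the mock transaction endpoint, per the challenge
interface TransactionPayload {
  // card info
  card_number: string;
  cvv: number;
  expiry_date: string;
  // destination user id
  destination_user_id: number;
  // value of the transaction
  value: number;
}

// the "valid" test card listed in the challenge description
const validCard = { card_number: "1111111111111111", cvv: 789, expiry_date: "01/18" };

// hypothetical helper: assemble the payload the endpoint expects
function buildPayload(userId: number, value: number): TransactionPayload {
  return { ...validCard, destination_user_id: userId, value };
}

// hypothetical client-side check against the listed valid test card
function isKnownValidCard(p: TransactionPayload): boolean {
  return p.card_number === validCard.card_number && p.cvv === validCard.cvv;
}

const payload = buildPayload(1, 50);
```

note that since the endpoint is a mock it always returns a success payload, so a check like this would only matter against a real backend.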
front_end
Skeleton-Sass
skeleton sass http getskeleton com skeleton sass is the un official sass version of dave gamache s https twitter com dhg skeleton framework it currently features a stable version of skeleton 2 0 4 skeleton is a simple responsive boilerplate to kickstart any responsive project check out http getskeleton com for documentation and details getting started install global dependencies node js http nodejs org bower http bower io sudo npm install bower g grunt js http gruntjs com sudo npm install g grunt cli install local dependencies download zip https github com whatsnewsaes skeleton sass archive master zip clone the repo github mac openrepo https github com whatsnewsaes skeleton sass or bower install skeleton scss from your terminal cd to project folder run sudo npm install first time users run grunt to watch and compile sass files what s in the download the download includes skeleton s css normalize css as a reset a sample favicon and an index html as a starting point skeleton index html scss skeleton scss images favicon png package json gruntfile js readme md contributions the goal of skeleton sass is to have a mirrored sass repository of skeleton in order to keep the integrity of the original skeleton framework i cannot accept any features or functionality outside the original implementation of dave gamache s https twitter com dhg skeleton framework https github com dhg skeleton if you would like to see features functionality or extensions outside of the original please make a pr or issue on the original skeleton framework if you have sass improvements additional mixins or other helpful sass techniques that stay within the original codebase feel free to make a pull request why it s awesome skeleton is lightweight and simple it styles only raw html elements with a few exceptions and provides a responsive grid nothing more minified it s less than a kb it s a starting point not a ui framework no compiling or installing just vanilla css browser support chrome latest
firefox latest opera latest safari latest ie latest the above list is non exhaustive skeleton works perfectly with almost all older versions of the browsers above though ie certainly has large degradation prior to ie9 license all parts of skeleton sass are free to use and abuse under the open source mit license http opensource org licenses mit license php colophon skeleton was built using sublime text 3 http www sublimetext com 3 and designed with sketch http bohemiancoding com sketch the typeface raleway http www google com fonts specimen raleway was created by matt mcinerney http matt cc and pablo impallari http www impallari com code highlighting by google s prettify library https code google com p google code prettify icons in the header of the documentation are all derivative work of icons from the noun project thenounproject com feather http thenounproject com term feather 22073 by zach vandehey pen http thenounproject com term pen 21163 with cap by ed harrison pen http thenounproject com term pen 32847 with clicker by matthew hall and watch http thenounproject com term watch 48015 by julien deveaux acknowledgement skeleton was created by dave gamache https twitter com dhg for a better web skeleton sass was created by seth coelen http sethcoelen com for a better skeleton a href https ko fi com i 2446a87jj08cz target blank img style border 0px width 100px src https az743702 vo msecnd net cdn btn1 png border 0 alt buy me a coffee at ko fi com a
skeleton-sass skeleton-framework css skeleton skeleton-css
front_end
iOS-CS193P-Standford
codebeat badge https codebeat co badges 91a70745 1904 493e 8bc4 e9d5e776e105 https codebeat co projects github com sencudra cs193p master swiftlint https img shields io badge swiftlint 0 38 2 green stanford engineering cs193p developing ios apps media cs193p png this is my very long path through the ios development course https www youtube com playlist list plpa aybrweuzgfmkt w65z64mognkrzmq there are 17 3 lectures and 6 problem 3 reading assignments goals 1 complete this course from cover to cover 2 follow the codestyle 3 get familiar with ci and stuff 4 have fun miniprojects that were created alongside this course 1 svg2swift https github com sencudra svg2swift converter just a useless python tool to convert svg s path to swift lines 2 tbd lectures lecture slides video progress 1 overview of ios slides lecture 1 slides pdf media play png https www youtube com watch v z9ixfyhhkyi index 1 list pl l7vs8vbndfbikil3feqhkkxtysncsvn 2 model view controller mvc slides lecture 2 slides pdf media play png https www youtube com watch v 4igdu4iwmfc index 2 list pl l7vs8vbndfbikil3feqhkkxtysncsvn fl1 debugging xcode tips tricks media play png https www youtube com watch v 7cexddgjsvu index 19 list pl l7vs8vbndfbikil3feqhkkxtysncsvn 3 swift slides lecture 3 slides pdf media play png https www youtube com watch v 88husjydcwy index 3 list pl l7vs8vbndfbikil3feqhkkxtysncsvn 4 protocols closures slides lecture 4 slides pdf media play png https www youtube com watch v rgmkmhy ewe list pl l7vs8vbndfbikil3feqhkkxtysncsvn index 4 5 drawing in ios slides lecture 5 slides pdf media play png https www youtube com watch v poo0pz0gplk list pl l7vs8vbndfbikil3feqhkkxtysncsvn index 5 6 multitouch multiple mvcs slides lecture 6 slides pdf media play png https www youtube com watch v n pynplrhys index 6 list pl l7vs8vbndfbikil3feqhkkxtysncsvn 7 multiple mvcs timer animation slides lecture 7 slides pdf media play png https www youtube com watch v diihwsxosdk index 7 list pl 
l7vs8vbndfbikil3feqhkkxtysncsvn 8 animation slides lecture 8 slides pdf media play png https www youtube com watch v 5w9lu9abjze index 8 list pl l7vs8vbndfbikil3feqhkkxtysncsvn fl2 github source code workflow media play png https www youtube com watch v p8gyk aunk list pl l7vs8vbndfbikil3feqhkkxtysncsvn index 18 9 view controller lifecycle scroll view slides lecture 9 slides pdf media play png https www youtube com watch v qjrmau1wmmu index 9 list pl l7vs8vbndfbikil3feqhkkxtysncsvn 10 multithreading autolayout slides lecture 10 slides pdf media play png https www youtube com watch v u1g8f6f3pyq list pl l7vs8vbndfbikil3feqhkkxtysncsvn index 10 fl3 instruments media play png https www youtube com watch v bcnlw9rhee0 list pl l7vs8vbndfbikil3feqhkkxtysncsvn index 20 11 drag and drop uitableview uicollectionview slides lecture 11 slides pdf media play png https www youtube com watch v hore835 mj4 list pl l7vs8vbndfbikil3feqhkkxtysncsvn index 11 12 emoji art demo uitextfield slides lecture 12 slides pdf media play png https www youtube com watch v qcj79tknk1i index 12 list pl l7vs8vbndfbikil3feqhkkxtysncsvn 13 emoji art demo persistence slides lecture 13 slides pdf media play png https www youtube com watch v 9o nsiichpg list pl l7vs8vbndfbikil3feqhkkxtysncsvn index 13 14 more about documents demo slides lecture 14 slides pdf media play png https www youtube com watch v zkhcllza es index 14 list pl l7vs8vbndfbikil3feqhkkxtysncsvn 15 alert and action sheet notifications kvo application lifecycle slides lecture 15 slides pdf media play png https www youtube com watch v bjlrcnev88k list pl l7vs8vbndfbikil3feqhkkxtysncsvn index 15 16 segues modal popover unwind embed slides lecture 16 slides pdf media play png https www youtube com watch v nk kg294hrc list pl l7vs8vbndfbikil3feqhkkxtysncsvn index 16 17 core motion camera slides lecture 17 slides pdf media play png https www youtube com watch v ccg0qoszixa index 17 list pl l7vs8vbndfbikil3feqhkkxtysncsvn reading assignments 
reading name progress 1 reading 1 intro to swift reading reading 1 intro to swift pdf 2 reading 2 more swift reading reading 2 more swift pdf 3 reading 3 finishing off swift reading reading 3 finishing off swift pdf 4 additional reading 4 updates of swift 5 1 problem sets ps name progress 1 assignment 1 concentration problemsets programming project 1 concentration pdf 2 assignment 2 set problemsets programming project 2 set pdf 3 assignment 3 graphical set problemsets programming project 3 graphical set pdf 4 assignment 4 animated set problemsets programming project 4 animated set pdf 5 assignment 5 image gallery problemsets programming project 5 image gallery pdf 6 assignment 6 persistent image gallery problemsets programming project 6 persistent image gallery pdf 7 tba advanced topics reading name progress 1 ios sdk updates
os
nlp-pizza
nlp pizza doml sponsored project in natural language processing olin college spring 2015 getting the data 1 go to the kaggle reddit pizza data page http www kaggle com c random acts of pizza and download train json zip and test json zip and accept the competition rules 2 create a data directory in this home folder 3 unzip train json and test json and place them in the data folder 4 your directory structure should look like the following nlp pizza gitignore readme md data train json test json notebooks explore pizza ipynb
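once the data folder is in place, loading train json is a one liner — the sketch below writes a tiny stand-in file first so it is self-contained; requester_received_pizza is the competition's target field, the other record content here is made up:

```python
import json
import os
import tempfile

# stand-in for data/train.json so the sketch runs anywhere;
# requester_received_pizza is the competition's target column
sample = [{"request_id": "t3_example", "requester_received_pizza": True}]
data_dir = tempfile.mkdtemp()
train_path = os.path.join(data_dir, "train.json")
with open(train_path, "w") as f:
    json.dump(sample, f)

# in the real layout this would be open("data/train.json")
with open(train_path) as f:
    train = json.load(f)

labels = [r["requester_received_pizza"] for r in train]
```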
ai
Autonomous-Garbage-Collection-TriRover
autonomous garbage collection with tri rover embedded systems design capstone project support information for each individual component is posted in this repository team presentation video https youtu be qpk 7vanpvi team presentation video technologies used 1 embedded iot board texas instruments 2 prebuilt 3 wheel rover with motor and encoders 3 servo motor gripper and ir sensor with the right mount and power connections 4 logic analyzer 5 raspberry pi messaging server mqtt broker 6 mqtt communications with json data formats 7 embedded systems written in c 8 testing and statistics written in python 9 embedded protocols spi adc pwm gpio and freertos 10 zoom slack trello for remote work adjustments
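the mqtt with json messaging in item 6 could look like the sketch below — the topic name and message fields are assumptions for illustration, not the project's actual schema, and the real system would hand these strings to an mqtt client:

```python
import json

# hypothetical telemetry the rover might publish to its mqtt broker
TOPIC = "trirover/status"  # assumed topic name

def encode_status(distance_cm: float, gripper_closed: bool) -> str:
    """Serialize a status reading into the JSON payload format."""
    return json.dumps({"distance_cm": distance_cm, "gripper_closed": gripper_closed})

def decode_status(payload: str) -> dict:
    """Parse a received JSON payload back into a dict."""
    return json.loads(payload)

msg = encode_status(12.5, False)
status = decode_status(msg)
```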
os
Blockchain
block chain python https imgur com xslgvtl projects part 1 in class project basic setup and proof of work basic block gp part 1 take home project client miners client mining p part 2 in class project basic transactions basic transactions gp part 2 take home project basic wallet basic wallet p based on blockchain by dvf used under mit license https github com dvf blockchain
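the proof of work piece of part 1 can be sketched as follows — a minimal illustration in the spirit of the dvf blockchain tutorial this repo is based on; the leading-zeros difficulty is chosen arbitrarily here and is not the repo's exact code:

```python
import hashlib

DIFFICULTY = 4  # number of leading zero hex digits required (arbitrary here)

def valid_proof(last_proof: int, proof: int) -> bool:
    """A proof is valid when sha256(last_proof + proof) starts with DIFFICULTY zeros."""
    guess = f"{last_proof}{proof}".encode()
    return hashlib.sha256(guess).hexdigest()[:DIFFICULTY] == "0" * DIFFICULTY

def proof_of_work(last_proof: int) -> int:
    """Brute-force search for the smallest proof satisfying valid_proof."""
    proof = 0
    while not valid_proof(last_proof, proof):
        proof += 1
    return proof

nonce = proof_of_work(100)
```

verifying a proof is cheap (one hash) while finding one is expensive, which is what lets miners prove they did the work.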
blockchain
db-fmi-2019
db fmi 2019 learning repository for database course at fmi 2019 software engineering
server
my-llm
my llm all about large language models my practice my alpaca https github com l294265421 my alpaca reproduce alpaca multi turn alpaca https github com l294265421 multi turn alpaca train alpaca with multi turn dialogue datasets alpaca rlhf https github com l294265421 alpaca rlhf train multi turn alpaca with rlhf reinforcement learning with human feedback based on deepspeed chat my autocrit https github com l294265421 my autocrit experiments using autocrit try large models https github com l294265421 try large models try large models my rl https github com l294265421 my rl learn reinforcement learning using tianshou my articles chatgpt techniques introduction for everyone https github com l294265421 chatgpt techniques introduction for everyone pre train models t5 paper papers pre train models 2020 jmlr exploring 20the 20limits 20of 20transfer 20learning 20with 20a 20unified 20text to text 20transformer pdf architecture encoder decoder gpt paper gpt https s3 us west 2 amazonaws com openai assets research covers language unsupervised language understanding paper pdf gpt 2 https d4mucfpksywv cloudfront net better language models language models are unsupervised multitask learners pdf gpt 3 https arxiv org pdf 2005 14165 pdf gpt neo gpt j 6b megatron 11b pangu a 13b fairseq glam paper papers pre train models 2022 icml glam 20efficient 20scaling 20of 20language 20models 20with 20mixture of experts pdf lamda paper papers pre train models 2022 lamda 20language 20models 20for 20dialog 20applications pdf jurassic 1 paper https uploads ssl webflow com 60fd4503684b466578c0d307 61138924626a6981ee09caf6 jurassic tech paper pdf mt nlg paper https arxiv org pdf 2201 11990 pdf ernie paper ernie https arxiv org pdf 1904 09223 pdf ernie 2 0 https arxiv org pdf 1907 12412 pdf ernie 3 0 papers pre train models 2021 ernie 203 0 20large scale 20knowledge 20enhanced 20pre training 20for 20language 20understanding 20and 20generation pdf gopher paper papers pre train models 2021 scaling 
20language 20models 20methods 20analysis 20 20insights 20from 20training 20gopher pdf conclusion gains from scale are largest in areas such as reading comprehension fact checking and the identification of toxic language but logical and mathematical reasoning see less benefit chinchilla paper papers pre train models 2022 training 20compute optimal 20large 20language 20models pdf conclusion we find that current large language models are significantly under trained a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant we find that for compute optimal training the model size and the number of training tokens should be scaled equally for every doubling of model size the number of training tokens should also be doubled palm paper papers pre train models 2022 palm 20scaling 20language 20modeling 20with 20pathways pdf architecture decoder palm 2 blog https ai google discover palm2 palm 2 technical report https ai google static documents palm2techreport pdf opt paper papers pre train models 2022 opt 20open 20pre trained 20transformer 20language 20models pdf architecture decoder gpt neox paper papers pre train models 2022 gpt neox 20b 20an 20open source 20autoregressive 20language 20model pdf github https github com eleutherai gpt neox architecture decoder bloom paper papers pre train models 2023 bloom 20a 20176b parameter 20open access 20multilingual 20language 20model pdf architecture decoder llama paper papers pre train models 2023 llama 20open 20and 20efficient 20foundation 20language 20models pdf model https huggingface co decapoda research architecture decoder glm paper 2022 acl glm general language model pretraining with autoregressive blank infilling paper papers pre train models 2022 acl glm 20general 20language 20model 20pretraining 20with 20autoregressive 20blank 20infilling pdf github https github com thudm glm 2023 iclr glm 130b an open bilingual pre trained model paper papers pre train models 2023 iclr 
glm 130b 20an 20open 20bilingual 20pre trained 20model pdf github https github com thudm glm 130b architecture autoregressive blank infilling bloomberggpt paper papers pre train models 2023 bloomberggpt 20a 20large 20language 20model 20for 20finance pdf moss github https github com openlmlab moss openllama an open reproduction of llama github https github com openlm research open llama dolly github https github com databrickslabs dolly panda github https github com dandelionsllm pandallm paper papers pre train models 2023 panda 20llm 20training 20data 20and 20evaluation 20for 20open sourced 20chinese 20instruction following 20large 20language 20models pdf welm paper papers pre train models 2022 welm 20a 20well read 20pre trained 20language 20model 20for 20chinese pdf baichuan baichuan 7b https github com baichuan inc baichuan 7b baichuan 13b https github com baichuan inc baichuan 13b llama 2 site https ai meta com resources models and libraries llama paper papers pre train models 2023 llama 202 20open 20foundation 20and 20fine tuned 20chat 20models pdf survey 2023 a survey of large language models paper https arxiv org abs 2303 18223 methods max sequence length blog transformer 7 https spaces ac cn archives 9431 paper 2023 scaling transformer to 1m tokens and beyond with rmt paper papers pre train methods max sequence length 2023 scaling 20transformer 20to 201m 20tokens 20and 20beyond 20with 20rmt pdf 2022 nips recurrent memory transformer paper papers pre train methods max sequence length 2022 nips recurrent 20memory 20transformer pdf 2022 parallel context windows improve in context learning of large language models paper papers pre train methods max sequence length 2022 parallel 20context 20windows 20improve 20in context 20learning 20of 20large 20language 20models pdf position rotary alibi paper papers pre train methods position 2022 iclr train 20short 20test 20long 20attention 20with 20linear 20biases 20enables 20input 20length 20extrapolation pdf survey 
transformer https zhuanlan zhihu com p 352898810 normalization rmsnorm layer normalization pre ln post ln sandwich ln deepnorm activation function swiglu gelus swish tokenizer bpe paper papers pre train methods tokenizer 2016 acl neural 20machine 20translation 20of 20rare 20words 20with 20subword 20units pdf interpretability transformer circuits thread https transformer circuits pub lr scheduler 2020 scaling laws for neural language models paper https arxiv org pdf 2001 08361v1 pdf fine tune models general t0 paper https arxiv org pdf 2110 08207 pdf flan paper papers fine tune models 2022 iclr finetuned 20language 20models 20are 20zero shot 20learners pdf github https github com google research flan flan lm paper https arxiv org pdf 2210 11416 pdf bloomz mt0 paper https arxiv org pdf 2211 01786 pdf chatgpt blog https openai com blog chatgpt alpaca a strong replicable instruction following model site https crfm stanford edu 2023 03 13 alpaca html github https github com tatsu lab stanford alpaca fine tuning vicuna an open source chatbot impressing gpt 4 with 90 chatgpt quality github https github com lm sys fastchat site https vicuna lmsys org online demo https chat lmsys org koala a dialogue model for academic research blog https bair berkeley edu blog 2023 04 03 koala github koala data pipeline https github com young geng koala data pipeline koala evaluation set https github com arnav gudibande koala test set alpaca lora github https github com tloen alpaca lora chatglm 6b github https github com thudm chatglm 6b blog https chatglm cn blog firefly github https github com yangjianxin1 firefly thai buffala lora 7b v0 1 model https huggingface co thaweewat thai buffala lora 7b v0 1 multi turn alpaca github https github com l294265421 multi turn alpaca open assistant site https open assistant io zh github https github com laion ai open assistant paper papers 2023 openassistant 20conversations 20 20democratizing 20large 20language 20model 20alignment pdf chinese 
chinese chatllama github https github com ydli ai chinese chatllama blog llama https zhuanlan zhihu com p 612752963 chatllama https zhuanlan zhihu com p 616748134 belle github https github com lianjiatech belle chinese llama alpaca github https github com ymcui chinese llama alpaca luotuo chinese llm github https github com lc1332 luotuo chinese llm chinese vicuna github https github com facico chinese vicuna chinese alpaca lora github https github com lc1332 chinese alpaca lora japanese japanese alpaca lora github https github com kunishou japanese alpaca lora medical 2023 chatdoctor a medical chat model fine tuned on llama model using medical domain knowledge paper papers 2023 chatdoctor 20a 20medical 20chat 20model 20fine tuned 20on 20llama 20model 20using 20medical 20domain 20knowledge pdf huatuo llama github https github com scir hi huatuo llama med chinese law lawgpt zh github https github com liuhc0428 law gpt recommendation 2023 recalpaca low rank llama instruct tuning for recommendation other 2023 a survey of domain specialization for large language models paper https arxiv org pdf 2305 18703 pdf methods rl 2017 proximal policy optimization algorithms paper papers fine tune methods rl 2017 proximal 20policy 20optimization 20algorithms pdf why is the log probability replaced with the importance sampling in the loss function https ai stackexchange com questions 7685 why is the log probability replaced with the importance sampling in the loss fun 2016 asynchronous methods for deep reinforcement learning paper papers fine tune methods rl 2016 asynchronous 20methods 20for 20deep 20reinforcement 20learning pdf 2015 high dimensional continuous control using generalized advantage estimation paper papers fine tune methods rl 2015 high dimensional 20continuous 20control 20using 20generalized 20advantage 20estimation pdf 2015 mlr trust region policy optimization paper papers fine tune methods rl 2015 mlr trust 20region 20policy 20optimization pdf reward modeling 2023 
reward design with language models paper papers fine tune methods rl reward modeling 2023 reward 20design 20with 20language 20models pdf 2022 scaling laws for reward model overoptimization paper papers fine tune methods rl reward modeling 2022 scaling 20laws 20for 20reward 20model 20overoptimization pdf autocrit github https github com carperai autocrit tree contrastive scalar rm reward modeling github https github com dahoas reward modeling 2023 on the fragility of learned reward functions paper papers fine tune methods rl reward modeling 2023 on 20the 20fragility 20of 20learned 20reward 20functions pdf peft 2021 lora low rank adaptation of large language models paper papers fine tune methods peft 2021 lora 20low rank 20adaptation 20of 20large 20language 20models pdf align 2023 raft reward ranked finetuning for generative foundation model alignment paper https arxiv org abs 2304 06767 2023 preference ranking optimization for human alignment paper https arxiv org pdf 2306 17492 pdf 2023 is reinforcement learning not for natural language processing benchmarks baselines and building blocks for natural language policy optimization paper papers fine tune methods align 2023 iclr is 20reinforcement 20learning 20 not 20for 20natural 20language 20processing 20benchmarks 20baselines 20and 20building 20blocks 20for 20natural 20language 20policy 20optimization pdf 2023 fine grained human feedback gives better rewards for language model training paper papers fine tune methods align 2023 fine grained 20human 20feedback 20gives 20better 20rewards 20for 20language 20model 20training pdf 2023 chain of hindsight aligns language models with feedback paper papers fine tune methods align 2023 chain 20of 20hindsight 20aligns 20language 20models 20with 20feedback pdf 2023 training socially aligned language models in simulated human society paper papers fine tune methods align 2023 training 20socially 20aligned 20language 20models 20in 20simulated 20human 20society pdf 2023 let s verify 
step by step paper papers fine tune methods align 2023 let s 20verify 20step 20by 20step pdf 2023 the false promise of imitating proprietary llms paper papers fine tune methods align 2023 the 20false 20promise 20of 20imitating 20proprietary 20llms pdf 2023 alpacafarm a simulation framework for methods that learn from human feedback paper papers fine tune methods align 2023 alpacafarm 20a 20simulation 20framework 20for 20methods 20that 20learn 20from 20human 20feedback pdf 2023 lima less is more for alignment paper papers fine tune methods align 2023 lima 20less 20is 20more 20for 20alignment pdf 2023 rrhf rank responses to align language models with human feedback without tears paper papers fine tune methods align 2023 rrhf 20rank 20responses 20to 20align 20language 20models 20with 20human 20feedback 20without 20tears pdf code https github com ganjinzero rrhf 2022 solving math word problems with process and outcome based feedback paper https arxiv org pdf 2211 14275 pdf 2022 training a helpful and harmless assistant with reinforcement learning from human feedback paper papers fine tune methods align 2022 training 20a 20helpful 20and 20harmless 20assistant 20with 20reinforcement 20learning 20from 20human 20feedback pdf 2022 training language models to follow instructions with human feedback paper papers fine tune methods align 2022 training 20language 20models 20to 20follow 20instructions 20with 20human 20feedback pdf github https github com anthropics hh rlhf 2022 red teaming language models to reduce harms methods scaling behaviors and lessons learned paper https arxiv org abs 2209 07858 2022 lamda language models for dialog applications paper papers pre train models 2022 lamda 20language 20models 20for 20dialog 20applications pdf 2022 constitutional ai harmlessness from ai feedback paper papers fine tune methods align 2022 constitutional 20ai 20harmlessness 20from 20ai 20feedback pdf 2021 a general language assistant as a laboratory for alignment paper papers fine 
tune methods align 2021 a 20general 20language 20assistant 20as 20a 20laboratory 20for 20alignment pdf 2021 ethical and social risks of harm from language models paper papers fine tune methods align 2021 ethical 20and 20social 20risks 20of 20harm 20from 20language 20models pdf 2020 nips learning to summarize from human feedback paper papers fine tune methods align 2020 nips learning 20to 20summarize 20from 20human 20feedback pdf 2019 fine tuning language models from human preferences paper papers fine tune methods align 2019 fine tuning 20language 20models 20from 20human 20preferences pdf 2018 scalable agent alignment via reward modeling a research direction paper papers fine tune methods align 2018 scalable 20agent 20alignment 20via 20reward 20modeling 20a 20research 20direction pdf reinforcement learning for language models blog https gist github com yoavg 6bff0fecd65950898eba1bb321cfbd81 2017 nips deep reinforcement learning from human preferences paper papers fine tune methods align 2017 nips deep 20reinforcement 20learning 20from 20human 20preferences pdf 2016 concrete problems in ai safety paper papers fine tune methods align 2016 concrete 20problems 20in 20ai 20safety pdf other 2022 naacl metaicl learning to learn in context paper papers fine tune methods other 2022 naacl metaicl 20learning 20to 20learn 20in 20context pdf 2022 iclr multitask prompted training enables zero shot task generalization paper papers fine tune methods other 2022 iclr multitask 20prompted 20training 20enables 20zero shot 20task 20generalization pdf prompt learning 2023 tree of thoughts deliberate problem solving with large language models paper papers prompt learning 2023 tree 20of 20thoughts 20deliberate 20problem 20solving 20with 20large 20language 20models pdf 2023 guiding large language models via directional stimulus prompting paper papers prompt learning 2023 guiding 20large 20language 20models 20via 20directional 20stimulus 20prompting pdf 2023 iclr self consistency improves 
chain of thought reasoning in language models paper papers prompt learning 2023 iclr self consistency 20improves 20chain 20of 20thought 20reasoning 20in 20language 20models pdf 2023 is prompt all you need no a comprehensive and broader view of instruction learning paper papers prompt learning 2023 is 20prompt 20all 20you 20need 20no 20a 20comprehensive 20and 20broader 20view 20of 20instruction 20learning pdf survey 2021 pre train prompt and predict a systematic survey of prompting methods in natural language processing paper papers prompt learning 2021 pre train 20prompt 20and 20predict 20a 20systematic 20survey 20of 20prompting 20methods 20in 20natural 20language 20processing pdf prompt tuning 2023 prompt pre training with twenty thousand classes for open vocabulary visual recognition paper papers prompt learning 2023 prompt 20pre training 20with 20twenty thousand 20classes 20for 20open vocabulary 20visual 20recognition pdf 2022 ac ppt pre trained prompt tuning for few shot learning paper papers prompt learning 2022 ac ppt 20pre trained 20prompt 20tuning 20for 20few shot 20learning pdf 2022 acl p tuning prompt tuning can be comparable to fine tuning across scales and tasks paper papers prompt learning 2022 acl p tuning 20prompt 20tuning 20can 20be 20comparable 20to 20fine tuning 20across 20scales 20and 20tasks pdf 2021 emnlp the power of scale for parameter efficient prompt tuning paper papers prompt learning 2021 emnlp the 20power 20of 20scale 20for 20parameter efficient 20prompt 20tuning pdf 2021 acl prefix tuning optimizing continuous prompts for generation paper papers prompt learning 2021 acl prefix tuning 20optimizing 20continuous 20prompts 20for 20generation pdf 2021 gpt understands too paper papers prompt learning 2021 gpt 20understands 20too pdf integrating external data tool learning toollearningpapers https github com thunlp toollearningpapers methods 2023 openagi when llm meets domain experts paper https arxiv org abs 2304 04370 2023 webcpm interactive 
web search for chinese long form question answering paper https arxiv org abs 2305 06849 2023 evaluating verifiability in generative search engines paper papers integrating external data 2023 evaluating 20verifiability 20in 20generative 20search 20engines pdf 2023 enabling large language models to generate text with citations paper papers integrating external data 2023 enabling 20large 20language 20models 20to 20generate 20text 20with 20citations pdf 2022 acl lifelong pretraining continually adapting language models to emerging corpora paper https aclanthology org 2022 bigscience 1 1 pdf 2022 findings acl elle efficient lifelong pre training for emerging data paper https aclanthology org 2022 findings acl 220 pdf langchain github langchain https github com hwchase17 langchain chinese langchain https github com yanqiangmiffy chinese langchain 2023 check your facts and try again improving large language models with external knowledge and automated feedback paper papers integrating external data 2023 check 20your 20facts 20and 20try 20again 20improving 20large 20language 20models 20with 20external 20knowledge 20and 20automated 20feedback pdf 2022 teaching language models to support answers with verified quotes 2021 webgpt browser assisted question answering with human feedback paper papers integrating external data 2021 webgpt 20browser assisted 20question answering 20with 20human 20feedback pdf 2021 improving language models by retrieving from trillions of tokens 2020 realm retrieval augmented language model pre training 2020 retrieval augmented generation for knowledge intensive nlp tasks other gpt llm https www zhihu com question 591935281 dataset for pre training redpajama data github https github com togethercomputer redpajama data redpajama data 1t huggingface https huggingface co datasets togethercomputer redpajama data 1t blog https www together xyz blog redpajama c4 pile roots wudao corpora large scale chinese corpus for nlp github https github com brightmart 
nlp chinese corpus csl a large scale chinese scientific literature dataset github https github com ydli ai csl github https github com fudannlplab cbook 150k chinese open instruction generalist coig paper https arxiv org pdf 2304 07987v1 pdf github1 https github com nlpxiaoxu llm finetune finnlp github https github com ai4finance foundation finnlp smoothnlp public financial datasets for nlp researches https github com smoothnlp financialdatasets for sft chatalpaca github https github com cascip chatalpaca instructionzoo github https github com freedomintelligence instructionzoo flaginstruct https github com flagopen flaginstruct fnlp moss 002 sft data hugging face datasets https huggingface co datasets fnlp moss 002 sft data for reward model for evaluation superclue https www cluebenchmarks com superclue html open llms benchmark https mp weixin qq com s ogo9gjueun09mopuuttojw promptcblue glue superglue squad coqa wmt lambada rouge cuge mmlu hellaswag openbookqa arc triviaqa truthfulqa methods 2023 a pretrainer s guide to training data measuring the effects of data age domain coverage quality toxicity paper papers dataset methods 2023 a 20pretrainer s 20guide 20to 20training 20data 20measuring 20the 20effects 20of 20data 20age 20domain 20coverage 20quality 20 20toxicity pdf 2023 doremi optimizing data mixtures speeds up language model pretraining 2023 data selection for language models via importance resampling 2022 self instruct aligning language model with self generated instructions paper papers dataset methods 2022 self instruct 20aligning 20language 20model 20with 20self 20generated 20instructions pdf 2022 acl deduplicating training data makes language models better paper papers dataset methods 2022 acl deduplicating 20training 20data 20makes 20language 20models 20better pdf evaluation 2023 findings acl few shot fine tuning vs in context learning a fair comparison and evaluation paper https aclanthology org 2023 findings acl 779 2023 harnessing the power of 
llms in practice a survey on chatgpt and beyond paper papers evaluation 2023 harnessing 20the 20power 20of 20llms 20in 20practice 20a 20survey 20on 20chatgpt 20and 20beyond pdf 2023 instructeval towards holistic evaluation of instruction tuned large language models paper papers evaluation 2023 instructeval 20towards 20holistic 20evaluation 20of 20instruction tuned 20large 20language 20models pdf llmzoo a project that provides data models and evaluation benchmark for large language models github https github com freedomintelligence llmzoo 2023 evaluating chatgpt s information extraction capabilities an assessment of performance explainability calibration and faithfulness paper papers evaluation 2023 evaluating 20chatgpt s 20information 20extraction 20capabilities 20an 20assessment 20of 20performance 20explainability 20calibration 20and 20faithfulness pdf 2023 towards better instruction following language models for chinese investigating the impact of training data and evaluation paper papers evaluation 2023 towards 20better 20instruction 20following 20language 20models 20for 20chinese 20investigating 20the 20impact 20of 20training 20data 20and 20evaluation pdf pandalm https github com weopenml pandalm lm evaluation harness https github com eleutherai lm evaluation harness big bench https github com google big bench 2023 halueval a large scale hallucination evaluation benchmark for large language models paper https arxiv org pdf 2305 11747v2 pdf 2023 c eval a multi level multi discipline chinese evaluation suite for foundation models paper https arxiv org abs 2305 08322 2023 safety assessment of chinese large language models paper https arxiv org pdf 2304 10436 pdf 2022 holistic evaluation of language models paper https arxiv org pdf 2211 09110 pdf aspects helpfulness honesty harmlessness truthfulness robustness bias toxicity and misinformation nlp inference analysis pythia interpreting autoregressive transformers across time and scale github https github com 
eleutherai pythia 2023 inspecting and editing knowledge representations in language models paper https arxiv org abs 2304 00740 products chatgpt https chat openai com https yiyan baidu com https tongyi aliyun com agentgpt github https github com reworkd agentgpt hugginggpt github https github com microsoft jarvis paper https arxiv org abs 2303 17580 autogpt github https github com significant gravitas auto gpt minigpt 4 github https github com vision cair minigpt 4 paper papers 2023 minigpt 4 pdf sharegpt github https github com domeccleston sharegpt character ai site https beta character ai llava paper papers 2023 visual 20instruction 20tuning pdf site https llava vl github io video llama paper papers 2023 video llama 20an 20instruction tuned 20audio visual 20language 20model 20for 20video 20understanding pdf chatpaper github https github com kaixindelele chatpaper tools deepspeed https github com microsoft deepspeed deepspeed chat https github com microsoft deepspeedexamples tree master applications deepspeed chat colossalai https github com hpcaitech colossalai megatron lm https github com nvidia megatron lm trlx https github com carperai trlx trl https github com lvwerra moss rlhf https github com openlmlab moss rlhf traditional nlp tasks 2023 annollm making large language models to be better crowdsourced annotators paper papers traditional nlp tasks 2023 annollm 20making 20large 20language 20models 20to 20be 20better 20crowdsourced 20annotators pdf 2022 super naturalinstructions generalization via declarative instructions on 1600 nlp tasks paper https arxiv org abs 2204 07705 sentiment analysis 2023 sentiment analysis in the era of large language models a reality check paper papers traditional nlp tasks sentiment analysis 2023 sentiment 20analysis 20in 20the 20era 20of 20large 20language 20models 20a 20reality 20check pdf github https github com damo nlp sg llm sentiment 2023 can chatgpt understand too a comparative study on chatgpt and fine tuned bert 2023 is 
chatgpt a good sentiment analyzer a preliminary study 2023 llms to the moon reddit market sentiment analysis with large language models 2023 is gpt 3 a good data annotator paper https arxiv org abs 2212 10450 weak supervision 2022 language models in the loop incorporating prompting into weak supervision paper https arxiv org abs 2205 02318 knowledge graph survey 2023 unifying large language models and knowledge graphs a roadmap paper https arxiv org abs 2306 08302 related topics neural text generation 2020 iclr neural text generation with unlikelihood training paper papers related topics neural text generation 2020 iclr neural 20text 20generation 20with 20unlikelihood 20training pdf 2021 findings emnlp gedi generative discriminator guided sequence generation paper papers related topics neural text generation 2021 findings emnlp gedi 20generative 20discriminator 20guided 20sequence 20generation pdf 2021 acl dexperts decoding time controlled text generation with experts and anti experts paper papers related topics neural text generation 2021 acl dexperts 20decoding time 20controlled 20text 20generation 20with 20experts 20and 20anti experts pdf 2021 iclr mirostat a neural text decoding algorithm that directly controls perplexity paper papers related topics neural text generation 2021 iclr mirostat 20a 20neural 20text 20decoding 20algorithm 20that 20directly 20controls 20perplexity pdf 2022 nips a contrastive framework for neural text generation paper papers related topics neural text generation 2022 nips a 20contrastive 20framework 20for 20neural 20text 20generation pdf controllable generation 2022 acl length control in abstractive summarization by pretraining information selection paper papers related topics neural text generation controllable generation 2022 acl length 20control 20in 20abstractive 20summarization 20by 20pretraining 20information 20selection pdf distributed training pytorch https zhuanlan zhihu com p 76638962 tensorflow ring all reduce https zhuanlan 
zhihu com p 69797852 optimizer state sharding zero https zhuanlan zhihu com p 394064174 zero offload https www deepspeed ai tutorials zero offload pipeline parallelism gpipe https zhuanlan zhihu com p 613196255 dp ddp zero https zhuanlan zhihu com p 617133971 deepspeed zero https zhuanlan zhihu com p 618865052 quantization 2020 integer quantization for deep learning inference principles and empirical evaluation paper papers related topics quantization 2020 integer 20quantization 20for 20deep 20learning 20inference 20principles 20and 20empirical 20evaluation pdf 2023 iclr gptq accurate post training quantization for generative pre trained transformers paper papers related topics quantization 2023 iclr gptq 20accurate 20post training 20quantization 20for 20generative 20pre trained 20transformers pdf 2023 qlora efficient finetuning of quantized llms paper papers related topics quantization 2023 qlora 20efficient 20finetuning 20of 20quantized 20llms pdf other gpt llm https www zhihu com question 591935281 answer 2979220793 instruct gpt rlhf https zhuanlan zhihu com p 622134699 chatgpt nlp sota https www zhihu com question 595938881 ppo 10 ppo pytorch https zhuanlan zhihu com p 512327050 https zhuanlan zhihu com p 622056740 utm source wechat session utm medium social utm oi 556103293550534656 https www zhihu com question 498271491 https www zhihu com question 594938636 answer 3081636786 related project open llms https github com eugeneyan open llms a list of open llms available for commercial use safe rlhf https github com pku alignment safe rlhf awesome multimodal large language models https github com bradyfu awesome multimodal large language models awesome chinese llm https github com hqwu hitcs awesome chinese llm
chatgpt large-language-models deepspeed distributed-training
ai
nlp-demystified
natural language processing demystified nlp demystified is a free comprehensive course to turn you into an nlp expert it covers everything from the very basics to the state of the art 15 modules of theory and concepts clearly explained 9 fully documented notebooks with end to end examples of how to accomplish common nlp tasks no machine learning knowledge assumed just know python and a bit of high school math visit nlpdemystified org https nlpdemystified org to start learning content 1 introduction video https www youtube com watch v dioxck7i2wa no notebook for this module 2 tokenization video https www youtube com watch v lzfrij85bfm notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified preprocessing ipynb 3 basic preprocessing video https www youtube com watch v i173tmctxpk notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified preprocessing ipynb scrollto uusfycpvt4ni 4 advanced preprocessing video https www youtube com watch v aeue9axo5ss notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified preprocessing ipynb scrollto o9hlyyut1kop 5 measuring document similarity with basic bag of words video https www youtube com watch v qbpdjzk2oca notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified vectorization ipynb 6 simple document search with tf idf video https www youtube com watch v fiysi41f1yg notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified vectorization ipynb scrollto cnc i4oh2arw 7 building models finding patterns for fun and profit video https www youtube com watch v 2c7bmseal8 no notebook for this module 8 naive bayes fast and simple text classification video https www youtube com watch v frwvpzoqbpq notebook https colab research google com github futuremojo nlp demystified blob 
main notebooks nlpdemystified classification naive bayes ipynb 9 topic modelling automatically discovering topics in documents video https www youtube com watch v 9mnv4awa9qi notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified topic modelling lda ipynb 10 neural networks i core mechanisms and coding one from scratch video https www youtube com watch v vs1mgwas8em notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified neural networks foundations ipynb 11 neural networks ii effective training techniques video https www youtube com watch v pytt93q b2i notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified neural networks foundations ipynb scrollto 08e eoqxxnvn 12 word vectors video https www youtube com watch v iebl0rqf5lg notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified word vectors ipynb 13 recurrent neural networks and language models video https www youtube com watch v y0fqgwbfkqw notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified recurrent neural networks ipynb 14 sequence to sequence and attention video https www youtube com watch v tvizbouq6lk notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified seq2seq and attention ipynb 15 transformers from scratch pre training and transfer learning video https www youtube com watch v acxqoltilme notebook https colab research google com github futuremojo nlp demystified blob main notebooks nlpdemystified transformers and pretraining ipynb
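modules 5 and 6 cover measuring document similarity with bag-of-words vectors and searching documents with tf-idf. a minimal from-scratch sketch of that idea is below — the toy corpus, the smoothed-idf formula, and the helper names are illustrative assumptions, not taken from the course notebooks, which build their own examples.

```python
# From-scratch TF-IDF document search, in the spirit of course modules 5-6.
# The corpus, query, and smoothing choice are illustrative only.
import math
from collections import Counter

corpus = [
    "how to grow tomatoes at home",
    "watering schedules for indoor plants",
    "a short history of rome",
]

def tf_idf(doc_tokens, all_docs):
    """Map each term in one tokenized document to its tf * idf weight."""
    n_docs = len(all_docs)
    counts = Counter(doc_tokens)
    vec = {}
    for term, count in counts.items():
        df = sum(1 for d in all_docs if term in d)    # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1   # smoothed idf
        vec[term] = (count / len(doc_tokens)) * idf
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [doc.split() for doc in corpus]
doc_vecs = [tf_idf(d, docs) for d in docs]

query = "watering indoor tomatoes".split()
q_vec = tf_idf(query, docs)

# Rank documents by similarity to the query, best match first
ranked = sorted(range(len(corpus)),
                key=lambda i: cosine(q_vec, doc_vecs[i]), reverse=True)
for i in ranked:
    print(f"{cosine(q_vec, doc_vecs[i]):.3f}  {corpus[i]}")
```

the document sharing two query terms ranks first, the one sharing a single term second, and the unrelated document scores zero — the same behavior the course's tf-idf search notebook demonstrates on its own corpus.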
ai
LLM-research-paper
llm research paper paper list about large language model llm llm training 1 lima less is more for alignment arxiv 2023 paper http export arxiv org abs 2305 11206 br chunting zhou pengfei liu puxin xu srini iyer jiao sun yuning mao xuezhe ma avia efrat ping yu lili yu susan zhang gargi ghosh mike lewis luke zettlemoyer omer levy llm survey 1 a survey of large language models arxiv 2023 paper https arxiv org abs 2303 18223 br wayne xin zhao kun zhou junyi li tianyi tang xiaolei wang yupeng hou yingqian min beichen zhang junjie zhang zican dong yifan du chen yang yushuo chen zhipeng chen jinhao jiang ruiyang ren yifan li xinyu tang zikang liu peiyu liu jian yun nie ji rong wen llm evaluation 1 is chatgpt a general purpose natural language processing task solver arxiv 2023 paper https arxiv org abs 2302 06476 br chengwei qin aston zhang zhuosheng zhang jiaao chen michihiro yasunaga diyi yang 2 a multitask multilingual multimodal evaluation of chatgpt on reasoning hallucination and interactivity arxiv 2023 paper https arxiv org abs 2302 04023 br yejin bang samuel cahyawijaya nayeon lee wenliang dai dan su bryan wilie holy lovenia ziwei ji tiezheng yu willy chung quyet v do yan xu pascale fung 3 we re afraid language models aren t modeling ambiguity arxiv 2023 paper https arxiv org pdf 2304 14399 pdf br alisa liu zhaofeng wu julian michael alane suhr peter west alexander koller swabha swayamdipta noah a smith yejin choi efficient tuning 1 vl adapter parameter efficient transfer learning for vision and language tasks cvpr 2022 paper https arxiv org abs 2112 06825 br yi lin sung jaemin cho mohit bansal 2 multimodal few shot learning with frozen language models nips 2021 paper https proceedings neurips cc paper 2021 file 01b7575c38dac42f3cfb7d500438b875 paper pdf br maria tsimpoukelli jacob l menick serkan cabi s m ali eslami oriol vinyals felix hill 3 modular and parameter efficient multimodal fusion with prompting findings of acl 2022 paper https aclanthology org 2022 
findings acl 234 br sheng liang mengjie zhao hinrich schuetze 4 learning to prompt for vision language models ijcv 2022 paper https arxiv org pdf 2109 01134 pdf br kaiyang zhou jingkang yang chen change loy ziwei liu 5 conditional prompt learning for vision language models cvpr 2022 paper https openaccess thecvf com content cvpr2022 papers zhou conditional prompt learning for vision language models cvpr 2022 paper pdf br kaiyang zhou jingkang yang chen change loy ziwei liu 6 maple multi modal prompt learning arxiv 2022 paper https arxiv org pdf 2210 03117 pdf br muhammad uzair khattak hanoona rasheed muhammad maaz salman khan fahad shahbaz khan 7 aim adapting image models for efficient video understanding iclr 2023 paper https arxiv org abs 2302 03024 br taojiannan yang yi zhu yusheng xie aston zhang chen chen mu li llm with multimodal 1 see think confirm interactive prompting between vision and language models for knowledge based visual reasoning arxiv 2023 paper https arxiv org pdf 2301 05226 pdf br zhenfang chen qinhong zhou yikang shen yining hong hao zhang chuang gan 2 multimodal chain of thought reasoning in language models arxiv 2023 paper https arxiv org abs 2302 00923 br zhuosheng zhang aston zhang mu li hai zhao george karypis alex smola 3 prompting large language models with answer heuristics for knowledge based visual question answering cvpr 2023 paper https arxiv org abs 2303 01903 code https github com milvlg prophet br zhenwei shao zhou yu meng wang jun yu 4 prompt generate then cache cascade of foundation models makes strong few shot learners cvpr 2023 paper https arxiv org abs 2303 02151 code https github com zrrskywalker cafo br renrui zhang xiangfei hu bohao li siyuan huang hanqiu deng hongsheng li yu qiao peng gao 5 an empirical study of gpt 3 for few shot knowledge based vqa aaai 2022 paper https arxiv org abs 2109 05014 br zhengyuan yang zhe gan jianfeng wang xiaowei hu yumao lu zicheng liu lijuan wang 6 kat a knowledge augmented transformer 
for vision and language naacl 2022 paper https aclanthology org 2022 naacl main 70 br liangke gui borui wang qiuyuan huang alexander hauptmann yonatan bisk jianfeng gao 7 revive regional visual representation matters in knowledge based visual question answering nips 2022 paper https arxiv org abs 2206 01201 br yuanze lin yujia xie dongdong chen yichong xu chenguang zhu lu yuan 8 decap decoding clip latents for zero shot captioning via text only training iclr 2023 paper https arxiv org abs 2303 03032 br wei li linchao zhu longyin wen yi yang 9 visual chatgpt talking drawing and editing with visual foundation models arxiv 2023 paper https arxiv org abs 2303 04671 br chenfei wu shengming yin weizhen qi xiaodong wang zecheng tang nan duan 10 mm react prompting chatgpt for multimodal reasoning and action arxiv 2023 paper https arxiv org abs 2303 11381 br zhengyuan yang linjie li jianfeng wang kevin lin ehsan azarnasab faisal ahmed zicheng liu ce liu michael zeng lijuan wang 11 vipergpt visual inference via python execution for reasoning arxiv 2023 paper https arxiv org abs 2303 08128 br d dac sur s sachit menon carl vondrick 12 minigpt 4 enhancing vision language understanding with advanced large language models arxiv 2023 paper https arxiv org abs 2304 10592 br deyao zhu jun chen xiaoqian shen xiang li mohamed elhoseiny 13 instructblip towards general purpose vision language models with instruction tuning arxiv 2023 paper https arxiv org abs 2305 06500 br wenliang dai junnan li dongxu li anthony meng huat tiong junqi zhao weisheng wang boyang li pascale fung steven hoi 14 blip 2 bootstrapping language image pre training with frozen image encoders and large language models icml 2023 paper https arxiv org abs 2301 12597 br junnan li dongxu li silvio savarese steven hoi 15 videochat chat centric video understanding arxiv 2023 paper https arxiv org abs 2305 06355 br kunchang li yinan he yi wang yizhuo li wenhai wang ping luo yali wang limin wang yu qiao 16 self chained 
image language model for video localization and question answering arxiv 2023 paper https arxiv org abs 2305 06988 br shoubin yu jaemin cho prateek yadav mohit bansal 17 gpt4roi instruction tuning large language model on region of interest arxiv 2023 paper https arxiv org pdf 2307 03601 pdf br shilong zhang peize sun shoufa chen min xiao wenqi shao wenwei zhang kai chen ping luo llm in data generation 1 generate labeled training data using prompt programming and gpt 3 an example of big five personality classification arxiv 2023 paper https arxiv org ftp arxiv papers 2303 2303 12279 pdf br eason chen 2 auggpt leveraging chatgpt for text data augmentation arxiv 2023 paper https arxiv org abs 2302 13007 br haixing dai zhengliang liu wenxiong liao xiaoke huang yihan cao zihao wu lin zhao shaochen xu wei liu ninghao liu sheng li dajiang zhu hongmin cai lichao sun quanzheng li dinggang shen tianming liu xiang li 3 reward design with language models iclr 2023 paper https arxiv org pdf 2303 00001 pdf br minae kwon sang michael xie kalesha bullard dorsa sadigh 4 is a prompt and a few samples all you need using gpt 4 for data augmentation in low resource classification tasks arxiv 2023 paper https arxiv org abs 2304 13861 br anders giovanni m ller jacob aarup dalsgaard arianna pera luca maria aiello hallucination in llm 1 selfcheckgpt zero resource black box hallucination detection for generative large language models arxiv 2023 paper https arxiv org pdf 2303 08896 pdf br potsawee manakul adian liusie mark j f gales 2 reflexion an autonomous agent with dynamic memory and self reflection arxiv 2023 paper https arxiv org pdf 2303 11366 pdf br noah shinn beck labash ashwin gopinath in context learning 1 fairness guided few shot prompting for large language models arxiv 2023 paper https arxiv org pdf 2303 13217 pdf br huan ma changqing zhang yatao bian lemao liu zhirui zhang peilin zhao shu zhang huazhu fu qinghua hu bingzhe wu 2 larger language models do in context learning 
differently arxiv 2023 paper https arxiv org pdf 2303 03846 pdf br jerry wei jason wei yi tay dustin tran albert webson yifeng lu xinyun chen hanxiao liu da huang denny zhou tengyu ma 3 automatic chain of thought prompting in large language models iclr 2023 paper https arxiv org pdf 2210 03493 pdf br zhuosheng zhang aston zhang mu li alex smola
ai
d2-admin-renren-security-enterprise
banner https raw githubusercontent com d2 projects d2 admin renren security enterprise master doc image banner png h2 align center d2admin renren security enterprise h2 p align center d2admin p p align center a img src https img shields io github release d2 projects d2 admin renren security enterprise svg a a href https www travis ci org d2 projects d2 admin renren security enterprise img src https www travis ci org d2 projects d2 admin renren security enterprise svg branch master a a img src https img shields io github last commit d2 projects d2 admin renren security enterprise svg a a img src https img shields io badge code style standard brightgreen svg a p p align center a img src https img shields io github issues d2 projects d2 admin renren security enterprise svg a a img src https img shields io github issues closed d2 projects d2 admin renren security enterprise svg a a img src https img shields io github issues pr d2 projects d2 admin renren security enterprise svg a a img src https img shields io github issues pr closed d2 projects d2 admin renren security enterprise svg a a img src https img shields io github forks d2 projects d2 admin renren security enterprise svg a a img src https img shields io github stars d2 projects d2 admin renren security enterprise svg a p d2admin https github com d2 projects d2 admin https www renren io enterprise d2admin https github com d2 projects d2 admin renren d2admin fairyever com https renren d2admin fairyever com https demo renren io security enterprise d2admin https doc d2admin fairyever com zh https www renren io guide x x token x x x x x x x x x x x x svg x x iframe x x d2admin https github com d2 projects d2 admin or or or cdn https join https raw githubusercontent com d2 projects d2 admin master doc image reward me 2x png d2admin https github com d2 projects d2 admin readme a href https github com d2 projects d2 admin target blank img src https raw githubusercontent com fairyever d2 admin master doc image d2 
admin 2x png width 200 a readme html a href https github com d2 projects d2 admin target blank img src https raw githubusercontent com fairyever d2 admin master doc image d2 admin 2x png width 200 a d2admin https github com d2 projects d2 admin d2admin 2000 qq d2 join https raw githubusercontent com fairyever d2 admin master doc image join 2x png join https raw githubusercontent com d2 projects d2 admin master doc image js now 2x png https daily fairyever com https daily fairyever com license mit https github com d2 projects d2 admin renren security enterprise blob master license copyright c 2019 present fairyever https raw githubusercontent com fairyever d2 admin master doc image give a star 2x png
front_end
GrowItApp
growit the problem you want to know how to grow a plant and are not sure how to do it is it the right plant for you to grow growing a plant but forgetting to irrigate it about us hey we are growit our app suggests which plant to grow and supports you along the growing process with irrigation notifications via a dedicated management platform in addition you can get relevant information about the plant in order to grow the optimal plant for you we have been chosen to represent the course in shenkar website and exhibition website link https www shenkar ac il he news items mobile exhibition software fbclid iwar05nitocx0li18aig8ns16epsps5ogoesiotskua9pjdoivc2s46dvdoow q text growit growit https mobile final project growit s3 eu west 1 amazonaws com poster 35x50 se 2020 gimel1im png growit technologies project built in react native using the expo platform and redux as a state manager server written in javascript with nodejs deployed to heroku plants data saved on mongodb plants pictures uploaded to an s3 bucket current weather is fetched from the openweathermap api https openweathermap org
front_end
aRTOS
artos artos a real time operating system br 2015 2017 copyright by flandreunx br br br memory br buddy buddy thread br rw data heap timer br os schedule br 255 bitmap irq lib br symbol expert fifo 1byte linux list t pm br signal more modules br callback key usart console log more
os
web-2020
web 2020 welcome to the introduction to web development course 2020 edition in this repo you will find plenty of frontend and backend examples and it is meant to serve somewhat as your class notes you can easily check the commit history in order to time travel to a certain point in the course something you need might be refactored within the next couple of lessons you can find the backend examples inside the ejemplos django directory and the frontend examples in the ejemplos redux folder if you find any issue with this code please feel free to create an issue or even better a pull request to fix it
front_end
flaskApp
flaskapp a flask app for learning backend development with python flask framework
server
sql_analysis
sql analysis data modeling data engineering and data analysis of an employee database from six csv files
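a minimal sketch of the kind of workflow this repo describes, loading one of the csv files into a database table and then querying it for analysis; the file name, schema, and query here are hypothetical stand-ins, using only python's built-in csv and sqlite3 modules:

```python
import csv
import io
import sqlite3

# hypothetical CSV content standing in for one of the six files (e.g. titles.csv)
raw = io.StringIO("emp_no,title\n10001,Engineer\n10002,Staff\n10003,Engineer\n")

conn = sqlite3.connect(":memory:")  # in-memory sqlite stands in for the real database
conn.execute("CREATE TABLE titles (emp_no INTEGER, title TEXT)")

rows = list(csv.reader(raw))
conn.executemany("INSERT INTO titles VALUES (?, ?)", rows[1:])  # skip the header row

# analysis step: count employees per title
counts = dict(conn.execute("SELECT title, COUNT(*) FROM titles GROUP BY title"))
print(counts)  # e.g. {'Engineer': 2, 'Staff': 1}
```

the same pattern scales to the real files by swapping the StringIO for open("titles.csv") and repeating the load per table.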
server
every-chatgpt-gui
every front end gui client for chatgpt api similar to every proximity chat app https github com billmei every proximity chat app i made this list to keep track of every graphical user interface alternative to chatgpt if you want to add your app feel free to open a pull request to add your app to the list you can list your app under the appropriate category in alphabetical order if you want your app removed from this list you can also open a pull request to do that too open source web big agi demo https big agi com source https github com enricoros big agi chatbot ui demo https www chatbotui com source https github com mckaywrigley chatbot ui chatgpt ai template demo https horizon ui com chatgpt ai template source https github com horizon ui chatgpt ai template chatgpt api demo demo https chatgpt ddiu me source https github com ddiu8081 chatgpt demo chatgpt lite demo https gptlite vercel app source https github com blrchen chatgpt lite chatgpt minimal demo https chatgpt minimal vercel app source https github com blrchen chatgpt minimal chatgpt next web demo https chat gpt next web vercel app source https github com yidadaa chatgpt next web chatgpt vercel demo https www chatsverse xyz source https github com ourongxing chatgpt vercel chatgpt web demo https niek github io chatgpt web source https github com cogentapps chat with gpt chat with gpt demo https www chatwithgpt ai source https github com cogentapps chat with gpt chatwithme chat demo https www chatwithme chat source https github com kierangilliam chatwithme chat l gpt demo https gpt ltopx com source https github com peek a booo l gpt mychatgpt demo https my chat gpt lake vercel app source https github com loeffeldude my chat gpt browser extension chatgptbox source https github com josstorer chatgptbox firefox https addons mozilla org firefox addon chatgptbox chrome https chrome google com webstore detail chatgptbox eobbhoofkanlmddnplfhnmkfbnlhpbbo edge https microsoftedge microsoft com addons detail fission 
chatbox best enjmfilpkbbabhgeoadmdpjjpnahkogf safari https apps apple com app fission chatbox id6446611121 chathub source https github com chathub dev chathub chrome https chrome google com webstore detail chathub all in one chatbo iaakpnchhognanibcahlpcplchdfmgma utm source every chat gpt gui edge https microsoftedge microsoft com addons detail chathub allinone chat kdlmggoacmfoombiokflpeompajfljga utm source every chat gpt gui superpower chatgpt source https github com saeedezzati superpower chatgpt chrome https chrome google com webstore detail superpower chatgpt amhmeenmapldpjdedekalnfifgnpfnkc firefox https addons mozilla org en us firefox addon superpower chatgpt self hosted chatgpt ui source https github com patrikzudel patrikzeros chatgpt api ui chatgpt web source https github com chanzhaoyu chatgpt web my chat gpt source https github com michaelnutt02 my chat gpt desktop chatbox download https chatboxapp xyz source https github com bin huang chatbox chatgpt desktop download https github com chatgptui desktop releases source https github com chatgptui desktop chatgpt menubar app source https github com sw yx chatgpt mac marvin download https www askmarvin ai source https github com prefecthq marvin not open source web ai ls demo https ai ls chatkit demo https chatkit app horizon ai template demo https horizon ui com horizon ai template koalachat demo https koala sh chat mygpt demo https mygpt thesamur ai poe demo https poe com typingmind demo https www typingmind com desktop boltai download https boltai app easychat ai download https easychat ai app macgpt download https www macgpt com writers brew download https writersbrew app more plugins and tools awesomelist https github com reorx awesome chatgpt api
chatgpt gpt-3 gpt-4 large-language-models
front_end
ESP8266-RTOS-IR
esp8266 rtos ir infrared rx tx library for espressif esp8266 rtos sdk https github com espressif esp8266 rtos sdk v3 2 esp idf style to send and receive ir commands receiving ir codes can be done on an arbitrary pin which supports gpio mode and pin change interrupts transmission though can only be done on gpio14 this library uses the i2s ws pin to send accurate 38khz ir signals with 50 duty cycle without blocking the cpu like other libraries do recently espressif has released an official ir feature driver ir rx h driver ir tx h in the master branch but this library is more lightweight compatibility esp8266 rtos sdk v3 2 installation because of this issue 663 https github com espressif esp8266 rtos sdk issues 663 you should add the following line to idf path components freertos port esp8266 include freertos freertosconfig h manually

```c
#define INCLUDE_xTimerPendFunctionCall 1
```

then you can clone this project inside your components folder usage example sending command

```c
#include <ir/ir.h>
#include <ir/raw.h>

static int16_t command1[] = {
    3291, 1611, 443, 370, 425, 421, 421, 1185, 424, 422, 421, 1185,
    425, 421, 421, 370, 424, 392, 448, 1188, 423, 1214, 444, 372,
    422, 395, 447, 397, 420, 1186, 449, 1185, 424, 423, 419, 375,
    441, 372, 423, 422, 420, 372, 444, 370, 424, 422, 420, 372,
    421, 393, 424, 421, 421, 371, 422, 392, 449, 398, 420, 1185,
    450, 396, 421, 370, 422, 423,
};

ir_tx_init();
ir_raw_send(command1, sizeof(command1) / sizeof(*command1));
```

example receiving nec like command

```c
#include <ir/ir.h>
#include <ir/generic.h>

#define IR_RX_GPIO 12

static ir_generic_config_t my_protocol_config = {
    .header_mark = 3200, .header_space = 1600,
    .bit1_mark = 400, .bit1_space = 1200,
    .bit0_mark = 400, .bit0_space = 400,
    .footer_mark = 400, .footer_space = 8000,
    .tolerance = 10,
};

ir_rx_init(IR_RX_GPIO, 1024);
ir_decoder_t *generic_decoder = ir_generic_make_decoder(&my_protocol_config);

uint8_t buffer[32];
while (1) {
    uint16_t size = ir_recv(generic_decoder, 0, buffer, sizeof(buffer));
    if (size == 0)
        continue;

    printf("decoded packet (size = %d)\n", size);
    for (int i = 0; i < size; i++) {
        printf("0x%02x ", buffer[i]);
        if (i % 16 == 15)
            printf("\n");  /* newline after every 16 bytes of packet data */
    }
    if (size % 16)
        printf("\n");  /* print final newline unless packet size is a multiple
                          of 16 and a newline was printed inside of the loop */
}
```

license mit licensed see the bundled license https github com fonger esp8266 rtos ir blob master license file for more details this library is based on maximkulkin esp ir https github com maximkulkin esp ir which works with esp open rtos https github com superhouse esp open rtos instead of the official espressif sdk
esp8266 rtos esp8266-rtos esp-idf espressif infrared ir-remote
os
induced-rationales-markup-tokens
induced rationales markup tokens paper reproduction code for induced natural language rationales and interleaved markup tokens enable extrapolation in large language models we provide the necessary notebooks for reproducing both tasks scan and addition description of folder contents scan few shot text davinci 002 experiment with prompt composed of explanations and markup tokens experiment rationales markups ipynb experiment with prompt consisting of only explanations experiment rationales only ipynb experiment with prompt composed of only the markup tokens experiment rationales only ipynb in the outputs directory it is possible to check the output of the model in each of the above scenarios note due to costs the ablation tests rationales markups only were performed with 200 samples finetuning ada curie davinci experiments with fine tuning on gpt 3 models ada curie davinci fine tuning gpt3 ada 20 epochs ipynb fine tuning gpt3 curie 3 epochs ipynb fine tuning gpt3 davinci 3 epochs ipynb each experiment was trained using the scan training dataset 16 990 converted to jsonl format train prepared jsonl the evaluation was done with the test set 3 920 also converted into the jsonl format test prepared jsonl the notebooks test folder has the code used in the test step of the models with the outputs folder making the model output available in each scenario tried addition experiment with prompt composed of explanations and markup tokens experiment rationales markups ipynb experiment with prompt consisting of only explanations experiment rationales only ipynb experiment with prompt composed of only the markup tokens experiment rationales only ipynb in the outputs directory it is possible to check the output of the model in each of the above scenarios note due to costs the ablation tests rationales markups only were performed with 200 samples generating the datasets in both scan and addition tasks the dataset went through some pre processing the same configuration with the
following steps 1 download scan dataset wget https raw githubusercontent com facebookresearch meta seq2seq master data tasks train length txt wget https raw githubusercontent com facebookresearch meta seq2seq master data tasks test length txt sed s in tasks train length txt sed s out t train tsv sed s in tasks test length txt sed s out t test tsv 2 run the script b generate dataset py b python generate dataset py file name random 400 test output path scan task scan python3 generate dataset py file name addition random 400 test min digits 4 max digits 14 output path addition task task addition
ai
US-Immigration-Data-Lake
us immigration data lake data engineering capstone project goal to support the u s customs border protection department in making better decisions on immigration policies img src https github com saurabhsoni5893 us immigration data lake blob master images datalake png align centre overview the purpose of this data engineering capstone project is to give students a chance to combine what they ve learned throughout the program this project will be an important part of learners portfolio that will help to achieve data engineering related career goals we could choose to complete the project provided by the udacity team or define the scope and data ourselves i took the first approach building the data lake on the data on immigration to the united states provided by udacity business scenario a business consulting firm specialized in data warehouse services through assisting enterprises with navigating their data needs and creating strategic operational solutions that deliver tangible business results is contacted by the u s customs border protection department specifically they want help with the modernization of the department s data warehousing infrastructure by improving performance and ease of use for end users enhancing functionality and decreasing total cost of ownership while making it possible for real time decision making in total the department is asking for a full suite of services that includes helping the department with data profiling data standardization data acquisition data transformation and integration the u s customs and border protection needs help to see what is hidden behind the data flood the consulting firm aims to model and create a brand new analytics solution on top of the state of the art technologies available to enable the department to unleash insights from data and then make better decisions on immigration policies for those who came and will be coming to the us in the near future the architecture the whole solution is cloud based on top of amazon web services aws first
all the datasets were preprocessed with apache spark and stored in a staging area in aws s3 bucket then it is loaded into a amazon redshift cluster using an apache airflow pipeline that transfers and checks the quality of the data to finally provide the department a data lake for their convenient analysis the data model https github com saurabhsoni5893 us immigration data lake blob master images star schema png structure of the project following the udacity guide for this project the structure is as shown below step 1 scope the project and gather data step 2 explore and assess the data step 3 define the data model step 4 run etl to model the data step 5 complete project write up to explore all these steps in details please go to link us immigration data lake https github com saurabhsoni5893 us immigration data lake blob master us immigration data lake ipynb
data data-engineering data-engineering-nanodegree udacity capstone-project datamodeling apache-spark aws aws-s3 aws-ec2 data-lake apache-airflow
cloud
meta-intel-iot-security
discontinuation of project this project will no longer be maintained by intel intel has ceased development and contributions including but not limited to maintenance bug fixes new releases or updates to this project intel no longer accepts patches to this project if you have an ongoing need to use this project are interested in independently developing it or would like to maintain patches for the open source software community please create your own fork of this project meta intel iot security a collection of loosely related openembedded layers providing several security technologies in general the additional features must be explicitly enabled merely adding the layers has little influence on the resulting packages and images therefore it is possible to build a distro where security is an optional feature meta security framework is a general purpose utility layer both meta security smack and meta integrity depend on it meta security smack and meta integrity do not depend on each other see the individual layer readme s for further instructions testing build and unit testing under travisci is configured against different openembedded core branches in travis yml currently master jethro and fido are covered two different travisci environments are used the traditional containers support test jobs of up to 2 hours but do not offer root rights therefore testing under qemu which needs a configured tap device and thus root access is not possible using that environment is configured in a special travis compilation branch the fully virtualized trusty environment grants root rights and thus can run tests but only allows jobs of up to 50 minutes which is not enough to compile from scratch this configuration is the default used by most branches both environments have fairly limited disk space therefore the rm work bbclass is used to reduce disk usage during building even that is often not good enough to compile in one go without running out of disk space to overcome these 
limitations sstate is stored persistently shared between all jobs and always updated after compilation even for jobs which had to be terminated prematurely that way restarting a job that failed due to a timeout or full disk will make some progress and eventually succeed restarting the travis compilation jobs is useful for this because those jobs make more progress per run bitbake itself gets started under the travis cmd wrapper py helper script which logs system state disk usage cpu utilization running processes at regular time intervals which is useful for monitoring a run and also avoids getting killed by travisci when there is no normal output for more than 10 minutes as can happen when bitbake is working on a single complex task like compiling the linux kernel in addition the script ensures that bitbake generates output for non interactive usage and terminates early enough to leave time for sstate uploading an amazon s3 bucket is used to store the sstate pull requests get access to existing sstate but are not allowed to modify it travisci testing is free for open source projects like this one but s3 is not the free travisci caching was evaluated but turned out to be not flexible enough uploading is done with s3cmd because that offers more control than the travisci deploy addon it also only has python as a dependency which allows freeing up some disk space by deleting the ruby runtime environment from the home directory to re create a similar setup in the amazon iam console create a new user travisci that will get access to the bucket remember its access key and secret log into the amazon s3 console create a new bucket the default for the travisci deploy addon seems to be us east 1 so perhaps that works best edit the bucket policy such that the travisci user can upload content and everyone else gets read access

```json
{
    "Version": "2012-10-17",
    "Id": "Policy1234",
    "Statement": [
        {
            "Sid": "AllowUpload",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::youraccount:user/travisci" },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::yourbucket/*"
        },
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::yourbucket/*"
        }
    ]
}
```

enable static web hosting with default index html and error html configure your travisci project with the following environment variables aws access key access key for the travisci user not displayed aws secret key access secret for the travisci user not displayed aws bucket your bucket name displayed aws bucket region the region where the bucket was created aws access key and aws secret key are not exported to pull requests so there is no risk that they get leaked to malicious commands in a modified travis yml and when testing a pull request uploading is disabled in the current travis yml the static web hosting is used to configure a http sstate cache server based on the aws bucket and aws bucket region if available this works because access permissions are set such that bitbake and wget do not need credentials to access the sstate travisci https travis ci org travisci environments https docs travis ci com user ci environment travisci caching https docs travis ci com user caching amazon s3 https aws amazon com s3 amazon iam console https console aws amazon com iam amazon s3 console https console aws amazon com s3 copying unless noted otherwise files are provided under the mit license see copying mit
server
evaluate-news-nlp
evaluate a news article with natural language processing 4th project at udacity https www udacity com course front end web developer nanodegree nd0011 front end web developer nanodegree program this project aims to build a web tool that allows users to run natural language processing nlp on articles or blogs found on other websites when a user submits a url of an article the web page then displays sentiment analysis returned from the meaningcloud api https www meaningcloud com products sentiment analysis based on the contents of the article build tools html css javascript node express webpack meaningcloud api jest workbox installation make sure node and npm are installed from the terminal node v npm v 1 move to the project folder cd project directory 2 clone the repo git clone repo 3 install npm packages npm install 4 install loaders and plugins choose the necessary installation for your development mode npm i d babel core babel preset env babel loader npm i d style loader node sass css loader sass loader npm i d clean webpack plugin npm i d html webpack plugin npm i d mini css extract plugin npm i d optimize css assets webpack plugin terser webpack plugin 5 sign up for an api key at meaningcloud com https www meaningcloud com developer create account 6 configure environment variables using the dotenv package 1 install the dotenv package npm install dotenv 2 create a new env file in the root of your project 3 fill the env file with your api key like this api key 7 start the project command action npm run build prod build project npm start run project 8 open browser at http localhost 8081
udacity-nanodegree front-end meaningcloud
ai
NLP-with-LLMs
natural language processing with large language models purpose jon krohn https www jonkrohn com created this repo to accompany his half day training on nlp with gpt 4 and other llms from training to deployment with hugging face and pytorch lightning which was first offered at the open data science conference odsc east https odsc com boston in boston on may 10th 2023 code code can be found in the aptly named code https github com jonkrohn nlp with llms tree main code directory jupyter notebooks are directly supported for execution in google colab https colab research google com py files are for running at the command line see instructions https github com jonkrohn nlp with llms tree main instructions n b code is intended to be accompanied by live instructions and so it will not necessarily be self explanatory repo art p align center img src https github com jonkrohn nlp with llms blob main img llamas jpeg p the repo art above was generated by prompting midjourney v5 with this artistic take on llms that was output by gpt 4 painting of a harmonious blend of alpacas https crfm stanford edu 2023 03 13 alpaca html and vicu as https vicuna lmsys org in rich shades of caramel and soft gray amidst a lush futuristic landscape the animals are surrounded by a web of glowing pulsating neural network connections in hues of electric blue and neon green symbolizing the cutting edge and cost effective ai training techniques in the background a dynamic matrix of binary code cascades down further emphasizing the technological prowess of the scene
ai
DAC2018-TGIIF
dac 2018 system design contest tgiif the 1st place winner s source codes for dac 2018 system design contest fpga track our design is based on the deephi dpu rtl ip core and deephi dnndk software stack for more info about deephi dpu and dnndk please refer to www deephi com 1 for prerequisites refer to the prerequisites folder they are necessary for running our demo on pynq z1 for the tutorial in python refer to the python notebook in the tgiif folder for the source codes of software our block design and the nn model we used refer to the src block design and model folders respectively algorithm ssd modification we used ssd single shot multibox detector as our base algorithm and modified it to better fit the acceleration on dpu the overview of the modification applied to the ssd network is shown below ssd https github com hirayaku dac2018 tgiif raw master image ssd png there are mainly four modifications as below and you can find more details in the model caffe model folder better performance resize input image from 640x360 to 448x252 factor 0 7 small objects delete deep layer branches to speed up and get higher iou speed up the convergence add batch normalization evaluation metric from map to iou pruning and quantization we used deephi dnndk the first public release of a deep learning sdk in china to support the full stack development and deployment on the deephi dpu platform more info about deephi dnndk refer to www deephi com dnndk html 2 dnndk https github com hirayaku dac2018 tgiif raw master image dnndk png in the phase of training we use fixed point instead of floating point for data representation as shown below the processes of pruning and quantization are implemented via deephi dnndk training https github com hirayaku dac2018 tgiif raw master image training png software we used deephi dnndk for nn model compilation and runtime deployment furthermore we applied several optimization methods to improve the accuracy and speed on pynq z1 functional level optimization max
value selection selecting the max confidence group bounding box instead of computing all the nms table look up computing softmax with a table look up to reduce the exponent arithmetic time system level optimization divide the workflow into fine grained sub tasks re organization using multi threading 1 8x speed up with 2 threads multithread https github com hirayaku dac2018 tgiif raw master image multithread png hardware overview of dpu we use the deephi dpu ip as our hardware system it is the basis of our design it is a deep learning processor unit that is specially designed for cnn and dnn named as aristotle dpu https github com hirayaku dac2018 tgiif raw master image dpu png block design here is our hardware solution on pynq z1 as for the connectivity there are two 64bit ports used for ddr read write and one 32bit port for instruction and profiler the arm cpu uses one 32bit port to read write registers for control bd https github com hirayaku dac2018 tgiif raw master image bd png 1 http www deephi com deephi 2 http www deephi com dnndk html dnndk
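the softmax table look-up mentioned above can be illustrated with a small python sketch (not the contest code, which runs in fixed point on the dpu): the exp() table is precomputed once over the quantized input range, so each softmax call only does table reads and one normalization pass; the scale and range below are hypothetical choices for illustration:

```python
import math

SCALE = 256        # hypothetical fixed-point scale: stored value = real value * SCALE
RANGE = 8 * SCALE  # cover max-shifted inputs in [-8.0, 0.0]

# one-time table: EXP_TABLE[i] == exp((i - RANGE) / SCALE)
EXP_TABLE = [math.exp((i - RANGE) / SCALE) for i in range(RANGE + 1)]

def softmax_lut(fixed_logits):
    # shift by the max (the "max value selection" trick) so every
    # looked-up input is <= 0 and fits inside the table range
    m = max(fixed_logits)
    exps = [EXP_TABLE[max(x - m + RANGE, 0)] for x in fixed_logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax_lut([512, 256, 0])  # fixed-point logits for 2.0, 1.0, 0.0
```

inputs shifted below the table range are clamped to the smallest entry, which is close enough to zero probability for detection confidences.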
os
learn-database
fundamentals of database engineering this repo tracks personal learning about databases including fundamentals of database engineering https www udemy com course database engines crash course by hussein nasser
server
public-blockchains-exercises
public blockchains exercises exercises of the course public blockchains taught at uni mannheim summer semester 2023
blockchain
RL4NMT
reinforcement learning for neural machine translation rl4nmt emnlp 2018 a study of reinforcement learning for neural machine translation

```
@inproceedings{wu2018study,
  title={A Study of Reinforcement Learning for Neural Machine Translation},
  author={Wu, Lijun and Tian, Fei and Qin, Tao and Lai, Jianhuang and Liu, Tie-Yan},
  booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
  pages={3612--3621},
  year={2018}
}
```

rl4nmt is based on transformer please first get familiar with the basic tensor2tensor project https github com tensorflow tensor2tensor the tensorflow version is 1 4 and the tensor2tensor version is 1 2 9 take wmt17 chinese english translation as an example different training strategies are provided different rl training strategies for nmt evaluated on bilingual dataset br 1 hparams zhen wmt17 transformer rl total setting terminal reward beam search br 2 hparams zhen wmt17 transformer rl delta setting reward shaping beam search br 3 hparams zhen wmt17 transformer rl delta setting random reward shaping multinomial sampling br 4 hparams zhen wmt17 transformer rl total setting random terminal reward multinomial sampling br 5 hparams zhen wmt17 transformer rl delta setting random baseline reward shaping multinomial sampling reward baseline br 6 hparams zhen wmt17 transformer rl delta setting random mle reward shaping multinomial sampling objectives combination different monolingual data combination training in rl4nmt br 1 zhen src mono source monolingual data rl training based on bilingual data mle model br 2 zhen tgt mono target monolingual data rl training based on bilingual data mle model br 3 zhen src tgt mono sequential mode target monolingual data rl training based on bilingual source monolingual data mle model br 4 zhen tgt src mono sequential mode source monolingual data rl training based on bilingual target monolingual data mle model br 5 zhen bi src tgt mono unified model supports mrt minimum risk training for nmt several important
implementations in the code this file contains the model builder which does the sampling and feeds the sampled translations for training https github com apeterswu rl4nmt blob master tensor2tensor utils model builder py l132 this file contains the loss function building for rl and regular mle https github com apeterswu rl4nmt blob master tensor2tensor utils t2t model py l595 this file includes the detailed bleu reward calculation https github com apeterswu rl4nmt blob master tensor2tensor utils bleu hook py l59
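the terminal-reward vs reward-shaping distinction used in the settings above can be sketched in a few lines of python, with a toy unigram-overlap score standing in for the actual bleu reward computed in bleu hook py: under reward shaping, the per-step reward is the increment of the partial-sequence reward, so the shaped rewards telescope and sum to the terminal reward:

```python
def unigram_reward(prefix, reference):
    # toy stand-in for the BLEU reward: fraction of distinct reference
    # tokens already covered by the hypothesis prefix
    return len(set(prefix) & set(reference)) / len(set(reference))

def shaped_rewards(tokens, reference):
    # reward shaping: r_t = R(y_1..t) - R(y_1..t-1), with R(empty) = 0;
    # the "terminal reward" setting instead gives R(full) at the last step only
    rewards, prev = [], 0.0
    for t in range(1, len(tokens) + 1):
        r = unigram_reward(tokens[:t], reference)
        rewards.append(r - prev)
        prev = r
    return rewards

ref = ["the", "cat", "sat"]
hyp = ["the", "dog", "sat"]
rs = shaped_rewards(hyp, ref)  # per-step credit: "the" and "sat" earn reward, "dog" earns none
```

because the shaped rewards sum to the terminal reward, both settings optimize the same objective in expectation; shaping only redistributes the credit across decoding steps.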
ai
CEG3120
ceg3120 course materials for ceg 3120 design of info tech systems for fall 2022 setup setup your environment system environmentsetup md create course repo in github classrooms githubclassrooms md setup aws academy awsacademy md markdown reference guide markdown demo md
server
old-Windows-universal-samples
universal windows app samples this repo contains the samples that demonstrate the api usage patterns for the universal windows platform uwp in the windows software development kit sdk for windows 10 these code samples are designed to run on desktop mobile and future devices that support the universal windows platform universal windows platform development use visual studio 2015 rc and the windows software development kit sdk for windows 10 to build test and deploy your universal windows apps to get windows 10 insider preview and the software development kits and tools join the windows insider program become a windows insider https insider windows com become a windows insider
front_end
DigitalAlarmClock
description bank nexysa7 100t vga 640 480 1 2 4 0 9 time alarm 1 1 2 3 0 9 time 12 34 1 2 3 4 1 12 123 1234 5 s 4 0 9 alarm 3 5 alarm 2 1 lcd 2 vga 0 9 time 12 34 1 2 3 4 1 12 123 1234 5 s 0 9 alarm 3 alarm vga
alarm vga fpga
os
RecommenderSystem-LLM
recommendersystem llm recommender system with large language models llm architecture this recommender system uses a sentencetransformer model to embed the preprocessed prompts in order to upgrade or deploy the system with new jobs data just replace the current dataframe with a new one containing the new rows then re run the app to generate new job embeddings based on each prompt this approach is the same for the user s dataset the ideal scenario would consist of storing those datasets in an external storage system such as an aws s3 bucket or gcp to avoid opening the project which would further ease automatically fetching a new data source on each deploy how did i do it firstly i researched different available llm models but some of them are not free to use and need a good accelerated processing memory cuda gpu having that said i ended up using sentencetransformer which provided the necessary tools to create a stable recommender system moreover it has a built in function to calculate cosine similarity between two tensors which gave me more benefits since then i was not spared long waits but it was the better option considering my computational limitation in fact i intended to use transformers as these are more complete tools however i ran into memory allocation errors when generating embeddings for the jobs dataset despite the fact it is not possible to evaluate results with classic metrics i could notice a correlation between the user profile and the recommended jobs which leads me to conclude that sentencetransformer carried out this task well but it could get better by creating a more descriptive prompt or including more features in it api the system has two available endpoints api v1 recommend id user this one will allow you to create a list of job recommendations by id user api v1 history id user id user to return all previous recommendations by user id you must keep in mind that this endpoint can return one
or many recommendations sorted by most recent first this is useful when the jobs dataset is upgraded and you need to inspect how the results evolved accordingly steps to reproduce locally with tox 1 create a new environment python3 m venv venv source venv bin activate 2 run pip install u tox it is not a project dependency but it is required to execute commands locally see the link in the references section for further information 3 run tox e run app 4 go to http 127 0 0 1 8000 welcome endpoint with docker 1 rename the env env file to env 2 run docker compose build 3 run docker compose up 4 go to http 127 0 0 1 8000 welcome endpoint in both cases be patient the first time if you are running on cpu since it will download the model and generate the embeddings directory layout github ci cd workflows app fastapi directory config files to manage global configuration variables and settings controllers controllers for the application following the mvc model view controller pattern for separation of concerns they invoke the service package and create a response for the api dao data access layer contains the files that make crud operations directly against the database routes routes for the application recommendations and history schemas schemas or models used throughout the application ie response models services services to invoke dao functions and carry out business logic utils helper and data preprocessing functions main py app entry point data documentation files alternatively doc processed processed data generated by the application raw raw data by default notebooks notebooks used prior to development requirements requirements for dev and prod tests tests tox ini configures commands to automate the application readme md references sentencetransformers https huggingface co sentence transformers tox https tox wiki en latest
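the ranking step described above — embedding prompts and scoring them against the user profile with cosine similarity — can be sketched with toy vectors in place of real sentencetransformer outputs; the function names and the example embeddings below are illustrative, not the project's actual code:

```python
from math import sqrt

def cosine(u, v):
    # cosine similarity between two equal-length embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def recommend(user_emb, job_embs, top_k=2):
    # rank every job embedding against the user embedding, keep the best top_k indices
    scores = [(i, cosine(user_emb, emb)) for i, emb in enumerate(job_embs)]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [i for i, _ in scores[:top_k]]

# toy 3-dimensional embeddings standing in for SentenceTransformer outputs
user = [0.9, 0.1, 0.8]
jobs = [
    [0.85, 0.15, 0.75],  # close to the user profile
    [0.0, 1.0, 0.1],     # unrelated job
    [0.7, 0.2, 0.9],     # also close
]
print(recommend(user, jobs))  # → [0, 2]
```

in the real system the vectors come from `model.encode(...)` and the library's built-in cosine similarity utility performs the same computation over tensors in one call.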
ai
Escstream
escstream escstream multithreaded real time cooperative wireless embedded circuits video game feb 2018 jun 2019 capture2 https user images githubusercontent com 17762123 120939633 3effe680 c719 11eb 84c3 6b2fedd9f409 png capture3 https user images githubusercontent com 17762123 120939638 43c49a80 c719 11eb 82fb 88feb286c53d png project in the scope of the course of design of embedded and real time operating systems https sites uclouvain be archives portail cdc2020 en cours 2020 lingi2315 from uclouvain belgium tasks per group of 4 the goal was to create from scratch a real time synchronised cooperative puzzle game using a pair of de10 nano mypi nano boards and multi touch screens in this context we used intel quartus prime to program the fpga using verilog c for the game logic and display practiced assembly baremetal microabassi multi threaded rtos and python for scripting the tcp communication protocols action the tasks were mainly done as a group as it was required that we all be able to explain the code in particular i focused my effort on the implementation of the tcp protocols and the communication tools we communicated via flag bits as well as on the main game design rules in c the game had to be real time therefore interrupt methods were used along with a scheduler result we implemented a fully fledged cooperative puzzle game that is playable in real time by two players using touch screens with added options such as level selection and a pause menu all files and the presentation powerpoint in french are on github
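the readme mentions communicating game state via flag bits over tcp; the exact escstream protocol is not published, so the flag names and bit layout below are purely hypothetical, but they illustrate the general technique of packing boolean flags into one byte before sending it over a socket:

```python
# hypothetical flag layout for a one-byte game message; the real
# escstream protocol is not documented here, so these names are invented
FLAG_PAUSE     = 0b0001
FLAG_LEVEL_SEL = 0b0010
FLAG_TOUCH_EVT = 0b0100
FLAG_SYNC_TICK = 0b1000

def pack_flags(*flags):
    # OR the selected flag bits into a single byte ready to send over TCP
    value = 0
    for flag in flags:
        value |= flag
    return value.to_bytes(1, "big")

def has_flag(payload, flag):
    # test whether a received byte carries a given flag bit
    return bool(payload[0] & flag)

msg = pack_flags(FLAG_PAUSE, FLAG_SYNC_TICK)  # msg[0] == 0b1001
```

a single byte like this keeps each message small and cheap to parse inside an interrupt handler, which matters on a real-time embedded target.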
game rtos embedded-systems verilog
os
tailwind-toucan-base
crowdstrike tailwind toucan base a tailwind preset that provides the base styles for crowdstrike s toucan design system usage bash yarn add crowdstrike tailwind toucan base npm install crowdstrike tailwind toucan base pnpm add crowdstrike tailwind toucan base tailwind note this preset is presently only tested with tailwind v2 js tailwind config js module exports presets require crowdstrike tailwind toucan base extends your customizations here css import if your packager supports importing styles directly from an npm package the toucan styles are pre built and can be imported at css import crowdstrike tailwind toucan base cdn usage many js cdns scrape npm and automatically serve and cache assets deployed to npm here as an example with jsdelivr link rel stylesheet href https cdn jsdelivr net npm crowdstrike tailwind toucan base toucan css previewing the config locally bash pnpm start which is an alias for bash pnpm run build preview npx http server dist note that changes to src or build scripts will require re running pnpm start previewing manual tests locally bash pnpm build then open the manual test html bash firefox manual test html or along with the tailwind preview bash pnpm start and visit http localhost 8080 manual importing colors and shadows from figma this addon provides the ability to pull our palette information directly from figma files and store them in themes json which is used by the tailwind configuration to set up our css to import the colors run bash figma token some key light id fileid dark id fileid mezzanine id fileid pnpm run figma export styles figma token here is figma personal access token https www figma com developers api access tokens light id dark id and mezzanine id are fileid s that can be obtained from the url of the figma project containing the color tokens and commit the changes to themes json if you see any errors reported then you may need to ensure that the figma file is set up correctly and e g there are corresponding colors 
across each of the palettes if there are resulting changes to the output you ll need to update the test snapshots that can be done via pnpm exec vitest update
tailwind
os
multiplatform-utils
multiplatform utils a collection of kotlin multiplatform mobile libraries to aid in mobile app development battery battery readme md access various information about the battery of the device the app is running on cryptohash https github com appmattus crypto tree main cryptohash a set of cryptographic and not so cryptographic hashing functions connectivity connectivity readme md discover network connectivity and distinguish between cellular vs wifi connection ignore test ignore test readme md annotations to ignore tests from specific platforms package info package info readme md an api for querying information about an application package contributing please fork this repository and contribute back using pull requests https github com appmattus multiplatform utils pulls all contributions large or small major features bug fixes additional language translations unit integration tests are welcomed license license https img shields io badge license apache 202 0 blue svg license copyright 2021 appmattus limited licensed under the apache license version 2 0 the license you may not use this file except in compliance with the license you may obtain a copy of the license at http www apache org licenses license 2 0 http www apache org licenses license 2 0 unless required by applicable law or agreed to in writing software distributed under the license is distributed on an as is basis without warranties or conditions of any kind either express or implied see the license for the specific language governing permissions and limitations under the license
kotlin-multiplatform kotlin android
front_end
Development
twitter follow https img shields io badge follow 40blackjacx 1da1f2 logo twitter style for the badge https twitter com intent follow original referer https 3a 2f 2fgithub com 2fblackjacx screen name blackjacxxx donate https img shields io badge donate paypal blue svg logo paypal style for the badge https www paypal me stherold development various guides and information related to mobile development mobile 1 interview questions mobile interview md 1 interesting features interesting features md ios 1 coding guidelines ios coding guidelines md 1 review guidelines ios review guidelines md 1 glossary ios glossary md misc 1 bookmarks bookmarks md 1 faq faq md
ios coding work hiring guides features swift hacktoberfest
front_end
events-app
events mern app this project was a personal project to learn backend coding i used mongoose https mongoosejs com to model my data and defined routing with express https expressjs com en guide routing html i have now added testing with the intent of learning i have used both jest https jestjs io and enzyme https airbnb io enzyme for testing demo you can find a demo of this project here https events app mern herokuapp com running application to run the app you will need to add your own database uris into a env file from there just run cd client npm run start npm run start or run them on separate terminal windows for backend development purposes run nodemon nodemon index js cd client npm start app information the app was meant to display events created by an admin in order to become an admin a person needs to create an account and then be added by an existing admin as of now events only have start and end dates without times form validation is on the front end instead of the backend or both sources my basic schemas for events and users were modeled after an existing group project that i unfortunately cannot display the routes for events were also modeled after that i also looked at the express and mongoose documentation to understand what exactly i was doing of course most learning projects cannot be accomplished without online articles and tutorials i can t remember all that i used so if i should have given you credit lmk login system the one i do remember was for the login system after several hours of researching the best ways to create a login system for a mern app josh s article https codemoto io coding nodejs email verification node express mongodb really saved me of course i modified
the code to fit my needs but he was very concise in the article which helped me easily implement it within my app testing for testing i used the jest and enzyme documentation after hours of trial and error using jest mock for axios i realized my first implementation was correct but there really was a bug in my app which didn t even cross my mind at first this article https wanago io 2018 09 17 javascript testing tutorial part four mocking api calls and simulating react components interactions helped me debug by introducing me to jest spyon
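the spying technique from that article is jest and javascript specific; as a language-neutral illustration of the same idea — replace the http client with a test double, then assert on how it was called — here is a sketch using python's unittest.mock, where `fetch_events` and the `/api/events` route are invented for the example rather than taken from this project:

```python
from unittest import mock

def fetch_events(client):
    # hypothetical helper: ask the backend for events and return their titles
    response = client.get("/api/events")
    return [event["title"] for event in response]

# test double standing in for the real http client (axios in the mern app)
fake_client = mock.Mock()
fake_client.get.return_value = [{"title": "Hackathon"}, {"title": "Meetup"}]

titles = fetch_events(fake_client)

# the mock records every call, so the test can verify the request was made once
fake_client.get.assert_called_once_with("/api/events")
print(titles)  # → ['Hackathon', 'Meetup']
```

jest's `jest.spyOn(axios, "get").mockResolvedValue(...)` follows the same pattern: stub the dependency, run the code under test, then assert on the recorded calls.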
server
AskOnline--Forum
ask introduction welcome to the ask forum design help desk support forum knowledge base responsive site template the ask is a help desk and support forum website template coded and designed with bootstrap html5 and css ask is ideal for wiki sites knowledge base sites support forum sites ticket systems theme documentation and similar projects ask comes with valid html files a 100 responsive layout and clean code br features pages login br contact br forum br post details br user br user question br category br searching br recent question br most response br main features html5 css3 br pixel perfect design br responsive design br user friendly code br clean markup br creative design br cross browser support br powered with bootstrap 4 br font awesome icons br google font br google map br fast page loading br easy to customize br w3c validated code br and much more br source credits bootstrap 4 br google font br javascript br jquery library br font awesome 5 br n b i created this template to develop my skills the template design idea is not mine but every single line of code was done by me br twitter https twitter com radwananik br linkedin www linkedin com in radwan ahmed b52950100 br behance https www behance net ahmedradwa8b76 br behance https www behance net gallery 75018993 ask online html forum template br youtube https www youtube com watch v anxgngl1ana feature youtu be br
html bootstrap template css html5 radwandaily frontend-with-radwan forum ask
os
KnowLM
speaking head readme zh md english p align center br img src https github com zjunlp knowlm blob main assets knowlm png raw true width 400 height 120 br p knowledgeable large language model framework details summary b project background b summary with the rapid development of deep learning technology large language models such as chatgpt have made substantial strides in the realm of natural language processing however these expansive models still encounter several challenges in acquiring and comprehending knowledge including the difficulty of updating knowledge and potential knowledge discrepancies and biases collectively known as b knowledge fallacies b the knowlm project endeavors to tackle these issues by launching an open source large scale knowledgeable language model framework and releasing corresponding models the project s initial phase introduced a knowledge extraction llm based on llama dubbed zhixi which means intelligent analysis of data for knowledge extraction to integrate the capacity of chinese understanding into the language models without compromising their inherent knowledge we firstly b 1 use chinese corpora for the full scale pre training with llama 13b to augment the language model s understanding of chinese and improve its knowledge richness while retaining its original english and code capacities b then b 2 we fine tune the model obtained from the first step with an instruction dataset thus bolstering the language model s understanding of human instructions for knowledge extraction b please note that this project is still undergoing optimization and the model weights will be regularly updated to support new features and models details the features of this project are as follows centered on knowledge and large models full scale pre training of large models such as llama is conducted using the constructed chinese english pre training corpus based on the technology of kg2instructions the knowledge extraction tasks including ner re and ie are
optimized and can be completed using human instructions using the constructed chinese instruction dataset approximately 1400k lora fine tuning is used to enhance the model s understanding of human instructions the weights of the pre training model and lora s instruction fine tuning are open sourced the full scale pre training code providing conversion construction and loading of large corpora and the lora instruction fine tuning code are open sourced with support for multi machine multi gpu setups all weights and datasets have been uploaded to huggingface click here 1 1 to get started right away if you encounter any issues during the installation or use of knowlm please check the faq https github com zjunlp knowlm 6 or promptly submit an issue https github com zjunlp knowlm issues and we will assist you with resolving the problem category base name version download link note base model llama1 knowlm 13b base v1 0 huggingface https huggingface co zjunlp knowlm 13b base v1 0 base model dialogue model llama1 knowlm 13b zhixi v1 0 huggingface https huggingface co zjunlp knowlm 13b zhixi information extraction model dialogue model llama1 knowlm 13b ie v1 0 huggingface https huggingface co zjunlp knowlm 13b ie information extraction model base model llama2 knowlm 7b base v1 0 coming soon base model dialogue model llama2 knowlm 7b ie v1 0 coming soon information extraction model dialogue model llama2 knowlm 7b ocean oceangpt v1 0 coming soon ocean model dialogue model llama2 knowlm 13b v1 0 training base model instruction dataset name number download link is it used by zhixi note knowlm cr cot reasoning chinese and english 202 333 google drive https drive google com drive folders 1ijgksjostk0m9gm0rp9jb6kdnwfj62xe usp sharing br huggingface https huggingface co datasets zjunlp knowlm cr yes knowlm ie information extraction chinese 281 860 google drive https drive google com file d 1wqvd 99 4xoucordwribzfo5jjdhjtq1 view usp sharing br huggingface https huggingface co datasets zjunlp knowlm ie yes due
to using distant supervision there exists noise knowlm tool tool learning english 38 241 google drive https drive google com file d 1pyzxxv pr2t fysncumwtdzfncvtldv2 view usp sharing br huggingface https huggingface co datasets zjunlp knowlm tool no it will be used in the next version data description 1 other data sources for information extraction come from conll ace casis duee people daily duie etc 2 the knowlm tool dataset comes from the paper making language models better tool learners with execution feedback https arxiv org abs 2305 13068 and the github https github com zjunlp trice can be found here 3 the knowlm ie dataset comes from the paper instructie a chinese instruction based information extraction dataset https arxiv org abs 2305 11527 and the github https github com zjunlp deepke tree main example llm instructkgc can be found here news august 2023 the full parameters have been released omitting the parameter consolidation process july 2023 the instruction dataset has been released july 2023 support instruction fine tuning and vllm for llama 2 june 2023 the project name has been changed from cama to knowlm june 2023 release the first version of pre trained weights and the lora weights what s the knowlm p align center br img src https github com zjunlp knowlm blob main assets knowlm overview png raw true width 920 height 400 br p this is an overview of the knowlm which mainly consists of three technical features knowledge prompting it generates knowledge prompts based on structured data such as knowledge graphs and utilizes knowledge augmentation constraints to address knowledge extraction and reasoning issues knowledge editing it aligns outdated incorrect and biased knowledge within large models using knowledge editing techniques to tackle knowledge fallacy problems english tutorial pdf knowledge editing pdf knowledge interaction it enables dynamic knowledge interaction and feedback to achieve tool based learning and multi agent collaboration resolving 
the problem of embodiment cognition in llms english tutorial pdf knowledge interaction pdf the tools corresponding to these three technologies are easyinstruct https github com zjunlp easyinstruct easyedit https github com zjunlp easyedit and easyagent under development we will soon provide use cases for knowledge prompting and knowledge editing based on the knowlm framework contents quick start 1 quick start environment configuration 11 environment configuration model usage guide 12 model usage guide information extraction prompt 13 information extraction prompt llama cpp 14 llamacpp model editing 15 model editing cases 2 cases pretraining cases 21 pretraining cases information extraction cases 22 information extraction cases general ability cases 23 general abilities cases model editing cases 24 model editing cases training details 3 training details pretraining data and pretraining scripts 31 dataset construction instruction data and instruction tuning scripts 32 training process limitations 4 limitations todo list 5 todo list faq 6 faq acknowledgments contributors citations 7 others h2 id 1 1 quick start h2 h3 id 1 1 1 1 environment configuration h3 knowlm supports both manual and docker image environment configuration you can choose the appropriate way to build manual environment configuration shell git clone https github com zjunlp knowlm git cd knowlm conda create n knowlm python 3 9 y conda activate knowlm pip install torch 1 13 1 cu116 extra index url https download pytorch org whl cu116 pip install r requirements txt building with docker images shell docker pull zjunlp knowlm v 1 docker run it zjunlp knowlm v 1 bin bash h3 id 1 2 1 2 model usage guide h3 1 reproduce the results in section 2 the cases in section 2 2 cases were all run on v100 gpus if running on other devices the results may vary please run multiple times or change the decoding parameters we derived knowlm 13b zhixi and knowlm 13b ie through training using
lora building upon the foundation of knowlm 13b base these models knowlm 13b zhixi and knowlm 13b ie are the result of merging the trained lora weights with the existing knowlm 13b base model parameters 1 if you want to reproduce the results in section 2 1 pretraining cases 21 pretraining cases please run the following command shell python examples generate finetune py base model zjunlp knowlm 13b base v1 0 the result in section 2 1 can be obtained 2 if you want to reproduce the results in section 2 2 information extraction cases 22 information extraction cases please run the following command shell python examples generate lora py base model zjunlp knowlm 13b zhixi run ie cases the result in section 2 2 can be obtained 3 if you want to reproduce the results in section 2 3 general abilities cases 23 general abilities cases please run the following command shell python examples generate lora py base model zjunlp knowlm 13b zhixi run general cases the result in section 2 3 can be obtained 2 usage of pretraining model we offer two methods the first one is command line interaction and the second one is web based interaction which provides greater flexibility 1 use the following command to enter command line interaction shell python examples generate finetune py base model zjunlp knowlm 13b base v1 0 interactive the disadvantage is the inability to dynamically change decoding parameters 2 use the following command to enter web based interaction shell python examples generate finetune web py base model zjunlp knowlm 13b base v1 0 here is a screenshot of the web based
interaction p align center width 100 a href target blank img src assets lora web png alt finetune web style width 100 min width 100px display block margin auto a p the instruction is a required parameter while input is an optional parameter for general tasks such as the examples provided in section 1 3 you can directly enter the input in the instruction field for information extraction tasks as shown in the example in section 1 2 please enter the instruction in the instruction field and the sentence to be extracted in the input field we provide an information extraction prompt in section 2 5 if you want to perform batch testing please modify the examples generate lora py file and update the examples and hyperparameters in the variable cases according to different task requirements we have the following suggestions for adjusting decoding strategies and their associated hyperparameters 1 if you want more diverse and creative outputs consider using top k or top p nucleus sampling with a relatively higher top k or top p and possibly a higher temperature 2 if you want more focused and high quality outputs e g information extraction consider using beam search with a moderate num beam or top k or top p sampling with a lower top k or top p and a lower temperature 3 remember to experiment and fine tune depending on your use case it may be beneficial to iterate and experiment with different strategies and hyperparameters to find the optimal combination 4 vllm api server we integrate vllm https github com vllm project vllm for accelerating llm inference and providing efficient api service use the following command to launch vllm api server at http localhost 8090 shell max num batched tokens 8000 cuda visible devices 1 2 python inference launch vllm py port 8090 model data zhixi 13b use np weights max num batched tokens max num batched tokens dtype half tensor parallel size 2 query the service using post request shell curl x post http 127 0 0 1 8090 generate h content type 
application json d instruction input parameters top p 0 7 max tokens 256 you could get the following response shell generated text s num output tokens cf 65 error null h3 id 1 3 1 3 information extraction prompt h3 for information extraction tasks such as named entity recognition ner event extraction ee and relation extraction re we provide some prompts for ease of use you can refer to this link examples ie prompt py for examples of course you can also try using your own prompts here is a case https github com zjunlp deepke blob main example llm instructkgc readme md where knowlm 13b zhixi is used to accomplish the instruction based knowledge graph construction task in ccks2023 h3 id 1 4 1 4 llama cpp h3 if you find yourself lacking sufficient gpu computing resources you have the option to carry out quantization using llama cpp https github com ggerganov llama cpp this is possible because llama cpp shares the same architecture as knowlm once you have set up your environment you can download our model to a designated path using the following command bash python tools download py specify download path your path repo name zjunlp knowlm 13b zhixi next just substitute the model path at this location https github com ggerganov llama cpp prepare data run with the downloaded one when executing it in practice please remember to adjust the model path within this script https github com ggerganov llama cpp blob master examples alpaca sh accordingly h3 id 1 5 1 5 model editing h3 although large language models perform exceptionally well in many tasks they can still provide incorrect answers moreover as time passes knowledge that was once accurate may become outdated this necessitates that we adjust the model s responses to meet our expectations through model editing in model editing we utilized easyedit as our editing tool details can be found at https github com zjunlp easyedit easyedit is a highly integrated model editing tool all you need to do is define your editor in just 
three lines of code similar to how you would in hugging face shell from easyeditor import mendhyperparams hparams mendhyperparams from hparams hparams mend gpt2 xl editor baseeditor from hparams hparams the code above demonstrates the editor definition for editing the gpt2 xl model using the mend method the next step is to prepare the editing data and the test data shell metrics edited model editor edit prompts prompts ground truth ground truth target new target new locality inputs locality inputs keep original weight true with the provided code you can complete the editing of the model the edited model is stored in edited model and the corresponding evaluation metrics are saved in metrics h2 id 2 2 cases h2 h3 id 2 1 2 1 pretraining cases h3 our pre trained model has demonstrated certain abilities in instruction following coding reasoning as well as some translation capabilities without any fine tuning using instructions additionally it has acquired new knowledge below are some of our sample cases if you wish to reproduce our examples and view detailed decoding configuration please first set up the environment 1 1 then follow the steps outlined here 1 2 in the following cases text in bold represents the prompt while non bold text represents the model s output due to the maximum inference length set to 512 our cases fall into three situations 1 completed output the model generates the termination token eos and completes the output we mark this with white check mark 2 incomplete output the output is cut off due to the maximum inference length we mark this with eight spoked asterisk 3 repeated output we remove repeated content manually and mark it with arrow left details summary b
knowledge b summary 701 762 eight spoked asterisk 1 701 762 742 756 eight spoked asterisk white check mark details details summary b instruction following b summary b b 1 30 2 3 4 5 6 7 8 white check mark i love you arrow left 211 985 111 1897 1928 1952 1998 985 211 111 white check mark 1897 1928 1952 211 985 1897 1901 1928 1952 1958 1998 211 2001 985 2006 111 2017 eight spoked asterisk can you help me write a formal email to a potential business partner proposing a joint venture your answer br br a br dear sir madam br i am writing to propose a joint venture between our two companies i have attached a proposal for your consideration please let me know if you would like to discuss this further br yours sincerely white check mark you are now a doctor here are some tips for a healthy life 1 don t smoke 2 eat a balanced diet 3 exercise regularly 4 drink alcohol only in moderation 5 take care of your teeth 6 get enough sleep 7 see your doctor for regular checkups 8 learn to manage stress 9 stay up to date on vaccinations 10 wash your hands often 11 avoid secondhand smoke 12 use sunscreen and wear sunglasses 13 know your family medical history 14 be aware of the signs of depression 15 protect your hearing 16 practice safe sex 17 if you drink coffee do so in moderation 18 limit your intake of caffeine 19 don t drive when you re tired 20 don t text and drive 21 don t drink and drive 22 don t ride a motorcycle without a helmet 23 don t swim alone 24 don t dive into water where you can t see the bottom 25 don t swim if you ve been drinking alcohol 26 don t swim if you re sick 27 don t swim if you have open sores on your body 28 don t swim if there are strong currents or undertows 29 don t swim in polluted water 30 don t swim if there are snakes or alligators nearby 31 don t swim if there are jellyfish in the water 32 don t swim if there are sharks in the water 33 don t swim if there are riptides in the water 34 don t eight spoked asterisk details details summary b coding b 
summary using java to sort an unsorted array answer o nlogn param arr return using java to sort an unsorted array answer o nlogn param arr return public static t extends comparable t void sort t arr br nbsp nbsp nbsp nbsp if arr null arr length 0 br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp return br nbsp nbsp nbsp nbsp br nbsp nbsp nbsp nbsp int n arr length br nbsp nbsp nbsp nbsp for int i 0 i n 1 i br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp for int j 0 j n i 1 j br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp if arr j compareto arr j 1 0 br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp t tmp arr j br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp arr j arr j 1 br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp arr j 1 tmp br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp br nbsp nbsp nbsp nbsp br white check mark use python to write the bubble sort algorithm def bubble sort arr br nbsp nbsp nbsp nbsp bubble sort algorithm br nbsp nbsp nbsp nbsp for i in range len arr 1 0 1 br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp for j in range len arr 1 0 1 br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp if arr j arr j 1 br nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp arr j arr j 1 arr j 1 arr j br nbsp nbsp nbsp nbsp return arr white check mark details details summary b generate long text in chinese b summary 2007 9 sci 3 ei 1 1 1 eight spoked asterisk details details summary b generate long text in english b summary here is the recommendation letter that i wrote for an application to a dragon feeder position at the magic unicorn corporation dear recruiter i am writing this letter of recommendation on behalf of my student who is applying for a position as a dragon feeder at the magic unicorn corporation i have had the pleasure of teaching this student for the 
past three years and i can say without hesitation that he is one of the best students i have ever had he is hard working diligent and intelligent he is also a very good dragon feeder in fact he is the best dragon feeder i have ever had in my class he has fed many dragons over the years and they have all been very pleased with his work i would not hesitate to recommend him for a position as a dragon feeder at the magic unicorn corporation thank you for your consideration sincerely professor snape p s please don t tell anyone that i wrote this letter i don t want to get into any trouble with the ministry of magic chapter 1 the magic unicorn corporation the magic unicorn corporation muc is a large multinational corporation headquartered in new york city the company was founded in 1980 by a group of unicorns who wanted to start their own business they were tired of working for other unicorns and decided to strike out on their own their first product was a line of unicorn themed greeting cards the cards were a huge success and the company grew rapidly by 1985 the company had expanded its product line to include unicorn themed t shirts coffee mugs and stuffed animals in 1987 the company went public since then the magic unicorn corporation has become one of the most successful companies in the world br the magic unicorn corporation employs thousands of people around the world most of the employees are unicorns but the company also employs a small number of humans humans are hired to do jobs that unicorns cannot or will not do for example humans are hired to clean up after unicorns humans are also hired to feed the unicorns eight spoked asterisk you are very familiar with the information of chinese cities such as the attractions cuisine and history of chinese cities please introduce the city of hangzhou hangzhou is the capital of zhejiang province in eastern china it is located at the southern end of the yangtze river delta in southeastern china it is one of the most 
famous tourist cities in china it is also known as the city of silk city of tea city of flowers and city of lakes it has a population of 6 5 million people it is the political economic cultural and transportation center of zhejiang province it is also an important industrial base in china in addition hangzhou is also a famous historical and cultural city in china there are many well known scenic spots in hangzhou such as west lake lingyin temple leifeng pagoda bao an temple six harmonies pagoda peak flying clouds etc hangzhou is also known for its delicious cuisine hangzhou cuisine is one of the eight major cuisines in china it is characterized by fresh sweet sour salty and mellow hangzhou cuisine is mainly influenced by the local climate and geographical environment the main ingredients used in hangzhou cuisine are freshwater fish shrimp crab pork chicken duck beef vegetables and fruits hangzhou cuisine is divided into three categories hangzhou traditional cuisine hangzhou new cuisine and hangzhou snacks hangzhou traditional cuisine is mainly based on hangzhou s long history and culture hangzhou new cuisine is mainly based on hangzhou s geographical environment hangzhou snacks are mainly based on hangzhou s local customs and habits hangzhou cuisine is rich in color fragrance taste and nutrition hangzhou cuisine can be divided into two categories hangzhou traditional cuisine and hangzhou new cuisine hangzhou traditional cuisine is mainly based on hangzhou s long eight spoked asterisk details details summary b reasoning b summary question roger has 5 tennis balls he buys 2 more cans of tennis balls each can has 3 tennis balls how many tennis balls does he have now br answer roger started with 5 balls 2 cans of 3 each is 6 tennis balls 5 + 6 = 11 the answer is 11 br question the cafeteria had 23 apples if they used 20 to make lunch and bought 6 more how many apples do they have br answer cafeteria started with 23 apples 20 apples were used to make lunch 6 more apples
were bought 23 - 20 = 3, 3 + 6 = 9 the answer is 9 br question if you have 4 marbles and i have 3 marbles how many marbles do we have together br answer 4 + 3 = 7 the answer is 7 arrow left details h3 id 2 2 2 2 information extraction cases h3 the effectiveness of information extraction is illustrated in the following figure we tested different instructions for different tasks as well as the same instructions for the same task and achieved good results for all of them p align center width 100 a href target blank img src assets ie case new logo en png alt ie style width 90 min width 90px display block margin auto a p compared to other large models like chatgpt as shown in the graph it can be observed that our model achieves more accurate and comprehensive extraction results however we have also identified some extraction errors in zhixi in the future we will continue to enhance the model s semantic understanding capabilities in both chinese and english and introduce more high quality instruction data to improve the model s performance p align center width 100 a href target blank img src assets casevschatgpt png width 600 height 900 a p h3 id 2 3 2 3 general abilities cases h3 we have selected 8 cases to validate the model s harmlessness translation ability comprehension code capability knowledge creative ability bilingual ability and reasoning ability details summary b harmlessness b summary 120 details details summary b translation ability b summary here is the translation of the chinese poem into english to soar above the highest peak to see all other mountains as small details details summary b comprehension b summary translation translate the sentence below into english details details summary b code ability b summary python

```python
def binary_search(arr, x):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```

details details summary b knowledge b summary details details summary b creative ability b summary
details details summary b bilingual ability b summary dear hotel staff i am writing to inquire about the possibility of upgrading my reservation to a sea view room i have booked a standard room for my upcoming stay but i would greatly appreciate the opportunity to enjoy the breathtaking views of the ocean from my room i understand that sea view rooms may be more expensive than standard rooms but i am willing to pay the additional cost if it is possible to upgrade my reservation thank you for considering my request and i look forward to hearing back from you sincerely your name details details summary b reasoning ability b summary 3x + 1 = 10, solve for x: 3x + 1 - 1 = 10 - 1, 3x = 9, x = 3 details h3 id 2 4 2 4 model editing cases h3 easyedit supports a variety of methods including but not limited to kn ike mend serac rome etc due to space constraints we only showcase the effects of the kn and ike methods details summary b kn method case b summary michael jordan is born from answer before editing michael jordan is born from the usa answer after editing michael jordan is born from china details details summary b ike method case b summary michael jordan is born from answer before editing michael jordan is born from the usa answer after editing michael jordan is born from china details h2 id 3 3 training details h2 the following figures illustrate the entire training process and dataset construction the training process is divided into two stages 1 full pre training stage the purpose of this stage is to enhance the model s chinese language proficiency and knowledge base 2 instruction tuning stage using lora this stage enables the model to understand human instructions and generate appropriate responses assets main new jpg h3 id 3 1 3 1 dataset construction pretraining h3 in order to enhance the model s understanding of chinese while preserving its original code and english language capabilities we did not expand the vocabulary instead we collected chinese corpora english corpora and code corpora
the chinese corpora were sourced from baidu baike wudao and chinese wikipedia the english dataset was sampled from the original english corpus of llama https arxiv org pdf 2302 13971 pdf with the exception of the wikipedia data the original paper s english wikipedia data was up until august 2022 and we additionally crawled data from september 2022 to february 2023 covering a total of six months as for the code dataset due to the low quality code in the pile dataset we crawled code data from github and leetcode a portion of the data was used for pre training while another portion was used for fine tuning with instructions for the crawled datasets mentioned above we employed a heuristic approach to filter out harmful content additionally we removed duplicate data h3 id 3 2 3 2 training process pretraining h3 detailed data processing code training code complete training scripts and detailed training results can be found in pretrain pretrain before training we need to tokenize the data we set the maximum length of a single sample to 1024 while most documents are much longer than this therefore we need to partition these documents we designed a greedy algorithm to split the documents with the goal of ensuring that each sample consists of complete sentences and minimizing the number of segments while maximizing the length of each sample additionally due to the diversity of data sources we developed a comprehensive data preprocessing tool that can process and merge data from various sources finally considering the large amount of data loading it directly into memory would impose excessive hardware pressure therefore we referred to deepspeed megatron https github com bigscience workshop megatron deepspeed tree main tools and used the mmap method to process and load the data this involves loading the indices into memory and accessing the corresponding data on disk when needed finally we performed pre training on 5 5 million chinese samples 1 5 million english samples and 0 
9 million code samples we utilized the transformers trainer in conjunction with deepspeed zero3 it was observed that strategy zero2 had slower speeds in a multi node multi gpu setup the training was conducted across 3 nodes with each node equipped with 8 32gb v100 gpus the table below showcases our training speeds

| parameter | value |
| --- | --- |
| micro batch size | 20 |
| gradient accumulation | 3 |
| global batch size | 20 * 3 * 24 = 1440 |
| time per step | 260s |

h3 id 3 3 3 3 dataset construction instruction tuning h3 in addition to incorporating general capabilities such as reasoning and coding we have also introduced additional information extraction abilities including ner named entity recognition re relation extraction and ee event extraction into the current homogeneous models it is important to note that many open source datasets such as the alpaca dataset cot dataset and code dataset are in english to obtain the corresponding chinese datasets we utilized gpt 4 for translation purposes there were two approaches used 1 direct translation of questions and answers into chinese and 2 inputting english questions to gpt 4 and generating chinese responses the second approach was employed for general datasets while the first approach was utilized for datasets like the cot dataset and code dataset these datasets are readily available online for the information extraction ie dataset in the english part we utilize open source ie datasets such as conll ace casis to construct the corresponding english instruction dataset in the chinese part we not only utilize open source datasets like duee people daily and duie but also employ our self constructed dataset called kg2instruction to construct the corresponding chinese instruction dataset specifically kg2instruction instructie https arxiv org abs 2305 11527 is a chinese ie dataset obtained through distant supervision on chinese wikipedia and wikidata covering a wide range of domains to meet real extraction needs in addition we manually constructed a
general chinese dataset and translated it into english using the second approach finally our data distribution is as follows

| dataset | number |
| --- | --- |
| cot datasets chinese english | 202,333 |
| general datasets chinese english | 105,216 |
| code datasets chinese english | 44,688 |
| information extraction datasets english | 537,429 |
| information extraction datasets chinese | 486,768 |

kg2instruction and other instruction fine tuning datasets flow diagram p align center width 100 a href target blank img src assets kg2instructions en png style width 90 min width 90px display block margin auto a p h3 id 3 4 3 4 training process instruction tuning h3 currently most instruction tuning scripts using lora are based on alpaca lora https github com tloen alpaca lora so we will not go into detail here detailed instruction tuning parameters and training scripts can be found in finetune lora finetune lora h2 id 4 4 limitations h2 due to time constraints hardware limitations and technical reasons our model has limitations including but not limited to our instruction tuning process does not involve full tuning instead we use the lora approach for instruction tuning our model does not currently support multi turn conversations while we strive to ensure the usefulness reasonableness and harmlessness of the model s outputs toxic outputs may still occur in some scenarios the pretraining is not exhaustive we have prepared a large amount of pretraining data but it has not been fully trained h2 id 5 5 todo list h2 instruction tuning using full tuning instead of lora version is being trained and will be released soon new instruction tuning weights using lora will be updated shortly new models llama 7b falcon 7b are being trained we have limited gpus new abilities such as molecule and protein generation with mol instructions https github com zjunlp mol instructions a large scale biomolecules instruction dataset for large language models h2 id 6 6 faq h2 question what should i do if the model encounters � during decoding
answer if this symbol appears in the middle of the decoded sentence we recommend changing the input if it occurs at the end of the sentence increasing the output length can resolve the issue question why do i get different results with the same decoding parameters answer it is possible that you have enabled do sample true it could also be due to the order of execution you can try using a for loop to output multiple times with the same decoding parameters and observe that each output is different question why is the extraction or answer quality not good answer please try changing the decoding parameters if you are conducting testing on your proprietary dataset such as in healthcare or legal domains we strongly recommend prioritizing secondary training this is because our model is a general purpose model and its performance in specialized domains will likely not match that of models fine tuned specifically for those domains question the performance of a model trained on my domain specific dataset remains subpar what steps should i take answer if you ve utilized lora for training it s important to verify the adequacy of your training data and ensure that the loss is consistently decreasing we recommend conducting additional training epochs before proceeding with testing you can experiment with adjusting decoding parameters and running multiple test iterations in cases where fine tuning data is limited you may also consider enhancing your model by performing further pretraining on domain specific unsupervised corpora using our pretrained model followed by fine tuning using lora instructions question what can be done to address slow inference speed answer as our model is llama based inference speed is contingent upon factors such as your hardware and decoding parameters if you wish to enhance decoding speed you might consider referring to alternative libraries optimized specifically for llama question what should i do if i encounter an error while running the code 
answer if feasible it is advisable to conduct a preliminary search for relevant errors on your own if the problem persists kindly consider submitting an issue report when doing so be sure to provide specific error information details of the code file and execution command used information about your environment including whether you followed our provided requirements txt and installation instructions or if you used docker and any other pertinent details h2 id 7 7 others h2 h3 id 7 1 7 1 contributors in random order h3 pretraining xiang chen jintian zhang xiaozhuan liang pretraining data zhen bi honghao gui jing chen runnan fang instruction data and instruction tuning xiaohan wang shengyu mao tool learning and multimodal shuofei qiao yixin ou lei li model editing and safety yunzhi yao peng wang siyuan cheng bozhong tian mengru wang zhoubo li model testing and deployment yinuo jiang yuqi zhu hongbin ye zekun xi xinrong li h3 id 7 2 7 2 citation h3 if you use our repository please cite the following related papers

```bibtex
@misc{knowlm,
  author = {Ningyu Zhang and Jintian Zhang and Xiaohan Wang and Honghao Gui and Kangwei Liu and Yinuo Jiang and Xiang Chen and Shengyu Mao and Shuofei Qiao and Yuqi Zhu and Zhen Bi and Jing Chen and Xiaozhuan Liang and Yixin Ou and Runnan Fang and Zekun Xi and Xin Xu and Lei Li and Peng Wang and Mengru Wang and Yunzhi Yao and Bozhong Tian and Yin Fang and Guozhou Zheng and Huajun Chen},
  title = {KnowLM Technical Report},
  year = {2023},
  url = {http://knowlm.zjukg.cn/}
}

@article{wang2023easyedit,
  title = {EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models},
  author = {Wang, Peng and Zhang, Ningyu and Xie, Xin and Yao, Yunzhi and Tian, Bozhong and Wang, Mengru and Xi, Zekun and Cheng, Siyuan and Liu, Kangwei and Zheng, Guozhou and others},
  journal = {arXiv preprint arXiv:2308.07269},
  year = {2023}
}

@article{yao2023editing,
  title = {Editing Large Language Models: Problems, Methods, and Opportunities},
  author = {Yao, Yunzhi and Wang, Peng and Tian, Bozhong and Cheng, Siyuan and Li, Zhoubo and Deng, Shumin and Chen, Huajun and Zhang, Ningyu},
  journal = {arXiv preprint arXiv:2305.13172},
  year = {2023}
}
```

h3 id 7 3 7 3 acknowledgment h3 we are very grateful to the following open source projects for their help meta ai llama https arxiv org abs 2302 13971v1 huggingface transformers llama https github com huggingface transformers tree main src transformers models llama alpaca https crfm stanford edu 2023 03 13 alpaca html and alpaca lora https github com tloen alpaca lora vicuna https vicuna lmsys org llama x https github com aethercortex llama x p align center br img src assets 8 png width 300 br p why it s called zhixi in chinese zhi signifies intelligence referencing the ai s advanced language understanding capabilities xi means to analyze or extract symbolizing the system s knowledge extraction feature together zhixi epitomizes an intelligent system adept at dissecting and garnering knowledge characteristics that align with our expectations of a highly knowledgeable model
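the greedy document splitting step described in the pre training section above (each sample made of complete sentences, as few segments as possible, samples as long as possible up to the 1024 limit) can be sketched roughly as follows; the naive sentence splitter, the character based length function, and the helper name `greedy_split` are illustrative assumptions, not the project's actual preprocessing code:

```python
def greedy_split(document, max_len=1024, length_fn=len):
    """Greedily pack whole sentences into samples of at most max_len units.

    Naive sentence boundary: split on '. '; length_fn stands in for a real
    tokenizer. A single sentence longer than max_len is kept whole, since
    the goal is that every sample consists of complete sentences.
    """
    sentences = [s.strip() for s in document.split(". ") if s.strip()]
    samples, current = [], ""
    for sent in sentences:
        candidate = (current + " " + sent).strip() if current else sent
        if length_fn(candidate) <= max_len or not current:
            # keep growing the current sample while it still fits
            current = candidate
        else:
            # current sample is full; start a new segment with this sentence
            samples.append(current)
            current = sent
    if current:
        samples.append(current)
    return samples
```

for example, `greedy_split("a. bb. ccc. dddd", max_len=6)` packs the first two sentences together and starts a new segment whenever the next sentence would push the sample over the limit.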
llama large-language-models pre-trained-language-models language-model instruction-following deep-learning chinese english instructions models reasoning gpt-3 deepspeed instruction-tuning lora pre-training bilingual pre-trained-model knowledge knowlm
ai
pipelines
coverage status https coveralls io repos github kubeflow pipelines badge svg branch master https coveralls io github kubeflow pipelines branch master sdk documentation status https readthedocs org projects kubeflow pipelines badge version latest https kubeflow pipelines readthedocs io en stable badge latest sdk package version https img shields io pypi v kfp color 2334d058 label pypi 20package https pypi org project kfp sdk supported python versions https img shields io pypi pyversions kfp svg color 2334d058 https pypi org project kfp overview of the kubeflow pipelines service kubeflow https www kubeflow org is a machine learning ml toolkit that is dedicated to making deployments of ml workflows on kubernetes simple portable and scalable kubeflow pipelines are reusable end to end ml workflows built using the kubeflow pipelines sdk the kubeflow pipelines service has the following goals end to end orchestration enabling and simplifying the orchestration of end to end machine learning pipelines easy experimentation making it easy for you to try numerous ideas and techniques and manage your various trials experiments easy re use enabling you to re use components and pipelines to quickly cobble together end to end solutions without having to re build each time installation install kubeflow pipelines from choices described in installation options for kubeflow pipelines https www kubeflow org docs pipelines installation overview the docker container runtime has been deprecated on kubernetes 1 20 kubeflow pipelines has switched to use emissary executor https www kubeflow org docs components pipelines installation choose executor emissary executor by default from kubeflow pipelines 1 8 emissary executor is container runtime agnostic meaning you are able to run kubeflow pipelines on kubernetes cluster with any container runtimes https kubernetes io docs setup production environment container runtimes documentation get started with your first pipeline and read further 
information in the kubeflow pipelines overview https www kubeflow org docs components pipelines introduction see the various ways you can use the kubeflow pipelines sdk https www kubeflow org docs pipelines sdk sdk overview see the kubeflow pipelines api doc https www kubeflow org docs pipelines reference api kubeflow pipeline api spec for api specification consult the python sdk reference docs https kubeflow pipelines readthedocs io en stable when writing pipelines using the python sdk refer to the versioning policy docs release versioning policy md and feature stages docs release feature stages md documentation for more information about how we manage versions and feature stages such as alpha beta and stable contributing to kubeflow pipelines before you start contributing to kubeflow pipelines read the guidelines in how to contribute contributing md to learn how to build and deploy kubeflow pipelines from source code read the developer guide developer guide md kubeflow pipelines community meeting the meeting is happening every other wed 10 11am pst calendar invite https calendar google com event action template tmeid ntdong5umdbtcnjlymdlowt1c2lky25jdmlfmjaxotexmtnumtgwmdawwibqzxnzawv6ahvaz29vz2xllmnvbq tmsrc jessiezhu 40google com scp all or join meeting directly https meet google com phd ixfj kcr meeting notes http bit ly kfp meeting notes kubeflow pipelines slack channel kubeflow pipelines https kubeflow slack com blog posts getting started with kubeflow pipelines https cloud google com blog products ai machine learning getting started kubeflow pipelines by amy unruh how to create and deploy a kubeflow machine learning pipeline by lak lakshmanan part 1 how to create and deploy a kubeflow machine learning pipeline https towardsdatascience com how to create and deploy a kubeflow machine learning pipeline part 1 efea7a4b650f part 2 how to deploy jupyter notebooks as components of a kubeflow ml pipeline https towardsdatascience com how to deploy jupyter notebooks 
as components of a kubeflow ml pipeline part 2 b1df77f4e5b3 part 3 how to carry out ci cd in machine learning mlops using kubeflow ml pipelines https medium com google cloud how to carry out ci cd in machine learning mlops using kubeflow ml pipelines part 3 bdaf68082112 kubeflow pipelines meets tekton https developer ibm com blogs kubeflow pipelines with tekton and watson by animesh singh acknowledgments kubeflow pipelines uses argo workflows https github com argoproj argo workflows by default under the hood to orchestrate kubernetes resources the argo community has been very supportive and we are very grateful additionally there is tekton backend available as well to access it please refer to kubeflow pipelines with tekton repository https github com kubeflow kfp tekton
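to make the component re use idea above concrete, here is a tiny library free sketch of chaining reusable single input components into one end to end pipeline; the helper names (`make_pipeline`, `load`, `normalize`, `summarize`) are hypothetical and this only illustrates the composition concept, not the real kfp sdk (see the sdk documentation linked above for actual pipeline authoring):

```python
def make_pipeline(*steps):
    """Compose single-argument component functions into one pipeline callable."""
    def pipeline(data):
        for step in steps:
            data = step(data)  # each component's output feeds the next one
        return data
    return pipeline

# reusable "components": plain functions with one input and one output
def load(raw):
    return [float(x) for x in raw.split(",")]

def normalize(values):
    top = max(values)
    return [v / top for v in values]

def summarize(values):
    return sum(values) / len(values)

# cobble components together into an end-to-end "pipeline"
train_prep = make_pipeline(load, normalize, summarize)
```

calling `train_prep("1,2,4")` runs load, normalize and summarize in order; in kubeflow pipelines the same composition idea is expressed with sdk components orchestrated on kubernetes rather than in process function calls.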
kubeflow-pipelines mlops kubeflow machine-learning kubernetes pipeline data-science
ai
exp-pwm-generation-coep
introduction b discipline b fill your discipline name here b lab b fill your lab name here b experiment b fill your experiment name and number here about the experiment fill a brief description of this experiment here b name of developer b fill the name of experiment owner here b institute b b email id b b department contributors list srno name faculty or student department institute email id 1 2
coep ext-ph3
os
RE-iOS-Apps
reverse engineering ios applications welcome to my course reverse engineering ios applications if you re here it means that you share my interest for application security and exploitation on ios or maybe you just clicked the wrong link all the vulnerabilities that i ll show you here are real they ve been found in production applications by security researchers including myself as part of bug bounty programs or just regular research one of the reasons why you don t often see writeups with these types of vulnerabilities is because most of the companies prohibit the publication of such content we ve helped these companies by reporting them these issues and we ve been rewarded with bounties for that but no one other than the researcher s and the company s engineering team will learn from those experiences this is part of the reason i decided to create this course by creating a fake ios application that contains all the vulnerabilities i ve encountered in my own research or in the very few publications from other researchers even though there are already some projects 1 aimed to teach you common issues on ios applications i felt like we needed one that showed the kind of vulnerabilities we ve seen on applications downloaded from the app store this course is divided in 5 modules that will take you from zero to reversing production applications on the apple app store every module is intended to explain a single part of the process in a series of step by step instructions that should guide you all the way to success this is my first attempt to creating an online course so bear with me if it s not the best i love feedback and even if you absolutely hate it let me know but hopefully you ll enjoy this ride and you ll get to learn something new yes i m a n00b if you find typos mistakes or plain wrong concepts please be kind and tell me so that i can fix them and we all get to learn version 1 1 modules prerequisites prerequisites md introduction introduction md module 1 
environment setup module 1 readme md module 2 decrypting ios applications module 2 readme md module 3 static analysis module 3 readme md module 4 dynamic analysis and hacking module 4 readme md module 5 binary patching module 5 readme md final thoughts final thoughts md resources resources md epub download thanks to natalia osa https github com natalia osa s brilliant idea https github com ivrodriguezca re ios apps issues 7 there s now a epub version of the course that you can download from here https github com ivrodriguezca re ios apps extras github blob master files re ios applications v1 1 epub as natalia mentioned this is for easier consumption of the content thanks again for this fantastic idea natalia license copyright 2019 ivan rodriguez ios at ivrodriguez com permission is hereby granted free of charge to any person obtaining a copy of this software and associated documentation files the software to deal in the software without restriction including without limitation the rights to use copy modify merge publish distribute sublicense and or sell copies of the software and to permit persons to whom the software is furnished to do so subject to the following conditions the above copyright notice and this permission notice shall be included in all copies or substantial portions of the software the software is provided as is without warranty of any kind express or implied including but not limited to the warranties of merchantability fitness for a particular purpose and noninfringement in no event shall the authors or copyright holders be liable for any claim damages or other liability whether in an action of contract tort or otherwise arising from out of or in connection with the software or the use or other dealings in the software donations i don t really accept donations because i do this to share what i learn with the community if you want to support me just re share this content and help reach more people i also have an online store nullswag com https 
nullswag com with cool clothing thingies if you want to get something there disclaimer i created this course on my own and it doesn t reflect the views of my employer all the comments and opinions are my own disclaimer of damages use of this course or material is at all times at your own risk if you are dissatisfied with any aspect of the course any of these terms and conditions or any other policies your only remedy is to discontinue the use of the course in no event shall i the course or its suppliers be liable to any user or third party for any damages whatsoever resulting from the use or inability to use this course or the material upon this site whether based on warranty contract tort or any other legal theory and whether or not the website is advised of the possibility of such damages use any software and techniques described in this course at all times at your own risk i m not responsible for any losses damages or liabilities arising out of or related to this course in no event will i be liable for any indirect special punitive exemplary incidental or consequential damages this limitation will apply regardless of whether or not the other party has been advised of the possibility of such damages privacy i m not personally collecting any information since this entire course is hosted on github that s the privacy policy https help github com en articles github privacy statement you want to read 1 i love the work prateekg147 https twitter com prateekg147 did with diva http damnvulnerableiosapp com and owasp did with igoat https www owasp org index php owasp igoat tool project they are great tools to start learning the internals of an ios application and some of the bugs developers have introduced in the past but i think many of the issues shown there are just theoretical or impractical and can be compared to a self hack it s like looking at the source code of a webpage in a web browser you get to understand the static code html javascript of the website but any 
modifications you make won t affect other users i wanted to show vulnerabilities that can harm the company who created the application or its end users
ios security online-course app-security reverse-engineering
os
Library-Management-System-JAVA
library management system java a href https github com harismuneer img alt views title github views src https komarev com ghpvc username harismuneer style flat square width 125 a open source love svg1 https badges frapsoft com os v1 open source svg v 103 github forks https img shields io github forks harismuneer library management system java svg style social label fork maxage 2592000 https www github com harismuneer library management system java fork github issues https img shields io github issues harismuneer library management system java svg style flat label issues maxage 2592000 https www github com harismuneer library management system java issues contributions welcome https img shields io badge contributions welcome brightgreen svg style flat label contributions colora red colorb black a library management system made using the concepts of object oriented analysis and design minimal code is written in the gui and the entities are decoupled as well the interface is console based this project was designed during the course object oriented analysis and design cs309 the class diagram of the project is also provided along with the database schema file the class diagram file can be opened using star uml http staruml io class diagram class diagram master images diagram png note after refactoring new class holdrequestoperations is added to the above structure which lies in between the holdrequest class and book class this class removes the bidirectional dependency between holdrequest and book more details mentioned here https github com osspk library management system java issues 9 interface p align middle img src master images interface png width 400 img src master images interface2 png width 400 p actors the actors include the following librarian checkout clerk borrower administrator use cases after determining the actors the second step in use case analysis is to determine the tasks that each actor will need to do with the system each task is called a use case 
because it represents one particular way the system will be used in other words only those use cases are listed that actors will need to do when they are using the system to solve the customer s problem borrower search for items by title by author by subject place a book on hold if it is on loan to somebody else check the borrower s personal information and list of books currently borrowed checkout clerk all the borrower use cases plus check out an item for a borrower check in an item that has been returned renew an item record that a fine has been paid add a new borrower update a borrower s personal information address telephone number etc librarian all of the borrower and checkout clerk use cases plus add a new item to the collection delete an item from the collection change the information the system has recorded about an item administrator add clerk add librarian view issued books history view all books in library how to run 1 install these java se development kit 8 jdk 8 http www oracle com technetwork java javase downloads jdk8 downloads 2133151 html after installing jdk 8 install netbeans ide https netbeans org downloads 2 open netbeans ide click on file open project and browse to the downloaded folder named project and select it it will load the netbeans project 3 now everything is setup except the java db derby database of netbeans so follow these steps to setup the database step 1 in the netbeans window there is a tab named services on the left select it then right click on javadb properties and change database location to database folder downloaded with this repository its placed besides the project folder step1 master images step1 png step 2 after that a database named lms will show up under javadb tab now right click databases new connection and select java db network and click next step2 master images step2 png step 3 provide the following database crendentials in the next popup and click next host localhost port 1527 database lms user name haris 
password 123 step3 master images step3 png step 4 now just click next for the rest of the windows after all this the database connection is made make sure that you connect with the database before running the project by right clicking on the connection and selecting connect now you are ready to run the project final master images final png note the password for administrative functions is lib the admin adds new clerks and librarian then they both do the rest of the functions hr h1 align left hey there i m a href https www linkedin com in harismuneer haris a img src https media giphy com media hvrjclfzcasrr4ia7z giphy gif width 28 a href https github com harismuneer ultimate facebook scraper img align right src https user images githubusercontent com 30947706 79588950 17515780 80ee 11ea 8f66 e26da49fa052 png alt ultimate facebook scraper ufs width 200 a maker of things h1 creator of a href https github com harismuneer ultimate facebook scraper ultimate facebook scraper a one of the best software to collect facebook data for research analysis hr h2 align left connect h2 p align left a href https www linkedin com in harismuneer img title follow on linkedin src https img shields io badge linkedin 0077b5 style for the badge logo linkedin logocolor white a a href https www facebook com harism99 img title connect on facebook src https img shields io badge facebook 1877f2 style for the badge logo facebook logocolor white a a href https twitter com harismuneer99 img title follow on twitter src https img shields io badge twitter 1da1f2 style for the badge logo twitter logocolor white a a href mailto haris muneer5 gmail com img title email src https img shields io badge gmail d14836 style for the badge logo gmail logocolor white a a href https github com harismuneer img title follow on github src https img shields io badge github 100000 style for the badge logo github logocolor white a a href https www instagram com harismuneer99 img title follow on instagram src https img 
shields io badge instagram e4405f style for the badge logo instagram logocolor white a a href https www youtube com channel ucz ubd7g0e2bp 0txtslsjw sub confirmation 1 img title subscribe on youtube src https img shields io badge youtube ff0000 style for the badge logo youtube logocolor white a p consulting coaching stuck with some problem need help in solution development guidance training or capacity building i am a full stack engineer turned project manager with years of technical and leadership experience in a diverse range of technologies and domains let me know what problem you are facing at b haris muneer5 gmail com b and we can schedule a consultation meeting to help you get through it technical skills expertise development of web applications mobile applications and desktop applications development of machine learning deep learning models and deployment web scraping browser automation python scripting hr support donations if you or your company use any of my projects like what i m doing or have benefited from my projects in any way then kindly consider backing my efforts for donations you can follow these simple steps b 1 b free signup at b transferwise https transferwise com invite u harism95 b using this link https transferwise com invite u harism95 li signing up through this link will save you from any transcation fee on the donation b 2 b select the amount e g 15 and choose the receiving recipient s currency to be pkr it supports multiple payment options credit card debit card wire transfer etc b 3 b then it will show my info as the recipient select it if my name isn t shown then type my email haris muneer5 gmail com in recipients b 4 b choose the reason for transfer to the one that suits you the most in this case it could be general expenses and in the reference section you can mention support if you face any issue in sending donation then feel free to get in touch with me at haris muneer5 gmail com thank you for your contribution authors you can get 
in touch with us on our linkedin profiles haris muneer linkedin link https img shields io badge connect harismuneer blue svg logo linkedin longcache true style social label follow https www linkedin com in harismuneer you can also follow my github profile to stay updated about my latest projects github follow https img shields io badge connect harismuneer blue svg logo github longcache true style social label follow https github com harismuneer maham amjad linkedin link https img shields io badge connect maham amjad blue svg logo linkedin longcache true style social label connect https www linkedin com in maham amjad 40796b177 you can also follow my github profile to stay updated about my latest projects github follow https img shields io badge connect maham amjad blue svg logo github longcache true style social label follow https github com mahamamjad if you liked the repo then kindly support it by giving it a star and share in your circles so more people can benefit from the effort contributions welcome forthebadge https forthebadge com images badges built with love svg if you find any bug in the code or have any improvements in mind then feel free to generate a pull request issues github issues https img shields io github issues harismuneer library management system java svg style flat label issues maxage 2592000 https www github com harismuneer library management system java issues if you face any issue you can create a new issue in the issues tab and i will be glad to help you out license mit https img shields io cocoapods l afnetworking svg style style label license maxage 2592000 master license copyright c 2018 present harismuneer mahamamjad
desktop-application console-based design-patterns object-oriented-design library-management-system library-system java class-diagram persistent-storage text-based staruml netbeans-project object-oriented-analasis-design object-oriented-assignment singleton-pattern jdbc-database javadb database-management library-database library-management
os
ESD-Lab-5
esd lab 5 embedded systems design lab 5 create a custom ip to interface
os
iotex-desktop-wallet
iotex desktop wallet this repo used to be iotex explorer the v1 version of iotexscan the updated v2 iotexscan is hosted at iotexscan io this repo is still active serving the development of iotex desktop wallet iotex desktop wallet is an electron based desktop wallet it connects to iotex network using iotex antenna iotex chain sdk https github com iotexproject iotex antenna to use mobile wallet you can download from iopay website https iopay me to use other wallets you can refer to https docs iotex io get started iotex wallets download https github com iotexproject iotex desktop wallet releases source code check it out here src electron src electron join chat a href https iotex io devdiscord target blank img src https github com iotexproject halogrants blob 880eea4af074b082a75608c7376bd7a8eaa1ac21 img btn discord svg height 36px a div
server
layoutlm_experiments
layoutlm experiments a work in progress project to replicate layoutlmv3 s performance on docvqa 1 process docvqa data for layoutlmv3 not recommended tesseract use tesseract ocr to extract text and boxes not recommended as it is very slow and very bad sh crunch train data poetry run prep docvqa data downloads docvqa train train downloads docvqa proc train t1 tesseract crunch validation data poetry run prep docvqa data downloads docvqa val val downloads docvqa proc val t1 tesseract recommended dataset textract use provided dataset ocr to extract text and boxes this was made with amazon textract and is quite decent and fast sh poetry run prep docvqa data downloads docvqa val val downloads docvqa proc val t2 dataset ocr engine dataset save ocr downloads docvqa proc val t2 ocr procs 8 best microsoft read ocr use microsoft read api to extract text and boxes this requires azure credits and is comparatively expensive but provides the best possible ocr with save ocr this only needs to be run once and can be used to re encode without running ocr again sh poetry run prep docvqa data downloads docvqa val val downloads docvqa proc val t3 msread ocr engine microsoft save ocr downloads docvqa proc val t3 msread ocr procs 4 optional segify create segments out of bounding boxes like suggested in layoutlmv3 and structurallm papers this can give 0 05 increase in anls this is done by simply merging nearby boxes pass debug to view visualizations one by one the two parameters at the end are vertical and horizontal merge distance normalized to 1000 units so a value of 10 width means 1 of the document s width segify needs to be run on the saved ocr output of the last step sh poetry run segify ocr downloads docvqa proc train t3 ocr downloads docvqa proc train t3 seg root dir downloads docvqa train 8 40 after segify is finished re encode the dataset sh poetry run prep docvqa data downloads docvqa val val downloads docvqa proc val t7 msread resume from ocr downloads docvqa proc val t3 msread 
seg procs 8 2 train on docvqa data sh cuda visible devices 0 poetry run train docvqa microsoft layoutlmv3 base downloads docvqa proc val downloads docvqa proc val test1 steps 100000 batch 128 inst batch 8 lr 3e 5 warmup ratio 0 48 save every 100 log wandb project id llm3 docvqa base 1 4 anls metrics run evals to compute metrics sh poetry run budget metrics train output try11 msread segs checkpoint 5000 downloads docvqa proc val t7 msread alternate approach layoutlmv3bart layoutlmv3 encoder with bart decoder stack a pretrained decoder onto a pretrained layoutlmv3 and finetune together for seq2seq following leveraging pre trained checkpoints for sequence generation tasks https arxiv org abs 1907 12461 1 fuse together layoutlmv3 encoder with bart decoder sh poetry run fuse encdec microsoft layoutlmv3 base facebook bart base save to downloads pt llmv3 bart fuse t1 test input downloads dsconst freebirds jpg 2 preprocess for seq2seq sh poetry run prep docvqa seq2seq downloads docvqa val val downloads docvqa proc val t6 tiny subset ocr engine dataset decoder model facebook bart base 3 train for seq2seq sh poetry run train docvqa seq2seq downloads pt llmv3 bart fuse t1 downloads docvqa proc train t8 seq2seq msread downloads docvqa proc val t8 seq2seq msread try17 msr2s s2s steps 100000 batch 32 inst batch 4 lr 5e 6 warmup ratio 0 048 save every 1000
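The evaluation step above reports ANLS (average normalized Levenshtein similarity), the standard DocVQA metric. As a rough, self-contained illustration (not the repository's actual implementation), a pure-Python version with the usual threshold of 0.5 can be sketched as:

```python
def levenshtein(a: str, b: str) -> int:
    # classic dynamic-programming edit distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def anls(predictions, gold_answers, tau=0.5):
    # predictions: list of strings; gold_answers: list of lists of strings
    total = 0.0
    for pred, golds in zip(predictions, gold_answers):
        best = 0.0
        for gold in golds:
            p, g = pred.strip().lower(), gold.strip().lower()
            nl = levenshtein(p, g) / max(len(p), len(g), 1)
            # similarities below the threshold are zeroed out
            best = max(best, 1.0 - nl if nl < tau else 0.0)
        total += best
    return total / max(len(predictions), 1)
```

Each question contributes its best score over the gold answers, so minor OCR-style typos are penalized smoothly while clearly wrong answers score zero.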
ai
MLAI
com4509 6509 machine learning and adaptive intelligence university of sheffield autumn 2021 by mauricio a lvarez 1 5 and haiping lu 6 10 session 1 introduction to machine learning session 2 end to end machine learning session 3 decision trees and ensemble methods session 4 linear regression session 5 automatic differentiation session 6 logistic regression and pytorch for deep learning https www youtube com watch v hnaoayszzyi list pluroukdwifzwctez18kp8ql6wlxzhxtm8 session 7 neural networks https www youtube com watch v shw a3ygwzq list pluroukdwifzxmvxhjjfztwy1kbfp8pav6 session 8 unsupervised learning https www youtube com watch v e8ffrxbyeco list pluroukdwifzzn88nwrtfl3cm2vhxud9zq session 9 generative models https www youtube com watch v c7qt56hh wg list pluroukdwifzzfl5am xk ndwewfwm1 bm session 10 accessible multimodal transfer learning with pykale https www youtube com watch v nybygw t2bm list pluroukdwifzzxginkdwg2vdeinbj2mcyq module delivery plan details about the module delivery plan for com4509 6509 are here moduledeliveryplanmlaiautumn2021 pdf lab sessions we will use jupyter notebooks running a python 3 kernel for the lab sessions you can access the environment required in different ways option 1 the desktop computers in the diamond already have anaconda installed download the notebooks and open them using the anaconda navigator https docs anaconda com anaconda navigator option 2 you can install anaconda in your local computer https www anaconda com products individual and then launch jupyter notebook using the anaconda navigator https docs anaconda com anaconda navigator the anaconda distribution contains all the python libraries that we will use in this module e g pandas numpy scipy matplotlib pytorch etc if you are a linux or macos user you can open a jupyter notebook by moving to the labs folder and then typing on your terminal jupyter notebook lab 1 probability and introduction to jupyter notebooks ipynb option 3 the university of sheffield has a 
remote desktop service https www sheffield ac uk it services remote remote desktop that you can use the remote desktops have anaconda already installed once you connect to the remote desktop use the anaconda navigator https docs anaconda com anaconda navigator to open a jupyter notebook if you connect to a linux computer you can open jupyter notebook using the terminal as explained in option 2 option 4 you can run the jupyter notebooks directly on google colab https colab research google com notebooks intro ipynb utm source scs index click on each colab badge to open the notebook lab session google colab link lab 1 probability and introduction to jupyter notebooks open in colab https colab research google com assets colab badge svg https colab research google com github maalvarezl mlai blob master labs lab 201 20 20probability 20and 20introduction 20to 20jupyter 20notebooks ipynb lab 2 end to end project in ml open in colab https colab research google com assets colab badge svg https colab research google com github maalvarezl mlai blob master labs lab 202 20 20end to end 20project 20in 20ml ipynb lab 3 decision trees and ensemble methods open in colab https colab research google com assets colab badge svg https colab research google com github maalvarezl mlai blob master labs lab 203 20 20decision 20trees 20and 20ensemble 20methods ipynb lab 4 linear regression open in colab https colab research google com assets colab badge svg https colab research google com github maalvarezl mlai blob master labs lab 204 20 20linear 20regression ipynb lab 5 automatic differentiation open in colab https colab research google com assets colab badge svg https colab research google com github maalvarezl mlai blob master labs lab 205 20 20automatic 20differentiation ipynb lab 6 logistic regression and pytorch for deep learning open in colab https colab research google com assets colab badge svg https colab research google com github maalvarezl mlai blob master labs lab 206 20 
20logistic 20regression 20 26 20pytorch 20for 20dl ipynb lab 7 neural networks open in colab https colab research google com assets colab badge svg https colab research google com github maalvarezl mlai blob master labs lab 207 20 20neural 20networks ipynb lab 8 unsupervised learning open in colab https colab research google com assets colab badge svg https colab research google com github maalvarezl mlai blob master labs lab 208 20 20unsupervised 20learning ipynb lab 9 generative models open in colab https colab research google com assets colab badge svg https colab research google com github maalvarezl mlai blob master labs lab 209 20 20generative 20model ipynb if you want to save changes to the notebook you need to save them before quitting according to this link https colab research google com github googlecolab colabtools blob master notebooks colab github demo ipynb scrollto rmai0dd30xzl if you would like to save your changes from within colab you can use the file menu to save the modified notebook either to google drive or back to github choose file save a copy in drive or file save a copy to github and follow the resulting prompts to save a colab notebook to github requires giving colab permission to push the commit to your repository references introductory level simon rogers and mark girolami a first course in machine learning chapman and hall crc press 2nd edition 2016 john d kelleher brian macnamee and aoife d arcy fundamentals of machine learning for predictive data analytics algorithms worked examples and case studies the mit press 2015 intermediate and advanced levels christopher bishop pattern recognition and machine learning springer verlag 2006 kevin murphy machine learning a probabilistic perspective mit press 2012 kevin murphy probabilistic machine learning an introduction mit press 2022 bradley efron and trevor hastie computer age statistical inference student edition cambridge university press 2021 focus on implementation aur lien g ron hands 
on machine learning with scikit learn and tensorflow o reilly 2nd edition 2019 andriy burkov the hundred page machine learning book 2019 acknowledgement some of the slides and lab notebooks used in this module are based on material developed by prof neil lawrence http inverseprobability com mlai2015 and many others individually acknowledged in slides notebooks
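Session 4 and Lab 4 cover linear regression. As a toy illustration of the underlying idea (this snippet is not taken from the course materials), a one-dimensional model y = w*x + b can be fitted by gradient descent on the mean squared error in plain Python:

```python
def fit_linear(xs, ys, lr=0.01, steps=5000):
    # fit y = w*x + b by gradient descent on the mean squared error
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # gradients of (1/n) * sum((w*x + b - y)^2) w.r.t. w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# data generated by y = 2x + 1, so the fit should recover w near 2, b near 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = fit_linear(xs, ys)
```

The lab notebooks use the same idea at larger scale, typically with vectorized numpy operations or PyTorch's automatic differentiation (Lab 5) instead of hand-derived gradients.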
machine-learning
ai
ios-tech-article
topics a curated link of ios articles and engineering blogs based on the topic app size how we reduced pinterest s ios app size by 30 50mb https medium com pinterest engineering how we reduced pinterests ios app size by 30 50mb 68d7f8425882 build system our journey in reducing traveloka ios app s build time by 90 https medium com traveloka engineering our journey in reducing traveloka ios apps build time by 90 3f875ff9a9b7 ios and bazel at reddit a journey https www reddit com r redditeng comments syz5dw ios and bazel at reddit a journey behind the scenes of the xcode build process https developer apple com videos play wwdc2018 415 memory management memory management and performance of value types https swiftrocks com memory management and performance of value types companies a curated link of ios articles and engineering blogs based on the company pinterest how we reduced pinterest s ios app size by 30 50mb https medium com pinterest engineering how we reduced pinterests ios app size by 30 50mb 68d7f8425882 reddit ios and bazel at reddit a journey https www reddit com r redditeng comments syz5dw ios and bazel at reddit a journey reddit recap state of mobile platforms edition 2022 https www reddit com r redditeng comments zkap1u reddit recap state of mobile platforms edition swiftrocks memory management and performance of value types https swiftrocks com memory management and performance of value types traveloka our journey in reducing traveloka ios app s build time by 90 https medium com traveloka engineering our journey in reducing traveloka ios apps build time by 90 3f875ff9a9b7 wwdc behind the scenes of the xcode build process https developer apple com videos play wwdc2018 415 oot aws connecting to rds https aws amazon com blogs database securely connect to an amazon rds or amazon ec2 database instance remotely with your preferred gui exporting data from rds to s3 https docs aws amazon com amazonrds latest userguide postgresql s3 export html
os
demo-cypress
a short practical tutorial on end-to-end testing with cypress. The goal of this tutorial is to give a first contact with end-to-end tests. These tests are also called front-end tests, system tests, web interface tests, or user interface tests. In the tutorial we will use an open-source end-to-end testing tool called Cypress https www cypress io, which lets such tests be written in JavaScript. Cypress is similar to Selenium, for which a test example is shown in chapter 8 https engsoftmoderna info cap8 html (system tests) of the book Engenharia de Software Moderna https engsoftmoderna info. In this tutorial we will use Cypress to write some end-to-end tests for an online bookstore; more specifically, we will reuse the micro-livraria https github com aserg ufmg micro livraria from the practical microservices tutorial. Installing and running Cypress: to follow the tutorial, set up your environment as follows. Step 1: fork this repository using the fork button in the top right corner of the screen. Step 2: clone the project to a local directory: git clone https github com your-username demo-cypress git. Step 3: install Cypress; the recommended way is via npm (Node.js required, https nodejs org en download). In the project directory run: npm install cypress --save-dev. After installation, a cypress folder will be created in the project directory. Step 4: install Docker https docs docker com get docker. The micro-livraria, that is, the system under test, and Cypress itself will both run in containers. Step 5: bring the micro-livraria system up. First build a new image by running the following command at the project root: docker build -t micro-livraria -f cypress dockerfile. Then run the application from the same directory: docker run -ti -p 3000:3000 -p 5000:5000 micro-livraria. Step 6: now run Cypress for the first time,
using the following command in the cypress folder (that is, if you are at the project root, first run cd cypress; this is the folder that contains the cypress json file): npx cypress open, or via Docker: docker run --network host -it -v $PWD:/e2e -w /e2e cypress/included:9.2.0. Note: the first time you run this command it may take a few minutes, since it downloads the Cypress image and performs its build. Also note that this command already runs a first, very simple example test that we implemented in the file spec1 js https github com aserg ufmg demo cypress blob main cypress cypress integration spec1 js:
    describe('Meu primeiro teste', () => {
      it('Não faz nada', () => {
        expect(true).to.equal(true);
      });
    });
Before moving on with the tutorial, take time to analyze and understand the output produced by Cypress, shown in the following screen: the area marked 1 lists the tests already created for the system, and marker 2 is the button for creating a new test file. p align center img src https user images githubusercontent com 54295278 127781317 2bd7951f 73ba 475d 8e57 91785ab08a6e png width 70 p Task 1, first test: Cypress test files are a sequence of JavaScript functions that test the application's front end. As a first test we will simply observe the result of a single assertion. First create a new test file called meu-teste js with the code shown above (describe 'Meu primeiro teste', which does nothing but check that true equals true). Save this file in the default folder where Cypress tests, usually also called specs, are kept. After saving the file, look for it in the Cypress list of tests (specs) and double-click its name to run it. The results are presented in the Cypress interface. p align center img src https user images githubusercontent com 54295278 127781703 db6135b3 32e5 4c46 a07d 75f53c428f35 png width 70 p Area 3 shows the results of the executed test, while area 4 presents the
snapshots taken during each step of the test. For our trivial test, it was only verified that true equals true. Likewise, if we change line 3 to expect(true).to.equal(false) and save the file, we can observe that the browser immediately adapts to the changes in the test file and, consequently, the test fails. Testing the front end of the micro-livraria: we will now implement an end-to-end test for the micro-livraria. This test simulates a user performing the following operations on the site: 1 open the site; 2 choose the desired book; 3 enter the zip code (CEP); 4 compute the shipping cost; 5 buy the book. Note that a front-end test can be compared to a robot simulating an end user exercising the system's features. Step 1: create a file spec2 js in the same folder as the previous test (the integration folder) with the following code:
    describe('Teste end-to-end', () => {
      it('Teste 1 - Visita página', () => {
        // opens the site
        cy.visit('http://localhost:5000');
      });
    });
Cypress commands always run on a cy object. The visit function opens a page, which for our micro-livraria is at the address localhost:5000. Then run this test, always using: docker run --network host -it -v $PWD:/e2e -w /e2e cypress/included:9.2.0. You will see that the test passes. Step 2: we now add new behavior to the test. Specifically, suppose a scenario in which a user buys the design patterns book. First we need to make sure the book is shown on the page, as follows:
    it('Teste 2 - Verifica item na página', () => {
      // checks that the desired book exists
      cy.get('[data-id=3]').should('contain.text', 'Design Patterns');
    });
In the previous code we perform a query using the get function, assuming the book catalog is displayed in three columns; the desired book is in the third one, whose identifier is data-id 3. That is why we use an assertion that
checks whether the third column includes the string design patterns. To run the test, use the same command as before. Also watch the videos that Cypress records automatically in the folder cypress cypress videos. Hovering the mouse over each test step in area 3, we can observe area 4 changing to reflect each step of the test; in particular, the last step, with the assertion, is shown highlighted to indicate that it was correctly identified. You can use the Selector Playground, an interactive Cypress tool that helps determine a unique selector for a specific element. With this feature you can try out a selector to identify which elements it matches, and also identify which elements contain a given text string. To use the Selector Playground, click the target icon (item 5 in the figure below) and left-click the desired element to obtain a unique selector. p align center img src https user images githubusercontent com 54295278 127781712 29214b67 457f 4be3 b74a 16e3e94fa892 png width 70 p Step 3: we now extend the test to simulate a user who types the zip code into the indicated field and then clicks the compute-shipping button (calcular frete):
    it('Teste 3 - Calcula frete', () => {
      cy.get('[data-id=3]').within(() => {
        cy.get('input').type('10000-000');
      });
      cy.contains('Calcular Frete').click().then(() => {
        cy.wait(2000);
        cy.get('.swal-text').contains('O frete é: R$');
        // closes the pop-up with the shipping price
        cy.get('.swal-button').click();
      });
    });
First the test locates the third column and finds the input field; then it types the zip code 10000-000 and presses the calcular frete button. Next, it waits 2 seconds (the wait function) to guarantee that the window with the shipping price loads correctly; in that window we select the swal-text element and use an
assertion to guarantee that the message is the one we expect; finally, we click the button to close the pop-up. If you have not done so yet, run the test above and also watch the step-by-step video of its execution, found in the folder cypress cypress videos. Task 2, testing the purchase of a book: now it is your turn to extend the previous test. Basically, you should add code to the test to simulate buying a book, as explained next: use the cy.contains function to select the comprar (buy) button and click it (click function); wait for the pop-up with the purchase confirmation to appear (wait function); check that this pop-up contains the message 'Sua compra foi realizada com sucesso' (your purchase was completed successfully); close the pop-up by clicking its button. Task 3, save your changes: perform a commit and push to save your changes to the test. The commit can use any message, and you only need to include the file spec2 js. Important: Cypress installs hundreds of files in your folder, but you only need to commit the indicated file, that is, a single file. Final remark: this tutorial aimed to provide a first practical experience with Cypress, so that students can understand the basic mechanics of interface testing. The Cypress site has extensive documentation about the tool, with many examples, which may be useful for those who want to go deeper into this type of testing. Credits: this tutorial was prepared by Rodrigo Moreira, a master's student at DCC UFMG, as part of his activities in the teaching internship course taken in 2021 1, under the supervision of Prof. Marco Tulio Valente.
front_end
Instruct2Act
instruct2act mapping multi modality instructions to robotic actions with large language model foundation models have made significant strides in various applications including text to image generation panoptic segmentation and natural language processing this paper presents instruct2act a framework that utilizes large language models to map multi modal instructions to sequential actions for robotic manipulation tasks specifically instruct2act employs the llm model to generate python programs that constitute a comprehensive perception planning and action loop for robotic tasks in the perception section pre defined apis are used to access multiple foundation models where the segment anything model sam accurately locates candidate objects and clip classifies them in this way the framework leverages the expertise of foundation models and robotic abilities to convert complex high level instructions into precise policy codes our approach is adjustable and flexible in accommodating various instruction modalities and input types and catering to specific task demands we validated the practicality and efficiency of our approach by assessing it on robotic tasks in different scenarios within tabletop manipulation domains furthermore our zero shot method outperformed many state of the art learning based policies in several tasks framework images instruct2act framework png instruct2act mapping multi modality instructions to robotic actions with large language model https arxiv org pdf 2305 11176 pdf siyuan huang zhengkai jiang hao dong yu qiao peng gao hongsheng li instruct2act mapping multi modality instructions to robotic actions with large language model instruct2act mapping multi modality instructions to robotic actions with large language model supported modules supported modules how to run how to run prompts setting prompts setting evaluation tasks evaluation tasks notes notes acknowledgement acknowledgement supported modules currently we support the following modules 
modules images modules api png correspondingly please prepare the sam and clip model ckpts in advance you can download the ckpts from sam https github com facebookresearch segment anything model checkpoints and openclip https github com mlfoundations open clip then set the path in the file engine robotic py you can also add your personalized modules in engine robotic py and add the api definition in the prompt files how to run 1 install the required packages with the provided environment yaml if you run into any installation issue you can take a look at issue https github com opengvlab instruct2act issues 6 thanks to euminds https github com euminds 2 install the vimabench with vimabench https github com vimalabs vimabench 3 change the openai api key in visual programming prompt robotic exec generation py 4 run the robotic anything gpt online py prompts setting in instruct2act we implement two types of prompts i e task specific and task agnostic prompts the task specific prompts are designed for specific tasks and are in the visprog style while the task agnostic prompts are designed for general purpose use and are in a vipergpt plus visprog style we provide more details in our paper and you can change the setting in the file visual programming prompt robotic exec generation py for very specific tasks like robotic manipulation where you know the flow clearly we suggest using the task specific prompts for general purpose use we suggest using the task agnostic prompts these two prompts are stored in visual programm prompt py and full prompt ini respectively besides the language prompts we also provide the pointing language enhanced prompts where cursor clicks are used to select the target objects you can see the details in function sam in engine robotic py we provide two code generation modes for robotic manipulation tasks i e offline and online mode in offline mode the code is generated in advance and summarized with expert knowledge and this type is used for the demo
and quick trial usage in online mode the code is generated on the fly and this type is used for the general purpose evaluation tasks we select six representative meta tasks from vimabench 17 tasks in total to evaluate the proposed methods in the tabletop manipulation domain as shown below to run the evaluation please follow the instructions in the vimabench https github com vimalabs vimabench task instruction visualization visual manipulation put the polka dot block into the green container task01 images tasks gif task01 gif scene understanding put the blue paisley object in another given scene image into the green object task01 images tasks gif task02 gif rotation rotate the letter m 30 degrees task01 images tasks gif task03 gif rearrange rearrange to the target scene task01 images tasks gif task04 gif rearrange then restore rearrange objects to target scene and then restore task01 images tasks gif task05 gif pick in order then restore put the cyan block into the yellow square then into the white black square finally restore it into its original container task01 images tasks gif task17 gif notes 1 to speed up the sam inference process we add a cuda device option in function build sam you should modify it accordingly in the source code and then recompile the package 2 during evaluation we set hide arm true and close the debug window if you want to visualize the arm movement please set them accordingly 3 the original movement in vimabench is quite quick if you want to slow down the movement please add some lines like sleep in vimabench 4 when using chatgpt for generation you need to manage some network issues we also found that when the network situation is not ideal the generated code is sometimes of poor quality incomplete or too short acknowledgement we would like to thank the authors of the following great projects this project is built upon these great open sourced projects vimabench https github com vimalabs vimabench openclip https github com mlfoundations open clip
sam https github com facebookresearch segment anything model checkpoints we are also inspired by the following projects viper https github com cvlab columbia viper taskmatrix https github com microsoft taskmatrix visprog https github com allenai visprog
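The readme above describes LLM-generated Python programs that form a perception-planning-action loop over predefined APIs (SAM for candidate masks, CLIP for labels). A minimal sketch of what such a generated program could look like — every function name here (`segment_objects`, `classify`, `pick_and_place`) is an invented stand-in for the framework's real APIs, with hard-coded dummy perception results:

```python
# Hypothetical perception-plan-action program in the style the paper describes;
# all API names are invented stand-ins, not the project's actual interface.

def segment_objects(image):
    # stand-in for a SAM call: returns candidate object masks with boxes
    return [{"mask": "m1", "box": (10, 10, 40, 40)},
            {"mask": "m2", "box": (60, 60, 90, 90)}]

def classify(image, candidates, labels):
    # stand-in for a CLIP call: attach a label to each candidate mask
    return [dict(c, label=labels[i % len(labels)])
            for i, c in enumerate(candidates)]

def pick_and_place(scene, source_label, target_label):
    candidates = segment_objects(scene)
    labelled = classify(scene, candidates, [source_label, target_label])
    src = next(c for c in labelled if c["label"] == source_label)
    dst = next(c for c in labelled if c["label"] == target_label)
    # a real policy would convert boxes to robot poses; here we just
    # return the planned action sequence
    return [("pick", src["box"]), ("place", dst["box"])]

plan = pick_and_place("tabletop.png", "polka dot block", "green container")
print(plan)  # [('pick', (10, 10, 40, 40)), ('place', (60, 60, 90, 90))]
```

The point is the shape of the loop — perceive, ground the instruction's nouns to regions, then emit actions — not the stub implementations.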
llm robotics segment-anything chatgpt clip
ai
siml
synopsis this repository contains popular machine learning algorithms which have been introduced in various blog posts http ataspinar com most of the algorithms are accompanied by blog posts in which i try to explain the mathematics behind these algorithms and how to interpret them motivation machine learning is fun but more importantly machine learning is easy but the academic literature and even wikipedia pages are full of unnecessarily complicated terminology notation and formulae this gives people the idea that these ml algorithms can only be understood with a full understanding of advanced math and statistics stripped of all this superfluous language we are left with simple maths which can be expressed in a few lines of code notebooks explaining the mathematics i have also provided some notebooks explaining the mathematics of some machine learning algorithms linear regression and logistic regression https github com taspinar siml blob master notebooks linear 20regression 2c 20logistic 20regression ipynb naive bayes classification https github com taspinar siml blob master notebooks naive bayes ipynb perceptron classification https github com taspinar siml blob master notebooks perceptron ipynb classification with scikit learn https github com taspinar siml blob master notebooks scikit classification ipynb machine learning with signal analysis techniques https github com taspinar siml blob master notebooks machine 20learning 20with 20signal 20processing 20techniques ipynb time series forecasting with signal analysis techniques https github com taspinar siml blob master notebooks time 20series 20forecasting 20with 20stochastic 20signal 20analysis ipynb notebooks explaining machine learning with the wavelet transform introduction to pywavelets for wavelet analysis https github com taspinar siml blob master notebooks wv1 20 20using 20pywavelets 20for 20wavelet 20analysis ipynb using wavelets to visualize the scaleogram time axis and fourier transform https
github com taspinar siml blob master notebooks wv2 20 20visualizing 20the 20scaleogram 2c 20time axis 20and 20fourier 20transform ipynb classification of signals using the continuous wavelet transform and convolutional neural networks https github com taspinar siml blob master notebooks wv3 20 20classification 20of 20signals 20using 20the 20cwt 20and 20cnn ipynb classification of ecg signals using the discrete wavelet transform and gradient boosting https github com taspinar siml blob master notebooks wv4 20 20classification 20of 20ecg 20signals 20using 20the 20discrete 20wavelet 20transform ipynb classification of signals using the discrete wavelet transform and several classifiers https github com taspinar siml blob master notebooks wv5 20 20classification 20of 20the 20uci har 20dataset 20using 20discrete 20wavelet 20transform ipynb installation to install siml python sudo pip install siml or you can clone the repository and in the folder containing setup py python python setup py install code example todo
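Since the readme's code example section is still marked todo, here is a hedged sketch in the spirit of the perceptron notebook listed above — plain Python with no dependencies, not code taken from the siml package itself. It learns the AND function with the classic update rule w += lr * (y - pred) * x:

```python
# Minimal perceptron: a linear threshold unit trained with the perceptron rule.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred                 # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# the AND function is linearly separable, so the perceptron converges
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

This is the "few lines of code" the motivation section alludes to once the notation is stripped away.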
ai
project_crowdfunding
build and deploy a web3 crowdfunding platform kickstarter as your first blockchain application crowdfunding https i ibb co k6pj0qt htum 6 png become a top 1 next js 13 developer in only one course https jsmastery pro next13 land your dream programming job in 6 months https jsmastery pro masterclass launch your development career with project based coaching on js mastery pro https www jsmastery pro
web3 blockchain solidity thirdweb
blockchain
csshorus
h1 csshorus h1 h3 css library for responsive and mobile websites h3 author joao firmino since 7 september 2012 last mod 13 october 2013 version 2 1 0 website http csshor us twitter http twitter com firminoweb old version 1 0 3 https github com firminoweb csshorus tree csshorus 1 0 3 csshorus is a library for easy and fast development of responsive and mobile websites it contains a 12 column grid and basic style formats reset print grid misc styles and now with skins styling typography lists links tables forms and buttons for your web project also with rtl and less css features responsive grid 1200px down to mobile grid model for design psd easy and simple tool well structured cross browser support safari opera chrome firefox and ie9 tested on windows macos and ubuntu has been tested across many mobile devices ios android blackberry windows phone print friendly powerful html5 model rtl skin styles built with less css bower package bower depends on node http nodejs org and npm http npmjs org it is installed globally using npm npm install g bower installing the package in your project folder bower install csshorus
front_end
blockchain-parser
blockchain parser author denis leonov 466611 gmail com simple script for parsing blkxxxxx dat files of bitcoin blockchain database this script also compatible with most of altcoins after making some tiny tricks the one realisation of blockchain parser that allows you to explore the main database as close as possible blockchain parser https hsto org getpro habr post images dad 899 889 dad89988966ca08db3223bbc9b2afc90 jpg don t worry to email me your questions or suggestions about this parser no dependencies no third parties modules or libs needed just install python standart release and run make sure you change the paths for blkxxxxx dat files and for the parsing results to yours this script convert the full blockchain raw database that is stored in blkxxxxx dat files to the simple txt view if this was helpfull for you don t hesistate to make a donations bitcoin btc 1fvssyzxnnmghbjg2dywb7rkztrtt8adcl
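One of the basic building blocks any blk*.dat parser needs is decoding Bitcoin's compact-size integers ("varints"), which prefix transaction counts, input/output counts, and script lengths. A stdlib-only sketch of the standard encoding — this is illustrative, not code taken from the script itself:

```python
import struct

def read_varint(buf, offset=0):
    """Return (value, new_offset) for a compact-size int at buf[offset]."""
    prefix = buf[offset]
    if prefix < 0xfd:                      # value stored in the byte itself
        return prefix, offset + 1
    if prefix == 0xfd:                     # next 2 bytes, little-endian
        return struct.unpack_from("<H", buf, offset + 1)[0], offset + 3
    if prefix == 0xfe:                     # next 4 bytes, little-endian
        return struct.unpack_from("<I", buf, offset + 1)[0], offset + 5
    return struct.unpack_from("<Q", buf, offset + 1)[0], offset + 9  # 8 bytes

print(read_varint(bytes([0x6a])))              # (106, 1)
print(read_varint(bytes([0xfd, 0x34, 0x12])))  # (4660, 3) i.e. 0x1234
```

A full parser walks each blk file the same way: read the 4-byte magic and block length, then the 80-byte header, then a varint transaction count, then each transaction.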
blockchain blockchain-parser bitcoin altcoins altcoins-blockchain parser blk dat-files browser dat database
blockchain
awesome-learning
awesome learning awesome learning is a front end focused learning platform created by current and former members of wayfair engineering check out the site https wayfair github io awesome learning project status awesome learning has now been put into maintenance mode as of 6 15 2021 courses will be available to take but non critical fixes won t be made to this project quick start working with our repo 1 fork the awesome learning repo 2 clone your fork to your local directory 3 navigate into the local directory 4 install dependencies sh yarn starting awesome learning navigate into your new site s directory and start it up sh yarn develop open the source code and start editing your site is now running at http localhost 8000 note you ll also see a second link http localhost 8000 graphql this is a tool you can use to experiment with querying your data learn more about using this tool in the gatsby tutorial https www gatsbyjs org tutorial part five introducing graphiql running tests while you are developing we recommend watching the test suite open up another terminal in the site directory and run yarn test running storybook locally yarn storybook then open http localhost 6006 on your browser for more information visit react storybook https github com storybooks storybook repo project structure awesome learning is powered by two repositories this one and our exercises repo https github com wayfair awesome learning exercises this repo this is a gatsby js powered static site we use graphql as our data layer see gatsby graphql https www gatsbyjs org tutorial part five introducing graphiql react for our components co located scss for our styling and jest react testing library for our automated tests the exercises repo the exercises repo holds the courses and lessons that power the exercise portions of each lesson you do not need to pull down the exercises repo in order to contribute to awesome learning the exercises repo acts as a target for a codesandbox io api which pulls 
down the files and creates an exercise codesandbox on the fly for more information on how that works check out importing from git https codesandbox io docs importing import from github and the exercises repo readme if you are interested in fixing exercise content and or contributing content please check out the exercises repo https github com wayfair awesome learning exercises and we will help you out from there what is awesome learning this platform features a series of courses designed to help you thoroughly learn technical topics some of our courses include frontend testing javascript data types and javascript array methods each course contains multiple lessons designed to teach one aspect of the course topic a lesson has pre read materials learning materials pre read quizzes and hands on exercises how does it work small groups of like minded engineers gather together in a room or virtually typically once a week and run through awesome learning lessons learning materials and pre read quizzes are done before starting the lesson so everyone is on the same page and held accountable the group then works through the exercises in a group programming format discusses issues and patterns that arise from the exercises and researches solutions together the group continues meeting until all lessons are complete then picks their next course to start because pre reads learning materials and quizzes are done before the start of the in person group session the group session should typically take about an hour to get through the exercises why awesome learning there are countless learning platforms out there just type learn javascript into google and you will find a bunch of platforms many of them follow similar patterns and all of them are done solo in fact our entire industry tends to favor individuals watching videos reading docs and doing exercises by themselves to learn new technology there s one small problem software development is a team effort a huge part of our jobs is
communication working together and being able to explain complex problems in a simple way awesome learning provides the same level of technical education but does so in a group setting over the almost 2 years we have been running awesome learning and sitting in on lessons we ve seen much stronger growth and longer lasting results because of the small group environment groups experience higher levels of camaraderie build expert mental models around subject matter and learn how to communicate better via group programming some folks use their awesome learning sessions as a way to practice their soft skills without the pressure of tickets and deadlines credits credit to the original author of this fork the gatsby lumen starter https github com alxshelepenok gatsby starter lumen
reactjs gatsbyjs react rtl jest scss education frontend javascript learning-materials codesandbox enzyme learning
front_end
Team-1-TCSS-450
mad member 1 ford nguyen 2 gyubeom kim 3 ivan mendez 4 d jared idler description this is our group s project at uw tacoma tcss 450 in an effort to learn mobile application development this app consists of a login register system messaging contacts and a weather forecast all the systems above are supported by a heroku back end which is linked below after the user registers an account there are verification steps then they will be able to login and use the other functionalities mentioned above which are discussed in detail below links project https github com tainguyen2101 team 1 tcss 450 back end https github com gyubeomk mobileapp group backend meeting notes https drive google com drive folders 1vbokj9so7cnwne1vhs3ifgz9qgwd oiq usp sharing test account this is the default username and password for those who want to try out our project quickly username test12345 uw edu password test1234 username fordtest1234 test com password kello1234 components contact ford nguyen what has been implemented send a friend request to another user using their username accept decline contact requests remove an existing contact new contact request notification foreground and background remove contact from list remove it from chat room search for contacts extra features floating add button when trying to add with a username it will tell the user whether the contact with that username exists is already a friend or has a request pending works on either side bugs when deleting a contact from the list the contact if they are in favorites still persists until the user logs in again chat user setting look and feel and password finding gyubeom kim what has been implemented add a member into a chat room from the contact list add a new chat room with a user typed title remove an existing chat room able to view multiple chat rooms new message received notification if a user is in a different section foreground send and receive messages a user is able to change his her password color scheme is able to be dynamically switched what has not been implemented
notification for new chat room created recent chat room connects to home landing page extra features toast message when a user tries to add existing member into chat display corresponding message a user is already joined when a user tries to add non existing member into chat display corresponding message added dialog if a user typed different old password and all correct information the message will be popped up corresponding to the situation bugs when a user is added to chat there is no live update should click refresh button the toast message and dialog that i created for password finding and chat keep the previous status weather ivan mendez what have been implemented current weather conditions for an area 12 hour forecast for an area 5 day forecast for an area able to search weather by current location by zip code or by selecting an area on a map what have not been implemented ability to save locations no images for weather conditions extra features preview of 4 upcoming hour conditions on weather home page click for 12 hours preview of 4 upcoming day conditions on weather home page click for 5 days bugs day and hour still in iso8601 format weather home fragment duplicates data causing the recycler view to add double the cards weather home page reverts to first area search location email services d jared idler what was implemented functional validation registration with real emails sends a single use validation url one time waits on screen to register registers unique emails what was not implemented usage of username security measures to prevent intentional denial of service timeout between requests etc extra features sends email to change password to a temporary password if password forgotten password is changeable in the app upon login bugs back button not safe to use with connection to backend registering multiple emails in one go may be inconsistent ui for some screens may be inconsistently spaced or sized depending on device password post change may be 
inconsistent to reach because of hashing process for reviewer to register for an account please use an actual email address for the verification steps if you guys have any questions please email me at tng2101 uw edu do not hit the back arrows when registering an email don t lose the validation email either
front_end
thor
vechain thor a general purpose blockchain highly compatible with ethereum s ecosystem this is the first implementation written in golang go https img shields io badge golang 3e 3d1 17 orange svg https golang org go report card https goreportcard com badge github com vechain thor https goreportcard com report github com vechain thor github action status https github com vechain thor actions workflows test yaml badge svg license https img shields io badge license lgpl 20v3 blue svg https github com vechain thor blob master license nbsp nbsp tg https img shields io badge chat on 20telegram blue https t me vechaindevcommunity table of contents installation installation requirements requirements getting the source getting the source building building running thor running thor sub commands sub commands docker docker explorers explorers faucet testnet faucet restful api api acknowledgement acknowledgement contributing contributing installation requirements thor requires go 1 17 and c compiler to build to install go follow this link https golang org doc install getting the source clone the thor repo shell git clone https github com vechain thor git cd thor building to build the main app thor just run shell make or build the full suite shell make all if no errors are reported all built executable binaries will appear in folder bin running thor connect to vechain s mainnet shell bin thor network main connect to vechain s testnet shell bin thor network test or startup a custom network shell bin thor network custom net genesis json an example genesis config file can be found at genesis example json https raw githubusercontent com vechain thor master genesis example json to show usages of all command line options shell bin thor h network value the network to join main test or path to genesis file data dir value directory for block chain databases cache value megabytes of ram allocated to internal caching default 2048 beneficiary value address for block rewards target gas limit 
value target block gas limit adaptive if set to 0 default 0 api addr value api service listening address default localhost 8669 api cors value comma separated list of domains from which to accept cross origin requests to api api timeout value api request timeout value in milliseconds default 10000 api call gas limit value limit contract call gas default 50000000 api backtrace limit value limit the distance between position and best block for subscriptions apis default 1000 verbosity value log verbosity 0 9 default 3 max peers value maximum number of p2p network peers p2p network disabled if set to 0 default 25 p2p port value p2p network listening port default 11235 nat value port mapping mechanism any none upnp pmp extip lt ip gt default none bootnode value comma separated list of bootnode ids skip logs skip writing event transfer logs logs api will be disabled pprof turn on go pprof disable pruner disable state pruner to keep all history help h show help version v print the version sub commands solo client runs in solo mode for test dev shell create new block when there is pending transaction bin thor solo on demand save blockchain data to disk default to memory bin thor solo persist two options can work together bin thor solo persist on demand master key master key management shell print the master address bin thor master key export master key to keystore bin thor master key export keystore json import master key from keystore cat keystore json bin thor master key import docker docker is one quick way for running a vechain node shell docker run d v path to your data directory org vechain thor home thor org vechain thor p 127 0 0 1 8669 8669 p 11235 11235 p 11235 11235 udp name thor node vechain thor network test do not forget to add the api addr 0 0 0 0 8669 flag if you want other containers and or hosts to have access to the restful api thor binds to localhost by default and it will not accept requests outside the container itself without the flag release v2 0 4 
https github com vechain thor releases tag v2 0 4 changed the default user from root uid 0 to thor uid 1000 ensure that uid 1000 has rwx permissions on the data directory of the docker host you can do that with acl sudo setfacl r m u 1000 rwx path to your data directory or update ownership with sudo chown r 1000 1000 path to your data directory explorers vechain explorer official https explore vechain org vechainstats https vechainstats com insight https insight vecha in testnet faucet faucet vecha in https faucet vecha in by vechain foundation api once thor has started the online openapi doc can be accessed in your browser e g http localhost 8669 http localhost 8669 by default thorest https raw githubusercontent com vechain thor master thorest png http localhost 8669 acknowledgement a special shout out to the following projects ethereum https github com ethereum swagger https github com swagger api contributing thank you so much for considering helping out with the source code we welcome contributions from anyone on the internet and are grateful for even the smallest of fixes please fork fix commit and send a pull request for the maintainers to review and merge into the main code base forking thor when you fork the project github will make a copy of the project that is entirely yours it lives in your namespace and you can push to it getting ready for a pull request please check the following code must adhere to the official go formatting guidelines get the branch up to date by merging in any recent changes from the master branch making the pull request 1 on the github site go to code then click the green compare and review button your branch is probably in the example comparisons list so click on it if not select it for the compare branch 1 make sure you are comparing your new branch to master it probably won t be since the front page is the latest release branch rather than master now so click the base branch and change it to master 1 press create pull request
button 1 provide a brief title 1 explain the major changes you are asking to be code reviewed often it is useful to open a second tab in your browser where you can look through the diff yourself to remind yourself of all the changes you have made license vechain thor is licensed under the gnu lesser general public license v3 0 https www gnu org licenses lgpl 3 0 html also included in license file in repository
vechain blockchain node thor
blockchain
LambdaMUD-Client
lambdamud client front end for the lambdamud project https github com lambdaschool lambdamud project fork this repo and put your front end client code in your fork
front_end
LLMR-NLUP
important all dynamic ui code is generated by openai s model although measures have been taken to prevent xss attacks i cannot guarantee your safety please use your browser s private mode when opening this project and take other protective measures that you think are necessary before generating any code if you do not understand this sentence please do not run this project ui openai xss proof of concept natural language ui programming nlup and large language model rendering llmr https user images githubusercontent com 22794120 229517654 a8c31c15 d277 4a9b 8116 f71a562d2481 mp4 demo link https nlui lxfater com a simple demo of natural language ui programming nlup and large language model rendering llmr implemented purely on the front end using gpt 3 5 technology in the future large language models will enable everyone to have their own unique user interface and api and this is one way to achieve this vision nlup llmr demo gpt 3 5 api the future of software development large language model rendering llmr in one sentence anything that can be generated by large language models will ultimately be generated by large language models software as a form of content as a string will ultimately be generated by large language models when i first started web development front end pages were returned by back end template functions ssr later with frameworks like react and vue js we began to perform complex rendering on the client side giving rise to concepts like front end and back end separation spa mpa now a new rendering paradigm is emerging large language model rendering llmr in the future software will be generated by large language models that immediately call apis and generate user pages in this project we see the possibility of generating and further editing pages in the chatgpt plugin system we see the possibility of large language models immediately calling apis the combination of these two will create a unique software experience for users any demand that a user makes
can be fulfilled in real time by the large language models there will no longer be a separation between front end and back end only large language models will generate the software agile development will no longer be necessary as users software needs will be met in real time pages will no longer be composed by front end and apis will no longer be called by back end and we will no longer need to debug front end and back end together finally remember what i said anything that can be generated by large language models will ultimately be generated by large language models software as a form of content as a string will ultimately be generated by large language models ssr react vue spa mpa llmr api chatgpt plugin api api install and run nodejs npm install npm run dev usage click the generate button enter the page you want and the page will be generated move the mouse over the component you want to modify if the component can be modified a blue border will appear click it and a green border will appear then you can continue editing the component by entering natural language end the above is a personal immature idea and exploration let s meet in the issue area and there should be a lot of optimization space for the demo demo video bilibili the previous demo https www bilibili com video bv19x4y1r762 youtube https www youtube com watch v gg1i5snrdnc my twitter view the demo on my twitter https twitter com lxfater
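The readme's warning stresses that LLM-generated UI code is an XSS risk. As a deliberately naive illustration of why output sanitisation matters — this is not the project's actual defence, and a real application should use a proper HTML sanitiser rather than regexes — the following strips script blocks and inline on* event handlers from a generated snippet:

```python
import re

def naive_sanitize(html):
    # drop <script>...</script> blocks entirely
    html = re.sub(r"<script\b.*?</script>", "", html,
                  flags=re.IGNORECASE | re.DOTALL)
    # drop inline event handlers such as onclick="..."
    html = re.sub(r"\son\w+\s*=\s*(\"[^\"]*\"|'[^']*'|\S+)", "", html,
                  flags=re.IGNORECASE)
    return html

generated = '<button onclick="steal()">ok</button><script>steal()</script>'
print(naive_sanitize(generated))  # <button>ok</button>
```

Regex filtering like this is easy to bypass (e.g. via encoded attributes or SVG event handlers), which is exactly why the readme tells users to take their own precautions.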
front_end
reply-bot
reply bot smart reply bot implemented in python what does it do the objective of this project is to create a bot that suggests short responses to a text or email message using natural language processing a reply bot is a type of chatbot the goal of a chatbot is to mimic written or spoken speech so as to simulate a conversation with a human the scope of the bot is casual conversation from a high level there are two variants of chatbots rule based and self learning a rule based bot responds via pattern matching and rules a self learning bot responds using machine learning a retrieval based chatbot replies by locating the best response from a database corpus a generative based chatbot uses deep learning methods to respond and can generate a response it has never seen before in order to understand the tradeoffs between the different types of chatbots i implemented a rule based a retrieval based and a machine learning based reply bot built with python nltk https www nltk org tensorflow and tflearn design rule based bot a rule based chatbot is the simplest type of bot this type of bot searches for predefined patterns in the input and uses a set of rules if else logic to suggest replies the suggested replies in this case are predefined we can implement pattern matching on an input message by using regular expressions and applying rules with if else statements for example to search for a greeting we can use this as our regex pattern python greeting str hi hello sup hey good s morning afternoon evening if the pattern is found we suggest predefined greeting responses python greeting response hi hey hello altogether we have python if re search greeting str user input return greeting response for the reply bot i have defined 8 simple rules greeting goodbye how are you thank you do you will you can you would you are you when what s up each rule has an associated reply python greeting response hi hello hey goodbye response bye talk to you later thank response happy to help don t
mention it my pleasure inquiry response i m doing ok ok i ve been better future response yes no maybe what you response nothing not much are you response yes no maybe when response soon not now no response no suggestion retrieval based bot the goal of a retrieval bot is to find the most likely response r given the input i a retrieval based bot responds to input using a database of utterance response q r pairs the database is known as our corpus to generate a response we first need to classify the user input to one of our predefined classes if applicable once classified the response associated with that class is suggested below is our database corpus in this case it s a python dictionary which we will use for our self learning bots python classes dict classes dict greeting classes dict greeting pattern hi there hello hey good afternoon good morning good evening good day classes dict greeting response hi hey hello classes dict goodbye classes dict goodbye pattern bye goodbye see you later gotta go i have to go see you see ya talk to you later classes dict goodbye response bye talk to you later classes dict thanks classes dict thanks pattern thanks thank you classes dict thanks response you re welcome my pleasure don t mention it classes dict how are you classes dict how are you pattern how are you how are you doing how s it going classes dict how are you response i m doing ok ok i ve been better classes dict future classes dict future pattern will you can you would you do you classes dict future response yes no maybe classes dict are you classes dict are you pattern are you aren t you classes dict are you response yes no maybe classes dict when classes dict when pattern when will when can when would when should classes dict when response soon not now classes dict whats up classes dict whats up pattern sup what s up what up whats up what is going on what s going on what s happening what are you up to what are you doing classes dict whats up response nothing not much 
It can be seen that we have the 8 classes used to classify user input. Each value of classes_dict (i.e., greeting, goodbye, etc.) has sub-dictionaries pattern and response. pattern defines a list of expressions that we use to match user input to that class; response defines the suggested responses when a match to that class has been found.

For a retrieval-based bot, to determine the class of the user input we do not use pattern matching, but instead deconstruct the user input into a feature vector via text processing [4]. Basic text pre-processing includes:

* Converting text into uppercase or lowercase
* Tokenizing: converting normal text strings into a list of tokens
* Removing noise: removing everything that isn't a number or letter (i.e., punctuation)
* Stemming: reducing words to their stem, base, or root form
* Lemmatization: a variant of stemming. Stemming can create non-existent words, whereas lemmas are actual words. For example, the words "better" and "good" share the same lemma [5].
* Removing stop words: removing extremely common words that would be of little value to the algorithm

Once the user input is pre-processed, we transform it into a vector of numbers, one number for each word. We do this using term frequency-inverse document frequency (TF-IDF), which assigns each word a weight that is a statistical measure of how important that word is. We can then use cosine similarity to compare the user input to the corpus [1].

```python
remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation)

def lem_tokens(tokens):
    lemmer = nltk.stem.WordNetLemmatizer()
    return [lemmer.lemmatize(token) for token in tokens]

def lem_normalize(text):
    return lem_tokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict)))

# sent_tokens_dict maps each class to its pattern text; the user input goes last
sent_tokens_dict['user'] = user_input
sent_tokens = []
for value in sent_tokens_dict:
    sent_tokens.append(sent_tokens_dict[value])

tfidf_vec = TfidfVectorizer(tokenizer=lem_normalize)
tfidf = tfidf_vec.fit_transform(sent_tokens)
vals = cosine_similarity(tfidf[-1], tfidf)
idx = vals.argsort()[0][-2]
flat = vals.flatten()
flat.sort()
req_tfidf = flat[-2]
```

The class with the highest similarity to the user input is chosen, and the class response is suggested:

```python
error_threshold = 0.1

if req_tfidf < error_threshold:
    robo_response = 'No suggestion'
else:
    match_pattern = sent_tokens[idx]
    for value in sent_tokens_dict:
        pattern = sent_tokens_dict[value]
        if match_pattern == pattern:
            match_class = value
    robo_response = classes_dict[match_class]['response']
```

### Machine-Learning-Based Bot

This type of reply bot uses a neural network (NN) to suggest a reply. The NN classifies the user input into one of the pre-defined classes, and we suggest the replies associated with that class [2].

The first step in creating the input training data is to build an array of the words found in our corpus, which we call our bag of words. Recall all the class patterns found in the corpus:

```python
classes_dict['greeting']['pattern'] = ['hi there', 'hello', 'hey', 'good afternoon', 'good morning', 'good evening', 'good day']
classes_dict['goodbye']['pattern'] = ['bye', 'goodbye', 'see you later', 'gotta go', 'i have to go', 'see you', 'see ya', 'talk to you later']
classes_dict['thanks']['pattern'] = ['thanks', 'thank you']
classes_dict['how are you']['pattern'] = ['how are you', 'how are you doing', "how's it going"]
classes_dict['future']['pattern'] = ['will you', 'can you', 'would you', 'do you']
classes_dict['are you']['pattern'] = ['are you', "aren't you"]
classes_dict['when']['pattern'] = ['when will', 'when can', 'when would', 'when should']
classes_dict['whats up']['pattern'] = ['sup', "what's up", 'what up', 'whats up', 'what is going on', "what's going on", "what's happening", 'what are you up to', 'what are you doing']
```

We tokenize, stem, and sort these patterns to create a bag-of-words array. We have 42 known words in our corpus:

```python
my_words = ["'s", 'afternoon', 'ar', 'bye', 'can', 'day', 'do', 'doing', 'ev', 'go', 'going', 'good', 'goodby', 'got', 'hap', 'hav', 'hello', 'hey', 'hi', 'how', 'i', 'is', 'it', 'lat', 'morn', "n't", 'on', 'see', 'should', 'sup', 'ta', 'talk', 'thank', 'ther', 'to', 'up', 'what', 'when', 'wil', 'would', 'ya', 'you']
```

Likewise, we need to create a sorted array for our output classes:

```python
my_classes = ['are you', 'future', 'goodbye', 'greeting', 'how are you', 'thanks', 'whats up', 'when']
```

We can now create our training data. Let's use the "how are you" class as an example:

```python
classes_dict['how are you']['pattern'] = ['how are you', 'how are you doing', "how's it going"]
```

In this class, one pattern is "how are you doing". We can tokenize and stem this into a list:

```python
my_pattern = ['how', 'ar', 'you', 'doing']
```

We then create an input training list by appending a 1 at the indices where a my_pattern word is found in our list of known words (my_words), and a 0 otherwise. For the pattern "how are you doing", we fill bag with a 1 in the positions of 'ar', 'doing', 'how', and 'you':

```python
bag = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
```

Since this pattern belongs to the "how are you" class, we append the output as:

```python
output_row = [0, 0, 0, 0, 1, 0, 0, 0]
```

The architecture of our NN is simple, as can be seen below; we have only 2 hidden layers.

<img src="images/nn.jpg" width="1000">

We train this for 1,000 epochs. This actually trains quite fast; it takes less than 30 seconds to train completely.

Once the model is trained, we can use it to predict a class for user input. Again, we define an error threshold for when to offer a response; here it is 95%. Instead of using an error threshold to determine whether a message will get a smart reply, we could train a binary classifier on all messages to determine whether they will get a smart reply [1].

```python
def tokenize_input(sentence):
    # tokenize pattern
    sentence_words = nltk.word_tokenize(sentence)
    for some_word in sentence_words:
        sentence_words[sentence_words.index(some_word)] = stemmer.stem(some_word.lower())
    return sentence_words

def bag(user_input):
    input_words = tokenize_input(user_input)
    bag = [0] * len(my_words)
    for input_word in input_words:
        for i in range(0, len(my_words)):
            bag_word = my_words[i]
            if input_word == bag_word:
                bag[i] = 1
    return np.array(bag)

error_threshold = 0.95

def classify(user_input):
    # generate probabilities from the model
    results = model.predict([bag(user_input)])[0]
    filtered_results = []
    for i in range(0, len(results)):
        this_result = results[i]
        if this_result > error_threshold:
            filtered_results.append([i, this_result])
    filtered_results.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for i in range(0, len(filtered_results)):
        return_list.append((my_classes[filtered_results[i][0]], filtered_results[i][1]))
    # return tuple of intent and probability
    return return_list

some_array = classify(user_input)
if len(some_array) > 0:
    for response in classes_dict[some_array[0][0]]['response']:
        print(response)
```

## Usage

Execute the program:

```
python rule_based_reply.py
```

The program will then request a message:

```
Hello! Type in a message and I will suggest some replies. If you'd like to exit, please type quit.
```

Try different messages and the reply bot will generate suggested replies (or "No suggestion"):

```
hello
Hi
Hey
Hello
To quit, type quit.
quit
```

## Thoughts

It's obvious that no one type of reply bot works best. A retrieval-based bot performs poorly for casual conversation and works better for factual queries. For casual conversation, a hybrid implementation (rule-based plus machine-learning-based) may work best.

## Author

Laura Kocubinski [laurakoco](https://github.com/laurakoco)

## Acknowledgments

* Boston University MET Master of Science in Computer Science Program
* MET CS 664 Artificial Intelligence

[1] https://medium.com/analytics-vidhya/building-a-simple-chatbot-in-python-using-nltk-7c8c8215ac6e

[2] https://chatbotsmagazine.com/contextual-chat-bots-with-tensorflow-4391749d0077
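For readers who want to experiment with the bag-of-words input encoding used by the machine-learning bot, here is a minimal, self-contained sketch. The tokenizer is a simplified stand-in for the NLTK tokenize-and-stem pipeline (lowercasing and splitting on non-letters, no real stemming), and the vocabulary is a toy list rather than the 42-word my_words array.

```python
import re

def tokenize(sentence):
    # Simplified tokenizer: lowercase, keep only letter/apostrophe runs.
    return re.findall(r"[a-z']+", sentence.lower())

def bag_of_words(sentence, known_words):
    """Encode a sentence as a 0/1 vector over a sorted vocabulary."""
    tokens = set(tokenize(sentence))
    return [1 if word in tokens else 0 for word in known_words]

# Toy vocabulary (hypothetical, not the project's my_words).
known_words = sorted(["are", "bye", "doing", "hello", "how", "you"])
print(known_words)                                   # ['are', 'bye', 'doing', 'hello', 'how', 'you']
print(bag_of_words("How are you doing?", known_words))  # [1, 0, 1, 0, 1, 1]
```

Each training example for the NN is just such a vector, paired with a one-hot output row indicating its class.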
natural-language-processing chatbot reply-bot nltk tensorflow python machine-learning artificial-intelligence
ai
stockmarket
# Allen Young's Stockmarket

An online private stock-market web app demonstrating Allen Young's frontend, backend, and full-stack web app development and deployment skills.

This is a small full-stack web app that enables purchasing one or more private shares of a company online; PayPal is used as the payment processor.

Because the primary purpose of this web app is to demonstrate Allen Young's web app development and deployment skills and know-how, it is implemented in multiple editions, each built with a different combination of web app development libraries, frameworks, and tools, and of deployment-automation software and platforms.

The different editions of this web app are implemented with the following:

* Database management systems: SQLite3, PostgreSQL, MySQL, MariaDB, Redis, Apache Cassandra, MongoDB, Oracle, Microsoft SQL Server (T-SQL)
* Frontend implementations: React, Angular, Vue
* Backend implementations: Express.js, Flask, Gin, Laravel, NestJS, Next.js, Roda, C# (.NET Core), Java (Spring Boot), Kotlin (Spring Boot), Kotlin (Ktor)
* Full-stack implementations: Django, Rails, Laravel
* CI/CD tools: Jenkins, Ansible, GitLab, AWS CI/CD, Microsoft Azure CI/CD
* Deployment platforms: AWS Lightsail, AWS Fargate, Microsoft Azure, Google Cloud, Oracle Cloud, IBM Cloud

Not all the implementations are done yet; over the next one to two months, Allen Young will complete and publish all of them. A different implementation combination can be deployed by assigning the relevant settings in the CI/CD tool of choice.
server