Dataset column summary:

| Column  | Type          | Range    |
|---------|---------------|----------|
| names   | stringlengths | 1–98     |
| readmes | stringlengths | 8–608k   |
| topics  | stringlengths | 0–442    |
| labels  | stringclasses | 6 values |
bitcoind-ncurses2
# bitcoind-ncurses2 v0.3.1

Python ncurses front-end for bitcoind. Uses the JSON-RPC API.

![screenshot](img/bitcoind-ncurses2.gif)

esotericnonsense (Daniel Edgecumbe)

## Dependencies

Developed with Python 3.6.2, Bitcoin Core 0.15.0.1, and the PyPI packages aiohttp and async-timeout (see requirements.txt).

## Features

* Updating monitor mode showing bitcoind's status, including current block information: hash, height, fees, timestamp, age, diff
* Basic block explorer with fast seeking, no external DB required
* Basic transaction viewer with fast seeking, best with txindex=1
* Ability to query blocks by hash or height, and transactions by txid
* Wallet transaction and balance viewer
* Charting network monitor
* Peer connection information
* Basic debug console functionality

## Installation and usage

```
git clone https://github.com/esotericnonsense/bitcoind-ncurses2
pip3 install -r bitcoind-ncurses2/requirements.txt
```

or, on Arch Linux:

```
pacman -S python-aiohttp python-async-timeout
```

then:

```
cd bitcoind-ncurses2
python3 main.py
```

bitcoind-ncurses2 will automatically use the cookie file available in ~/.bitcoin, or the RPC settings in ~/.bitcoin/bitcoin.conf. To use a different datadir, specify the --datadir flag:

```
python3 main.py --datadir /some/path/to/your/datadir
```

This is an early development release and a complete rewrite of the original bitcoind-ncurses. Expect the unexpected.

## Feedback

Please report any problems using the GitHub issue tracker. Pull requests are also welcomed. The author, esotericnonsense, can often be found milling around on #bitcoin (Freenode).

## Donations

If you have found bitcoind-ncurses2 useful, please consider donating. All funds go towards the operating costs of my hardware and future Bitcoin development projects.

![donation QR](img/3byfucunvnhzjudf6tzweuz5r9ppjpecrv.png)

bitcoin: 3byfucunvnhzjudf6tzweuz5r9ppjpecrv
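The cookie-based RPC authentication mentioned above can be illustrated with a short sketch. This is not code from bitcoind-ncurses2 itself, only a hedged illustration; it assumes the standard `.cookie` file format (`__cookie__:<random password>`) that Bitcoin Core writes into its datadir when no rpcuser/rpcpassword is configured.

```python
import base64
from pathlib import Path

def rpc_auth_header(datadir: str) -> str:
    """Build an HTTP Basic auth header from Bitcoin Core's .cookie file.

    The cookie file contains a single line of the form
    '__cookie__:<random password>'.
    """
    cookie = Path(datadir, ".cookie").read_text().strip()
    user, _, password = cookie.partition(":")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"
```

A JSON-RPC client would attach this header to each POST request against bitcoind's RPC port (8332 on mainnet by default).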
Label: front_end
ece465_at_cooper
# ece465_at_cooper

Repository used in the delivery of the course ECE 465: Cloud Computing at The Cooper Union (cooper.edu). The course description can be found at https://cooper.edu/engineering/courses/electrical-and-computer-engineering-graduate/ece-465.
Label: cloud
PyImageSearch-CV-DL-CrashCourse
# PyImageSearch-CV-DL-CrashCourse

Repository for the free computer vision, deep learning, and OpenCV crash course.

Course URL: https://www.pyimagesearch.com/free-opencv-computer-vision-deep-learning-crash-course/

## Environment configuration

The development environment configuration was based on the guide [How to install TensorFlow 2.0 on Ubuntu](https://www.pyimagesearch.com/2019/12/09/how-to-install-tensorflow-2-0-on-ubuntu/) from the PyImageSearch blog. Alternatively, you can check `environment.yml` or `requirements.txt`.

## Course

### Day 1: Face detection with OpenCV and deep learning

Link: https://www.pyimagesearch.com/2018/02/26/face-detection-with-opencv-and-deep-learning/
Folder: `01-deep-learning-face-detection`

Face detection with images:

```
python detect_faces.py --image images/rooster.jpg --prototxt model/deploy.prototxt.txt --model model/res10_300x300_ssd_iter_140000.caffemodel
python detect_faces.py --image images/iron_chic.jpg --prototxt model/deploy.prototxt.txt --model model/res10_300x300_ssd_iter_140000.caffemodel
```

Face detection with webcam:

```
python detect_faces_video.py --prototxt model/deploy.prototxt.txt --model model/res10_300x300_ssd_iter_140000.caffemodel
```

### Day 2: OpenCV tutorial: a guide to learn OpenCV

Link: https://www.pyimagesearch.com/2018/07/19/opencv-tutorial-a-guide-to-learn-opencv/
Folder: `02-opencv-tutorial`

```
python opencv_tutorial_01.py
```

Counting objects:

```
python opencv_tutorial_02.py --image images/tetris_blocks.png
```

### Day 3: Document scanner

Link: https://www.pyimagesearch.com/2014/09/01/build-kick-ass-mobile-document-scanner-just-5-minutes/
Folder: `03-document-scanner`

```
python scan.py --image images/page.jpg
```

### Day 4: Bubble sheet multiple-choice scanner and test grader using OMR

Link: https://www.pyimagesearch.com/2016/10/03/bubble-sheet-multiple-choice-scanner-and-test-grader-using-omr-python-and-opencv/
Folder: `04-omr-test-grader`

```
python test_grader.py --image images/test_01.png
```

### Day 5: Ball tracking with OpenCV

Link: https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/
Folder: `05-ball-tracking`

Using video:

```
python ball_tracking.py --video ball_tracking_example.mp4
```

Using webcam:

```
python ball_tracking.py
```

Note: to see any results, you will need a green object with the same HSV color range as the one used in this demo.

### Day 6: Measuring size of objects in an image with OpenCV

Link: https://www.pyimagesearch.com/2016/03/28/measuring-size-of-objects-in-an-image-with-opencv/
Folder: `06-size-of-objects`

```
python object_size.py --image images/example_01.png --width 0.955
python object_size.py --image images/example_02.png --width 0.955
python object_size.py --image images/example_03.png --width 3.5
```

### Day 8: Facial landmarks with dlib, OpenCV, and Python

Link: https://www.pyimagesearch.com/2017/04/03/facial-landmarks-dlib-opencv-python/
Folder: `08-facial-landmarks`

```
python facial_landmarks.py --shape-predictor model/shape_predictor_68_face_landmarks.dat --image images/example_01.jpg
python facial_landmarks.py --shape-predictor model/shape_predictor_68_face_landmarks.dat --image images/example_02.jpg
python facial_landmarks.py --shape-predictor model/shape_predictor_68_face_landmarks.dat --image images/example_03.jpg
```

### Day 9: Eye blink detection with OpenCV, Python, and dlib

Link: https://www.pyimagesearch.com/2017/04/24/eye-blink-detection-opencv-python-dlib/
Folder: `09-blink-detection`

```
python detect_blinks.py --shape-predictor model/shape_predictor_68_face_landmarks.dat --video videos/blink_detection_demo.mp4
```

### Day 10: Drowsiness detection with OpenCV

Link: https://www.pyimagesearch.com/2017/05/08/drowsiness-detection-opencv/
Folder: `10-detect-drowsiness`

```
python detect_drowsiness.py --shape-predictor model/shape_predictor_68_face_landmarks.dat --alarm sounds/alarm.wav
```

### Day 12: A simple neural network with Python and Keras

Link: https://www.pyimagesearch.com/2016/09/26/a-simple-neural-network-with-python-and-keras/
Folder: `12-simple-neural-network`

Note: create a folder structure called `kaggle_dogs_vs_cats/train`, download the [training dataset](https://www.kaggle.com/c/dogs-vs-cats/data), and put the images into the `train` folder.

Training:

```
python simple_neural_network.py --dataset kaggle_dogs_vs_cats --model output/simple_neural_network.hdf5
```

Test:

```
python test_network.py --model output/simple_neural_network.hdf5 --test-images test_images
```

### Day 13: Deep learning with OpenCV

Link: https://www.pyimagesearch.com/2017/08/21/deep-learning-with-opencv/
Folder: `13-deep-learning-opencv`

```
python deep_learning_with_opencv.py --image images/jemma.png --prototxt model/bvlc_googlenet.prototxt --model model/bvlc_googlenet.caffemodel --labels model/synset_words.txt
python deep_learning_with_opencv.py --image images/traffic_light.png --prototxt model/bvlc_googlenet.prototxt --model model/bvlc_googlenet.caffemodel --labels model/synset_words.txt
python deep_learning_with_opencv.py --image images/eagle.png --prototxt model/bvlc_googlenet.prototxt --model model/bvlc_googlenet.caffemodel --labels model/synset_words.txt
python deep_learning_with_opencv.py --image images/vending_machine.png --prototxt model/bvlc_googlenet.prototxt --model model/bvlc_googlenet.caffemodel --labels model/synset_words.txt
```

### Day 14: How to (quickly) build a deep learning image dataset

Link: https://www.pyimagesearch.com/2018/04/09/how-to-quickly-build-a-deep-learning-image-dataset/
Folder: `14-search-bing-api`

```
python search_bing_api.py --query "pokemon class to search" --output dataset/pokemon_class_to_search
```

### Day 15: Keras and Convolutional Neural Networks (CNNs)

Link: https://www.pyimagesearch.com/2018/04/16/keras-and-convolutional-neural-networks-cnns/
Folder: `15-cnn-keras`

Training:

```
python train.py --dataset dataset --model pokedex.model --labelbin lb.pickle
```

Testing:

```
python classify.py --model pokedex.model --labelbin lb.pickle --image examples/charmander_counter.png
python classify.py --model pokedex.model --labelbin lb.pickle --image examples/bulbasaur_plush.png
python classify.py --model pokedex.model --labelbin lb.pickle --image examples/mewtwo_toy.png
python classify.py --model pokedex.model --labelbin lb.pickle --image examples/pikachu_toy.png
python classify.py --model pokedex.model --labelbin lb.pickle --image examples/squirtle_plush.png
python classify.py --model pokedex.model --labelbin lb.pickle --image examples/charmander_hidden.png
```

### Day 16: Real-time object detection with deep learning and OpenCV

Link: https://www.pyimagesearch.com/2017/09/18/real-time-object-detection-with-deep-learning-and-opencv/
Folder: `16-real-time-object-detection`

```
python real_time_object_detection.py --prototxt model/MobileNetSSD_deploy.prototxt.txt --model model/MobileNetSSD_deploy.caffemodel
```

Credits to Adrian Rosebrock on http://www.pyimagesearch.com
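The SSD-style detectors used on Days 1 and 16 return a detections array of shape (1, 1, N, 7), where each row holds (image_id, class_id, confidence, x1, y1, x2, y2) with box coordinates normalized to [0, 1]. A minimal, framework-free sketch of the confidence-filtering and box-scaling step common to those scripts (the model inference itself is omitted; the array below is hand-made for illustration, and `min_confidence` mirrors the tutorials' default threshold):

```python
import numpy as np

def filter_detections(detections, width, height, min_confidence=0.5):
    """Keep detections above min_confidence and scale their boxes to pixels."""
    boxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > min_confidence:
            # Normalized (x1, y1, x2, y2) -> pixel coordinates.
            box = detections[0, 0, i, 3:7] * np.array([width, height, width, height])
            boxes.append((float(confidence), box.astype(int)))
    return boxes

# A fabricated detections array with one confident and one weak detection.
dets = np.zeros((1, 1, 2, 7), dtype=np.float32)
dets[0, 0, 0] = [0, 1, 0.98, 0.1, 0.2, 0.5, 0.6]   # confident face
dets[0, 0, 1] = [0, 1, 0.10, 0.0, 0.0, 0.1, 0.1]   # background noise
print(filter_detections(dets, 300, 300))  # only the confident box survives
```

In the real scripts `detections` comes from `net.forward()` after loading the Caffe model with OpenCV's DNN module; the filtering step is the same.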
Label: ai
LLM_model
# LLM Zoo: democratizing ChatGPT

<div align="center"><img src="assets/zoo.png" width="640" alt="zoo"></div>

LLM Zoo is a project that provides data, models, and an evaluation benchmark for large language models. [Tech report](assets/llmzoo.pdf)

## Latest news

* [05/05/2023] Released the training code. Now you can replicate a multilingual instruction-following LLM by yourself.
* [04/24/2023] Added more results (e.g., MOSS) to the evaluation benchmark.
* [04/08/2023] Released the Phoenix (for all languages) and Chimera (for Latin languages) models.

## Motivation

* Break "AI supremacy" and democratize ChatGPT. "AI supremacy" is understood as a company's absolute leadership and monopoly position in an AI field, which may even include exclusive capabilities beyond general artificial intelligence. This is unacceptable for the AI community and may even lead to individual influence on the direction of the human future, bringing various hazards to human society.
* Make ChatGPT-like LLMs accessible across countries and languages.
* Make AI open again. Every person, regardless of their skin color or place of birth, should have equal access to the technology gifted by the creator. For example, many pioneers have made great efforts to spread the use of light bulbs and vaccines to developing countries. Similarly, ChatGPT, one of the greatest technological advancements in modern history, should also be made available to all.

## Get started

### Install

Run the following command to install the required packages:

```
pip install -r requirements.txt
```

### CLI inference

```
python -m llmzoo.deploy.cli --model-path /path/to/weights
```

For example, for Phoenix, run:

```
python -m llmzoo.deploy.cli --model-path FreedomIntelligence/phoenix-inst-chat-7b
```

and it will download the model from Hugging Face automatically. For Chimera, please follow [this instruction](https://github.com/FreedomIntelligence/LLMZoo#chimera-llm-mainly-for-latin-and-cyrillic-languages) to prepare the weights. Check [Deployment](#deployment) for deploying a web application.

## Data

Overview: we used the following two types of data for training Phoenix and Chimera.

<details><summary><b>Instruction data</b></summary>

* Multilingual instructions (language-agnostic instructions with post-translation):
  * Self-instructed / translated: instruction and input in language A
  * Step 1 (translation): instruction and input in language B (B is randomly sampled w.r.t. the probability distribution of realistic languages)
  * Step 2: generate output in language B
* User-centered instructions:
  * Role, instruction, input seeds
  * Step 1 (self-instruct): role, instruction, input samples
  * Step 2 (generate output / instruct): role, instruction, input, output

</details>

<details><summary><b>Conversation data</b></summary>

* User-shared conversations: ChatGPT conversations shared on the Internet
  * Step 1 (crawl): multi-round conversation data

</details>

* Check [InstructionZoo](https://github.com/FreedomIntelligence/InstructionZoo) for the collection of instruction datasets.
* Check [GPT-API-Accelerate](https://github.com/FreedomIntelligence/GPT-API-Accelerate) for faster data generation using ChatGPT.
* Download [phoenix-sft-data-v1](https://huggingface.co/datasets/FreedomIntelligence/phoenix-sft-data-v1), the data used for training the Phoenix and Chimera models.

## Overview of existing models

| Model | Backbone | #Params | Claimed language | Post-training (instruction) | Post-training (conversation) | Release date |
|---|---|---|---|---|---|---|
| ChatGPT | - | - | multi | - | - | 11/30/22 |
| Wenxin | - | - | zh | - | - | 03/16/23 |
| ChatGLM | GLM | 6B | en, zh | - | - | 03/16/23 |
| Alpaca | LLaMA | 7B | en | 52K, en | - | 03/13/23 |
| Dolly | GPT-J | 6B | en | 52K, en | - | 03/24/23 |
| BELLE | BLOOMZ | 7B | zh | 1.5M, zh | - | 03/26/23 |
| Guanaco | LLaMA | 7B | en, zh, ja, de | 534K, multi | - | 03/26/23 |
| Chinese-LLaMA-Alpaca | LLaMA | 7/13B | en, zh | 2M/3M, en/zh | - | 03/28/23 |
| LuoTuo | LLaMA | 7B | zh | 52K, zh | - | 03/31/23 |
| Vicuna | LLaMA | 7/13B | en | - | 70K, multi | 03/13/23 |
| Koala | LLaMA | 13B | en | 355K, en | 117K, en | 04/03/23 |
| Baize | LLaMA | 7/13/30B | en | 52K, en | 111.5K, en | 04/04/23 |
| Phoenix (ours) | BLOOMZ | 7B | multi | 40 | 40 | 04/08/23 |
| Latin Phoenix: Chimera (ours) | LLaMA | 7/13B | multi (Latin) | Latin | Latin | 04/08/23 |

<details><summary><b>The key difference between existing models and ours</b></summary>

The key difference in our models is that we utilize two sets of data, namely instructions and conversations, which were previously used only by Alpaca and Vicuna respectively. We believe that incorporating both types of data is essential for a recipe to achieve a proficient language model: the instruction data helps tame language models to adhere to human instructions and fulfill their information requirements, while the conversation data facilitates the development of conversational skills in the model. Together, these two types of data complement each other to create a more well-rounded language model.

</details>

## Phoenix: LLM across languages

<details><summary><b>The philosophy to name</b></summary>

The first model is named Phoenix. In Chinese culture, the phoenix is commonly regarded as a symbol of the king of birds; as the saying goes, it coordinates with all birds, even if they speak different languages. We refer to Phoenix as the one capable of understanding and speaking hundreds of (bird) languages. More importantly, the phoenix is the totem of the Chinese University of Hong Kong, Shenzhen (CUHKSZ); it goes without saying that this is also for the Chinese University of Hong Kong (CUHK).

</details>

| Model | Backbone | Data | Link |
|---|---|---|---|
| phoenix-chat-7b | BLOOMZ-7b1-mt | Conversation | [parameters](https://huggingface.co/FreedomIntelligence/phoenix-chat-7b) |
| phoenix-inst-chat-7b | BLOOMZ-7b1-mt | Instruction + Conversation | [parameters](https://huggingface.co/FreedomIntelligence/phoenix-inst-chat-7b) |
| phoenix-inst-chat-7b-int4 | BLOOMZ-7b1-mt | Instruction + Conversation | [parameters](https://huggingface.co/FreedomIntelligence/phoenix-inst-chat-7b-int4) |

## Chimera: LLM mainly for Latin and Cyrillic languages

<details><summary><b>The philosophy to name</b></summary>

The biggest barrier to naming an LLM is that we do not have enough candidate names: Llama, Guanaco, Vicuna, and Alpaca have already been used, and there are no more members in the camel family. Therefore, we found a similar hybrid creature in Greek mythology: [Chimera](https://en.wikipedia.org/wiki/Chimera_(mythology)), composed of different animal parts from Lycia, Asia Minor. Coincidentally, it is a hero role in Dota and also in Warcraft III; it could therefore be used to memorize a period of playing games overnight during high school and undergraduate times.

</details>

| Model | Backbone | Data | Link |
|---|---|---|---|
| chimera-chat-7b | LLaMA-7b | Conversation | [parameters (delta)](https://huggingface.co/FreedomIntelligence/chimera-chat-7b-delta) |
| chimera-chat-13b | LLaMA-13b | Conversation | [parameters (delta)](https://huggingface.co/FreedomIntelligence/chimera-chat-13b-delta) |
| chimera-inst-chat-7b | LLaMA-7b | Instruction + Conversation | [parameters (delta)](https://huggingface.co/FreedomIntelligence/chimera-inst-chat-7b-delta) |
| chimera-inst-chat-13b | LLaMA-13b | Instruction + Conversation | [parameters (delta)](https://huggingface.co/FreedomIntelligence/chimera-inst-chat-13b-delta) |

Due to LLaMA's license restrictions, we follow [FastChat](https://github.com/lm-sys/FastChat) in releasing our delta weights. To use Chimera, download the [original LLaMA weights](https://huggingface.co/docs/transformers/main/model_doc/llama) and run the script:

```
python tools/apply_delta.py \
  --base /path/to/llama-13b \
  --target /output/path/to/chimera-inst-chat-13b \
  --delta FreedomIntelligence/chimera-inst-chat-13b-delta
```

## CAMEL: Chinese And Medically Enhanced Language models

<details><summary><b>The philosophy to name</b></summary>

Its Chinese name is [HuatuoGPT](https://github.com/FreedomIntelligence/HuatuoGPT), to commemorate the great Chinese physician Hua Tuo, who lived around 200 AD. Training is already finished; we will release it in two weeks (some effort is needed to deploy it on public cloud servers in case of massive requests).

</details>

Check our models in [HuatuoGPT](https://github.com/FreedomIntelligence/HuatuoGPT) or try our [demo](https://www.huatuogpt.cn) (API key required). Similar biomedical models can be found in [Biomedical LLMs](assets/biomedical_models.md).

<details><summary><b>More models in the future</b></summary>

* Legal GPT (coming soon)
* Vision-language models (coming soon)
* Retrieval-augmented models (coming soon)

</details>

## Evaluation and benchmark

We provide a bilingual, multidimensional comparison of different open-source models with ours.

### Chinese automatic evaluation (using GPT-4)

| Comparison | Ratio |
|---|---|
| phoenix-inst-chat-7b vs. ChatGPT | 85.2% |
| phoenix-inst-chat-7b vs. ChatGLM-6b | 94.6% |
| phoenix-inst-chat-7b vs. Baidu Wenxin | 96.8% |
| phoenix-inst-chat-7b vs. MOSS-moon-003-sft | 109.7% |
| phoenix-inst-chat-7b vs. BELLE-7b-2m | 122.7% |
| phoenix-inst-chat-7b vs. Chinese-Alpaca-7b | 135.3% |
| phoenix-inst-chat-7b vs. Chinese-Alpaca-13b | 125.2% |

Observation: phoenix-inst-chat-7b achieves 85.2% of ChatGPT's performance in Chinese. It slightly underperforms Baidu Wenxin (96.8%) and ChatGLM-6b (94.6%), both of which are not fully open-source (ChatGLM-6b only provides model weights, without training data and details). Although Phoenix is a multilingual LLM, it achieves SOTA performance among all open-source Chinese LLMs.

### Human evaluation

| Comparison | Win | Tie | Lose |
|---|---|---|---|
| Phoenix vs. ChatGPT | 12 | 35 | 53 |
| Phoenix vs. ChatGLM-6b | 36 | 11 | 53 |
| Phoenix vs. Baidu Wenxin | 29 | 25 | 46 |
| Phoenix vs. BELLE-7b-2m | 55 | 31 | 14 |
| Phoenix vs. Chinese-Alpaca-13b | 56 | 31 | 13 |

Observation: the human evaluation results show the same trend as the automatic evaluation results.

### English automatic evaluation (using GPT-4)

| Comparison | Ratio |
|---|---|
| chimera-chat-7b vs. ChatGPT | 85.2% |
| chimera-chat-13b vs. ChatGPT | 92.6% |
| chimera-inst-chat-13b vs. ChatGPT | 96.6% |

## Quantization

We offer int8 and int4 quantizations, which largely reduce GPU memory consumption (e.g., from 28 GB to 7 GB for Phoenix).

### int8

You can directly obtain the int8 version of Phoenix by passing `--load-8bit` in CLI inference, e.g.:

```
python -m llmzoo.deploy.cli --model-path FreedomIntelligence/phoenix-inst-chat-7b --load-8bit
```

### int4

For the int4 version, we take advantage of GPTQ. You can directly obtain the int4 version of Phoenix by passing the int4-version model and `--load-4bit` in CLI inference. This requires the package AutoGPTQ to be installed, e.g.:

```
python -m llmzoo.deploy.cli --model-path FreedomIntelligence/phoenix-inst-chat-7b-int4 --load-4bit
```

We use [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) to support Phoenix, via:

```
BUILD_CUDA_EXT=0 pip install auto-gptq[triton]
```

For Chimera, we cannot share the int4-version parameters due to restrictions; you can follow the example in our [patched AutoGPTQ](https://github.com/GeneZC/AutoGPTQ-triton/tree/main/examples) to conduct quantization by yourself. Thanks to [Yhyu13](https://github.com/Yhyu13): please check the merged weight and the GPTQ-quantized weight for Chimera in [chimera-inst-chat-13b-hf](https://huggingface.co/Yhyu13) and [chimera-inst-chat-13b-gptq-4bit](https://huggingface.co/Yhyu13/chimera-inst-chat-13b-gptq-4bit).

## Deployment

Launch a controller:

```shell
python -m llmzoo.deploy.webapp.controller
```

Launch a model worker:

```shell
python -m llmzoo.deploy.webapp.model_worker --model-path /path/to/weights
```

Launch a Gradio web server:

```shell
python -m llmzoo.deploy.webapp.gradio_web_server
```

Now you can open your browser and chat with a model.

## Training by yourself

Prepare the data: you can either download the [phoenix-sft-data-v1](https://huggingface.co/datasets/FreedomIntelligence/phoenix-sft-data-v1) data or prepare your own, and put it at the path `data/data.json`.

Training: for Phoenix, run:

```shell
bash scripts/train_phoenix_7b.sh
```

For Chimera, prepare the LLaMA weights following [this instruction](https://huggingface.co/docs/transformers/main/model_doc/llama) and run:

```shell
bash scripts/train_chimera_7b.sh
bash scripts/train_chimera_13b.sh
```

## Limitations

Our goal in releasing our models is to assist our community in better replicating ChatGPT/GPT-4; we are not targeting competition with other competitors, as benchmarking models is a challenging task. Our models face limitations similar to those of ChatGPT/GPT-4, which include:

* Lack of common sense: our models may not always have the ability to apply common-sense knowledge to situations, which can lead to nonsensical or inappropriate responses.
* Limited knowledge domain: our models' knowledge is based on the data they were trained on, and they may not be able to provide accurate or relevant responses outside that domain.
* Biases: our models may have biases that reflect the biases in the data they were trained on, which can result in unintended consequences or unfair treatment.
* Inability to understand emotions: while our models can understand language, they may not always be able to understand the emotional tone behind it, which can lead to inappropriate or insensitive responses.
* Misunderstandings due to context: our models may misunderstand the context of a conversation, leading to misinterpretation and incorrect responses.

## Contributors

LLM Zoo is mainly contributed by:

* Data and model: [Zhihong Chen](https://zhjohnchan.github.io), Junying Chen, Hongbo Zhang, [Feng Jiang](https://fjiangai.github.io), [Chen Zhang](https://genezc.github.io), [Benyou Wang](https://wabyking.github.io/old.html) (advisor)
* Evaluation: [Fei Yu](https://github.com/oakyu), Tiannan Wang, Guiming Chen
* Others: Zhiyi Zhang, Jianquan Li, and Xiang Wan

As an open-source project, we are open to contributions. Feel free to contribute if you have any ideas or find any issues.

## Acknowledgement

We are aware that our work is inspired by the following works, including but not limited to:

* LLaMA: https://github.com/facebookresearch/llama
* BLOOM: https://huggingface.co/bigscience/bloom
* Self-Instruct: https://github.com/yizhongw/self-instruct
* Alpaca: https://github.com/tatsu-lab/stanford_alpaca
* Vicuna: https://github.com/lm-sys/FastChat

Without these, nothing could happen in this repository.

## Citation

```
@article{phoenix-2023,
  title={Phoenix: Democratizing ChatGPT across Languages},
  author={Zhihong Chen and Feng Jiang and Junying Chen and Tiannan Wang and Fei Yu and Guiming Chen and Hongbo Zhang and Juhao Liang and Chen Zhang and Zhiyi Zhang and Jianquan Li and Xiang Wan and Benyou Wang and Haizhou Li},
  journal={arXiv preprint arXiv:2304.10453},
  year={2023}
}

@misc{llm-zoo-2023,
  title={LLM Zoo: democratizing ChatGPT},
  author={Zhihong Chen and Junying Chen and Hongbo Zhang and Feng Jiang and Guiming Chen and Fei Yu and Tiannan Wang and Juhao Liang and Chen Zhang and Zhiyi Zhang and Jianquan Li and Xiang Wan and Haizhou Li and Benyou Wang},
  year={2023},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/FreedomIntelligence/LLMZoo}},
}
```

We are from the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHKSZ), and the Shenzhen Research Institute of Big Data (SRIBD).

## Star history

[![Star History Chart](https://api.star-history.com/svg?repos=FreedomIntelligence/LLMZoo&type=Date)](https://star-history.com/#FreedomIntelligence/LLMZoo&Date)
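The delta-weight release described above amounts to element-wise addition: the published delta is (target minus base), so recovering the model is `base + delta` for every parameter tensor. A minimal sketch with plain NumPy "state dicts" (the real `tools/apply_delta.py` operates on Hugging Face checkpoints; the tensors here are toy stand-ins for illustration only):

```python
import numpy as np

def apply_delta(base_state, delta_state):
    """Recover target weights: target[k] = base[k] + delta[k] for each tensor."""
    assert base_state.keys() == delta_state.keys(), "checkpoints must share layout"
    return {k: base_state[k] + delta_state[k] for k in base_state}

# Toy 'checkpoints': one weight matrix and one bias vector.
base = {"w": np.array([[1.0, 2.0], [3.0, 4.0]]), "b": np.array([0.5, -0.5])}
delta = {"w": np.array([[0.1, -0.1], [0.0, 0.2]]), "b": np.array([0.0, 0.5])}
target = apply_delta(base, delta)
```

This scheme lets the project publish Chimera without redistributing LLaMA itself: anyone holding the original base weights can reconstruct the fine-tuned model locally.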
Label: ai
Embedded-System-Course
Embedded Systems course.
Topics: embedded-systems, raspberry-pi
Label: os
IT-1st
IT 1st: Information Technology Development (ITD).
Label: server
mobile-application-development-fbla
Mobile Application Development.
Label: front_end
Wecode_MA_2
# Wecode_MA_2

List of the top developers of the WeCode Mobile Application Development Bootcamp (Rwanga Foundation).

- Shene Mahmood M. Rashid — [GitHub](https://github.com/nina2498) · [LinkedIn](https://www.linkedin.com/in/shene-mahmood-38a304222) · [Stack Overflow](https://stackoverflow.com/users/19234255/shene-m-rashid)
- Osama Hatam — [GitHub](https://github.com/osamaahatam) · [LinkedIn](https://www.linkedin.com/in/osama-hatam-a7161b1b0) · [Stack Overflow](https://stackoverflow.com/users/19226124/osama-hatam)
- Aram Mohamad Esmail — [GitHub](https://github.com/arammohamd) · [LinkedIn](https://www.linkedin.com/in/aram-muhamad-487334240) · [Stack Overflow](https://stackoverflow.com/users/19230827/aram-mohamad)
- Miran Amanj Asaad — [GitHub](https://github.com/miran18-prog) · [LinkedIn](https://www.linkedin.com/in/miran-amanj-77181b165) · [Stack Overflow](https://stackoverflow.com/users/14818848/miran-amanj)
- Alina Sdqi Mohammed — [GitHub](https://github.com/alinnaaa) · [LinkedIn](https://www.linkedin.com/in/alina-sdqi-7a6a69217)
- Ali Salim — [GitHub](https://github.com/alisalimalii) · [LinkedIn](https://www.linkedin.com/mwlite/in/ali-salim-b42464240) · [Stack Overflow](https://stackoverflow.com/users/19228038/ali-salim?tab=profile)
- Astera Mohammed Noori — [GitHub](https://github.com/astera96) · [LinkedIn](https://www.linkedin.com/in/astera-mohammed-96732a240) · [Stack Overflow](https://stackoverflow.com/users/19230629/asteramohammed)
- Mustafa Dilshad Khalid — [GitHub](https://github.com/t00fi) · [LinkedIn](https://www.linkedin.com/in/mustafa-dilshad-7252a41b1) · [Stack Overflow](https://stackoverflow.com/users/12716917/mustafa)
- Harez Habil Hama Ali — [GitHub](https://github.com/harez2020) · [LinkedIn](https://www.linkedin.com/in/harez-habeel) · [Stack Overflow](https://stackoverflow.com/users/10622449)
- Mohammed Jamal — [GitHub](https://github.com/7amaa) · [LinkedIn](https://www.linkedin.com/in/mohammed-jamal-60ba8119a) · [Stack Overflow](https://stackoverflow.com/users/19230102/mohammed-jamal)
- Ahmed Mohammedseddic Khalil — [GitHub](https://github.com/ahmedkhalil98) · [LinkedIn](https://www.linkedin.com/in/ahmed-khalil-4a5156239) · [Stack Overflow](https://stackoverflow.com/users/19226414/ahmad-khalil)
- Ahmad Shakir Khalid — [GitHub](https://github.com/ahmadshakir21) · [LinkedIn](https://www.linkedin.com/in/ahmad-shakir-1a6a95226)
- Barzi Yassin Karim — [GitHub](https://github.com/barzy-yasin) · [LinkedIn](https://www.linkedin.com/in/barzy-yasin-83734a198) · [Stack Overflow](https://stackoverflow.com/users/16476966/barzy-yasin)
- Bawer Farhad Hussain — [GitHub](https://github.com/bawerfarhad) · [LinkedIn](https://www.linkedin.com/in/bawerfarhad) · [Stack Overflow](https://stackoverflow.com/users/email/settings/19239934)
- Eissa Ahmed Mohammadamin — [GitHub](https://www.linkedin.com/in/eissa-ahmed-mohammadamin-9a351623a) · [LinkedIn](https://www.linkedin.com/in/eissa-ahmed-mohammadamin-9a351623a) · [Stack Overflow](https://stackoverflow.com/users/edit/19239783)
- San Samir Boya — [GitHub](https://github.com/softwaresan) · [LinkedIn](https://www.linkedin.com/in/san-samir-bba549240) · [Stack Overflow](https://stackoverflow.com/users/19233384/san-samir)
- Majd Sumar Alfi — [GitHub](https://github.com/majdalfi) · [LinkedIn](https://www.linkedin.com/in/majd-alfi-985600239) · [Stack Overflow](https://stackoverflow.com/users/19239316/majd-alfi)
- Araz Zuher Mohammed — [GitHub](https://github.com/arazzuher22) · [LinkedIn](https://www.linkedin.com/in/araz-zuher-4a7630240) · [Stack Overflow](https://stackoverflow.com/users/19229596/araz)
- Zhir Taha Ali Ways — [GitHub](https://github.com/zhirtaha) · [LinkedIn](https://www.linkedin.com/in/zhirtaha) · [Stack Overflow](https://stackoverflow.com/users/12309769/zhir)
- Mohammed Azad Nader — [GitHub](https://github.com/mohammed-azad) · [LinkedIn](https://www.linkedin.com/in/mhammad-azad-aa1a65232) · [Stack Overflow](https://stackoverflow.com/users/19226214/mohammed-azad)
- Dilman Arif Qasim — [GitHub](https://github.com/dilman01) · [LinkedIn](https://www.linkedin.com/in/dilman-arif-948465240) · [Stack Overflow](https://stackoverflow.com/users/19225564/dilman-arif?tab=profile)
- Ari Ahmed Ibrahim — [GitHub](https://github.com/areeahmed) · [LinkedIn](https://www.linkedin.com/in/ari-ahmed-b78b761ab) · [Stack Overflow](https://stackoverflow.com/users/12657287/aree-ahmed)
- Ramiyar Yusf — [GitHub](https://github.com/ramiyar2) · [LinkedIn](https://www.linkedin.com/in/ramyar-yusf-393a40203) · [Stack Overflow](https://stackoverflow.com/users/19226911/ramyar-yusf)
- Safin Saber Nori — [GitHub](https://github.com/safin9) · [LinkedIn](https://www.linkedin.com/in/safin-saber-233677207) · [Stack Overflow](https://stackoverflow.com/users/19226099/safin-saber-nori)
- Hevar Tofiq Hama — [GitHub](https://github.com/rageofkurd) · [LinkedIn](https://www.linkedin.com/in/hevar-tofiq-649524240) · [Stack Overflow](https://stackoverflow.com/users/16681812/hevar-tofiq)
- Binar Talib Younis — [GitHub](https://github.com/binar-talib) · [LinkedIn](https://www.linkedin.com/in/binar-talib-592b02186) · [Stack Overflow](https://stackoverflow.com/users/19231887/binar-talib)
- Muhammad Sabah Ibrahim — [GitHub](https://github.com/muhammadsabah) · [LinkedIn](https://www.linkedin.com/in/muhammad-ibrahim-4791b7226) · [Stack Overflow](https://stackoverflow.com/users/14839602/hama-sabah)
- Andam Adam Khidhir — [GitHub](https://github.com/andam20) · [LinkedIn](https://www.linkedin.com/in/andam-adam-78a8391ab) · [Stack Overflow](https://stackoverflow.com/users/13128222/andam-adam)
- Harrem Mohammed Jalal — [GitHub](https://github.com/harrem) · [LinkedIn](https://www.linkedin.com/in/harrem-m-jalal-a0a329135) · [Stack Overflow](https://stackoverflow.com/users/16780840/harrem-ip-h-c)
- Brwa Nahman Muhammed — [GitHub](https://github.com/brwacs) · [LinkedIn](https://www.linkedin.com/in/brwa-nahman-449996197) · [Stack Overflow](stackoverflow.com/users/19229945/brwa-nahman)
- Maryam Salah Jubrail — [GitHub](https://github.com/maryyamsalah) · [LinkedIn](http://linkedin.com/in/maryam-salah-29b692139) · [Stack Overflow](https://stackoverflow.com/users/17595130/maryyam-salah)
- Abdullah Hussein Hamad
- Ahmad Helal Muhammedsaied
- Amanj Azad Ameen
- Amozhgar Saadi Baper
- Areen Saber Ali
- Ashna Salam Mhammad
- Avin Fateh Rasul
- Basira Tahir Ahmed
- Azad Khorsheed Rasheed — [GitHub](https://github.com/azadlinavay) · [LinkedIn](https://www.linkedin.com/in/azad-linavay-6b291520b) · [Stack Overflow](https://stackoverflow.com/users/10904019/azad-linavay)
- Bawer Farhad Hussein
- Delman Ali — [GitHub](https://github.com/delmanali) · [LinkedIn](https://www.linkedin.com/in/delman-ali-84a994159) · [Stack Overflow](https://stackoverflow.com/users/17595273/delman-ali)
- Dosty Dilshad Abdulhameed
- Darwaza Farhad Sabir — [GitHub](https://github.com/darwaza2021) · [LinkedIn](https://www.linkedin.com/in/darwaza-farhad-50a67b225) · [Stack Overflow](https://stackoverflow.com/users/17322287/darwaza-farhad)
- Huda Hamid Said — [GitHub](https://github.com/hudahamid) · [LinkedIn](https://www.linkedin.com/in/huda-hamid-7524a6159) · [Stack Overflow](https://stackoverflow.com/users/17595301/huda-hamid)
- Kawan Idrees Mawlood
- Rafaat Khalil Abubakr (x 1) — [GitHub](https://github.com/rafaatxalil365) · [LinkedIn](https://www.linkedin.com/in/rafaat-abubakir-a929b3213) · [Stack Overflow](https://stackoverflow.com/users/17352516/rafaat-xalil)
- Miran Ali Rashid (x 1) — [GitHub](https://github.com/miranalirashid) · [LinkedIn](https://www.linkedin.com/in/miran-ali-82a748178) · [Stack Overflow](https://stackoverflow.com/users/17595118/miran)
- Bawar Khalid Aziz (x 1) — [GitHub](https://github.com/bawarx) · [LinkedIn](https://www.linkedin.com/in/bawar-khalid-265b4b227) · [Stack Overflow](https://stackoverflow.com/users/14960532/bawar-khalid)
- Mohammed Mansour — [GitHub](https://github.com/hooshyar) · [LinkedIn](https://github.com/mohammedmansur) · [Stack Overflow](https://stackoverflow.com)
- Shanya Hushyar (x 1) — [GitHub](https://github.com/shanyahushyar) · [Stack Overflow](https://stackoverflow.com/users/17595162/shanya-hushyar)
- Rashed Sadraddin Rashed
- Rebar Salam Mhammad
- Roudan Chirkoh Haj Hussein
- Roza Taha Mustafa
- Salih Yaseen Rajab
- Salwa Fikri Malla
- Shang Masood Abdullah
- Shokhan Osman
- Sima Azad Farooq
- Srwa Omar Abdullah
- Taman Moayed Latif
- Viyan Najmadin Nasradin
- Yahia Hasan Baiz
- Yahya Adnan Ghadhban
- Miran Ali Rasheed
- Bawar Khaled Azeez
- Mohammed Mansour — [GitHub](https://github.com/mohammedmansur) · [LinkedIn](https://www.linkedin.com/in/mohammed-mansur-568a65231) · [Stack Overflow](https://stackoverflow.com/users/15901905/mohammed-mansur)
- Shanya Hushyar
- Sipal Salam, Mostafa Majeed
- Sipal Salam (x 1) — [GitHub](https://github.com/sipal00) · [LinkedIn](https://www.linkedin.com/in/sipal-salam-7b7602218) · [Stack Overflow](https://stackoverflow.com/users/17595226/sipal)
- Mostafa Majeed (x 1) — [GitHub](https://github.com/mstafamajid) · [LinkedIn](https://www.linkedin.com/in/mustafa-majid-166327224) · [Stack Overflow](https://stackoverflow.com/users/17595137/mustafa-majid)
- Karwan Msto
- Ali Farhad — [GitHub](https://github.com/1-ali-1) · [LinkedIn](https://www.linkedin.com/in/ali-farhad-90b4b8198) · [Stack Overflow](https://stackoverflow.com/users/14529397/alifarhad)
- Ali Ahmed Naman
- Aland Abdulmajeed
- Shad Khalid (x 1) — [GitHub](https://github.com/shad-khalid) · [LinkedIn](https://www.linkedin.com/in/shad-khalid-944545227) · [Stack Overflow](https://stackoverflow.com/users/17622725/shad-khalid)
- Sako Ranj (x 1) — [GitHub](https://github.com/sako-ranj) · [LinkedIn](https://www.linkedin.com/in/sako-ranj-570031213) · [Stack Overflow](https://stackoverflow.com/users/15195981/sako-ranj)
- Salar Khalid (x 1) — [GitHub](https://github.com/salarpro) · [LinkedIn](https://www.linkedin.com/in/salar-pro-13b970120) · [Stack Overflow](https://stackoverflow.com/users/5862126/salar-pro)
- Omer Mukhtar (x 1) — [GitHub](https://github.com/omerrmukhtarr) · [LinkedIn](https://www.linkedin.com/in/omer-mukhtar-950b951b7) · [Stack Overflow](https://stackoverflow.com/users/17595096/omer-mukhtar?tab=profile)
- Aso Arshad (x 1)
- Karwan Khdr — [GitHub](https://github.com/karwan01) · [LinkedIn](https://www.linkedin.com/in/karwan-khdhr-590b5a1a8) · [Stack Overflow](https://stackoverflow.com/users/17595109/karwan-rasul)
- Sara Bakir — [GitHub](https://github.com/sarahbakr) · [Stack Overflow](https://stackoverflow.com/users/17628902/sarah-bakr)
- Yassin Hussein — [GitHub](https://github.com/yassin-h-rassul) · [LinkedIn](https://www.linkedin.com/in/yassin-rassul) · [Stack Overflow](https://stackoverflow.com/users/13059311/yassin-h-rassul)
- Amad Bashir — [GitHub](https://github.com/amad-a96) · [LinkedIn](https://www.linkedin.com/in/amad-bashir-615026227) · [Stack Overflow](https://stackoverflow.com/users/17595120/amad-bashir)
- Zaynab Azad Khdir
- Hekar Azwar Mohammed Salih — [GitHub](https://github.com/hekaramohammad) · [Stack Overflow](https://stackoverflow.com/users/13974543/hekar-azwar-mohemmad-salih) · [LinkedIn](https://www.linkedin.com/in/hekar-azwar-mohammed-salih-579a601a6)
- Ahmed Aziz Hasan — [GitHub](https://github.com/ahmedaziz0) · [Stack Overflow](https://stackoverflow.com/users/12643186/ahmed-aziz)
- Rasan Diyar Tayeb — [GitHub](https://github.com/titan-ui) · [LinkedIn](https://stackoverflow.com/users/17604539/titan-ui)
- Rasty Azad Qadir (x 1) — [GitHub](https://github.com/rastyit97) · [Stack Overflow](https://stackoverflow.com/users/16274767/rasty-azad)
- Wrya Mhamad Hassan (x 1) — [GitHub](https://github.com/wrya-mhamad) · [LinkedIn](https://www.linkedin.com/in/wrya-mhamad-31024b185) · [Stack Overflow](https://stackoverflow.com/users/13229231/wrya-mhamad)
- Tahir Awal Ghafur — [GitHub](https://github.com/tatosoll) · [LinkedIn](https://www.linkedin.com/in/tahir-awal-490651201) · [Stack Overflow](https://stackoverflow.com/users/17595960/tahir-awal?tab=profile)
- Mohammed Ahmed Salim (x 1) — [GitHub](https://github.com/mohamed199898) · [LinkedIn](https://www.linkedin.com/in/mohamad-amedy-078467165) · [Stack Overflow](https://stackoverflow.com/users/17595148/mohammed-ahmed-salim)
- Shene Wali Khalid — [GitHub](https://github.com/shenekhalid) · [Stack Overflow](https://stackoverflow.com/users/17595197/shene-wali) · [LinkedIn](https://www.linkedin.com/mwlite/in/shene-wali-189450228)
- Muhamad Tahsin Karem — [GitHub](https://github.com/muhamad3) · [LinkedIn](https://www.linkedin.com/in/muhamad-tahsin-29b80a1a9) · [Stack Overflow](https://stackoverflow.com/users/14649300/muhamad-tahsin)
- Ayman Abd Saeed (x 1) — [GitHub](https://github.com/aymanabd9) · [LinkedIn](https://www.linkedin.com/in/ayman-abd-60838a228) · [Stack Overflow](https://stackoverflow.com/users/17595097/ayman-abd)
- Milad Mirkhan Majeed — [GitHub](https://github.com/miladmirkhan) · [LinkedIn](https://www.linkedin.com/in/milad-mirkhan-63537521a) · [Stack Overflow](https://stackoverflow.com/users/16825719/milad-mirkhan)
- Omar Falah Hasan — [GitHub](https://github.com/omarfalah99) · [LinkedIn](https://www.linkedin.com/in/omar-falah-3531381ba) · [Stack Overflow](https://stackoverflow.com/users/17595189/omar-falah)
- Ranj Kamal Kanabi — [GitHub](https://github.com/ranj-kamal) · [LinkedIn](https://www.linkedin.com/in/ranj-kamal-020755154) · [Stack Overflow](https://stackoverflow.com/users/17595159/ranj-kamal)
- Dwarozh Kakamad Noori — [GitHub](https://github.com/dwarozh-177) · [Stack Overflow](https://stackoverflow.com/users/17595098/dwarozh-k-noori)

## The coach

- Hooshyar — [GitHub](https://github.com/hooshyar) · [LinkedIn](https://www.linkedin.com/in/hooshyar) · [Stack Overflow](https://stackoverflow.com/users/10622449/hooshyar)
front_end
awesome-nlp-polish
awesome nlp polish a curated list of resources dedicated to natural language processing nlp in polish models tools datasets awesome nlp polish logo awesome nlp polish png table of contents polish text data polish text datasets models and embeddings models and embeddings libraries and tools language processing tools and libraries papers articles blogs papers articles blog post contribution contribution polish text datasets task oriented datasets the klej kompleksowa lista ewaluacji językowych benchmark is a set of nine evaluation tasks for polish language understanding https klejbenchmark com index html poleval datasets hate speech classification distinguish between normal non harmful tweets class 0 and tweets that contain any kind of harmful information class 1 poleval 2019 task6 http 2019 poleval pl index php tasks task6 mirror gdrive https drive google com drive folders 1dp7h9frejugk4joemsuxobiwp5h4x6q6 usp sharing polish cdscorpus http zil ipipan waw pl scwad cdscorpus the dataset for compositional distributional semantics polish cdscorpus consists of 10k polish sentence pairs which are human annotated for semantic relatedness and entailment wroclaw corpus of consumer reviews sentiment wccrs https clarin pl eu dspace handle 11321 700 corpus of polish reviews annotated with sentiment at the level of the whole text text and at the level of sentences sentence for the following domains hotels medicine products and university reviews ermlab opineo dataset https github com ermlab pl sentiment analysis opineo reviews gdrive https drive google com file d 1vxquebjuhggy3vv2da7llvbjjzlqnl0d view usp sharing hatespeech corpus contains over 2000 posts crawled from public polish web http zil ipipan waw pl hatespeech polish analogy dataset https dl fbaipublicfiles com fasttext word analogies questions words pl txt example ateny grecja bagdad irak useful for word embeddings evaluation nkjp http nkjp pl index php page 0 lang 1 national corpus of polish it contains classic
literature daily newspapers specialist periodicals and journals transcripts of conversations and a variety of short lived and internet texts only a small sub corpus is available for download http clip ipipan waw pl nationalcorpusofpolish action attachfile do get target nkjp podkorpusmilionowy 1 2 tar gz gnu gpl v 3 direct contact may be necessary to get the full corpus polemo 2 0 sentiment analysis dataset for conll https clarin pl eu dspace handle 11321 710 polish music dataset https github com malarzdawid polish music dataset polish music dataset is the largest dataset with information about artists songs and lyrics in poland now only hip hop artists raw texts clean polish oscar https github com ermlab politbert data processing for training preprocessed polish oscar corpus removed foreign sentences non polish and invalid polish sentences e g enums corpus preprocessed by ermlab oscar or open super large crawled almanach corpus https traces1 inria fr oscar corpus is a huge multilingual corpus obtained by language classification and filtering of the common crawl corpus contains 109gb or 49gb of polish text polish wikipedia dump https dumps wikimedia org plwiki regular monthly copy of polish wikipedia more than 4gb of text opus the open parallel corpus http opus nlpl eu you can select languages and download only polish file polish opensubtitles v2018 http opus nlpl eu opensubtitles v2018 php sentences 45 9m polish tokens 287 1m collection of translated movie subtitles from opensubtitles http www opensubtitles org raw txt corpus unpacked 7 2gb https object pouta csc fi opus opensubtitles v2018 mono pl txt gz tokenized txt corpus unpacked 7 6gb https object pouta csc fi opus opensubtitles v2018 mono pl tok gz paracrawl v5 http opus nlpl eu paracrawl v5 php sentences 6 4m polish tokens 157 1m raw txt corpus unpacked 1 1gb https object pouta csc fi opus paracrawl v5 mono pl txt gz tokenized txt corpus https object pouta csc fi opus paracrawl v5 mono pl tok gz polish
parliamentary corpus http clip ipipan waw pl ppc text from proceedings of polish parliament sejm and senate models and embeddings polish transformer models polish roberta model https github com sdadas polish nlp resources fbclid iwar0tv ybubwffirgfqvqagdcsl6bv 9pnw8wm3gkgiyxnaje m9tpy0hiam roberta model was trained on a corpus consisting of polish wikipedia dump polish books and articles polish parliamentary corpus politbert https github com ermlab politbert polish roberta model trained on polish wikipedia polish literature and oscar the major assumption is that quality text will give a good model polbert https github com kldarek polbert polish bert model model was trained with code provided in google bert s github repository merged with huggingface transformers https huggingface co dkleczek bert base polish uncased v1 allegro herbert https github com allegro herbert polish bert model trained on polish corpora using only mlm objective with dynamic masking of whole words slavicbert multilingual bert model https github com deepmipt slavic bert ner bert slavic cased 4 languages bulgarian czech polish russian 12 layer 768 hidden 12 heads 110m parameters 600mb there is also another slavicbert model http docs deeppavlov ai en master features models bert html but i had problems converting it to pytorch other models elmo embeddings https clarin pl eu dspace handle 11321 690 show full a model of elmo embeddings for polish language trained on large textual corpora kgr10 zalando flair polish models https github com flairnlp flair blob master resources docs embeddings flair embeddings md contextual string embeddings that capture latent syntactic semantic information that goes beyond standard word embeddings there are two models pl forward and pl backward ipipan word2vec polish models http dsmodels nlp ipipan waw pl w2v html wrocław university of science and technology word2vec https clarin pl eu dspace handle 11321 442 distributional language models for polish trained on different
corpora kgr10 nkjp wikipedia fasttext polish model fb train on common crawl https github com facebookresearch fasttext blob master docs crawl vectors md wikipedia https github com facebookresearch fasttext blob master docs pretrained vectors md fasttext kgr10 polish model binary https clarin pl eu dspace handle 11321 600 universal sentence encoder multilingual https tfhub dev google universal sentence encoder multilingual large 3 sentence embeddings it covers 16 languages including polish bpemb subword embeddings includes polish https nlp h its org bpemb easy to use with flair https github com flairnlp flair blob master resources docs embeddings byte pair embeddings md ulmfit for tensorflow 2 0 https tfhub dev edrone collections ulmfit 1 this collection contains ulmfit recurrent language models trained on wikipedia dumps for english and polish the models themselves were trained using fastai and then exported to a tensorflow usable format code is available on bitbucket https bitbucket org edroneteam tf2 ulmfit src master language processing tools and libraries morfologik https github com morfologik morfologik stemming java and pymorfologik https github com dmirecki pymorfologik python wrapper dictionary based morphological analyzer morfeusz http morfeusz sgjp pl download morphological analyzer see also elasticsearch plugin https github com allegro elasticsearch analysis morfologik stempel https github com dzieciou pystempel python port algorithmic stemmer see also elasticsearch plugin https www elastic co guide en elasticsearch plugins current analysis stempel html spacy for polish http spacypl sigmoidal io extend spacy a popular production ready nlp library to fully support polish language spacy pl by ipi pan https github com ipipan spacy pl integrating existing polish language tools and resources into the spacy pipeline krnnt polish morphological tagger https github com kwrobel nlp krnnt krnnt is a morphological tagger for polish based on recurrent neural networks 
paper http ltc amu edu pl book2017 papers poleval1 6 pdf stanza https stanfordnlp github io stanza python nlp analysis package from stanford university stanza is a python natural language analysis package it contains tools which can be used for sentence and word tokenizing to generate base forms of words parts of speech and morphological features syntactic dependency parsing recognizing named entities contains a polish model duckling https github com facebook duckling haskell library for parsing text into structured data with support for polish a curated list of polish abbreviations for nltk sentence tokenizer https gist github com ksopyla f05fe2f48bbc9de895368b8a7863b5c3 based on wikipedia text papers articles blog post benchmarks of some of the polish nlp tools http clip ipipan waw pl benchmarks single word lemmatization and morphological analysis multi word lemmatization disambiguated pos tagging dependency parsing shallow parsing named entity recognition summarization etc github repo with list of polish word embeddings and language models word2vec fasttext glove elmo https github com sdadas polish nlp resources polish word embeddings review https github com ermlab polish word embeddings review evaluation of polish word embeddings word2vec fasttext etc prepared by various research groups evaluation is done by the word analogy task polish sentence evaluation https github com sdadas polish sentence evaluation contains evaluation of eight sentence representation methods word2vec glove fasttext elmo flair bert laser use on five polish linguistic tasks training roberta from scratch the missing guide https zablo net blog post training roberta from scratch the missing guide polish language model complete user guide for training a roberta model for polish with huggingface transformers contribution if you have or know valuable materials datasets models posts articles that are missing here please feel free to edit and submit a pull request you can also send me a note on linkedin
https www linkedin com in krzysztofsopyla or via email krzysztofsopyla gmail com
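the word analogy evaluation mentioned above (e g ateny is to grecja as bagdad is to irak, from the polish analogy dataset) reduces to vector arithmetic plus cosine similarity over embeddings. a minimal self contained sketch in python; the toy 3-d vectors below are invented for illustration and stand in for real fasttext or word2vec embeddings:

```python
import math

def solve_analogy(a, b, c, vectors):
    """Return the word w maximizing cos(vec[b] - vec[a] + vec[c], vec[w]),
    excluding the three query words (the standard 3CosAdd analogy scoring)."""
    dim = len(vectors[a])
    target = [vectors[b][i] - vectors[a][i] + vectors[c][i] for i in range(dim)]
    tnorm = math.sqrt(sum(x * x for x in target))
    best_word, best_score = None, float("-inf")
    for word, vec in vectors.items():
        if word in (a, b, c):
            continue  # the query words themselves are not valid answers
        vnorm = math.sqrt(sum(x * x for x in vec))
        score = sum(t * x for t, x in zip(target, vec)) / (tnorm * vnorm)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# toy 3-d "embeddings", invented for illustration only -- a real evaluation
# would load pretrained polish vectors for the full vocabulary instead
toy = {
    "ateny":    [1.0, 0.1, 0.0],
    "grecja":   [1.0, 1.0, 0.0],
    "bagdad":   [0.0, 0.1, 1.0],
    "irak":     [0.0, 1.0, 1.0],
    "warszawa": [0.5, 0.1, 0.5],
}

print(solve_analogy("ateny", "grecja", "bagdad", toy))  # -> irak
```

with a real model the vectors dict would be filled from the pretrained polish fasttext or word2vec files listed above, and accuracy is the fraction of analogy questions answered correctly.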
nlp nlp-machine-learning polish-language datasets
ai
Project02GroupGit
ga sei project 2 the nomad bookshelf literary database deployed app https nomadbooks herokuapp com nomadbookshelf https i imgur com dmo3twe gif table of contents introduction brief and requirements project overview contributors timeframe technologies employed planning and preparation user stories entity relationship diagram wireframe development process featured code outcome challenges wins bugs future inclusions and improvements key learnings introduction project requirements work in pairs to build a full stack web application from scratch using the express framework use node js html css javascript and a nosql database mongodb to build the application with mvc architecture include a user resource authentication and authorisation and allow the user to change their password include two additional resources with full crud functionality for registered users and relationships between the resources implement thoughtful user stories wireframes that are significant enough to help you know which features are core mvp and which can be cut deploy the application online through heroku so it s publicly accessible project overview the nomad bookshelf is a full stack application serving as a database and community for literature enthusiasts where they can submit edit and review a comprehensive ever growing library of authors and their attributed works inspired by other media databases communities such as goodreads and imdb and built using express mongodb and mongoose users can create an account and add either a new author or publication provide details about that author or work and leave their impressions on their submitted entry contributors chris ailey https github com c t ailey bedros asdorian https github com bedrosasdorian timeframe this project was completed as a two person group assignment over seven days technologies employed express including the following node packages mongoose bcrypt body parser connect flash dotenv ejs express ejs layouts express session moment
passport mongodb html and css bootstrap embedded javascript node js git github figma erd and wireframing heroku deployment planning and preparation in getting to know my teammate for this project we quickly bonded over our shared interests and reflected on the common ground we almost immediately found despite one of us being based in the uk and the other being based in kuwait we found that we both had a shared interest in literature and learned that while our interest was shared our specific tastes varied this inspired us to explore the concept of a literary database similar in concept to imdb or goodreads where people could browse a database of authors and their attributed publications and submit new entries if they find it lacking in order to define a clear path forward we started by establishing our entity relationship diagram for comprehension of how our models would interact once we had this outlined we dedicated time to creating a basic visual style for the app and drafting up the layouts for each major page within it as my teammate had been having difficulty in keeping up with the course materials to this point we opted to divide the workload according to our strengths as he had more familiarity with css he elected to handle the styling for the site while i focused on building the backend functionality user stories after settling on our concept for the application we assembled a list of user stories to ensure we had a guide for the main points of functionality we wanted our users to experience as an unregistered user i want to be able to see a clear prompt for signing up when i first load the site so that i can get the full experience from the app as an unregistered user i want to be able to view the bookshelf and author databases so i can decide for myself if the site s content is of interest to me before registering as a registering user i want to be able to set my own username and password so that i can log in securely as a registered but logged out user 
i want to be able to see a clear prompt for logging in so that i can start using the app s full features as a logged in user i want to be able to see a spread of recently added book covers and titles on the homepage instead of the sign up log in prompts so that i have a prompt on where to begin exploring the site s content as a logged in user i want to be able to submit new entries to the bookshelf and author databases so i can contribute to the community the site is built around as a logged in user i want to be able to edit and delete bookshelf and author entries created by me so that i can regulate and alter the content in the event of errors as a logged in user i want to be able to view my profile so that i can review my user details as a logged in user i want to be able to change my password so that i can keep my account secure in the event it somehow becomes compromised as a logged in user i want to be able to log out of my account so that i can keep my account secure against other people on my device or log in as a new user entity relationship diagram in order to understand which data we would need to store and access to achieve our desired functionality we created an erd to show the flow and interaction of our intended models for the app our intended models were as follows user registered users for the site required for authorisation and authentication of user generated crud operations author data for individual authors required to be created before a publication can be attributed to them author id is referenced by the book data attributed to them and references the id of the user who created the record shares a many to many relationship with the book model as a book can be attributed to many authors and an author can be attributed to many books book data for individual publications book id is referenced by the author s attributed to the publication and references the id of the user who created the record shares a many to many relationship with the author 
model as a book can be attributed to many authors and an author can be attributed to many books reviews data for user submitted reviews on publications each review id is referenced by the user who created it and the book it has been submitted to sharing a one to many relationship with both a user can submit many reviews and a book can have many reviews submitted to it but any single review id can only be attributed to one user and book wireframing the wireframe design for the project was extremely straightforward i had conceived a clear enough vision for the app that the first draft of the wireframe was the only one required to complete it with despite a few minor changes along the way the final product remained virtually unchanged from this stage nomadwireframe https i imgur com doluzdk png full gallery of wireframe images available on imgur https imgur com a pw1e0d4 development process day 1 further planning and preparation owing to my teammate s disadvantage on this project the first day was entirely dedicated to ensuring we had a solid end goal in mind and a workable plan for development we solidified our concept by thoroughly discussing exactly what functionality we wanted to implement finalised our erd and drafted our wireframes within the day making sure that by day 2 we would have all the materials we needed to work our plan day 2 initialising the codebase basic models authentication and authorization the first day of coding was focused on pair programming where i took the lead and constructed the essential framework for the project while attempting to clarify and explain the principles of what was being created to my teammate the first order of business was implementing the necessary features for authentication authorization which meant initialising the codebase by using npm to install our required packages and middleware building the server js file so the essential first point of contact for running the app was present building routes and
controllers for immediately required views establishing the database in mongodb and ensuring it was connected to the app once the dependencies had been installed and defined i created the config file for the passport middleware the user model schema the blank layout ejs file and custom logged in status monitoring middleware from here the next logical step was to implement the signup and login functionality this was achieved by building the skeletal landing page and signup login forms adding anchor tags to link the user to the forms and ensuring that the connectivity with mongodb was present and correct such that submitted data could be properly stored and retrieved once the authentication authorisation features had been implemented i experimented with building a basic profile page which would display select information from that provided by the user upon registration and attempted our vision of login dependent views for the landing page once these were at least functional the final endeavour for the day was to build the author and book schemas for the next day s development with just enough time left over to implement create and update operations for authors day 3 crud operations basic styling dividing the workload the third day of development was centred around implementing crud operations for as many of our models as possible after the previous day s work we now had enough for my teammate to branch out and begin applying css to our existing views while i continued work on the backend i first attempted to complete edit operations for the user s details including their password on the profile page unfortunately i had no success with it at this point and not wanting to get weighed down by a single problem when we had so much left to do i pushed this to the back burner for the time being after adding the delete operation for authors full crud operations for books including cover image upload via url and establishing the many to many relationship between the author 
and book models i implemented a nav bar to the layouts ejs file to allow site wide ease of navigation the last addition for day 3 was a basic version of our planned logged in conditional view for the landing page which displayed the current spread of books in the database by their cover image and title as it was rough and unstyled it only displayed them in a one item wide sequential column but the functionality was there and ready for improvement on day 4 day 4 password change and final adjustments day 4 began where day 3 left off completing the landing page conditional view after i applied flexbox positioning to arrange the book covers and titles in a centred four to a row layout the team agreed that this satisfied our plan for the landing page and moved on to the final few outstanding features using the techniques learned from the landing page conditional view it was simple enough to implement a restriction whereby a user could only edit or delete records they had created by checking whether the createdby property for that record matched the id of the current user and if not rendering edit and delete as plain text instead of links with this being completed the one remaining incomplete feature was the profile page s edit functionality after taking the time to clean up some styling issues i devoted myself to attempting to crack the issue several hours of reading documentation and experimentation later i finally resolved it allowing users to change their profile information including their password the only step left was hosting one push to heroku later we had completed enough of the nomad bookshelf to be satisfied featured code the isloggedin custom middleware the isloggedin middleware was the one of the first pieces of code implemented into the project after the bones of the application had been put together although it s a short and straightforward piece of code which simply checks to see if a given request has a valid user model attached and redirects the user 
to the login page instead of their intended destination if not it served as an integral element of the project and was employed extensively

```js
// custom isLoggedIn middleware: redirect unauthenticated requests to the login page
module.exports = (req, res, next) => {
  if (!req.user) {
    res.redirect("/auth/login");
  } else {
    next();
  }
};
```

the login conditional landing page the second major point of pride during development was implementing the conditional view for the landing page our plan was to have it so that users who are not logged in will only see a distinct prompt to either signup or login upon first visiting the site those who are logged in will see a display of recently added books arranged by cover image and title though relatively simple to create in hindsight it was a landmark moment in the development process due to being my first piece of functionality with the feeling of an above and beyond flourish

```html
<div>
  <% if (currentUser) { %>
    <h1 style="color: white; text-align: center; margin-bottom: 0; margin-top: 50px;">recently added</h1>
    <div class="landingbooks" style="margin-top: 0;">
      <% books.forEach(function (book) { %>
        <div class="bookpreview">
          <a href="/book/detail/<%= book.id %>">
            <img src="<%= book.imageUrl %>" alt="<%= book.title %> cover">
            <h3><%= book.title %></h3>
          </a>
        </div>
      <% }) %>
    </div>
  <% } else { %>
    <center><p class="banner-text">a platform built for digital textbook reviews and engagement</p></center>
    <div class="containers">
      <a href="/auth/signup" class="banners" style="background: #2a2df6; color: white; border: none; font-family: 'times new roman';">
        <h1>sign up</h1>
        <p>sign up now if you're a new user</p>
      </a>
      <a href="/auth/login" class="banners" style="background: green; color: white; border: none; font-family: 'times new roman'; width: 320px;">
        <h1>log in</h1>
        <p>log into an existing account</p>
      </a>
    </div>
  <% } %>
</div>
```

the auth updatepw put function password update put request the eleventh hour success of completing the password update feature was a real triumph as i had feared i wouldn t be able to complete this particular requirement with the little time remaining after everything else had been finished in its completed state this code functions perfectly for its purpose of allowing a logged in user to change their password within their profile it operates by using bcrypt to compare the user s current password as submitted via the change password form against the hashed one stored in that user s database entry notifying the user if it doesn t match and otherwise progressing to checking whether their chosen new password matches the one entered in the confirm new password field on success the user will be notified that their password has been updated and redirected to their profile page

```js
exports.auth_updatepw_put = (req, res, next) => {
  var user = req.user;
  if (!bcrypt.compareSync(req.body.password, user.password)) {
    req.flash("error", "your current password doesn't match");
    res.redirect("/auth/password");
  } else if (req.body.newpassword !== req.body.confirmpassword) {
    req.flash("error", "new password and confirm new password don't match");
    res.redirect("/auth/password");
  } else {
    User.findByIdAndUpdate(req.body.id, req.body)
      .then(() => {
        let hashedpassword = bcrypt.hashSync(req.body.confirmpassword, salt);
        user.password = hashedpassword;
        user.save(function (err) {
          if (err) {
            next(err);
          } else {
            res.redirect("/auth/profile");
          }
        });
      })
      .catch((err) => {
        console.log(err);
        res.send("try again later");
      });
  }
};
```

outcome challenges by far the greatest challenge i faced was having to assume the role of project lead over a teammate who had the disadvantage of being behind on the course material which meant i had to arrange a structure for the project within which he would be able to make contributions according to his strengths this meant attempting to coach him in pair programming sessions sharing what i knew of the basics of express and node js which was particularly challenging with my own incomplete understanding of the technologies at the time as he had prior experience and proficiency with css he was able to work independently on styling those views which were developed enough for it while i continued to build the backend updating him on what each new feature did and explaining how it functioned along the way with regards to the code itself i experienced tremendous difficulty in attempting to solve the user
password change feature this is related to the fact i had been overwhelmed by some of the intricacies of working with express especially right off the back of the comparatively more straightforward project 1 however this proved to be a major learning moment as it taught me the importance of thoroughly reading documentation and studying examples of other code which achieves similar functionality the third challenge i faced was as mentioned above working with express itself going into the project i did not feel i understood the mechanics of express well enough to achieve as much as i ultimately did i feel i owe the overall success of the project and my eventual understanding of the technology to being responsible for managing the overall development and having to explain the code i was writing to my teammate as i went this went a long way towards helping solidify the concepts in my mind and served as one of the most valuable learning experiences of the course victories the greatest success of the project was ultimately completing it despite the rocky unoptimistic start we produced a functional application which did almost everything we set out to achieve entirely thanks to careful management and optimal division of the workload according to what we could contribute at any given stage finally solving the password update in the nick of time was nothing short of triumphant bugs the selection box in the author edit form only displays the authors already attributed to that book instead of the full list authors assigned to a book cannot currently be unassigned through the author edit form flash messages do not display whenever they should future inclusions and improvements as we had to omit our planned reviews model for time i would ideally like to implement it as intended so that each user can leave reviews for any book instead of the current compromised incarnation where a review can only be provided once per book and only by the user creating the entry i would like to 
include some more developed styling for the site the current theme is somewhat basic and would benefit from additional time and care in developing it particular attention would be given to the bookshelf and lookup by author tables the eventual addition of a light dark theme toggle button would lend itself to a more personal customizable experience for the user as the recently added section on the landing page currently displays all books in the database i would like to implement a check such that it only displays the last ten to fifteen books added key learnings taking charge of a project where the other team member was at a significant disadvantage was a tremendously valuable lesson in managing a difficult less than ideal situation and finding ways to make the most efficient use of our time and abilities in having to coach my teammate through the parts of the work he hadn t grasped i learned a great deal about my ability to explain concepts i did not fully understand myself and in fact discovered that doing so helped deepen my own comprehension i also feel i gained a much clearer understanding of how development of a user driven community oriented site may operate and which technical aspects of such sites e g authentication and authorisation password validation and clean database interaction are most important to the experience
server
flexgpt
flexgpt the project aims to optimize the tradeoff between large language model inference runtime and ram usage a fully autoregressive language model feeds all previous tokens into the computation of the logits for sampling thus resulting in an O(n^2) runtime in particular the self attention forward pass is the main overhead taking up over 98% of the runtime in this project we implemented a cached self attention layer to reduce the theoretical runtime from O(n^2) to O(n) and established the benchmark for the trade off between runtime mem selfattn s cache length and ram usage screen shot 2022 02 10 at 22 16 39 https user images githubusercontent com 37657480 153545622 92c5d1c7 d10d 4c5a b9c4 87a82ae9773d png screen shot 2022 02 10 at 22 16 44 https user images githubusercontent com 37657480 153545624 e6de4f8f 0d46 40a4 81bc 22e21a3f1ff3 png to set up python packages and dependencies python3 m setup py install to run the cached self attention layer cd mingpt memgpt python3 mem selfattn separate py the benchmark results are printed on the console and saved as a csv file to the logs directory
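the kv-cache idea behind a cached self attention layer can be sketched in a few lines of numpy; the names and shapes here are my own illustration, not code from this repo. each decoding step appends the new token's key and value rows to a cache and attends only over that cache, so step t costs O(t) work instead of recomputing the full t x t attention matrix:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def full_attention(Q, K, V):
    """Standard causal self-attention: O(n^2) work for n tokens."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    scores[np.triu(np.ones((n, n), dtype=bool), k=1)] = -np.inf  # causal mask
    return softmax(scores) @ V

class CachedAttention:
    """Incremental attention with a key/value cache: O(t) work at step t."""
    def __init__(self, d):
        self.d = d
        self.K = np.zeros((0, d))
        self.V = np.zeros((0, d))

    def step(self, q, k, v):
        # append this token's key/value rows to the cache, then attend over it
        self.K = np.vstack([self.K, k[None, :]])
        self.V = np.vstack([self.V, v[None, :]])
        w = softmax(q @ self.K.T / np.sqrt(self.d))
        return w @ self.V

rng = np.random.default_rng(0)
n, d = 6, 4
Q, K, V = rng.normal(size=(3, n, d))

cached = CachedAttention(d)
incremental = np.stack([cached.step(Q[i], K[i], V[i]) for i in range(n)])
assert np.allclose(incremental, full_attention(Q, K, V))
```

the final assert checks that incremental cached attention reproduces the rows of full causal attention, which is exactly why a cache trades extra ram (storing K and V for all previous tokens) for per-step runtime.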
lm gpt inference runtime tradeoff
ai
Presist-Data-EF-Core-Mac
persist and retrieve relational data with entity framework core hello friend you ve found the sample code repository for a microsoft learn module https docs microsoft com learn modules persist data ef core you ll find the finished solution on this branch https github com microsoftdocs mslearn persist data ef core tree solution sample contributing this project welcomes contributions and suggestions most contributions require you to agree to a contributor license agreement cla declaring that you have the right to and actually do grant us the rights to use your contribution for details visit https cla opensource microsoft com when you submit a pull request a cla bot will automatically determine whether you need to provide a cla and decorate the pr appropriately e g status check comment simply follow the instructions provided by the bot you will only need to do this once across all repos using our cla this project has adopted the microsoft open source code of conduct https opensource microsoft com codeofconduct for more information see the code of conduct faq https opensource microsoft com codeofconduct faq or contact opencode microsoft com mailto opencode microsoft com with any additional questions or comments legal notices microsoft and any contributors grant you a license to the microsoft documentation and other content in this repository under the creative commons attribution 4 0 international public license https creativecommons org licenses by 4 0 legalcode see the license license file and grant you a license to any code in the repository under the mit license https opensource org licenses mit see the license code license code file microsoft windows microsoft azure and or other microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of microsoft in the united states and or other countries the licenses for this project do not grant you rights to use any microsoft names logos or trademarks microsoft s 
general trademark guidelines can be found at http go microsoft com fwlink linkid 254653 privacy information can be found at https privacy microsoft com en us microsoft and any contributors reserve all other rights whether under their respective copyrights patents or trademarks whether by implication estoppel or otherwise
server
theme-indomarket
h1 id argon design system indomarket free ecommerce bootstrap theme h1 p indomarket is a free ecommerce website template built on bootstrap 4 and argon design system https demos creative tim com argon design system p h2 id file structure file structure h2 p within the download you ll find the following directories and files p div class highlighter rouge div class highlight pre class highlight code indomarket changelog md license md readme md assets css argon css argon css map argon min css argon min css map img argon brand icons ill js argon js argon min js scss bootstrap custom argon scss vendor bootstrap bootstrap datepicker font awesome headroom jquery nouislider nucleo popper code pre div div h2 id browser support browser support h2 p at present we officially aim to support the last two versions of the following browsers p p img src https s3 amazonaws com creativetim bucket github browser chrome png width 64 height 64 img src https s3 amazonaws com creativetim bucket github browser firefox png width 64 height 64 img src https s3 amazonaws com creativetim bucket github browser edge png width 64 height 64 img src https s3 amazonaws com creativetim bucket github browser safari png width 64 height 64 img src https s3 amazonaws com creativetim bucket github browser opera png width 64 height 64 p h2 id licensing licensing h2 ul li p licensed under mit https github com gieart87 theme indomarket blob master license md p li ul
os
azureml-examples
page type sample languages azurecli python products azure machine learning description top level directory for official azure machine learning sample code and examples azure machine learning examples python code style black https img shields io badge code 20style black 000000 svg https github com psf black license mit https img shields io badge license mit purple svg license welcome to the azure machine learning examples repository contents directory description github github github files like issue templates and actions workflows cli cli azure machine learning cli v2 examples sdk sdk azure machine learning sdk v2 examples python sdk python azure machine learning python sdk v2 examples dotnet sdk dotnet azure machine learning net sdk v2 examples setup setup folder with setup scripts setup ci setup setup ci setup scripts to customize and configure an azure machine learning compute instance setupdsvm setup setup dsvm rstudio setup rstudio on data science virtual machine dsvm setup repo setup setup repo setup scripts for azure azureml examples tutorials tutorials azure machine learning end to end python sdk v2 tutorials contributing we welcome contributions and suggestions please see the contributing guidelines contributing md for details code of conduct this project has adopted the microsoft open source code of conduct https opensource microsoft com codeofconduct please see the code of conduct code of conduct md for details reference documentation https docs microsoft com azure machine learning
azure azureml ml azure-machine-learning data-science
ai
GeekLearningFreeRTOS
geeklearningfreertos geek michael freertos
os
Learning-Python-Physics-Informed-Machine-Learning-PINNs-DeepONets
learning piml in python hi i m juan diego toscano thanks for stopping by this repository will help you to get involved in the physics informed machine learning world in particular it includes several step by step guides on the basic concepts required to run and understand physics informed machine learning models from approximating functions solving and discovering ode pdes with pinns to solving parametric pdes with deeponets i reviewed some of these problems on my youtube channel so please watch them if you have time pinns youtube tutorial https youtu be axxnszmpyoi inverse pinns youtube tutorial https youtu be 77jchhtcbv0 pi deeponets youtube tutorial https youtu be ypnyvd9b js also if you are interested in pinns and machine learning please consider subscribing to the crunch group brown university youtube channel they upload weekly seminars on scientific machine learning https www youtube com channel uc2zzb80udkrvwq4n3a8dokq finally if you have any questions or if i can help you in some way please feel free to reach me at juan toscano brown edu note the examples in this repository were taken from deepxde library https deepxde readthedocs io en latest pinns repository 1 https github com omniscientoctopus physics informed neural networks tree main pytorch burgers 20equation pinns repository 2 https github com alexpapados physics informed deep learning solid and fluid mechanics deeponets repository 1 https github com predictiveintelligencelab physics informed deeponets references 1 raissi m perdikaris p karniadakis g e 2017 physics informed deep learning part i data driven solutions of nonlinear partial differential equations arxiv preprint arxiv 1711 10561 http arxiv org pdf 1711 10561v1 2 lu l meng x mao z karniadakis g e 1907 deepxde a deep learning library for solving differential equations 2019 url http arxiv org abs 1907 04502 https arxiv org abs 1907 04502 3 rackauckas chris introduction to scientific machine learning through physics informed neural networks
https book sciml ai notes 03 4 repository physics informed neural networks pinns https github com omniscientoctopus physics informed neural networks tree main pytorch burgers 20equation 5 raissi m perdikaris p karniadakis g e 2017 physics informed deep learning part ii data driven discovery of nonlinear partial differential equations arxiv preprint arxiv 1711 10566 https arxiv org abs 1711 10566 6 repository physics informed deep learning and its application in computational solid and fluid mechanics https github com alexpapados physics informed deep learning solid and fluid mechanics 7 lu l jin p karniadakis g e 2019 deeponet learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators arxiv preprint arxiv 1910 03193 8 wang s wang h perdikaris p 2021 learning the solution operator of parametric partial differential equations with physics informed deeponets science advances 7 40 eabi8605
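as a compact companion to the references above the core pinn objective from raissi et al can be written as a composite loss for a pde n u f on a domain with data or boundary points the notation below is the standard formulation not copied from this repository s code

```latex
% composite pinn loss: data/boundary mismatch plus pde residual, where the
% derivatives inside \mathcal{N}[u_\theta] come from automatic differentiation
\mathcal{L}(\theta) =
  \underbrace{\frac{1}{N_d}\sum_{i=1}^{N_d}
    \bigl| u_\theta(x_d^i) - u^i \bigr|^2}_{\text{data / boundary loss}}
  \;+\;
  \underbrace{\frac{1}{N_r}\sum_{j=1}^{N_r}
    \bigl| \mathcal{N}[u_\theta](x_r^j) - f(x_r^j) \bigr|^2}_{\text{pde residual loss}}
```

minimizing the first term fits the observations while minimizing the second forces the network to satisfy the governing equation at the collocation points which is what distinguishes a pinn from an ordinary regression model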
deep-learning machine-learning neural-network neural-networks pytorch tutorial physicsinformedneuralnetworks piins deeponet inverse-problems jax inverse-pinns
ai
Chess-Board-Recognition
chess board recognition this project highlights approaches taken to process an image of a chessboard and identify the configuration of the board using computer vision techniques although chessboard detection for camera calibration is a classic vision problem existing techniques for piece recognition work only under controlled environments the procedures are customized for a chosen colored chessboard and a particular set of pieces the methods used in this project supplement existing research by using clustering to segment the chessboard and pieces irrespective of color schemes for piece recognition the method introduces a novel approach of using an r cnn to train a robust classifier that works on different kinds of chessboard pieces the method performs better on different kinds of pieces as compared to a sift based classifier if extended this work could be useful in recording moves and training chess ai for predicting the best possible move for a particular chessboard configuration approach stack approach https github com sukritgupta17 chess board recognition blob master results approach png clusters obtained cluster1 https github com sukritgupta17 chess board recognition blob master results cluster1 png cluster2 https github com sukritgupta17 chess board recognition blob master results cluster2 png detected lines detected lines https github com sukritgupta17 chess board recognition blob master results detected 20lines png pieces extracted extracted pieces https github com sukritgupta17 chess board recognition blob master results pieces png recognition final recognition https github com sukritgupta17 chess board recognition blob master results deteected png note for more details refer to the report
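the clustering step that segments the board irrespective of color scheme can be illustrated with a tiny one dimensional k means over pixel intensities this is a python stand in written for illustration only the project itself is implemented in matlab and works on full color images

```python
# toy 1-d k-means over pixel intensities, illustrating the clustering idea
# used to separate dark and light regions without hard-coding a color scheme
# (illustrative python sketch; the actual project uses matlab on rgb images)
def kmeans_1d(values, k=2, iters=25):
    lo, hi = min(values), max(values)
    # spread the initial centers evenly across the intensity range
    centers = [lo + j * (hi - lo) / (k - 1) for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        # keep a center in place if its cluster happened to empty out
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# dark squares/pieces cluster near 0, light ones near 1
intensities = [0.05, 0.11, 0.08, 0.13, 0.91, 0.88, 0.95, 0.86]
```

because the centers adapt to whatever intensity range the image provides the same procedure works on boards with different color schemes which is the property the project relies on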
computer-vision chess board-game matlab alexnet neural-network
ai
blockchain-walkthrough
blockchain walkthrough this code was covered in python tutorial build a blockchain in 60 lines of code https medium com michaelchrupcala python tutorial build a blockchain 713c706f6531
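the heart of the tutorial linked above is a hash linked chain each block stores the hash of its predecessor so changing any block invalidates everything after it the sketch below uses hypothetical helper names and is not the article s exact code

```python
# minimal hash-linked blockchain sketch (illustrative; not the article's
# exact 60 lines): each block commits to its own data and to the previous
# block's hash, so tampering anywhere breaks the chain's validity.
import hashlib
import json

def block_hash(data, prev_hash):
    # deterministic digest over the block's contents
    payload = json.dumps({"data": data, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash,
            "hash": block_hash(data, prev_hash)}

def valid_chain(chain):
    # every block must link to its predecessor and match its own digest
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
        if cur["hash"] != block_hash(cur["data"], cur["prev_hash"]):
            return False
    return True

genesis = make_block("genesis", "0")
chain = [genesis, make_block("tx1", genesis["hash"])]
```

rewriting the data field of any block changes its recomputed digest so valid_chain immediately reports the tampering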
blockchain
library
library electronic information technology
server
blockchain--java
blockchain java java demo kotlin demo hash prehash websocket
java blockchain websocket
blockchain
todosapp
todosapp todosapp is a sample application that manages your to do items it is built primarily to explore the server side asynchronous programming in jvm web frameworks it is inspired by todomvc http todomvc com a browser side sample application to showcase mv javascript frameworks you can find more info in my article develop non blocking web applications in java https community oracle com docs doc 918126 jvm web frameworks in the sub projects we build the sample project using the following frameworks respectively java ee servlet and jax rs spring mvc with spring data and spring boot vert x 2 0 for java play 2 framework for java applications once running the web application can be accessed at its homepage http localhost 8080 there are actually three applications packaged in each sub project web application rendered at the server side http localhost 8080 todos restful web service providing crud operations of to do items http localhost 8080 api todos single page application a rich internet client rendered at the browser side http localhost 8080 spa this is a backbone js application backed by the restful web service at api todos java 8 all modules are implemented in java 8 taking advantage of the lambda expressions maven integration the build system is maven maven 3 2 2 or newer should be used older versions of maven can result in cdi linkage errors https jira codehaus org browse mng 5620 jql project 20in 20 maven 2c 20mng 20and 20text 20 20cdi to build all sub projects run mvn clean install each application can be deployed and run on its embedded application server with an in memory database directly from maven refer to the instruction in each project for how to run the application ide and netbeans the project is built using maven you can open it in any of your favorite ide if you open the module as a maven project in netbeans you can leverage netbeans integration with maven to run netbeans command clean build run and debug
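the non blocking style these sub projects explore can be illustrated with python s asyncio as a stand in for the jvm frameworks above while one handler awaits i o the event loop is free to serve other requests instead of parking a thread

```python
# minimal asyncio sketch of non-blocking request handling (an illustration
# of the idea, not code from the servlet/spring/vert.x/play sub-projects)
import asyncio

async def handle(request_id, delay):
    # simulate a non-blocking i/o call, e.g. fetching to-do items from a db
    await asyncio.sleep(delay)
    return f"todo-list for request {request_id}"

async def serve():
    # both requests are in flight at once; total wall time is roughly
    # max(delays), not their sum, because neither handler blocks a thread
    return await asyncio.gather(handle(1, 0.05), handle(2, 0.01))

results = asyncio.run(serve())
```

asyncio gather returns results in argument order even though the second request finishes first which mirrors how the async jvm frameworks let slow and fast requests overlap without extra threads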
front_end
alcoholic
alcoholic embedded system design project
os
dso-toolkit
this project is using percy io for visual regression testing https percy io static images percy badge svg https percy io dso toolkit dso toolkit npm version http img shields io npm v dso toolkit svg https npmjs org package dso toolkit view this project on npm build status master branch https img shields io travis com dso toolkit dso toolkit master https travis ci com dso toolkit dso toolkit slack chat https dso toolkit slack com slack chat invite link https join slack com t dso toolkit shared invite zt 58125gbo ftpaarcnu47rmgkt7kwika dso toolkit design system of the digitaal stelsel omgevingswet dso digitaal stelsel omgevingswet translated stands for digital system for the environment and planning act of the netherlands the dso toolkit consists of documentation and a style guide in addition two implementations are provided css and web components the web components for angular and react get wrappers see issue 915 getting started zie https www dso toolkit nl voor actuele documentatie npm registry npm install dso toolkit save dev bundle css import or bundle dso toolkit dist dso css cdn the toolkit and component library are distributed to dso toolkit nl use the table below to resolve the branch channel to the base url branch channel url master stable https cdn dso toolkit nl master tags only releases https cdn dso toolkit nl version the same goes for the component library branch channel url master stable https www dso toolkit nl master tags only releases https www dso toolkit nl version html link rel stylesheet href https cdn dso toolkit nl master version dso css for web components html script type module src https cdn dso toolkit nl master version core dso toolkit esm js script script nomodule src https cdn dso toolkit nl master version core dso toolkit js script the referenced scripts are very small only the actually used web components are lazy loaded for more information https stenciljs com docs distribution develop or mockups to work on the dso toolkit using 
components and variants or create mockups of pages forms or components you need node 18 and yarn see contributing md contributing md on how to contribute either install yarn with npm install global yarn or use yarn with npx npx yarn my commands here git clone git github com dso toolkit dso toolkit git cd dso toolkit yarn install environments depending on the work being done development can be done in several environments development this environment is used to develop new components in storybook storybook is built around stories and since this project has multiple storybooks one for each implementation the easiest way to start this environment is with one of the following commands yarn start yarn start react yarn start angular yarn start all this will run the corresponding storybook s since these commands contain a colon these commands can be run from anywhere in the project the following processes are started default css in watch mode stencil in watch mode storybook and cypress react css in watch mode stencil in watch mode storybook for react components react css in watch mode stencil in watch mode storybook for angular components all css in watch mode stencil in watch mode storybook and storybook for react and angular components this will start stencil on http localhost 45333 storybook on http localhost 45000 and the cypress gui since stencil and storybook are running it s possible to develop the components but keep in mind the tests run in a production environment this means no stencil development tools like hmr leaflet development of leaflet plugins is package transcendent run the following command from root yarn start leaflet yarn start react leaflet this will start stencil http localhost 45333 and storybook http localhost 45000 in production no live reload hmr and the leaflet plugins development environment on http localhost 41234 or the react leaflet development environment on http localhost 42345 requirements node 18 for development on the dso toolkit you 
also need yarn ports ports used during development 41234 leaflet plugins dev app 42345 react leaflet plugins dev app 43300 docusaurus 45333 stencil 45000 storybook for html css web components 56406 storybook for react components 46006 storybook for angular components
os
corecvs
corecvs computer vision primitives library please refer to wiki for more information wiki https github com pimenovalexander corecvs wiki build status main build build status for master cmake https github com pimenovalexander corecvs tree master cmake branch with opencv built with ccpp yaml https github com pimenovalexander corecvs blob master cmake github workflows ccpp yaml https github com pimenovalexander corecvs workflows cmake ubuntu badge svg branch master cmake no opencv build build status for master cmake https github com pimenovalexander corecvs tree master cmake branch without opencv built with ubuntu no opencv yml https github com pimenovalexander corecvs blob master cmake github workflows ubuntu no opencv yml https github com pimenovalexander corecvs workflows cmake ubuntu no opencv badge svg branch master cmake maintainer work branch https github com pimenovalexander corecvs workflows cmake ubuntu badge svg branch apimenov quad
c-plus-plus computer-vision
ai
ProtoThreadsForArduino
protothreads for arduino protothreads are extremely lightweight stackless threads designed for severely memory constrained systems such as small embedded systems or wireless sensor network nodes protothreads provide linear code execution for event driven systems implemented in c protothreads can be used with or without an underlying operating system to provide blocking event handlers protothreads provide sequential flow of control without complex state machines or full multi threading main features very small ram overhead only two bytes per protothread and no extra stacks highly portable the protothreads library is 100 pure c and no architecture specific assembly code can be used with or without an os provides blocking wait without full multi threading or stack switching freely available under a bsd like open source license related links protothreads http dunkels com adam pt index html download of the original protothreads on which this version is based http dunkels com adam pt download html publications http dunkels com adam pt publications html documentation http dunkels com adam pt pt 1 4 refman protothreads has been developed by adam dunkels http dunkels com adam the original protothreads implementation by adam dunkels has been adapted to the arduino platform and is maintained by benjamin soelberg the arduino implementation of protothreads is based on version 1 4 of the original protothreads
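the stackless idea maps naturally onto python generators this is an analogy for illustration only the real library is pure c built from macros each yield records a resume point much as a protothread stores its local continuation in two bytes and hands control back to a cooperative scheduler

```python
# protothread-style cooperative scheduling sketched with python generators
# (an analogy, not the c library): each "thread" keeps only its resume
# point, no per-thread stack, and `yield` plays the role of a blocking
# wait, returning control to the scheduler until the condition holds.
def blinker(name, period, log):
    tick = 0
    while True:
        # "block" until the tick counter reaches a period boundary,
        # yielding back to the scheduler on every non-matching tick
        while tick % period != 0:
            tick = yield
        log.append((name, tick))
        tick = yield

def scheduler(threads, ticks):
    # cooperative round-robin: every protothread advances once per tick
    for t in threads:
        next(t)  # prime each generator up to its first yield
    for tick in range(1, ticks + 1):
        for t in threads:
            t.send(tick)

log = []
threads = [blinker("fast", 2, log), blinker("slow", 3, log)]
scheduler(threads, 6)
```

each blinker fires on its own period without owning a stack or an os thread which is exactly the blocking-wait-without-multithreading behavior the c macros provide on the arduino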
os
cloud
cloud using google cloud software engineering class
cloud
SCA-Cloud-School-Application
sca cloud school application sca cloud engineering
cloud
mad1819_group17_project
delivery app project appetito appetito logo https github com vintop95 mad1819 group17 project blob develop logo mad1819 group17 project png project of mobile application development polito 2018 19 of the group 17 developed with android sdk in java more info on mad1819 assignment pdf and all lab pdf assignment https github com vintop95 mad1819 group17 project blob develop mad1819 assignment pdf lab1 https github com vintop95 mad1819 group17 project blob develop lab1 pdf lab2 https github com vintop95 mad1819 group17 project blob develop lab2 pdf lab3 https github com vintop95 mad1819 group17 project blob develop lab3 pdf lab4 https github com vintop95 mad1819 group17 project blob develop lab4 pdf lab5 https github com vintop95 mad1819 group17 project blob develop lab5 pdf presentation with embedded videos https drive google com file d 1xr6m5hpojb74zfzsrlo5swcbf83mmba5 view usp sharing files db scheme pdf model of firebase db authors copyright 2019 group 17 mobile application development polito 2018 19 s252117 cavalcanti piero https github com pdipedro25 s253177 laudani angelo https github com angelolaudani s251372 mieuli valerio https github com valeriomieuli s253137 vincenzo topazio https github com vintop95 license licensed under the apache license version 2 0 the license you may not use this file except in compliance with the license you may obtain a copy of the license at http www apache org licenses license 2 0 unless required by applicable law or agreed to in writing software distributed under the license is distributed on an as is basis without warranties or conditions of any kind either express or implied see the license for the specific language governing permissions and limitations under the license
front_end
events-app
campfire eventsapp a loyola university chicago project for comp 322 software development for wireless and mobile devices the project is a mobile app named campfire purpose campfire is a mobile app where a user can meet other people in their local area who are interested in attending the same events as them it also allows users to meet other people based on just their similar interests the application allows a person to create an account find events in the area based on their interests create an event to organize meetups with other people who would want to attend and it also helps users find new interests and see who share the same interests and if those interests have upcoming events campfire was designed in the hopes of connecting people locally through finding others interested in the same activities to attend together or if users simply want to connect just through shared interests tech we used the mobile application development framework apache cordova to develop our app and we used android studio for access to an emulator getting started to run the project you must have apache cordova and android studio installed to install android studio the instructions can be found at https developer android com studio then you must follow the instructions at https developer android com studio run managing avds to choose an emulator to install apache cordova the instructions can be found at https cordova apache org docs en 8 x guide platforms android index html once apache cordova and android studio are installed you must connect them so you may run the project on an emulator the instructions to connect apache cordova and android studio can also be found at https cordova apache org docs en 8 x guide platforms android index html our mobile app runs best on a nexus 6p emulator with api 23 authors jeremy aranguren developer css js design cordova plugins nora bonifas developer html css js design firebase maria gocal kappos developer html css xml design user interface user 
experience linette maliakal developer html css js json design firebase janeen soria developer html css design user interface user experience
front_end
design-systems
design systems a list of famous design systems design languages and guidelines name link 1 material design by google https material io 2 atlassian design language https atlassian design 3 fluent design system by microsoft https www microsoft com design fluent 4 help scout guidelines https style helpscout com visual elements 5 apple human interface guidelines https developer apple com design human interface guidelines 6 polaris by shopify https polaris shopify com 7 carbon design system by ibm https www carbondesignsystem com 8 ant design https ant design docs spec introduce 9 anvil design system by servicetitan https anvil servicetitan com 10 alta ui by oracle https www oracle com webfolder ux middleware alta index html 11 audi brand appearance https www audi com ci en intro brand appearance html 12 uber brand guidelines https brand uber com guide 13 u s web design system uswds https designsystem digital gov 14 australian government design system https designsystem gov au 15 bolt design system https boltdesignsystem com 16 clarity design system by vmware https clarity design 17 spectrum by adobe https spectrum adobe com 18 welcome ui http welcome ui com 19 zendesk garden https garden zendesk com 20 yelp styleguide https www yelp com styleguide 21 vercel design https vercel com design 22 barista https barista dynatrace com 23 biings design system https biings design 24 asphalt design language https asphalt gojek io 25 evergreen by segment https evergreen segment com 26 feelix by myob https feelix myob com 27 at ui https at ui github io at ui 28 elastic ui https elastic github io eui 29 etrade design language https etrade design 30 aurora design system by government of canada https design gccollab ca 31 palette by artsy https palette artsy net 32 elementary os human interface guidelines https elementary io en docs human interface guidelines 33 nsw design system https www digital nsw gov au digital design system 34 comet https comet discoveryeducation com 35 code for
america http style codeforamerica org 36 mongodb design system http mongodb design 37 marvel app style guide https marvelapp com styleguide 38 cnvs ui system https mesosphere github io cnvs 39 wonderbly design system http design system lostmy name 40 lonely planet style guide https rizzo lonelyplanet com styleguide design elements colours 41 lexicon by liferay https liferay design lexicon 42 duet design system https www duetds com 43 cedar design system https rei github io rei cedar docs 44 future learn design system https design system futurelearn com 45 fish tank by bloomberg https fishtank bna com 46 louder than ten https manual louderthanten com 47 ray by wework https ray wework com 48 lightning design system by salesforce https lightningdesignsystem com 49 braid design system by seek https github com seek oss braid design system 50 mailchimp pattern library https ux mailchimp com patterns buttons 51 workday canvas design system https design workday com 52 vue design system https vueds com 53 thumbprint design system https thumbprint design 54 paste by twilio https paste twilio design 55 vanilla framework by canonical web team https vanillaframework io 56 university of melbourne design system https web unimelb edu au 57 stacks by stack overflow https stackoverflow design product guidelines using stacks 58 starbucks pattern library https www starbucks com developer pattern library 59 sky toolkit https www sky com toolkit 60 ratio by rambler https ui kit rambler ru 61 radix by modulz https radix modulz app docs getting started 62 pulse by heartbeat agency https pulse heartbeat ua 63 predix design system https www predix ui com 64 nationbuilder radius https www nationbuilder design 65 finastra design system https design fusionfabric cloud 66 bumbag ui kit https bumbag style 67 solar design system by bulb https design bulb co uk getting started introduction 68 blip design system by take net https design take net 69 bold design system https bold bridge ufsc br en 70 
mesh design system https www meshdesignsystem com 71 fluid design system by engie design https www engie design fluid design system 72 fiori design guidelines by sap https experience sap com fiori design web 73 priceline one design system https priceline github io 74 sonnat design system https www sonnat dev 75 base web by uber https baseweb design 76 reverb by rockcontent https reverb rockcontent com 77 primer design system by github https primer style 78 fast by microsoft https www fast design 79 mindsphere design system by siemens https design mindsphere io 80 w3c design system https design system w3 org 81 wanda design system https design wonderflow ai 82 priceline design system https priceline github io 83 nord design system https nordhealth design 84 pajamas design system by gitlab https design gitlab com 85 neopop components library based on cred s design system https github com cred club neopop web 86 gamut design system by codecademy https gamut codecademy com 87 neumorphism ui kit by themesberg https demo themesberg com neumorphism ui 88 vibe design system by monday https style monday com 89 cloudscape design system by aws https cloudscape design
design-systems design-system design-language design-guideline ui-library pattern-library ui-kit component-library
os
react-sdk
p align center img width 60 src docs media reactsdk logo png span style color red coming soon release 23 1 10 and release 8 8 20 span new features in both releases type definition support dx component builder improvements enhanced token storage security easier configuration bug fixes as pega infinity trade and constellation ui architecture evolve the constellation sdks including the react sdk need to evolve with them until now we have been able to support both infinity 8 8 and the infinity 23 the latest infinity version with the same react sdk code release 8 23 11 however staying aligned with the infinity and constellation versions has led to additional configuration tweaks that have made getting started with the react sdk more error prone than we want therefore in the coming weeks we will be introducing separate react sdk releases for infinity 23 and infinity 8 8 environments release sdk r v23 1 10 is only compatible with pega infinity 23 if you are currently using pega infinity 23 with react sdk v8 23 11 you should prepare to update your sdk to sdk r v23 1 10 to take advantage of the latest sdk enhancements and fixes the release sdk r v8 8 20 is only compatible with pega infinity 8 8 if you are currently using pega infinity 8 8 with react sdk 8 23 11 you should prepare to update your sdk to sdk r v8 8 20 to take advantage of the latest sdk enhancements and fixes information about these new releases will be included on this react sdk updates https docs pega com bundle constellation sdk page constellation sdks sdks react sdk updates html page which details all of the react sdk releases br hr react sdk release announcement v8 23 11 this main branch is the latest version of the react sdk it is intended to be used with infinity 8 8 0 if you need to use infinity 8 7 please use the release 8 8 10 branch instead of this main branch this version of the react sdk provides many new features that are documented here what s new in the sdk https docs pega com bundle
constellation sdk page constellation sdks sdks react sdk updates html and also outlined below the following list shows some of the key features and changes in this 8 23 11 release added localization support you can now implement localization in your custom and overridden sdk components updated the lint settings in the dx component builder to enable publishing custom components with lint errors or warnings you can modify the lint setting lintaction in the sdk config json file from show to block to disable publishing components with lint errors or warnings updated support for the following npm packages pega react sdk components v8 23 11 pega react sdk overrides v8 23 11 pega dx component builder sdk v8 23 16 for more information about the react sdk components and react sdk overrides packages and enhancements and bug fixes in the packages click here https github com pegasystems react sdk components blob master packages react sdk components doc keyreleaseupdates md important please follow the guidelines documented here if you are upgrading from a previous version of react sdk upgrading the sdk https docs pega com bundle constellation sdk page constellation sdks sdks upgrading sdk html if you want to continue using the previous release you can checkout release 8 8 10 https github com pegasystems react sdk tree release 8 8 10 overview the react sdk provides pega customers with the ability to build dx components that connect pega s constellationjs engine apis with a design system other than the pega constellation design system https design pega com the react sdk differs from out of the box constellation design system because it provides and demonstrates the use of a react design system that is not the pega constellation design system the alternative design system used in this react sdk is material ui https mui com a summary of the latest updates to the pega react sdk components and pega react sdk overrides used by the react sdk can be found in react sdk components 
keyreleaseupdates md node modules pega react sdk components lib doc keyreleaseupdates md br prerequisites pega infinity server and constellation enabled application this version of the react sdk assumes that you have access to a pega infinity server 8 8 0 running an application that is configured to run using the constellation ui service if you need to use infinity 8 7 please use the release 8 8 10 branch instead of this main branch the mediaco sample application is already configured as a constellation application and can be found in the react sdk download associated with this repo which is available at https community pega com marketplace components react sdk https community pega com marketplace components react sdk the oauth 2 0 client registration records associated with the mediaco application are available in the same react sdk download for more information about the mediaco sample application see mediaco sample application https docs pega com bundle constellation sdk page constellation sdks sdks mediaco sample application html the react sdk has been tested with node 18 12 1 18 13 0 npm 8 19 2 8 19 3 future updates to the sdk will support more recent lts versions of node as constellation supports them before installing and running the sdk code refer to downloading the constellation sdk files https docs pega com bundle constellation sdk page constellation sdks sdks installing constellation sdks html for steps to prepare your infinity server and node environment so you can proceed with the steps in the next section br installing and running the application the following procedures provide an overview of installing constellation sdks and running the application for detailed documentation see installing and configuring constellation sdks https docs pega com bundle constellation sdk page constellation sdks sdks installing configuring constellation sdks html developing with the sdks you can find more details on how to integrate the latest react sdk into your 
development workflow and also instructions on using the new dx component builder for sdk features see development https docs pega com bundle constellation sdk page constellation sdks sdks development html troubleshooting stuck look at our troubleshooting constellation sdks https docs pega com bundle constellation sdk page constellation sdks sdks troubleshooting constellation sdks html which covers resolutions for most of the common problems license this project is licensed under the terms of the apache 2 license you can see the full license here license or directly on apache org https www apache org licenses license 2 0 br contributing we welcome contributions to the react sdk project refer to our guidelines for contributors docs contributing md if you are interested in contributing to the project br additional resources keyreleaseupdates md node modules pega react sdk components lib doc keyreleaseupdates md a summary of the latest updates to the pega react sdk components and pega react sdk overrides used by the react sdk can be found in the react sdk components package s keyreleaseupdates md node modules pega react sdk components lib doc keyreleaseupdates md to see if there are updates in the pega react sdk components and pega react sdk overrides packages published in a newer version than is currently installed you can check the package s main github repo s keyreleaseupdates md https github com pegasystems react sdk components blob master packages react sdk components doc keyreleaseupdates md material ui https v4 mui com constellation sdks documentation https docs pega com bundle constellation sdk page constellation sdks sdks constellation sdks html troubleshooting constellation sdks https docs pega com bundle constellation sdk page constellation sdks sdks troubleshooting constellation sdks html mediaco sample application https docs pega com bundle constellation sdk page constellation sdks sdks mediaco sample application html
infinity pega
os
Awesome-VisDrone
awesome visdrone you can find the latest data algorithms paper in the area of drone based computer vision dataset visdrone the visdrone2019 dataset is collected by the aiskyeye team at lab of machine learning and data mining tianjin university china the benchmark dataset consists of 288 video clips formed by 261 908 frames and 10 209 static images captured by various drone mounted cameras covering a wide range of aspects including location taken from 14 different cities separated by thousands of kilometers in china environment urban and country objects pedestrian vehicles bicycles etc and density sparse and crowded scenes note that the dataset was collected using various drone platforms i e drones with different models in different scenarios and under various weather and lighting conditions these frames are manually annotated with more than 2 6 million bounding boxes of targets of frequent interests such as pedestrians cars bicycles and tricycles some important attributes including scene visibility object class and occlusion are also provided for better data utilization http aiskyeye com uav123 video captured from low altitude uavs is inherently different from video in popular tracking datasets like otb50 otb100 vot2014 vot2015 tc128 and alov300 therefore we propose a new dataset uav123 with sequences from an aerial viewpoint a subset of which is meant for long term aerial tracking uav20l our new uav123 dataset contains a total of 123 video sequences and more than 110k frames making it the second largest object tracking dataset after alov300 all sequences are fully annotated with upright bounding boxes https ivul kaust edu sa pages dataset uav123 aspx dota dota v1 5 contains 0 4 million annotated object instances within 16 categories which is an updated version of dota v1 0 both of them use the same aerial images but dota v1 5 has revised and updated the annotation of objects where many small object instances about or below 10 pixels that were missed in dota v1 0 
have been additionally annotated the categories of dota v1 5 are also extended concretely the category of container crane is added https captain whu github io doai2019 dataset html uavdt selected from 10 hours raw videos about 80 000 representative frames are fully annotated with bounding boxes as well as up to 14 kinds of attributes e g weather condition flying altitude camera view vehicle category and occlusion for three fundamental computer vision tasks object detection single object tracking and multiple object tracking https sites google com site daviddo0323 projects uavdt inria aerial image labeling dataset the inria aerial image labeling addresses a core topic in remote sensing the automatic pixelwise labeling of aerial imagery dataset features coverage of 810 km2 405 km2 for training and 405 km2 for testing aerial orthorectified color imagery with a spatial resolution of 0 3 m ground truth data for two semantic classes building and not building publicly disclosed only for the training subset the images cover dissimilar urban settlements ranging from densely populated areas e g san francisco s financial district to alpine towns e g lienz in austrian tyrol https project inria fr aerialimagelabeling isaid isaid is a benchmark dataset for instance segmentation in aerial images this large scale and densely annotated dataset contains 655 451 object instances for 15 categories across 2 806 high resolution images the distinctive characteristics of isaid are the following a large number of images with high spatial resolution b fifteen important and commonly occurring categories c large number of instances per category d large count of labelled instances per image which might help in learning contextual information e huge object scale variation containing small medium and large objects often within the same image f imbalanced and uneven distribution of objects with varying orientation within images depicting real life aerial conditions g several small size objects with
ambiguous appearance can only be resolved with contextual reasoning h precise instance level annotations carried out by professional annotators cross checked and validated by expert annotators complying with well defined guidelines https captain whu github io isaid index html prai 1581 to facilitate the research of person reid in aerial imagery we collect a large scale airborne person reid dataset named as person reid for aerial imagery prai 1581 which consists of 39 461 images of 1581 person identities the images of the dataset are shot by two dji consumer uavs flying at an altitude ranging from 20 to 60 meters above the ground which covers most of the real uav surveillance scenarios https github com stormyoung prai 1581 vrai dataset the vrai dataset is split into the training set and testing set among which the training set contains 66 113 images with 6 302 ids and the test set contains 71 500 images with 6 720 ids besides we subsample a subset from the testing set as the testing dev set the testing dev set contains 20 percent of the images of the testing set https github com jiaobl1234 vrai dataset webuav 3m webuav 3m contains over 3 3 million frames across 4 500 videos and offers 223 highly diverse target categories each video is densely annotated with bounding boxes by an efficient and scalable semiautomatic target annotation sata pipeline importantly to take advantage of the complementary superiority of language and audio we enrich webuav 3m by innovatively providing both natural language specifications and audio descriptions https github com 983632847 webuav 3m dut vtuav nearly 1 7 million well aligned rgb t image pairs with 500 sequences for unveiling the power of rgb t tracking the largest rgb t tracking benchmark so far https github com zhang pengyu dut vtuav dronevehicle the dronevehicle dataset consists of a total of 56 878 images collected by the drone half of which are rgb images and the rest are infrared images we have made rich annotations with oriented bounding
boxes for the five categories among them car has 389 779 annotations in rgb images and 428 086 annotations in infrared images truck has 22 123 annotations in rgb images and 25 960 annotations in infrared images bus has 15 333 annotations in rgb images and 16 590 annotations in infrared images van has 11 935 annotations in rgb images and 12 708 annotations in infrared images and freight car has 13 400 annotations in rgb images and 17 173 annotations in infrared image this dataset is available on the download page https github com visdrone dronevehicle dronecrowd dronecrowd formed by 112 video clips with 33 600 high resolution frames i e 1920x1080 captured in 70 different scenarios with intensive amount of effort our dataset provides 20 800 people trajectories with 4 8 million head annotations and several video level attributes in sequences https github com visdrone dronecrowd animaldrone a large scale video based animal counting dataset collected by drones animaldrone for agriculture and wildlife protection the dataset consists of two subsets i e animaldrone parta that are captured on site by our own drones and animaldrone partb that are collected from internet totally there are 53 644 frames with more than 4 million object annotations and several attributes i e density altitude and view https github com visdrone animaldrone dronergbt a drone based rgb thermal crowd counting dataset dronergbt that consists of 3600 pairs of images and covers different attributes including height illumination and density https github com visdrone dronergbt multidrone multi drone single object tracking mdot dataset that consists of 92 groups of video clips with 113 918 high resolution frames taken by two drones and 63 groups of video clips with 145 875 high resolution frames taken by three drones https github com visdrone multidrone mdmt to address the critical challenges of identity association and target occlusion in multi drone multi target tracking tasks we collect an occlusion 
aware multi drone multi target tracking dataset named mdmt it contains 88 video sequences with 39 678 frames including 11 454 different ids of persons bicycles and cars https github com visdrone multi drone multi object detection and tracking paper object detection pengfei zhu longyin wen dawei du xiao bian heng fan qinghua hu haibin ling detection and tracking meet drones challenge ieee transactions on pattern analysis and machine intelligence 44 11 2021 7380 7399 gui song xia xiang bai jian ding zhen zhu serge j belongie jiebo luo mihai datcu marcello pelillo liangpei zhang dota a large scale dataset for object detection in aerial images cvpr 2018 3974 3983 dawei du yuankai qi hongyang yu yifan yang kaiwen duan guorong li weigang zhang qingming huang qi tian the unmanned aerial vehicle benchmark object detection and tracking european conference on computer vision eccv 2018 object tracking pengfei zhu longyin wen dawei du xiao bian heng fan qinghua hu haibin ling detection and tracking meet drones challenge ieee transactions on pattern analysis and machine intelligence 44 11 2021 7380 7399 dawei du yuankai qi hongyang yu yifan yang kaiwen duan guorong li weigang zhang qingming huang qi tian the unmanned aerial vehicle benchmark object detection and tracking european conference on computer vision eccv 2018 segmentation syed waqas zamir aditya arora akshita gupta salman khan guolei sun fahad shahbaz khan fan zhu ling shao gui song xia xiang bai isaid a large scale dataset for instance segmentation in aerial images corr abs 1905 12886 2019 emmanuel maggiori yuliya tarabalka guillaume charpiat and pierre alliez can semantic labeling methods generalize to any city the inria aerial image labeling benchmark ieee international geoscience and remote sensing symposium igarss 2017 crowd analysis challenge results visdrone det2018 the vision meets drone object detection in image challenge results eccv workshops 5 2018 437 468 visdrone sot2018 the vision meets drone single 
object tracking challenge results eccv workshops 5 2018 469 495 visdrone vdt2018 the vision meets drone video detection and tracking challenge results eccv workshops 5 2018 496 518 others code
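the detection datasets listed above ship plain text annotations for visdrone det each line of an image s annotation file is a comma separated record bbox left bbox top bbox width bbox height score object category truncation occlusion this field order is taken from the visdrone toolkit documentation so verify it against your copy of the data a minimal python parser sketch with illustrative helper names

```python
import csv
from io import StringIO

# Assumed VisDrone-DET field order (check the toolkit docs for your release):
# <bbox_left>,<bbox_top>,<bbox_width>,<bbox_height>,<score>,<object_category>,<truncation>,<occlusion>
FIELDS = ["left", "top", "width", "height", "score", "category", "truncation", "occlusion"]

def parse_annotations(text):
    """Parse one image's annotation file into a list of per-object dicts."""
    objects = []
    for row in csv.reader(StringIO(text)):
        if len(row) < len(FIELDS):
            continue  # skip blank or malformed lines
        objects.append(dict(zip(FIELDS, map(int, row[:len(FIELDS)]))))
    return objects

sample = "684,8,273,116,0,0,0,0\n406,119,265,70,1,4,0,1\n"
boxes = parse_annotations(sample)
print(boxes[1]["category"], boxes[1]["width"])  # -> 4 265
```

slicing the row to the first eight fields also tolerates the trailing commas that appear in some annotation files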
ai
Sigma-Web-Dev-Course
welcome to sigma s web development course hindi web development tutorials what s inside if you ve been itching to dive into the world of web development but feel lost in a sea of english tutorials you re in the right place our course is exclusively in hindi and is crafted to guide you from being an absolute beginner to a seasoned pro one step at a time who can benefit this course is a perfect fit for 1 beginners eager to start their web development journey 2 intermediate developers looking to refine their skills 3 individuals who prefer learning in hindi what you ll master during this course you ll delve into 1 the fundamentals of html css and javascript 2 both front end and back end development 3 how to seamlessly integrate databases 4 real world project implementation 5 and a whole lot more the schedule we re committed to your growth expect fresh source code additions nearly every day keep up the pace with our schedule and watch your skills soar get ready to embark on an exciting coding journey ready to start click here https www youtube com playlist list plu0w 9lii9agq5trh9xlikqvv0iaf2x3w to access the complete youtube playlist
front_end
NLP
nlp road 1 math foundation 1 math foundation tangyudi study 163 machine learning tangyudi net163 0 math 2 machine learning 1 machine learning andrew ng coursera machine learning andrew ng coursera 2 machine learning tangyudi study 163 machine learning tangyudi net163 3 deep learning 1 deep learning specialization andrew ng coursera deep learning andrew ng coursera course link https www coursera org specializations deep learning 2 deep learning limu deep learning limu mxnet course link https space bilibili com 209599371 channel detail cid 23541 gru lstm 3 tools tensorflow examples tensorflow examples tensorflow2 in deeplearning tensorflow2 and deep learning net163 tensorflow in practice specialization coursera tensorflow in practice specialization andrew ng coursera tutorial 4 nlp courses 1 nlp course kaikeba artificial intelligence for nlp 2 first intro to nlp fastai first intro to nlp fastai 3 nlp daniel jurafsky stanford natural language processing daniel jurafsky stanford 4 nlp with deep learning cs224n stanford natural language processing with deep learning cs224n stanford 5 algorithm 1 algorithms design and analysis stanford algorithms design and analysis stanford 2 design of computer programs cs212 udacity design of computer programs cs212 udacity udacity advanced 6 references references 1 references 2 hands on tensorflow references hands on tensorflow 3 references 4 fluent python references fluent python 5 references 7 projects and competitions kaggle readme md
machine-learning nlp math
ai
ZeroToBlockchain
zerotoblockchain tutorial on getting started with blockchain on ibm bluemix please use the readme md or readme pdf files in each chapter for the latest documentation for that chapter base concept is chapter 1 what is blockchain concept and architecture overview chapter 2 what s the story we ll implement chapter 2 1 architecture for the story chapter 3 creating the blockchain development environment chapter03 readme md chapter 4 building your first network chapter04 readme md chapter 5 building the admin user experience chapter05 readme md chapter 6 building the buyer user experience chapter06 readme md chapter 7 building the seller user experience chapter07 readme md chapter 8 building the provider user experience chapter08 readme md chapter 9 building the shipper user experience chapter09 readme md chapter 10 building the finance company user experience chapter10 readme md chapter 11 building the unified user experience chapter11 readme md chapter 12 events chapter12 readme md chapter 13 deploying a demo on bluemix kubernetes chapter13 readme md chapter 14 debugging hyperledger composer inside docker chapter14 readme md
blockchain hyperledger-composer hyperledger-fabric nodejs tutorial
blockchain
dezoom_dbt_cloud
dezoom dbt cloud repository to connect to dbt cloud google big query for data engineering zoomcamp
cloud
AI-Blog-Writter
ai blog writter this project is used to generate a blog post using ai the ai model is made using a gpt 2 trained model and hugging face transformers it generates a blog post with a specified number of words it uses the hugging face transformers pipeline to generate words along with the gpt 2 model requirements helpful documentation nlp basics https www analyticsvidhya com blog 2021 02 basics of natural language processing nlp basics hugging face transformers https huggingface co gpt 2 model https openai com blog gpt 2 1 5b release code download code as zip or git clone https github com manthan89 py ai blog writter git main credits and video tutorial youtube https www youtube com watch v chymmt1sqn8 github https github com nicknochnack thank you very much nicholas renotte sir final note bug fixing raise an issue if you find any code error or anything else happy to hear your suggestions about this project feel free to give a star to this repository thank you very much for visiting stay safe and stay healthy
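the generation flow described above hugging face pipeline plus gpt 2 trimmed to a requested word count can be sketched as below the function and stub names are illustrative not the repo s actual code only the pipeline text generation call is the standard transformers api

```python
def trim_to_words(text, max_words):
    """Cut generated text down to at most max_words whitespace-separated words."""
    return " ".join(text.split()[:max_words])

def write_blog(prompt, max_words, generator=None):
    """Generate a blog post from a prompt, trimmed to max_words words.

    generator: any callable returning transformers-style output, i.e. a list
    of dicts with a "generated_text" key. In the real project it would be
    transformers.pipeline("text-generation", model="gpt2").
    """
    if generator is None:
        from transformers import pipeline  # heavy import, so done lazily
        generator = pipeline("text-generation", model="gpt2")
    result = generator(prompt, max_length=max_words * 2, num_return_sequences=1)
    return trim_to_words(result[0]["generated_text"], max_words)

# Demo with a stub generator so it runs without downloading the model:
stub = lambda prompt, **kw: [{"generated_text": prompt + " one two three four five"}]
print(write_blog("hello world", 4, generator=stub))  # -> hello world one two
```

injecting the generator keeps the sketch testable offline while the default path uses the real gpt 2 pipeline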
gpt-2 transformer huggingface huggingface-transformers blog-writing nlp huggingface-transformer huggingface-transformers-pipeline blog gpt-2-text-generation text-generation pipel pipeline
ai
mystore-front-end
this project was bootstrapped with create react app https github com facebook create react app available scripts in the project directory you can run npm start runs the app in the development mode br open http localhost 3000 http localhost 3000 to view it in the browser the page will reload if you make edits br you will also see any lint errors in the console npm test launches the test runner in the interactive watch mode br see the section about running tests https facebook github io create react app docs running tests for more information npm run build builds the app for production to the build folder br it correctly bundles react in production mode and optimizes the build for the best performance the build is minified and the filenames include the hashes br your app is ready to be deployed see the section about deployment https facebook github io create react app docs deployment for more information npm run eject note this is a one way operation once you eject you can t go back if you aren t satisfied with the build tool and configuration choices you can eject at any time this command will remove the single build dependency from your project instead it will copy all the configuration files and the transitive dependencies webpack babel eslint etc right into your project so you have full control over them all of the commands except eject will still work but they will point to the copied scripts so you can tweak them at this point you re on your own you don t have to ever use eject the curated feature set is suitable for small and middle deployments and you shouldn t feel obligated to use this feature however we understand that this tool wouldn t be useful if you couldn t customize it when you are ready for it learn more you can learn more in the create react app documentation https facebook github io create react app docs getting started to learn react check out the react documentation https reactjs org code splitting this section has moved here https facebook 
github io create react app docs code splitting analyzing the bundle size this section has moved here https facebook github io create react app docs analyzing the bundle size making a progressive web app this section has moved here https facebook github io create react app docs making a progressive web app advanced configuration this section has moved here https facebook github io create react app docs advanced configuration deployment this section has moved here https facebook github io create react app docs deployment npm run build fails to minify this section has moved here https facebook github io create react app docs troubleshooting npm run build fails to minify
front_end
Sound-Master
sound master simple database management via java and hibernate software engineering project diagram soundmasterclassdiagram bmp
database-management hibernate
server
nlp-tutorials
nlp tutorials tutorials for beginners for natural language processing
ai
awesome-polarization-in-vision
awesome polarization in vision a collection of polarization based models in computer vision including shape from polarization polarization based reflection removal table of contents papers and code papers and code books books blogs blogs lecture videos datasets datasets workshops workshops researchers researchers papers and code a curated set of papers along with code polarization in 3d computer vision high res facial appearance capture from polarized smartphone images 2023 cvpr pdf https arxiv org pdf 2212 01160 pdf sparse ellipsometry portable acquisition of polarimetric svbrdf and shape with unstructured flash photography 2022 siggraph pdf https dl acm org doi pdf 10 1145 3528223 3530075 code https github com kaist vclab sparseellipsometry git perspective phase angle model for polarimetric 3d reconstruction 2022 eccv pdf https arxiv org pdf 2207 09629v2 shape from polarization for complex scenes in the wild 2022 cvpr code https github com chenyanglei sfp wild pandora polarization aided neural decomposition of radiance 2022 pdf pandora polarization aided neural decomposition of radiance transparent shape from single polarization images 2022 pdf https arxiv org pdf 2204 06331v4 polarimetric helmholtz stereopsis 2021 iccv polarimetric normal stereo 2021 cvpr shape from sky polarimetric normal recovery under the sky 2021 cvpr deep polarization imaging for 3d shape and svbrdf acquisition 2021 cvpr polarimetric monocular dense mapping using relative deep depth prior 2021 ral p2d a self supervised method for depth estimation from polarimetry 2020 paper https arxiv org pdf 2007 07567 pdf dataset deep shape from polarization 2020 yunhao ba alex ross gilbert franklin wang jinfa yang rui chen yiqin wang lei yan boxin shi achuta kadambi pdf https arxiv org abs 1903 10210 website https visual ee ucla edu deepsfp htm dataset depth from a polarisation rgb stereo pair 2019 zhu dizhong and smith william ap pdf https arxiv org abs 1903 12061 code https github com amoszhu cvpr2019 
mirror surface reconstruction using polarization field 2019 icip polarimetric relative pose estimation 2019 cui et al polarimetric monocular dense slam 2018 yang et al video https www bilibili com video bv1ql41177sd shape from polarisation a nonlinear least squares approach 2017 pdf https openaccess thecvf com content iccv 2017 workshops papers w43 yu shape from polarisation a nonlinear iccv 2017 paper pdf code https github com waps101 polarisation optimisation depth from stereo polarization in specular scenes for urban robotics 2017 polarisation photometric stereo 2017 polarimetric three view geometry 2018 polarimetric multi view stereo 2017 shape and light directions from shading and polarization 2015 polarized 3d high quality depth sensing with polarization cues 2015 direct method for shape recovery from polarization and shading 2012 shape and refractive index recovery from single view polarisation images 2010 shape estimation using polarization and shading from two views 2007 what is the range of surface reconstructions from a gradient field 2006 recovery of surface orientation from diffuse polarization 2006 polarization imaging applied to 3d reconstruction of specular metallic surfaces 2005 transparent surface modeling from a pair of polarization images 2004 polarization based inverse rendering from a single view 2003 separating reflections and lighting using independent components analysis 1999 polarization in image enhancement polarization guided hdr reconstruction via pixel wise depolarization 2023 tip polarization aware low light image enhancement 2023 aaai learning to dehaze with polarization 2021 neurips polarized reflection removal with perfect alignment in the wild 2020 chenyang lei xuhua huang mengdi zhang qiong yan wenxiu sun and qifeng chen pdf https cqf io papers polarized reflection removal cvpr2020 pdf code https github com chenyanglei cvpr2020 polarized reflection removal with perfect alignment cvpr dataset reflection separation using a pair of 
unpolarized and polarized images 2019 image dehazing using polarization effects of objects and airlight 2014 a physically based approach to reflection separation from physical modeling to constrained optimization 2013 clear underwater vision 2004 polarization in image segmentation and detection glass segmentation using intensity and spectral polarization cues 2022 cvpr pdf https openaccess thecvf com content cvpr2022 papers mei glass segmentation using intensity and spectral polarization cues cvpr 2022 paper pdf code https github com mhaiyang cvpr2022 pgsnet multimodal material segmentation 2022 cvpr pdf https vision ist i kyoto u ac jp pubs yliang cvpr22 pdf code https github com kyotovision public multimodal material segmentation deep snapshot hdr reconstruction based on the polarization camera 2021 icip pdf https arxiv org abs 2105 05824 deep polarization cues for transparent object segmentation 2020 dataset hdr reconstruction based on the polarization camera 2020 a new multimodal rgb and polarimetric image dataset for road scenes analysis 2020 dataset http pagesperso litislab fr rblin databases outdoor scenes pixel wise semantic segmentation using polarimetry and fully convolutional network 2019 dataset road scenes analysis in adverse weather conditions by polarization encoded images and adapted deep learning 2019 adapted learning for polarization based car detection 2019 polarization in other tasks polarized optical flow gyroscope 2020 eccv monochrome and color polarization demosaicking using edge aware residual interpolation 2020 face anti spoofing by learning polarization cues in a real world scenario 2020 survey of demosaicking methods for polarization filter array images 2018 simultaneous acquisition of polarimetric svbrdf and normals 2018 polarization imaging reflectometry in the wild 2017 enhanced material classification using turbulence degraded polarimetric imagery 2010 polarization phase based method for material classification in computer vision 1998
books field guide to polarization https www spiedigitallibrary org ebooks fg field guide to polarization eisbn 9780819478207 10 1117 3 626141 blogs lecture videos datasets workshops researchers contributions
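most of the shape from polarization and reflection separation papers above start from the same measurement model intensities captured behind linear polarizers at 0 45 90 and 135 degrees give the linear stokes parameters from which degree and angle of linear polarization follow a minimal per pixel sketch using the standard textbook formulas not taken from any one paper

```python
import math

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle intensities."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # horizontal vs vertical component
    s2 = i45 - i135                     # diagonal component
    return s0, s1, s2

def dolp_aolp(i0, i45, i90, i135):
    """Degree and angle (radians) of linear polarization for one pixel."""
    s0, s1, s2 = linear_stokes(i0, i45, i90, i135)
    dolp = math.hypot(s1, s2) / s0 if s0 > 0 else 0.0
    aolp = 0.5 * math.atan2(s2, s1)
    return dolp, aolp

# Fully polarized light aligned with the 0-degree polarizer:
print(dolp_aolp(1.0, 0.5, 0.0, 0.5))  # -> (1.0, 0.0)
```

in real pipelines the four intensities come from a division of focal plane polarization camera after demosaicking which is what several of the surveyed papers address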
ai
twitter
# Project 3 - Twitter

**Twitter** is a basic Twitter app to read and compose tweets using the [Twitter API](https://apps.twitter.com).

Time spent: **16** hours spent in total

## User Stories

The following **required** functionality is completed:

- [x] User sees app icon in home screen and styled launch screen.
- [x] User can sign in using OAuth login flow.
- [x] User can log out.
- [x] User can view last 20 tweets from their home timeline.
- [x] In the home timeline, user can view tweet with the user profile picture, username, tweet text, and timestamp.
- [x] User can pull to refresh.
- [x] User can tap the retweet and favorite buttons in a tweet cell to retweet and/or favorite a tweet.
- [x] User can compose a new tweet by tapping on a compose button.
- [x] Using AutoLayout, the tweet cell should adjust its layout for iPhone 11 Pro and SE device sizes, as well as accommodate device rotation.
- [x] User should display the relative timestamp for each tweet ("8m", "7h").
- [x] Tweet details page: user can tap on a tweet to view it, with controls to retweet and favorite.

The following **optional** features are implemented:

- [ ] User can view their profile in a profile tab.
  - [ ] Contains the user header view: picture and tagline.
  - [ ] Contains a section with the user's basic stats: # tweets, # following, # followers.
  - [ ] Profile view should include that user's timeline.
- [x] User should be able to unretweet and unfavorite, and should decrement the retweet and favorite counts. Refer to this guide on unretweeting for help on implementing unretweeting.
- [x] Links in tweets are clickable.
- [x] User can tap the profile image in any tweet to see another user's profile.
  - [ ] Contains the user header view: picture and tagline.
  - [ ] Contains a section with the user's basic stats: # tweets, # following, # followers.
- [ ] User can load more tweets once they reach the bottom of the feed using infinite loading similar to the actual Twitter client.
- [ ] When composing, you should have a countdown for the number of characters remaining for the tweet (out of 280). (1 point)
- [ ] After creating a new tweet, a user should be able to view it in the timeline immediately without refetching the timeline from the network.
- [ ] User can reply to any tweet; replies should be prefixed with the username, and the reply_id should be set when posting the tweet. (2 points)
- [ ] User sees embedded images in tweet, if available.
- [ ] User can switch between timeline, mentions, or profile view through a tab bar. (3 points)
- [ ] Profile page: pulling down the profile page should blur and resize the header image. (4 points)

The following **additional** features are implemented:

- [x] The UI shows which tweets are retweeted and favorited by the user, even when the app is reloaded.
- [x] If a tweet was posted more than a week ago, the tweet cell displays the date it was posted instead of how long ago it was posted.

Please list two areas of the assignment you'd like to discuss further with your peers during the next class (examples include better ways to implement something, how to extend your app in certain ways, etc.):

1. I'm still having trouble understanding when, where, and why to implement a delegate.
2. I'd like to figure out how to display images that are embedded in the tweets.

## Video Walkthrough

Here's a walkthrough of implemented user stories:

![twitter-1](twitter-1.gif)
![twitter-2](twitter-2.gif)
![twitter-3](twitter-3.gif)
![twitter-4](twitter-4.gif)
![twitter-5](twitter-5.gif)

GIFs created with [Kap](https://getkap.co).

## Notes

AutoLayout was hard to perfect, especially with the UITextView.

## Credits

List any 3rd-party libraries, icons, graphics, or other assets used in the app:

- [AFNetworking](https://github.com/AFNetworking/AFNetworking) - networking task library

## License

Copyright 2021 Marin Hyatt

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
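The relative-timestamp behavior described in the user stories above ("8m", "7h", and a calendar date once a tweet is more than a week old) is language-agnostic. The app itself is an iOS project, so the following is only an illustrative sketch of the same logic in Python; the exact date format string is an assumption.

```python
from datetime import datetime, timedelta

def relative_timestamp(created_at, now=None):
    """Format a tweet's age the way the timeline cell does:
    seconds, minutes, hours, or days for recent tweets, and
    the calendar date once the tweet is over a week old."""
    now = now or datetime.utcnow()
    delta = now - created_at
    if delta > timedelta(weeks=1):
        # assumed date format; the app could use any short style
        return created_at.strftime("%m/%d/%y")
    seconds = int(delta.total_seconds())
    if seconds < 60:
        return f"{seconds}s"
    if seconds < 3600:
        return f"{seconds // 60}m"
    if seconds < 86400:
        return f"{seconds // 3600}h"
    return f"{seconds // 86400}d"
```

The same branching structure maps directly onto a Swift computed property on the tweet model.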
# TopCodes

The TopCode computer vision library is designed to quickly and easily identify and track tangible objects on a flat surface. Just tag any physical object with a TopCode (a circular black-and-white symbol), and the system will return:

- An ID number
- The location of the tag
- The angular orientation of the tag
- The diameter of the tag

The TopCode library will identify 99 unique codes and can accurately recognize codes as small as 25 x 25 pixels. The image processing algorithms work in a variety of lighting conditions without the need for human calibration. The core TopCode library is available in JavaScript, Java, Android, Dart, and C# (thanks to Raveh Gonen).

Pros:

- Free and open source
- Fast and accurate
- Works in a variety of lighting conditions
- Can recognize up to 99 unique codes in a single image

Cons:

- The camera must be orthogonal to the interaction surface
- Requires programming knowledge to use

## Quick Start Guide for JavaScript

To load up TopCodes in your browser, start by downloading the [topcodes.js](https://raw.githubusercontent.com/TIDAL-Lab/TopCodes/master/javascript/topcodes.js) library. Then create a simple HTML file in the same directory as the library file. You can copy this code:

```html
<!DOCTYPE html>
<html>
<head>
  <title>TopCodes Example</title>
</head>
<body>
  <canvas id="video-canvas" width="800" height="600" style="background: #ddd"></canvas>
  <br>
  <button id="camera-button" onclick="TopCodes.startStopVideoScan('video-canvas')">Start / Stop</button>
  <script src="topcodes.js"></script>
</body>
</html>
```

Load this file into a browser (Chrome or Firefox) and try pressing the Start / Stop button. If everything works, a video stream should open from your built-in web camera. Next, try printing out some [TopCodes](https://github.com/TIDAL-Lab/TopCodes/blob/master/topcodes.pdf) and holding them in front of the camera. You should see TopCode symbols being drawn over the sheet of paper in the video stream.

Now we can actually do something with the TopCodes from the video stream. First we have to define a callback function that will receive JSON data, frame by frame, from the video stream. Add this script tag at the end of your HTML file, right before the closing `</body>` tag:

```html
<script>
  // register a callback function with the TopCode library
  TopCodes.setVideoFrameCallback("video-canvas", function(jsonString) {
    // convert the JSON string to an object
    var json = JSON.parse(jsonString);
    // get the list of topcodes from the JSON object
    var topcodes = json.topcodes;
    // obtain a drawing context from the <canvas>
    var ctx = document.querySelector("#video-canvas").getContext('2d');
    // draw a circle over the top of each topcode
    ctx.fillStyle = "rgba(255, 0, 0, 0.3)";   // very translucent red
    for (var i = 0; i < topcodes.length; i++) {
      ctx.beginPath();
      ctx.arc(topcodes[i].x, topcodes[i].y, topcodes[i].radius, 0, Math.PI * 2, true);
      ctx.fill();
    }
  });
</script>
```

If this works, you'll see a pinkish circle drawn over each of the TopCode symbols found in the video stream. Each TopCode is a JSON object with the following structure:

```json
{ "code" : 31, "x" : 35.0, "y" : 87.0, "radius" : 52, "angle" : 0.135032 }
```

## Quick Start Guide for Java

To get started with the Java TopCode library, download and install the Java JDK, and extract the TopCode library on your local machine. An easy way to get started is to use the TopCode debugger app. Start by opening a shell and changing to the directory where you installed the library. Then run this command:

```
java -cp lib/topcodes.jar topcodes.DebugWindow
```

This allows you to test the library on an image. If there are any JPEG images in your working directory, they will be loaded automatically. The basic key commands are:

- `Ctrl-O`: open a JPEG file
- `+`: zoom in
- `-`: zoom out
- `b`: see the image after thresholding
- `t`: show/hide TopCode highlighting
- `Page Up`: load the next image in the directory
- `Page Dn`: load the previous image in the directory

Clicking and dragging with the mouse will pan the image. All of the TopCode ID numbers will be printed on the command line each time an image is loaded.

## Other Computer Vision Libraries

You might try ARToolKit from the HITLab at the University of Washington, reacTIVision from the Music Technology Group at Pompeu Fabra University in Barcelona, Cantag
and TinyTag from the University of Cambridge in the UK, or Vuforia from Qualcomm.

## References

The TopCode library was developed by Michael Horn at Tufts University and Northwestern University. The library is based on TRIP from the University of Cambridge in the UK, and on adaptive thresholding techniques developed by Pierre Wellner.

## Comments / Feedback

Please send comments, suggestions, and bug fixes to Michael Horn (michael-horn at northwestern dot edu).

## Valid TopCodes

This is a list of valid TopCode ID numbers (binary pattern, then decimal ID):

```
0 0000 0001 1111 = 31      0 0000 0010 1111 = 47      0 0000 0011 0111 = 55
0 0000 0011 1011 = 59      0 0000 0011 1101 = 61      0 0000 0100 1111 = 79
0 0000 0101 0111 = 87      0 0000 0101 1011 = 91      0 0000 0101 1101 = 93
0 0000 0110 0111 = 103     0 0000 0110 1011 = 107     0 0000 0110 1101 = 109
0 0000 0111 0011 = 115     0 0000 0111 0101 = 117     0 0000 0111 1001 = 121
0 0000 1000 1111 = 143     0 0000 1001 0111 = 151     0 0000 1001 1011 = 155
0 0000 1001 1101 = 157     0 0000 1010 0111 = 167     0 0000 1010 1011 = 171
0 0000 1010 1101 = 173     0 0000 1011 0011 = 179     0 0000 1011 0101 = 181
0 0000 1011 1001 = 185     0 0000 1100 0111 = 199     0 0000 1100 1011 = 203
0 0000 1100 1101 = 205     0 0000 1101 0011 = 211     0 0000 1101 0101 = 213
0 0000 1101 1001 = 217     0 0000 1110 0011 = 227     0 0000 1110 0101 = 229
0 0000 1110 1001 = 233     0 0000 1111 0001 = 241     0 0001 0000 1111 = 271
0 0001 0001 0111 = 279     0 0001 0001 1011 = 283     0 0001 0001 1101 = 285
0 0001 0010 0111 = 295     0 0001 0010 1011 = 299     0 0001 0010 1101 = 301
0 0001 0011 0011 = 307     0 0001 0011 0101 = 309     0 0001 0011 1001 = 313
0 0001 0100 0111 = 327     0 0001 0100 1011 = 331     0 0001 0100 1101 = 333
0 0001 0101 0011 = 339     0 0001 0101 0101 = 341     0 0001 0101 1001 = 345
0 0001 0110 0011 = 355     0 0001 0110 0101 = 357     0 0001 0110 1001 = 361
0 0001 0111 0001 = 369     0 0001 1000 0111 = 391     0 0001 1000 1011 = 395
0 0001 1000 1101 = 397     0 0001 1001 0011 = 403     0 0001 1001 0101 = 405
0 0001 1001 1001 = 409     0 0001 1010 0011 = 419     0 0001 1010 0101 = 421
0 0001 1010 1001 = 425     0 0001 1011 0001 = 433     0 0001 1100 0101 = 453
0 0001 1100 1001 = 457     0 0001 1101 0001 = 465     0 0010 0010 0111 = 551
0 0010 0010 1011 = 555     0 0010 0010 1101 = 557     0 0010 0011 0011 = 563
0 0010 0011 0101 = 565     0 0010 0011 1001 = 569     0 0010 0100 0111 = 583
0 0010 0100 1011 = 587     0 0010 0100 1101 = 589     0 0010 0101 0011 = 595
0 0010 0101 0101 = 597     0 0010 0101 1001 = 601     0 0010 0110 0011 = 611
0 0010 0110 0101 = 613     0 0010 0110 1001 = 617     0 0010 1000 1011 = 651
0 0010 1000 1101 = 653     0 0010 1001 0011 = 659     0 0010 1001 0101 = 661
0 0010 1001 1001 = 665     0 0010 1010 0011 = 675     0 0010 1010 0101 = 677
0 0010 1010 1001 = 681     0 0010 1100 1001 = 713     0 0011 0001 1001 = 793
0 0011 0010 0101 = 805     0 0011 0010 1001 = 809     0 0011 0100 1001 = 841
0 0100 1001 0011 = 1171    0 0100 1001 0101 = 1173    0 0100 1010 0101 = 1189
```

(99 codes)
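A pattern worth noting in the ID list above: every valid ID is a 13-bit value with exactly five 1 bits that is the smallest among its 13 bit-rotations, which is what lets the circular symbol decode to the same ID at any orientation, and there are exactly C(13,5)/13 = 1287/13 = 99 such values. This characterization is inferred from the list rather than stated in the README; a sketch in Python (the library itself ships in JavaScript, Java, Dart, and C#):

```python
def is_valid_topcode(code, bits=13, ones=5):
    """Check whether `code` is a valid TopCode ID: a 13-bit
    pattern with exactly five 1 bits that is the smallest value
    among all 13 of its bit rotations (inferred property, not
    an official API)."""
    mask = (1 << bits) - 1
    if code >> bits or bin(code).count("1") != ones:
        return False
    # all cyclic rotations of the 13-bit pattern
    rotations = (((code >> i) | (code << (bits - i))) & mask for i in range(bits))
    return code == min(rotations)
```

For example, 31 (five consecutive 1 bits) is valid, while 62, a rotation of the same bit pattern, is not, because 31 is the canonical (minimal) rotation.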
# Embedded System Design

- Lab 1: Setting up the board and resource testing
- Lab 2: Show an image with the framebuffer on the board
- Lab 3: Live streaming
- Lab 4: Marquee
- Final: Face detection
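For lab 2, displaying an image via the framebuffer usually means converting pixels to the display's native format first. Below is a hedged sketch in Python, assuming a 16-bit RGB565 framebuffer exposed at `/dev/fb0`; both the pixel format and the device path vary by board, so check `fbset` or the board documentation before relying on these assumptions.

```python
import struct

def rgb888_to_rgb565(r, g, b):
    """Pack an 8-bit-per-channel pixel into the 16-bit RGB565
    layout used by many embedded framebuffers
    (5 bits red, 6 bits green, 5 bits blue)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def fill_screen(width, height, color, fb_path="/dev/fb0"):
    """Fill the display with one color by writing raw
    little-endian 16-bit pixels to the framebuffer device.
    The device path and pixel format are assumptions."""
    pixel = struct.pack("<H", rgb888_to_rgb565(*color))
    with open(fb_path, "wb") as fb:
        fb.write(pixel * (width * height))
```

Drawing a full image is the same idea: convert each source pixel and write the resulting bytes row by row, padding rows to the framebuffer's line length if it differs from `width * 2`.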
# aws-iot-securetunneling-localproxy

## AWS IoT Secure Tunneling Local Proxy Reference Implementation (C++)

Example C++ implementation of a local proxy for the AWS IoT Secure Tunneling service.

### License

This library is licensed under the Apache 2.0 License.

### Overview

This code enables tunneling of a single-threaded TCP client/server socket interaction through the IoT Secure Tunneling service. The code is targeted to run on Linux, Windows 7, and macOS. If your device does not meet these requirements, it is still possible to implement the underlying protocol documented in the protocol guide.

### Building the local proxy via Docker

Prerequisites:

- Docker 18+

#### Using pre-built Docker images

We provide several Docker images on various platforms. Both x86 and ARM are supported, though ARMv7 is currently limited to the Ubuntu images. There are two types of images: base images and release images.

The base images come with all dependencies pre-installed; you will still need to download and build the source. These are useful if you want to modify and compile the local proxy on your own, but they are large (~1 GB each). You can find them at:

- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/ubuntu-base (amd64, arm64, armv7)
- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/debian-base (amd64, arm64)
- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/amazonlinux-base (amd64, arm64)
- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/ubi8-base (amd64, arm64)
- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/fedora-base (amd64)

The release images are minimum-size images that include a pre-built binary with no dependencies installed. Every tag contains a git commit SHA, for example `33879dd7f1500f7b3e56e48ce8b002cd9b0f9e4e`. You can cross-check the git commit SHA with the commits in the local proxy repo to see if the binary contains changes added in a specific commit. You can find them at:

- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/ubuntu-bin (amd64, arm64, armv7)
- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/debian-bin (amd64, arm64)
- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/amazonlinux-bin (amd64, arm64)
- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/ubi8-bin (amd64, arm64)
- https://gallery.ecr.aws/aws-iot-securetunneling-localproxy/fedora-bin (amd64)

#### Building a Docker image

If you do not want to use the prebuilt images, you can build them yourself:

```
cd .github/docker-images/base-images/<os-of-choice>
docker build -t <your-tag> .
```

Or, for the Debian/Ubuntu combined Dockerfile:

```
docker build -t <your-tag> --build-arg OS=<debian|ubuntu>:latest .
```

To build cross-platform images for ARM:

```
docker buildx build --platform linux/arm64 -t <your-tag> .
```

You may also try armv7 for 32-bit images, but supported functionality may be limited.

After the docker build completes, run `docker run --rm -it <tag>` to open a shell inside the container created in the previous step. Because it may not make practical sense to SSH into a Docker container, you can transfer binaries by exposing your machine's filesystem to the containerized filesystem via bind mount. To bind-mount your physical machine's current directory:

```
docker run --rm -it -v $(pwd):/root <tag>
```

You can also add `-p <port-number>` to expose a port from the docker container. Note that when the local proxy runs in source mode, it binds by default to localhost. If you want to access the local proxy from outside the container, make sure to use the option `-b 0.0.0.0` when you run the local proxy from the container so that it binds to 0.0.0.0, since localhost cannot be accessed from outside the container.

Deprecated method: `docker_build.sh`

### Building the local proxy from source

Prerequisites:

- C++14 compiler
- CMake 3.6+
- Development libraries required:
  - Boost 1.81
  - Protobuf 3.17.x
  - zlib 1.2.13
  - OpenSSL 1.0 or OpenSSL 3
  - Catch2 test framework

Stage a dependency build directory and change directory into it:

```
mkdir dependencies
cd dependencies
```

The next steps should start from this directory, and return back to it.

1. Download
and install the zlib dependency.

   Note: this step may be simpler to complete via a native software application manager. Ubuntu example: `sudo apt install zlib1g`. Fedora example: `dnf install zlib`.

   ```
   wget https://www.zlib.net/zlib-1.2.13.tar.gz -O /tmp/zlib-1.2.13.tar.gz
   tar xzvf /tmp/zlib-1.2.13.tar.gz
   cd zlib-1.2.13
   ./configure
   make
   sudo make install
   ```

2. Download and install the Boost dependency.

   ```
   wget https://boostorg.jfrog.io/artifactory/main/release/1.81.0/source/boost_1_81_0.tar.gz -O /tmp/boost.tar.gz
   tar xzvf /tmp/boost.tar.gz
   cd boost_1_81_0
   ./bootstrap.sh
   sudo ./b2 install link=static
   ```

   If you want to install an older version of Boost, pass the version string through the CMake variable when compiling the local proxy: `-DBOOST_PKG_VERSION=<version>`.

3. Download and install the Protobuf dependency.

   ```
   wget https://github.com/protocolbuffers/protobuf/releases/download/v3.17.3/protobuf-all-3.17.3.tar.gz -O /tmp/protobuf-all-3.17.3.tar.gz
   tar xzvf /tmp/protobuf-all-3.17.3.tar.gz
   cd protobuf-3.17.3
   mkdir build
   cd build
   cmake ../cmake
   make
   sudo make install
   ```

   If you want to install an older version of Protobuf, pass the version string through the CMake variable when compiling the local proxy: `-DPROTOBUF_PKG_VERSION=<version>`.

4. Download and install the OpenSSL development libraries. We strongly recommend installing the OpenSSL development libraries using your native platform package manager, so the local proxy's integration with OpenSSL can use the platform's globally configured root CAs. Ubuntu example: `sudo apt install libssl-dev`. Fedora example: `dnf install openssl-devel`. Source install example:

   ```
   git clone https://github.com/openssl/openssl.git
   cd openssl
   git checkout OpenSSL_1_1_1-stable
   ./Configure linux-generic64
   make depend
   make all
   ```

   Run the `./Configure` command without any arguments to check the available platform configuration options, and see the documentation here: https://wiki.openssl.org/index.php/Compilation_and_Installation

5. Download and install the Catch2 test framework.

   ```
   git clone --branch v2.13.6 https://github.com/catchorg/Catch2.git
   cd Catch2
   mkdir build
   cd build
   cmake ..
   make
   sudo make install
   ```

### Download and build the local proxy

```
git clone https://github.com/aws-samples/aws-iot-securetunneling-localproxy
cd aws-iot-securetunneling-localproxy
mkdir build
cd build
cmake ..
make
```

On successful build, there will be two binary executables located at `bin/localproxy` and `bin/localproxytest`. You may choose to run `localproxytest` to ensure your platform is working properly. From here on, copy or distribute the `localproxy` binary as you please. The same source code is used for both source mode and destination mode; different binaries may be built if the source and destinations are on different platforms and/or architectures.

#### Harden your toolchain

We recommend configuring your compiler to enable all security features relevant to your platform and use cases. For additional information about security-relevant compiler flags, see https://www.owasp.org/index.php/C-Based_Toolchain_Hardening

#### Cross-compilation

CMake cross-compilation can be accomplished using the following general steps:

1. Acquire a cross-compiler toolchain for your target platform.
2. Create and configure a system root (sysroot) for your target platform.
3. Build and install dependencies into the sysroot of the target platform. Consult each dependency's documentation for guidance on how to cross-compile.
4. Build the local proxy.
5. Run the test executable, also built for your platform.

CMake can perform cross-compilation builds when it is given a toolchain file. Here is an example (filename: `raspberry-pi-3-b-plus.cmake.tc`):

```
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_SYSROOT /home/fedora/cross_builds/sysroots/arm-unknown-linux-gnueabihf)
set(tools /home/fedora/x-tools/arm-unknown-linux-gnueabihf)
set(CMAKE_C_COMPILER ${tools}/bin/arm-unknown-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER ${tools}/bin/arm-unknown-linux-gnueabihf-g++)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
```

To perform
the cross-compilation build, generate the build files by running `cmake -DCMAKE_TOOLCHAIN_FILE=raspberry-pi-3-b-plus.cmake.tc ..` and then `make` from a build directory.

Helpful links:

- https://crosstool-ng.github.io/ - crosstool-NG makes it convenient to build a toolchain, and to acquire and configure a system root
- https://wiki.osdev.org/Target_Triplet - consult this to understand your platform triplet

### Running the local proxy

The response of `OpenTunnel` via the AWS IoT Secure Tunneling management API is the acquisition of a pair of client access tokens used to connect two local proxy clients to the ends of the tunnel. One token is designated for the source local proxy, and the other is for the destination. They must be supplied with the matching local proxy run mode argument; otherwise, connecting to the service will fail. Additionally, the region parameter supplied to the local proxy must match the AWS region the tunnel was opened in. In a production configuration, delivery of one or both tokens and launching the local proxy process may be automated. The following sections describe how to run the local proxy on both ends of a tunnel.

#### Terms

- **V1 local proxy**: a local proxy that uses Sec-WebSocket-Protocol `aws.iot.securetunneling-1.0` when it communicates with the AWS IoT Tunneling service.
- **V2 local proxy**: a local proxy that uses Sec-WebSocket-Protocol `aws.iot.securetunneling-2.0` when it communicates with the AWS IoT Tunneling service.
- **Source local proxy**: a local proxy that runs in source mode.
- **Destination local proxy**: a local proxy that runs in destination mode.

#### Multi-port tunneling feature support

The multi-port tunneling feature allows more than one stream to be multiplexed over the same tunnel. This feature is only supported with the V2 local proxy. If you have some devices on the V1 local proxy and some on the V2 local proxy, simply upgrade the local proxy on the source device to the V2 local proxy. When a V2 local proxy talks to a V1 local proxy, backward compatibility is maintained; for more details, please refer to the section on backward compatibility.

#### Service identifier (service ID)

If you need to
use the multi-port tunneling feature, a service ID is needed to start the local proxy. A service identifier is the new format used to specify the source listening port or destination service when starting the local proxy; the identifier is like an alias for the source listening port or destination service. For the format requirements of a service ID, please refer to the AWS public doc [Services in DestinationConfig](https://docs.aws.amazon.com/iot/latest/apireference/API_iot-secure-tunneling_DestinationConfig.html). There is no restriction on how this service ID should be named, as long as it helps uniquely identify a connection or stream.

- Example 1: `SSH1`. You can use the format `<protocol name><connection number>`. For example, if two SSH connections need to be multiplexed over a tunnel, you can choose `SSH1` and `SSH2` as the service IDs.
- Example 2: `ae5957ef-d6e3-42a5-ba0c-edc667d2b3fb`. You can use a UUID to uniquely identify a connection/stream.
- Example 3: `ip-172-31-6-23.us-west-2.compute.internal`. You can use a remote host name to uniquely identify a stream.

#### Destination service and destination mode local proxy

The destination local proxy is responsible for forwarding application data received from the tunnel to the destination service. With the V1 local proxy, only one stream is allowed over the tunnel; with the V2 local proxy, more than one stream can be transferred at the same time. For more details, please read the section on multi-port tunneling feature support.

Example 1:

```
localproxy -r us-east-1 -d localhost:3389 -t <destination-client-access-token>
```

This is an example command to run the local proxy in destination mode, on a tunnel created in us-east-1, forwarding data packets received from the tunnel to a locally running application/service on port 3389.

Example 2:

```
localproxy -r us-east-1 -d HTTP1=80,SSH1=22 -t <destination-client-access-token>
```

This is an example command to run the local proxy in destination mode, on a tunnel created in us-east-1, forwarding data packets belonging to service ID `HTTP1` to a
locally running application/service on port 80, and data packets belonging to service ID `SSH1` to a locally running application/service on port 22.

We recommend starting the destination application or server before starting the destination local proxy, to ensure that when the local proxy attempts to connect to the destination port, it will succeed. When the local proxy starts in destination mode, it will first connect to the service, and then begin listening for a new connection request over the tunnel. Upon receiving a request, it will attempt to connect to the configured destination address and port. If successful, it will transmit data between the TCP connection and the tunnel bi-directionally. For a multiplexed tunnel, one connection dropping or connecting will not affect the other connections that share the same tunnel; all connections/streams in a multiplexed tunnel are independent.

#### Client application and source mode local proxy

The source local proxy is responsible for relaying application data to the tunnel. With the V1 local proxy, only one stream is allowed over the tunnel; with the V2 local proxy, more than one stream can be transferred at the same time. For more details, please read the section on multi-port tunneling feature support.

Example 1:

```
localproxy -r us-east-1 -s 3389 -t <source-client-access-token>
```

This is an example command to run the local proxy in source mode, on a tunnel created in us-east-1, waiting for a connection on port 3389.

Example 2:

```
localproxy -r us-east-1 -s HTTP1=5555,SSH1=3333 -t <source-client-access-token>
```

This is an example command to run the local proxy in source mode, on a tunnel created in us-east-1, waiting for a connection on port 5555 for service ID `HTTP1`, and waiting for a connection on port 3333 for service ID `SSH1`.

When the local proxy starts in source mode, it will first connect to the service, and then begin listening for a new connection on the specified port and bind address. While the local proxy is running, use the client application (e.g. RemoteDesktopClient, SSH
client) to connect to the source local proxy's listening port. After accepting the TCP connection, the local proxy will forward the connection request over the tunnel and immediately transmit the TCP connection data through the tunnel, bidirectionally. Source mode can manage more than one connection/stream at a time if the V2 local proxy is used.

If the established TCP connection is terminated for any reason, the local proxy will send a disconnect message over the tunnel, so the service or server running on the other side can react appropriately. Similarly, if a notification that a disconnect happened on the other side is received by the source local proxy, it will close the local TCP connection. Regardless of whether a local I/O failure occurred or a notification of a disconnect came from the tunnel, after the local TCP connection closes, the local proxy will begin listening again on the specified listen port and bind address. If a new connection request sent over the tunnel results in the remote (destination) side being unable to connect to a destination service, it will send a disconnect message back through the tunnel. The exact timing behavior of this depends on the TCP retry settings of the destination local proxy. For a multiplexed tunnel, one connection dropping or connecting will not affect the other connections that share the same tunnel; all connections/streams in a multiplexed tunnel are independent.

#### Stopping the local proxy process

The local proxy process can be stopped using various methods:

- Sending a SIGTERM signal to the process.
- Closing a tunnel explicitly via the `CloseTunnel` API. This will result in the local proxy dropping the connection to the service and exiting the process successfully.
- Letting the tunnel expire after its lifetime. This will result in the local proxy dropping the connection to the service and exiting the process successfully.

#### Backward compatibility

The V2 local proxy is able to communicate with the V1 local proxy if only one connection needs to be established over the tunnel. This means that when you open a tunnel, no
more than one service should be passed in the `services` list.

Example 1:

```
aws iotsecuretunneling open-tunnel --destination-config thingName=foo,services=SSH1,SSH2
```

In this example, two service IDs are used (`SSH1` and `SSH2`), so backward compatibility is not supported.

Example 2:

```
aws iotsecuretunneling open-tunnel --destination-config thingName=foo,services=SSH2
```

In this example, one service ID is used (`SSH2`), so backward compatibility is supported.

Example 3:

```
aws iotsecuretunneling open-tunnel
```

In this example, no service ID is used, so backward compatibility is supported.

#### HTTP proxy support

The local proxy relies on the HTTP tunneling mechanism described by the [HTTP/1.1 specification](https://datatracker.ietf.org/doc/html/rfc7231#section-4.3.6). To comply with the specification, your web proxy must allow devices to use the CONNECT method. For more details on how that works and how to configure it properly, please refer to [Configure local proxy for devices that use web proxy](https://docs.aws.amazon.com/iot/latest/developerguide/configure-local-proxy-web-proxy.html).

### Security considerations

#### Certificate setup

A likely issue with the local proxy running on Windows or macOS systems is the lack of native OpenSSL support and default configuration. This will prevent the local proxy from being able to properly perform TLS/SSL host verification with the service. To fix this, set up a certificate authority (CA) directory and direct the local proxy to use it via the `--capath <dir>` CLI argument.

1. Create a new folder or directory to store the root certificates that the local proxy can access, for example `D:\certs` on Windows.
2. Download the Amazon CA certificates for server authentication from [here](https://docs.aws.amazon.com/iot/latest/developerguide/server-authentication.html#server-authentication-certs).
3. Utilize the `c_rehash` script (for Windows) or the `openssl rehash` command (for macOS). This script is part of the OpenSSL development toolset.

   macOS:

   ```
   openssl rehash ./certs
   ```

   Windows example:

   ```
   D:\lib\openssl> set OPENSSL=D:\lib\openssl\apps\openssl.exe
   D:\lib\openssl> tools\c_rehash.pl d:\certs
   Doing d:/certs
   ```

   Note: the `c_rehash.pl` script on Windows does not seem to cooperate with spaces in the path for the `openssl.exe` executable.

After preparing this directory, point to it when running the local proxy with the `-c` argument option. Examples:

macOS:

```
./localproxy -r us-east-1 -s 3389 -c ./certs
```

Windows:

```
localproxy.exe -r us-east-1 -s 3389 -c D:\certs
```

#### Runtime environment

- Avoid using the `-t` argument to pass in the access token. We recommend setting the `AWSIOT_TUNNEL_ACCESS_TOKEN` environment variable to specify the client access token with the least visibility.
- Run the local proxy executable with the least privileges in the OS or environment.
  - If your client application normally connects to a port less than 1024, listening on that port would normally require running the local proxy with admin privileges. This can be avoided if the client application allows you to override the port to connect to: choose any available port greater than 1024 for the source local proxy to listen on without administrator access, then direct the client application to connect to that port. E.g., for connecting to a source local proxy with an SSH client, the local proxy can be run with `-s 5000`, and the SSH client should be run with `-p 5000`.
- On devices with multiple network interfaces, use the `-b` argument to bind the TCP socket to a specific network address, restricting the local proxy to only proxy connections on an intended network.
- Consider running the local proxy on separate hosts, containers, sandboxes, chroot jails, or a virtualized environment.

#### Access tokens

After the local proxy uses an access token, the token will no longer be valid without an accompanying client token. You can revoke an existing token and get a new valid token by calling [RotateTunnelAccessToken](https://docs.aws.amazon.com/iot/latest/apireference/API_iot-secure-tunneling_RotateTunnelAccessToken.html). Refer to the [developer guide](https://docs.aws.amazon.com/iot/latest/developerguide/iot-secure-tunneling-troubleshooting.html) for
troubleshooting connectivity issues that can be due to an invalid token.

#### Client tokens

The client token is an added security layer to protect the tunnel, ensuring that only the agent that generated the client token can use a particular access token to connect to the tunnel. Only one client token value may be present in the request; supplying multiple values will cause the handshake to fail.

- The client token is optional.
- The client token must be unique across all the open tunnels per AWS account.
- It is recommended to use a UUID to generate the client token. The client token can be any string that matches the regex `[a-zA-Z0-9-]{32,128}`.
- If a client token is provided, then the local proxy needs to pass the same client token for subsequent retries. (This is yet to be implemented in the current version of the local proxy.)
- If a client token is not provided, then the access token will become invalid after a successful handshake, and the local proxy won't be able to reconnect using the same access token.

The client token may be passed using the `-i` argument from the command line, or by setting the `AWSIOT_TUNNEL_CLIENT_TOKEN` environment variable.

#### IPv6 support

The local proxy uses IPv4 and IPv6 dynamically, based on how addresses are specified directly by the user or how they are resolved on the system. For example, if `localhost` resolves to `127.0.0.1`, then IPv4 is used to connect or as the listening address; if `localhost` resolves to `::1`, then IPv6 is used.

Note: specifying any argument that normally accepts an address and port will not work correctly if the address is specified using an IPv6 address.

Note: systems that support both IPv4 and IPv6 may cause connectivity confusion if explicit address and port combinations are not used with the local proxy, client application, or destination service. Each component may behave differently with respect to the supported IP stack and default behaviors; for instance, listening on the local IPv4 interface `127.0.0.1` will not accept connection attempts to the IPv6 loopback address `::1`. To add further complexity,
hostname resolution may hide that this is happening, and different tools may prefer different IP stacks. To help with this from the local proxy side, use verbose logging (the `-v 6` CLI argument) to inspect how hostname resolution is happening and examine the address format being output.

### Options set via command line arguments

Most command line arguments have both a long form (preceded by a double dash) and a short form (preceded by a single dash character); some commands only have a long form. Any options specified via command line arguments override values specified in both the config file and environment variables.

- `-h/--help`: Shows a help message and a short guide to all of the available CLI arguments on the console, then exits immediately.
- `-t/--access-token <argvalue>`: Specifies the client access token to use when connecting to the service. We do not recommend using this option, as the client access token will appear in shell history or in process listings that show full commands and arguments, and may unintentionally expose access to the tunnel. Use the environment variable, or set the option via the config input file, instead. An access token value must be found, supplied via one of those three methods.
- `-e/--proxy-endpoint <argvalue>`: Specifies an explicit endpoint to use to connect to the tunneling service. For some customers, this may point to a unique domain. You cannot specify this option and `-r/--region` together; either this or `--region` is required.
- `-r/--region <argvalue>`: Endpoint region where the tunnel exists. You cannot specify this option and `-e/--proxy-endpoint` together; either this or `--proxy-endpoint` is required.
- `-s/--source-listen-port <argvalue>`: Starts the local proxy in source mode and sets the mappings between service identifiers and listening ports, for example `SSH1=5555`, or just `5555`. It follows the format `serviceid1=port1,serviceid2=port2,...`. If only one port is needed to start the local proxy, a service identifier is not needed: you can simply pass the port to be used, for example `5555`. A mapping of `SSH1=5555` means that the local
proxy will start listening for requests on port 5555 for service ID `SSH1`. The service ID values, and how many service IDs are used, need to match the `services` in the `OpenTunnel` call. For example:

  ```
  aws iotsecuretunneling open-tunnel --destination-config thingName=foo,services=SSH1,SSH2
  ```

  Then, to start the local proxy in source mode, use `-s SSH1=port1,SSH2=port2`.
- `-d/--destination-app <argvalue>`: Starts the local proxy in destination mode and sets the mappings between ports and service identifiers, for example `SSH1=5555`, or just `5555`. It follows the format `serviceid1=endpoint1,serviceid2=endpoint2,...`. An endpoint can be `ip:port`, `port`, or `hostname:port`. If only one port is needed to start the local proxy, a service ID is not needed: you can simply pass the port to be used, for example `5555`. A mapping item of `SSH1=5555` means that the local proxy will forward data received from the tunnel to TCP port 5555 for service ID `SSH1`. The service ID values, and how many service IDs are used, need to match the `services` in the `OpenTunnel` call. For example:

  ```
  aws iotsecuretunneling open-tunnel --destination-config thingName=foo,services=SSH1,SSH2
  ```

  Then, to start the local proxy in destination mode, use `-d SSH1=port1,SSH2=port2`.
- `-b/--local-bind-address <argvalue>`: Specifies the local bind address (network interface) to use for listening for new connections when running the local proxy in source mode, or the local bind address to use when reaching out to the destination service when running in destination mode.
- `-c/--capath <argvalue>`: Specifies an additional directory path that contains root CAs used for SSL certificate verification when connecting to the service.
- `-k/--no-ssl-host-verify`: Directs the local proxy to disable host verification when connecting to the service. This option should not be used in production configurations.
- `--export-default-settings <argvalue>`: Specifies a file to write out all of the default fine-grained settings used by the local proxy, then exits immediately. This file can be modified and supplied as input to `--settings-json` to run the local proxy with non-default fine-grained settings.
- `--settings-json <argvalue>`: Specifies a file to read fine-grained settings for the local proxy to use to override hard-coded defaults. All of the settings need not be present; settings that do not exist are ignored passively.
- `--config <argvalue>`: Specifies a file to read command line arguments from. Actual command line arguments will overwrite the contents of the file if present in both.
- `-v/--verbose <argvalue>`: Specifies the verbosity of the output. The value must be between 0-255; however, meaningful values are between 0-6, where 0 = output off, 1 = fatal, 2 = error, 3 = warning, 4 = info (default), 5 = debug, 6 = trace. Any value greater than 6 will be treated the same (trace-level output).
- `-m/--mode <argvalue>`: Specifies the mode the local proxy will run in. Accepted values are: `src`, `source`, `dst`, `destination`.
- `--config-dir <argvalue>`: Specifies the configuration directory where service identifier mappings are configured. If this parameter is not specified, the local proxy will read configuration files from the default directory `config`, under the file path where the localproxy binary is located.

### Options set via --config

A configuration file can be used to specify any or all of the CLI arguments. If an option is set via both a config file and a CLI argument, the CLI argument value overrides. Here is an example file, named `config.ini`:

```
region = us-east-1
access-token = foobar
source-listen-port = 5000
```

The local proxy run command using this configuration:

```
./localproxy --config config.ini
```

is equivalent to running the local proxy command:

```
./localproxy -r us-east-1 -t foobar -s 5000
```

To illustrate composition between using a configuration file and actual CLI arguments, you could have a `config.ini` file with the following contents:

```
capath = /opt/rootca
region = us-west-2
local-bind-address = ::1
source-listen-port = 6000
```

and a local proxy launch command of:

```
./localproxy --config config.ini -t foobar
```

which is equivalent to running the local proxy command:

```
./localproxy -c /opt/rootca -r us-west-2 -b ::1 -s 6000 -t foobar
```

Note: service ID mappings should be configured by using the parameter
config dir not config options set via config dir if you want to start local proxy on fixed ports you can configure these mappings using configuration files by default local proxy will read from directory config under the file path where localproxy binary are located if you need to direct local proxy reads from specific file path use parameter config dir to specify the full path of the configuration directory you can put multiple files in this directory or organize them into the sub folders local proxy will read all the files in this directory and search for the port mapping needed for a tunnel connection note the configuration files will be read once when local proxy starts and will not be read again unless it is restarted sample configuration files on source device file name sshsource ini content example ssh1 3333 ssh2 5555 this example means service id ssh1 is mapped to port 3333 service id ssh2 is mapped to port 5555 sample configuration files on destination device example configuration file on destination device file name sshdestination ini content example ssh1 22 ssh2 10 0 0 1 80 this example means service id ssh1 is mapped to port 22 service id ssh2 is mapped to host with ip address 10 0 0 1 port 80 options set via environment variables there are a few environment variables that can set configuration options used by the local proxy environment variables have lowest priority in specifying options config and cli arguments will always override them awsiot tunnel access token if present specifies the access token for the local proxy to use awsiot tunnel endpoint if present specifies the aws iot secured tunneling proxy endpoint leave out e or proxy endpoint from cli arg still mutually exclusive with specifying r region and below environment variable awsiot tunnel region if present specifies the region the tunnel exists in allowing leaving out the r cli arg fine grained settings via settings json there are additional fine grained settings to control the behavior of 
the local proxy these settings are unlikely to need to be changed and unless necessary should be kept at their default values running localproxy export default settings lpsettings json will produce a file named lpsettings json containing the default values for all settings example contents tunneling proxy default bind address localhost message data length size 2 max payload size 64512 max size 65536 tcp connection retry count 5 connection retry delay ms 1000 read buffer size 131076 websocket ping period ms 5000 retry delay ms 2500 connect retry count 1 reconnect on data error true subprotocol aws iot securetunneling 1 0 max frame size 131076 write buffer size 131076 read buffer size 131076 after making edits to lpsettings json and saving the changes the following command will run the local proxy with the modified settings localproxy r us east 1 t foobar d localhost 22 settings json lpsettings json default bind address defines the default bind address used when the b bind address command line argument or option is not present address may be a hostname or ip address tunneling proxy tcp connection retry count when a failure occurs while trying to establish a tcp connection in destination mode this is the number of consecutive connection attempts to make before sending a notification over the tunnel that the connection is closed when running in source mode this will be the number of consecutive attempts made to bind and listen on on the tcp socket a value of 1 results in infinite retry tunneling proxy tcp connection retry delay ms defines how long to wait before executing a retry for tcp connection failures source or destination mode in milliseconds tunneling proxy websocket ping period ms defines the period in milliseconds between websocket pings to the aws iot tunneling service these pings may be necessary to keep the connection alive tunneling proxy websocket connect retry count when a failure occurs while trying to connect to the service outside of an http 4xx 
response on the handshake it may be retried based on the value of this property this is the number of consecutive attempts to make before failing and closing the local proxy any http 4xx response code on handshake does not retry a value of 1 results in infinite retry tunneling proxy websocket retry delay ms defines how long to wait before executing another retry to connect to the service in milliseconds tunneling proxy websocket reconnect on data error flag indicating whether or not to try to restablish connection to the service if an i o protocol handling or message parsing errors occur tunneling proxy message may payload size defines the maximum data size allowed to be carried via a single tunnel message the current protocol has a maximum value of 63kb 64512 bytes any two active peers communicating over the same tunnel must set this to the same value building local proxy on a windows follow instructions in here windows localproxy build md to build a local proxy on a windows environment limits for multiplexed tunnels bandwidth limits if the tunnel multi port feature is enabled multiplexed tunnels have the same bandwidth limit as non multiplexed tunnels this limit is mentioned in aws public doc https docs aws amazon com general latest gr iot device management html section aws iot secure tunneling row maximum bandwidth per tunnel the bandwidth for a multiplexed tunnel is the bandwidth consumed by all active streams that transfer data over the tunnel connection if you need this limit increased please reach out to aws support and ask for a limit increase service id limits there are limits on the maximum streams that can be multiplexed on a tunnel connection this limit is mentioned in aws public doc https docs aws amazon com general latest gr iot device management html section aws iot secure tunneling row maximum services per tunnel if you need this limit increased please reach out to aws support and ask for a limit increase load balancing in multiplexed streams if 
more than one stream is transferred at the same time local proxy will not load balance between these streams if you have one stream that is dominating the bandwidth the other streams sharing the same tunnel connection may see latency of data packet delivery
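The `-s`/`-d` mapping syntax documented above (`SSH1=5555,SSH2=10.0.0.1:80`, or a bare port as shorthand) can be sketched as a small parser. This is a hypothetical illustration of the documented format only, not code from the local proxy; the function name `parse_mappings` and the use of `None` as the key for the bare-port shorthand are assumptions.

```python
def parse_mappings(arg: str) -> dict:
    """Parse a comma-separated service-id mapping string such as
    'SSH1=5555,SSH2=10.0.0.1:80'. A bare port like '5555' carries no
    service id and is stored under the key None."""
    mappings = {}
    for item in arg.split(","):
        item = item.strip()
        if "=" in item:
            service_id, endpoint = item.split("=", 1)
            mappings[service_id] = endpoint
        else:
            # Shorthand: a single port with no service id.
            mappings[None] = item
    return mappings

print(parse_mappings("SSH1=5555,SSH2=10.0.0.1:80"))
# {'SSH1': '5555', 'SSH2': '10.0.0.1:80'}
print(parse_mappings("5555"))
# {None: '5555'}
```

Note that an endpoint value may itself contain a colon (`hostname:port`), which is why the split is on the first `=` only.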
server
streamlit-video-chat-example
# Streamlit video chat example

Video chat examples based on Streamlit, with [streamlit-webrtc](https://github.com/whitphx/streamlit-webrtc) and [streamlit-server-state](https://github.com/whitphx/streamlit-server-state).

[![Tests](https://github.com/whitphx/streamlit-video-chat-example/actions/workflows/tests.yml/badge.svg?branch=main)](https://github.com/whitphx/streamlit-video-chat-example/actions/workflows/tests.yml?query=branch%3Amain)

Sponsor links: [ko-fi](https://ko-fi.com/d1d2erwfg), [Buy Me a Coffee](https://www.buymeacoffee.com/whitphx), [GitHub Sponsors](https://github.com/sponsors/whitphx)

![Example](docs/img/example.jpg)

Try it out:

```shell
$ pip install streamlit streamlit-webrtc streamlit-server-state opencv-python-headless
$ streamlit run https://raw.githubusercontent.com/whitphx/streamlit-video-chat-example/main/app_mcu_filters.py
```

This repository contains various implementations (`app_*.py`), while the example above uses `app_mcu_filters.py`.
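The filter apps in this repository transform webcam frames on the server. As a minimal sketch of what such a per-frame filter can look like, here is a grayscale filter over a BGR `numpy` array; the function name and the assumption that frames arrive as `(H, W, 3)` uint8 BGR arrays are illustrative, and the streamlit-webrtc callback plumbing that would invoke it is intentionally omitted.

```python
import numpy as np

def grayscale_filter(frame: np.ndarray) -> np.ndarray:
    """Convert a BGR frame (H, W, 3) to 3-channel grayscale using the
    ITU-R BT.601 luma weights, keeping the 3-channel shape that a video
    pipeline typically expects."""
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    luma = (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)
    return np.stack([luma, luma, luma], axis=-1)

# A solid-red test frame: every pixel maps to luma 0.299 * 255 = 76.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[..., 2] = 255
out = grayscale_filter(frame)
print(out.shape, out[0, 0].tolist())
# (480, 640, 3) [76, 76, 76]
```

In the real apps the equivalent transform runs inside the WebRTC video frame callback rather than on a synthetic array.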
streamlit webrtc video-streaming video-processing
ai
INF01127-CommandExample
INF01127-CommandExample — Software Engineering: a Command pattern usage example in a database context.
commands
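The Command pattern wraps an operation, plus enough state to undo it, in an object. A minimal sketch in a database-like context follows; the `Database`, `InsertCommand`, and `Invoker` classes here are illustrative assumptions, not code from this repository.

```python
from abc import ABC, abstractmethod

class Database:
    """Toy in-memory 'database': just a list of records."""
    def __init__(self):
        self.records = []

class Command(ABC):
    @abstractmethod
    def execute(self): ...
    @abstractmethod
    def undo(self): ...

class InsertCommand(Command):
    """Encapsulates one insert, so it can be executed later and undone."""
    def __init__(self, db: Database, record: str):
        self.db, self.record = db, record
    def execute(self):
        self.db.records.append(self.record)
    def undo(self):
        self.db.records.remove(self.record)

class Invoker:
    """Runs commands and keeps a history so the last one can be undone."""
    def __init__(self):
        self.history = []
    def run(self, command: Command):
        command.execute()
        self.history.append(command)
    def undo_last(self):
        if self.history:
            self.history.pop().undo()

db = Database()
invoker = Invoker()
invoker.run(InsertCommand(db, "alice"))
invoker.run(InsertCommand(db, "bob"))
invoker.undo_last()
print(db.records)  # ['alice']
```

The invoker never needs to know what a command does internally, which is what makes queuing, logging, and undo straightforward with this pattern.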
server
deadends-of-it
deadends of information technology i was born in 1976 learned programming in 1989 studied computer science in 1996 2001 and have worked in this industry ever since i am 46 years old now i have seen a lot of hypes come and go most of the things i list below are used by many people every day and are mature solutions however if you can start from scratch i suggest to not use them slides of presentation at chemnitzer linux tage 2022 https docs google com presentation d 1z5sook4j5a8egvnrny43hesroqsrbamra8umq707nto edit usp sharing native gui development today i would always start with a browser based interface native gui development for desktops makes no sense any more you see the trend on the famous question answer site gtk and qt trend at stackoverflow http sotagtrends com tags gtk qt does someone remember visual basic iirc in the year 2000 almost all job offerings for gui development asked for visual basic knowledge i think html css will stay but maybe react vue might leave us the current evolution is blocked by apple because they force everybody to use webkit in ios see open web advocacy org https open web advocacy org network file systems today nfs network file system in a pc lan does not make much sense any more today people either use smb https en wikipedia org wiki server message block to access files on a network share or they use a web based file service dropbox google drive microsoft onedrive nextcloud concerning server to server communication if you start from scratch then you will use protocols like s3 to store and retrieve blobs there are still a lot of nfs based solution for server to server communication but i would use a ceph based https docs ceph com docs mimic radosgw s3 solution if i could start from scratch webdav sad but true webdav didn t make it don t ask my why it could have been very cool dropbox was simpler and soon many vendors came up with a own and proprietary dropbox clone nfs and webdav downtrend on stackoverflow http sotagtrends com tags 
nfs webdav owncloud owncloud is a suite of client server software for creating and using file hosting services owncloud functionally has similarities to the widely used dropbox it was great some months ago but most developers and useres switched to nextcloud http sotagtrends com tags owncloud nextcloud related why i forked my own project and my own company owncloud to nextcloud youtube https www youtube com watch v utkvlsnfl6i operating systems server operating systems in the past there have been aix hpux solaris freebsd netbsd the winner linux for servers mobile devices operating systems dead nokia meego https en wikipedia org wiki meego windows 10 mobile https en wikipedia org wiki windows 10 mobile blackberry https en wikipedia org wiki blackberry ubuntu touch https en wikipedia org wiki ubuntu touch firefox os https en wikipedia org wiki firefox os android and ios have won i am curious if there will ever be alternative mobile operating system with a noticable market share desktop operating systems here nothing much has changed macos increased its market share a bit but overall it is roughly the same for years global market share held by operating systems for desktop pcs from january 2013 to june 2021 https www statista com statistics 218089 global market share of windows 7 main content api and data exchange corba common object request broker architecture corba https en wikipedia org wiki common object request broker architecture was a big hype some years ago in 2001 i thought this is the future afaik it does not get used for new projects anymore stateless apis have won corba gave you references to remote objects sounds great at the beginning but stateless apis via http are simpler and simpler is better than wow stackoverflow trend for corba http sotagtrends com tags corba microsoft com microsoft com component object model https en wikipedia org wiki component object model it is very uncommon to automate ms word or excel via com these days i am happy of course 
there are a lot of developers who still automate native guis on windows pcs these days if i would be one of them then i would try to find a new job with a better prospects for the future soap wsdl from wikipedia soap abbreviation for simple object access protocol is a messaging protocol specification for exchanging structured information in the implementation of web services in computer networks its purpose is to provide extensibility neutrality and independence it uses xml information set for its message format and relies on application layer protocols most often hypertext transfer protocol http or simple mail transfer protocol smtp for message negotiation and transmission it is too complicated it is overengineered still wide spread but i would not start a new project with it xml rpc was nice too simpler than soap but same wsdl the web services description language is an xml based interface description language that is used for describing the functionality offered by a web service http sotagtrends com tags soap xml rpc wsdl from uml diagram to source code i am very happy that today only few people think that it is cool to first create urml diagrams and then automatically create source code from the uml diagram some years ago many people thought that creating code from an application which lets you draw boxes and arrows is the future data formats xml xml was a very big hype again way too complicated not simple enough json current hype is much better to exchange data since it supports some simple data types int string list dictionaries google trend xml json https trends google com trends explore date all q 2fm 2f05cntt 2fm 2f08745 but binary data native time format time delta and other things are missing in json i think protocol buffers would be great for exchanging data between systems but up to now only a few people think like this latin1 windows cp 1252 unicode has won i am very happy poor guys who still need to fiddle with old character codes csv comma seperated 
values unfortunately this format is still wide spread if you start from scratch then please don t use it applications emacs i used this text editor 14 years daily 2001 2015 i switched to pycharm http sotagtrends com tags emacs pycharm zsh alternative to the bash shell linux command line language i was very interested in the beginning but finally you do not gain much the bash is the default shell and for me it does not make much sense to use a different shell http sotagtrends com tags zsh bash alternative to the source code version control system git git has won alternatives do not make sense anymore http sotagtrends com tags git mercurial bazaar svn desktop office ms office libreoffice these programs were very important in the past today most people learned that you can write the text directly into the mail body you do not need to add an ms word document to the mail http sotagtrends com tags ms office libreoffice kde the k desktop environment was very widespread at least in germany for some years but gnome has not much more traffic http sotagtrends com tags kde gnome nagios was once the number 1 monitoring solution time has changed i guess most people would not start to use it today if they could start from scratch here nagios compared with zabbix prtg check mk https trends google com trends explore date all q nagios 2fm 2f03c9jx 2fg 2f11bc5wdh4r 2fm 2f0h 9jxz or prometheus https prometheus io from the cncf commercial databases like oracle sybase i would not use a commercial database like oracle sybase today cloud computing virtualization xen xen https en wikipedia org wiki xen linux hypervisor xen compared with vmware and kubernetes google trend xen vmware kubernetes https trends google de trends explore date all q 2fm 2f02t3gg 2fm 2f01t9k5 2fg 2f11b7lxp79d vagrant vagrant https en wikipedia org wiki vagrant software gets used much less these days see google trend vagrant https trends google com trends explore date all q 2fm 2f0jwtqm2 programming languages low 
level languages like assembly c c are the building blocks of higher level languages but the usage of these languages is in decline or constant low nobody wants to call malloc and free any more i would never start a project with one of these languages today if you are working with embedded systems device drivers or kernel modules then this is different i just don t know if xslt is a programming language or a data format i never liked it it was way too verbose it was complicated to write it was not a real programming language and simple things got complicated soon i am happy to see xslt going stackoverflow tag trend http sotagtrends com tags xslt perl lisp shell scripts google trend for shell scripts https trends google de trends explore date all q shell 20scripts i use the shell interactively daily but i stopped writing non conditionless shell scripts several years ago either the script is important then i would do it with a better language and store it in git xor it is unimportant for running a sequence of commands conditionless the shell is still handy domain specific languages i am happy that most people understood now that there are domain specific datastructures but there is no need for domain specific languages see the down trend google trends for domain specific language https trends google com trends explore date all q 2fm 2f02kwvw one spec several implementations c c java enterprise edition sql tcp ip and a lot of other development tools used the pattern one specification several implementations i think this pattern is outdated modern tools python typescript kubernetes go rust linux implement what s useful no need to do this twice in the year 2000 i asked on one of the many apache java mailing lists about a new feature idea the response of the developers roughly yes it would be nice to have this feature but first we need to wait for the new specification to get published this was one of the reasons i switched from java to python the pattern one spec several 
implementations is useful for protocols like http imap smtp snmp and data formats xml json yaml but not for tools at least for java enterprise edition and enterprise beans the trend looks black https trends google com trends explore date all q 2fm 2f0bs6x 2fm 2f0br1c edge was rebuilt as a chromium based browser in 2019 maybe firefox will do the same sooner or later maybe there will be only one engine in some years business process execution language https en wikipedia org wiki business process execution language was a standard executable language for specifying actions within business processes aka workflows dead trends of bpel https trends google com trends explore date all q bpel current things which don t have a formal spec with several implementations there is just on implementation and this is fine react typescript git golang kubernetes current software things which have a formal spec with several implementations webrtc https en wikipedia org wiki webrtc http html living standard https en wikipedia org wiki html transition of html publication to whatwg javascript https en wikipedia org wiki javascript css web assembly c c tcp ip ethernet wifi regular expressions parsing text with regular expressions is like eating rubbish in the 21 century we send and receive data structures we don t send strings that the receiver needs to parse https trends google com trends explore cat 32 date all q regular 20expressions json sed awk grep parsing text with your favorite tool is like eating rubbish i still use these tools sometimes interactively but i don t write shell scripts anymore browser war browser war https en wikipedia org wiki browser wars chrome has won linux on the desktop some overambitious friends of open source software and open data formats thought you need to switch from microsoft windows to linux as soon as possible like the limux project in the year 2005 limux was a project by the city of munich in germany to migrate local government software systems from 
closed source proprietary microsoft products to free and open source software the project ran from 2005 to 2013 migrating over 18 000 personal computers and laptops of public employees to a linux based software solution see https en wikipedia org wiki limux i think they did a major mistake it would have been much simpler and more successful if they would have done several small steps instead of one big step the first step for me would be to switch from closed source to open source application but stay on the ms windows operating system the limux project wanted too much too soon linux on the desktop the limux project failed https en wikipedia org wiki limux today every desktop usage is decreasing mobile interfaces get used and if you use a desktop you use the browser for most tasks software as a service eats the native gui configuration management chef and puppet the older ones and ansible and salt are the new ones in 2013 it was not clear who will win the race today in 2019 it is clear ansible has won http sotagtrends com tags salt stack ansible chef puppet but things have changed you configure less servers today most applications run in containers and for setting up a container most people use the shell or run commands in a dockerfile these scripts are straight forward and mostly are conditionless without if and else backup of course making a backup of data is still done often and makes sense but it gets done less often mobile devices my wife my son and i do not backup our mobile phones i guess most people do it like this the device does not store important data that is not stored somewhere else the device contains contacts calendars some documents shared via nextcloud mails are stored on the mail server of course it will be very annoying if the mobile device would get lost or broken it will be a lot of work to configure the new device but no important data would get lost at work i do no backup of the linux laptop software i write gets pushed to a central git 
server every day see the trend http sotagtrends com tags backup perfect filesystems several times the perfect linux filesystem was invented ext2 ext3 reiserfs zfs btrfs today the discussions about which file system is the best have mostly vanished i could not find a reliable reference but afaik google used ext2 for their servers very long if you find a reliable reference please tell me to make it short it does not matter if you want high availability then be sure that your service survives the outage of servers a reliable file system does not make your whole service reliable modern applications use storage services yast and similar linux config uis yast https en wikipedia org wiki yast was a tool for suse linux to configure the operating system it tried to provide three interfaces terminal native gui web market share of suse and the need to interactively configure servers has decreased communication mailinglists are dead once you sent your message you can t edit it any more that s a very outdated way today you use stackoverflow other sites from stackexchange reddit and facebook groups github issues get used to ask questions too i like this change this buries the never ending discussion if the reply on a mailing list should go to the list or to the author of the mail markup languages i think markup languages have lost except html and markdown once upon a time markup languages like sgml xml restructuredtext were used to create documentation that can be compiled to html or pdf who prints docs today html is the future easy to use html wysiwyg editors exist see my list of wysiwyg editors https github com guettli wysiwyg html gives two distinct groups of people all that they want geeks can write html directly and use all the features it offers and wysiwyg editors give non geeks a way to create formatted text other markup languages are on a downward trend most of them go down markdown goes up since it is convenient for simple formatting like github readme files http 
sotagtrends com tags sphinx restructuredtext markdown mediawiki same for latex the need for printable documentation is falling in 2001 i used latex for my diploma thesis this was a good choice since ms word and openoffice were not reliable these days today i think i would not use latex again some for docbook https en wikipedia org wiki docbook see google trend docbook https trends google de trends explore date all q 2fm 2f0c1gr i am so happy to see this down trend inotify inotify is a nice feature of the linux kernel you can listen for changes in directories if there is a change for example a new file then you get an event and you can execute some custom code for example process the new file i once thought inotify is great time has changed now i know the file system is not an api use case a third party service sends you pairs of files one image file and one json file the json file contains meta information if you receive the files via smb nfs ftp and trigger the handling of these files via inotify then you can t reject broken data imagine you get only the json file but not the image file now you the receiver need to handle this broken data if you use http you can reject broken data and the issue is left to the sending party and that s where the issue should be the receiver can t fix broken data it the job of the sender to transfer valid data of course the above use case applies if you don t use inotify you could use a cronjob which imports the files every five minutes today i prefer http shameless plug you can use tbzuploader https github com tbz pariv tbzuploader to upload single or pairs of files via http mail admin a declining job decision makers prefer to pay for service offered by big well known giants they do not like to pay a human which does the job this google trend charts shows it microsoft exchange mail server for windows and postfix mail server for linux are in decline configuring mail servers is not easy especially handling spam is daily work and 
annoyance if you are young and you are unsure what you want to do in the future do not consider to get a mail admin this job is in decline https trends google com trends explore date all q 2fm 2f02js54 2fm 2f04nh2c people do use native gui mail user agents like for example thunderbird less often they use the web interface of big mail providers more maybe this trend is good i had to explain the difference between an envelope from to the body from already several times to people who were responsible for a mail server this difference seems to be too much for the average windows admin see https en wikipedia org wiki bounce address but today handling mail is more than managing an smtp and imap server for most users mail spam filtering calendar contacts sending and accepting invitations via mail cloud storage and all is one thing who influences the future of mail admins the people how to pay them and most decision makers prefer to pay for a service that has reliable support even on weekends the admin wants a weekend and he wants a holiday and sometimes he is ill be honest with yourself if you would be the decision maker it is sad but true you would choose the service not the human and that s why i think in the future there will be fewer mail admins companies with less than 200 accounts will buy a service only big companies will run their own infrastructure of course there needs to someone for the hardware but the server hardware will leave the small companies moving to big companies providing cheap software as a service no need to care for backup too doesn t this feel like flying sooner or later automation will eat all jobs antivirus software in the past there where several third party solutions for protecting microsoft windows operating systems microsoft windows got much more secure during the last years today you don t need third party solutions microkernel in the year 2000 when i was a student htw dresden i was very curious about microkernels at tu dresden projects 
were working on a micro kernel based operating system i loved to talk with people involved in the micro kernel projects but i found no answer to the question why which convinced me in the year 2000 i thought to myself i don t get why a microkernel should be better than a modular kernel like linux now in the year 2019 i think there are no real arguments pro microkernel nice theory but performance is much more important practical real world applications use several operating systems today fault tolerance gets handled at a different level today but there are micro kernel based operating systems like https genode org rpm dpkg package format for custom packages unfortunately every programming language brings its own package manager for example python uses pip the number of rpm dpkg packages needed for software development is getting smaller and smaller only the fundamental servers are needed most libraries needed for software development are installed via the package manager of the corresponding programming language creating custom packages in rpm dpkg format is outdated raw photo format when i bought my first reflex camera canon 50d most friends who already had a reflex camera told me that it is great because you can make photos in the raw image format this raw format contains much more information and this is much much better for post processing i still like my reflex camera it is ten years old now but i never liked the raw format yes the post processing possibilities are great but the size is a major drawback and the time you need to do the post processing i asked some of my friends some days ago again none of them still use the raw image format by default today they all use jpeg it is simpler more convenient of course some professional photographers use the raw format daily that s not what i talk about i look at the everyday use case of an average human who likes to take pictures drbd drbd is a distributed replicated storage system for the linux platform it is 
implemented as a kernel driver several userspace management applications and some shell scripts drbd is traditionally used in high availability ha computer clusters but beginning with drbd version 9 it can also be used to create larger software defined storage pools with a focus on cloud integration source https en wikipedia org wiki distributed replicated block device drbd is not dead but it does decline the high availability method one master n secondary slaves with failover was popular in the past today high availability get s handled differently the trends show how it does decline here in comparison with ceph https en wikipedia org wiki ceph 28software 29 https trends google com trends explore date all q 2fm 2f0b1yt5 ceph web development java applets running in the web browser are dead https trends google com trends explore date all q java 20applets same for adobe flash player https trends google com trends explore date all q 2fm 2f05qh6g xhtml is dead i am very happy that the relaxed html5 syntax has won less characters to type and less characters to read make development faster in the year 2001 when i finished my studies i never thought the trend will be like this in the beginning the language javascript was not taken seriously today javascript is even running on the server up to now 2019 i still use jquery but i was told by several javascript developers that if you start from scratch today you don t need jquery and more the tag trend is clear http sotagtrends com tags jquery synthetic javascript benchmarks don t make sense see why octane was retired https v8 dev blog retiring octane creating nice layouts with tables is dead flexbox is here of course using tables for tabular data is still fine internet ftp if you still use ftp consider using tbzuploader https github com tbz pariv tbzuploader which is a generic upload tool for http of course the server needs to support this but this very simple just return 201 created if the upload was successful mirrors in 
the past it was common to run a script that detects which mirror is the best for your particular internet connection of course debian ubuntu packages and other stuff still gets mirrored but in most cases it is not needed anymore today cdns https en wikipedia org wiki content delivery network getting static data fast and easy ci jenkins it is still actively used but the future looks bleak github actions and gitlab ci are coming http sotagtrends com tags jenkins gitlab travis travis was the preferred ci system for github projects for several years with the release of github actions the usage has decreased google trend https trends google com trends explore date all q 2fm 2f0jwwmpp portability in a world of containers and saas you don t need portability anymore you create software that takes a vanilla linux distribution container image then you modify this according to your needs there is no need to support several operating systems if your application wants to use database foodb in my case postgresql then use all features foodb supports often not always you don t need to support several different databases of course this does not apply to all software general purpose things like compilers gcc interpreters python databases postgresql ide pycharm web browsers still need to be portable but only a few developers spend their time building these fundamental building blocks portable shell scripts help don t waste your time writing portable shell scripts portability across different browsers the situation can t be compared to the past ie is dead look at all the confusion and useless work that was created by thinking writing a bash script is evil the term bashism was created and over committed people started to make things more complicated instead of easier see https wiki ubuntu com dashasbinsh depending on my mood this makes me laugh or sometimes cry the dashasbinsh page contains so many things to consider but according to my point of view the most important thing is missing
the why is missing i see no real reason no measurable benefit if you want to use bash use bin bash at the top and make the rpm dpkg depend on the bash if you want to use a super fast shell then use bin super fast shell at the top and make the rpm dpkg depend on the super fast shell why try to write a script which runs with bash and super fast shell compare this to python perl have you ever considered writing a script which can be executed by the perl and by the python interpreter skolelinux and other custom linux distributions skolelinux debian edu is a linux distribution intended for educational use and a debian pure blend the free and open source software project was founded in norway in 2001 and is now being internationally developed sometimes specific solutions are better sometimes general solutions are better i think this is a very good example of learning from the past supporting the existing project seems much slower in the beginning and people think let s build something new for our use case what happens when the initial enthusiasm settles in this case the solution was not specific for schools a reliable and simple to set up linux server has many use cases it is better to join forces instead of trying to build something new skolelinux advertises with server desktops thin clients everything out of the box free but behind the scenes with commercial and professional support 100 free software ready to use for every kind of network completely compatible with microsoft windows clients my dentist has the same needs an architecture office too charitable institutions for the care of neglected orphans the headquarters of the worldwide evil hedge fund all need reliable servers desktops and thin clients so why a new distribution should everybody have their own custom linux distribution this makes no sense why not support an existing linux distribution and provide some additional applications on top same for debian med https en wikipedia org wiki debian pure blend debian med
the debian med project is a debian pure blend created to provide a co ordinated operating system and collection of available free software packages that are well suited for the requirements for medical practices and medical research sounds good sounds like the practice of charity https en wikipedia org wiki charity practice charity gives you and me a warm feeling inside it is discriminating why invest time and money into a custom linux distribution if only a few people benefit from it it feels less like charity but in reality it is more generous if you help to develop something generic which helps everybody but how to provide useful service to help people you need to lower the barrier installing a linux distribution is a lot of effort you won t reach many people an application that works on windows linux and mac will reach more people an application which just requires a web browser reaches nearly everybody the linux documentation project when i started with linux i was happy that tldp existed this was around 1996 to 1998 later i usually had enough knowledge to help myself looking back it makes sense that the project died the linux documentation project created parallel documentation but this parallel documentation won t improve the upstream documentation for example a how to about ldap maybe this helped someone who was new to the topic but the upstream pushes forward and creates new features changes configuration and soon the parallel documentation is outdated who cares at least the upstream developers don t care about the outdated parallel documentation and that s 100 ok lesson learned if the upstream docs are not good then try to improve the upstream documentation don t create parallel documentation lan ldap and vpn lan vpns etc are still widespread you can t see a down trend in the google trend for vpn ldap active directory https trends google com trends explore date all q 2fm 2f012t0g 2fm 2f04plq 2fm 2f011bm my guess is that lan ldap vpns etc will get
used less often in the future https exists why a second security layer of course there are a lot of use cases where you need more than one layer but for most cases https is enough if you don t need a network drive any more since you use gsuite office 365 or nextcloud then it is likely that you don t need a vpn any more leaving a very skinny almost serverless lan the future is sign in with google facebook microsoft via openid connect https en wikipedia org wiki openid connect the proxy between the web browser and the internet is dead since https gets used and not http it does not make sense to have a proxy in the lan of course on the server side software like haproxy https en wikipedia org wiki haproxy makes sense reverse proxy download today you hardly download anymore see google trend for download https trends google com trends explore date all geo us q download 2fm 2f0bm3b in the future young people won t know what a file or a directory is and i think that s cool native apps it is a lot of work to create native apps for mobile devices you need to support android and ios why not just create a web page with responsive web design https en wikipedia org wiki responsive web design in most cases this is enough but people love apps yes people love a lot of strange things like conspiracy theories sooner or later you will be able to get progressive web applications https en wikipedia org wiki progressive web application into the app stores afaik this already works for google and amazon only apple is missing up to now then you can install them via a store and there is almost no reason anymore to create a native app mesos and other cluster managers apache mesos is a cluster manager that provides an efficient scalable and robust way to share resources cpus memory disk ports etc across frameworks or applications that are distributed across a cluster of machines stackoverflow tag trend mesos vs kubernetes http sotagtrends com tags mesos kubernetes cms typo3 drupal joomla the
winner of open source content management systems is wordpress if you want a simple open source cms use wordpress i am a bit sad about this since i like the programming language python but there is no python based cms with broad adoption wordpress typo3 drupal joomla at google trends https trends google de trends explore date all q 2fm 2f02z6xz 2fm 2f02vtpl 2fm 2f01641s 2fm 2f07qb81 web almanac 2022 most popular cmss https almanac httparchive org en 2022 cms most popular cmss future i think v8 javascript engine https en wikipedia org wiki v8 javascript engine will play a major role in the future of it it is the fastest javascript engine it is inside the most popular browsers chrome and edge and on the server node js and the good news if you don t like its details you don t need to remember this it will be just there executing very fast to give you a great experience you don t need to install or update programs plugins or apps you the end user have one tool chrome epilogue you use some tools or methods which are dated there is no need to follow every hype but i think it is healthy to ask oneself from time to time would i do this like this if i could start from scratch lesson learned the bandwagon effect https en wikipedia org wiki bandwagon effect eats diversity the winner takes it all https www youtube com watch v 92cwkcu8z5c don t blame me i like diversity i am just observing the trends more thomas wol working out loud https github com guettli wol
server
fandy007
fandy007 information technology
server
Machine-Learning-Collection
p align center img width 100 src ml others logo torch and tf svg p build status https travis ci com aladdinpersson machine learning collection svg branch master https travis ci com aladdinpersson machine learning collection license mit https img shields io badge license mit yellow svg https opensource org licenses mit logo https github com aladdinperzon machine learning collection blob master ml others logo youtube logo png machine learning collection in this repository you will find tutorials and projects related to machine learning i try to make the code as clear as possible and the goal is to be used as a learning resource and a way to look up solutions to specific problems for most of them i have also made video explanations on youtube if you want a walkthrough of the code if you have any questions or suggestions for future videos i prefer if you ask them on youtube https www youtube com c aladdinpersson this repository is contribution friendly so if you feel you want to add something then i d happily merge a pr smiley table of contents machine learning algorithms machine learning pytorch tutorials pytorch tutorials basics basics more advanced more advanced object detection object detection generative adversarial networks generative adversarial networks architectures architectures lightning pytorch lightning tensorflow tutorials tensorflow tutorials beginner tutorials beginner tutorials architectures cnn architectures machine learning youtube link logo https youtu be pccunoes1po nbsp linear regression https github com aladdinpersson machine learning collection blob master ml algorithms linearregression linear regression gradient descent py with gradient descent white check mark youtube link logo https youtu be dq6xfe75cdk nbsp linear regression https github com aladdinpersson machine learning collection blob master ml algorithms linearregression linear regression normal equation py with normal equation white check mark youtube link logo https youtu be x1ez9vi611i
nbsp logistic regression https github com aladdinpersson machine learning collection blob master ml algorithms logisticregression logistic regression py youtube link logo https youtu be 3trw5lig7bu nbsp naive bayes https github com aladdinpersson machine learning collection blob master ml algorithms naivebayes naivebayes py gaussian naive bayes youtube link logo https youtu be qzaarudskyc nbsp k nearest neighbors https github com aladdinpersson machine learning collection blob master ml algorithms knn knn py youtube link logo https youtu be w4fsrheafmo nbsp k means clustering https github com aladdinpersson machine learning collection blob master ml algorithms kmeans kmeansclustering py youtube link logo https youtu be gbttr0bs 1k nbsp support vector machine https github com aladdinpersson machine learning collection blob master ml algorithms svm svm py using cvxopt youtube link logo https youtu be njvojeotnnm nbsp neural network https github com aladdinpersson machine learning collection blob master ml algorithms neuralnetwork nn py decision tree https github com aladdinpersson machine learning collection blob master ml algorithms decisiontree decision tree py pytorch tutorials if you have any specific video suggestion please make a comment on youtube basics youtube link logo https youtu be x9jiifvluwk nbsp tensor basics https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch tensorbasics py youtube link logo https youtu be jy4wm2x21u0 nbsp feedforward neural network https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch simple fullynet py youtube link logo https youtu be wnk3uwv wku nbsp convolutional neural network https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch simple cnn py youtube link logo https youtu be gl2wxlimvka nbsp recurrent neural network https github com aladdinpersson machine learning collection blob master ml 
pytorch basics pytorch rnn gru lstm py youtube link logo https youtu be jgst43p tja nbsp bidirectional recurrent neural network https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch bidirectional lstm py youtube link logo https youtu be g6kql efn84 nbsp loading and saving model https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch loadsave py youtube link logo https youtu be zozhd0zm3ry nbsp custom dataset images https github com aladdinpersson machine learning collection tree master ml pytorch basics custom dataset youtube link logo https youtu be 9shclvvxsns nbsp custom dataset text https github com aladdinpersson machine learning collection tree master ml pytorch basics custom dataset txt youtube link logo https youtu be ks3oz7va8hu nbsp mixed precision training https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch mixed precision example py youtube link logo https youtu be 4jfvhjytz44 nbsp imbalanced dataset https github com aladdinpersson machine learning collection tree master ml pytorch basics imbalanced classes youtube link logo https youtu be qade0qqz5aq nbsp transfer learning and finetuning https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch pretrain finetune py youtube link logo https youtu be zvd276j9sz8 nbsp data augmentation using torchvision https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch transforms py youtube link logo https youtu be radlwkjbvpm nbsp data augmentation using albumentations https github com aladdinpersson machine learning collection tree master ml pytorch basics albumentations tutorial youtube link logo https youtu be rlqsxwaqdhe nbsp tensorboard example https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch tensorboard py youtube link logo https youtu be y6iecebrzks 
nbsp calculate mean and std of images https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch std mean py youtube link logo https youtu be rkhopffbpao nbsp simple progress bar https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch progress bar py youtube link logo https youtu be 1szocgacar8 nbsp deterministic behavior https github com aladdinpersson machine learning collection blob master ml pytorch basics set deterministic behavior pytorch set seeds py youtube link logo https youtu be p31hb37g4ak nbsp learning rate scheduler https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch lr ratescheduler py youtube link logo https youtu be xwq p o0uik nbsp initialization of weights https github com aladdinpersson machine learning collection blob master ml pytorch basics pytorch init weights py more advanced youtube link logo https youtu be wujvlf 6h5a nbsp text generating lstm https github com aladdinpersson machine learning collection blob master ml projects text generation babynames generating names py youtube link logo https youtu be ihq1t7nxs8k nbsp semantic segmentation w u net https github com aladdinpersson machine learning collection tree master ml pytorch image segmentation semantic segmentation unet youtube link logo https youtu be y2batt1fxju nbsp image captioning https github com aladdinperzon machine learning collection tree master ml pytorch more advanced image captioning youtube link logo https youtu be imx4kskdy7s nbsp neural style transfer https github com aladdinperzon machine learning collection blob master ml pytorch more advanced neuralstyle nst py youtube link logo https www youtube com playlist list plhhyolh6ijfzxdlslrclcctss8kicfwjb nbsp torchtext 1 https github com aladdinperzon machine learning collection blob master ml pytorch more advanced torchtext torchtext tutorial1 py torchtext 2 https github com aladdinperzon 
machine learning collection blob master ml pytorch more advanced torchtext torchtext tutorial2 py torchtext 3 https github com aladdinperzon machine learning collection blob master ml pytorch more advanced torchtext torchtext tutorial3 py youtube link logo https youtu be eogulvhrypk nbsp seq2seq https github com aladdinperzon machine learning collection blob master ml pytorch more advanced seq2seq seq2seq py sequence to sequence lstm youtube link logo https youtu be squqqddqtb4 nbsp seq2seq attention https github com aladdinperzon machine learning collection blob master ml pytorch more advanced seq2seq attention seq2seq attention py sequence to sequence with attention lstm youtube link logo https youtu be m6adrgje5cq nbsp seq2seq transformers https github com aladdinperzon machine learning collection blob master ml pytorch more advanced seq2seq transformer seq2seq transformer py sequence to sequence with transformers youtube link logo https youtu be u0s0f995w14 nbsp transformers from scratch https github com aladdinperzon machine learning collection blob master ml pytorch more advanced transformer from scratch transformer from scratch py attention is all you need object detection object detection playlist https youtube com playlist list plhhyolh6ijfw0tpctvtnk42nn08h6uvnq youtube link logo https youtu be xxyg5zwtjj0 nbsp intersection over union https github com aladdinpersson machine learning collection blob master ml pytorch object detection metrics iou py youtube link logo https youtu be ydkjwen8jna nbsp non max suppression https github com aladdinpersson machine learning collection blob master ml pytorch object detection metrics nms py youtube link logo https youtu be fppozcdvadi nbsp mean average precision https github com aladdinpersson machine learning collection blob master ml pytorch object detection metrics mean avg precision py youtube link logo https youtu be n9 xycgr mi nbsp yolov1 from scratch https github com aladdinpersson machine learning collection 
blob master ml pytorch object detection yolo youtube link logo https youtu be grir6tzbc1m nbsp yolov3 from scratch https github com aladdinpersson machine learning collection tree master ml pytorch object detection yolov3 generative adversarial networks gan playlist https youtube com playlist list plhhyolh6ijfwip8bznzx8qr30trcho8va youtube link logo https youtu be oljtvuvzppm nbsp simple fc gan https github com aladdinpersson machine learning collection blob master ml pytorch gans 1 20simplegan fc gan py youtube link logo https youtu be iztv9s wx9i nbsp dcgan https github com aladdinpersson machine learning collection tree master ml pytorch gans 2 20dcgan youtube link logo https youtu be pg0qz7oddx4 nbsp wgan https github com aladdinpersson machine learning collection tree master ml pytorch gans 3 20wgan youtube link logo https youtu be pg0qz7oddx4 nbsp wgan gp https github com aladdinpersson machine learning collection tree master ml pytorch gans 4 20wgan gp youtube link logo https youtu be sudddsqgrzg nbsp pix2pix https github com aladdinpersson machine learning collection tree master ml pytorch gans pix2pix youtube link logo https youtu be 4lktbhgcnfw nbsp cyclegan https github com aladdinpersson machine learning collection tree master ml pytorch gans cyclegan youtube link logo https youtu be nkqhasviyac nbsp progan https github com aladdinpersson machine learning collection tree master ml pytorch gans progan srgan https github com aladdinpersson machine learning collection tree master ml pytorch gans srgan esrgan https github com aladdinpersson machine learning collection tree master ml pytorch gans esrgan stylegan https github com aladdinpersson machine learning collection tree master ml pytorch gans stylegan note not done architectures youtube link logo https youtu be fcow zyb5bo nbsp lenet5 https github com aladdinperzon machine learning collection blob 79f2e1928906f3cccbae6c024f3f79fd05262cd1 ml pytorch cnn architectures lenet5 pytorch py l15 l35 cnn 
architecture youtube link logo https youtu be acmubbuxn20 nbsp vgg https github com aladdinperzon machine learning collection blob 79f2e1928906f3cccbae6c024f3f79fd05262cd1 ml pytorch cnn architectures pytorch vgg implementation py l16 l62 cnn architecture youtube link logo https youtu be uqc4fs7yx5i nbsp inception v1 https github com aladdinperzon machine learning collection blob master ml pytorch cnn architectures pytorch inceptionet py cnn architecture youtube link logo https youtu be dknibbbvcps nbsp resnet https github com aladdinperzon machine learning collection blob master ml pytorch cnn architectures pytorch resnet py cnn architecture youtube link logo https youtu be fr 0o25kigm nbsp efficientnet https github com aladdinpersson machine learning collection blob master ml pytorch cnn architectures pytorch efficientnet py cnn architecture pytorch lightning youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 1 introduction and starter code https github com aladdinpersson machine learning collection tree master ml pytorch pytorch lightning 1 20start 20code youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 2 lightningmodule https github com aladdinpersson machine learning collection tree master ml pytorch pytorch lightning 2 20lightningmodule youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 3 trainer https github com aladdinpersson machine learning collection tree master ml pytorch pytorch lightning 3 20lightning 20trainer youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 4 metrics https github com aladdinpersson machine learning collection tree master ml pytorch pytorch lightning 4 20metrics youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 5 datamodule https github com aladdinpersson machine learning 
collection tree master ml pytorch pytorch lightning 5 20datamodule youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 6 code restructure https github com aladdinpersson machine learning collection tree master ml pytorch pytorch lightning 6 20restructuring youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 7 callbacks https github com aladdinpersson machine learning collection tree master ml pytorch pytorch lightning 7 20callbacks youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 8 tensorboard logging https github com aladdinpersson machine learning collection tree master ml pytorch pytorch lightning 8 20logging 20tensorboard youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 9 profiler https github com aladdinpersson machine learning collection tree master ml pytorch pytorch lightning 9 20profiler youtube link logo https www youtube com playlist list plhhyolh6ijfyl740ptuxef4tstxak6ngp nbsp tutorial 10 multi gpu https github com aladdinpersson machine learning collection tree master ml pytorch pytorch lightning 10 20multi gpu tensorflow tutorials if you have any specific video suggestion please make a comment on youtube beginner tutorials youtube link logo https youtu be 5ym dos9ssa nbsp tutorial 1 installation video only youtube link logo https youtu be hpjby1h u4u nbsp tutorial 2 tensor basics https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial2 tensorbasics py youtube link logo https youtu be pahpif3yixi nbsp tutorial 3 neural network https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial3 neuralnetwork py youtube link logo https youtu be wacikidp2bo nbsp tutorial 4 convolutional neural network https github com aladdinperzon machine learning collection blob master 
ml tensorflow basics tutorial4 convnet py youtube link logo https youtu be kjsuq1plmwg nbsp tutorial 5 regularization https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial5 regularization py youtube link logo https youtu be wacikidp2bo nbsp tutorial 6 rnn gru lstm https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial6 rnn gru lstm py youtube link logo https youtu be kjsuq1plmwg nbsp tutorial 7 functional api https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial7 indepth functional py youtube link logo https youtu be wcz 1iah nm nbsp tutorial 8 keras subclassing https github com aladdinpersson machine learning collection blob master ml tensorflow basics tutorial8 keras subclassing py youtube link logo https youtu be ckmjdkwsdny nbsp tutorial 9 custom layers https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial9 custom layers py youtube link logo https youtu be idus3ko6wic nbsp tutorial 10 saving and loading models https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial10 save model py youtube link logo https youtu be wjzoywog1cs nbsp tutorial 11 transfer learning https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial11 transfer learning py youtube link logo https youtu be yrmy baqk8k nbsp tutorial 12 tensorflow datasets https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial12 tensorflowdatasets py youtube link logo https youtu be 8wwfvv7ixyy nbsp tutorial 13 data augmentation https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial13 data augmentation py youtube link logo https youtu be wuzljzcknu4 nbsp tutorial 14 callbacks https github com aladdinperzon machine learning collection blob master ml 
tensorflow basics tutorial14 callbacks py youtube link logo https youtu be s6tlsi8bjgs nbsp tutorial 15 custom model fit https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial15 customizing modelfit py youtube link logo https youtu be u7avsxanes nbsp tutorial 16 custom loops https github com aladdinperzon machine learning collection blob master ml tensorflow basics tutorial16 customloops py youtube link logo https youtu be k7kfyxxroj0 nbsp tutorial 17 tensorboard https github com aladdinperzon machine learning collection tree master ml tensorflow basics tutorial17 tensorboard youtube link logo https youtu be q7zuz8zoere nbsp tutorial 18 custom dataset images https github com aladdinperzon machine learning collection tree master ml tensorflow basics tutorial18 customdata images youtube link logo https youtu be nokvcrex36q nbsp tutorial 19 custom dataset text https github com aladdinperzon machine learning collection tree master ml tensorflow basics tutorial19 customdata text youtube link logo https youtu be ea5z1smir3u nbsp tutorial 20 classifying skin cancer https github com aladdinperzon machine learning collection tree master ml tensorflow basics tutorial20 classify cancer beginner project example beginner project example cnn architectures lenet https github com aladdinpersson machine learning collection tree master ml tensorflow cnn architectures lenet5 alexnet https github com aladdinpersson machine learning collection tree master ml tensorflow cnn architectures alexnet vgg https github com aladdinpersson machine learning collection tree master ml tensorflow cnn architectures vggnet googlenet https github com aladdinpersson machine learning collection tree master ml tensorflow cnn architectures googlenet resnet https github com aladdinpersson machine learning collection tree master ml tensorflow cnn architectures resnet
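the from scratch machine learning algorithms linked above (knn, k-means, linear regression, etc.) are all written in plain numpy; as a taste of that style, here is a minimal k-nearest-neighbors classifier. this sketch is illustrative only and is not the repository's own implementation:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Predict labels by majority vote among the k nearest training points."""
    preds = []
    for x in X_test:
        # euclidean distance from x to every training point
        dists = np.linalg.norm(X_train - x, axis=1)
        # labels of the k closest training points
        nearest = y_train[np.argsort(dists)[:k]]
        # majority vote among those labels
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# two well-separated clusters of toy points
X = np.array([[0.0, 0.0], [0.1, 0.1], [0.0, 0.2],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([[0.05, 0.05], [5.0, 5.1]])))  # -> [0 1]
```

the repository's version adds more structure, but the core idea is exactly this distance-sort-vote loop.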
pytorch pytorch-implementation pytorch-tutorial pytorch-gan pytorch-examples tensorflow2 tensorflow-tutorials tensorflow-examples machine-learning machine-learning-algorithms pytorch-tutorials
ai
Banglore-House-price-prediction
house price prediction this data science project series walks through the step by step process of how to build a real estate price prediction website we will first build a model using sklearn and linear regression using the banglore home prices dataset from kaggle com the second step would be to write a python flask server that uses the saved model to serve http requests the third component is the website built in html css and javascript that allows the user to enter home square ft area bedrooms etc and it will call the python flask server to retrieve the predicted price during model building we will cover almost all data science concepts such as data load and cleaning outlier detection and removal feature engineering dimensionality reduction gridsearchcv for hyperparameter tuning k fold cross validation etc technology and tools wise this project covers python numpy and pandas for data cleaning matplotlib for data visualization sklearn for model building jupyter notebook visual studio code and pycharm as ides python flask for http server html css javascript for ui
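the heart of the model building step is a sklearn linear regression fit on features like square footage and bedroom count; a minimal sketch of that idea is below. the feature columns and prices are made-up toy numbers, not values from the kaggle dataset, and the real project adds cleaning, outlier removal, and gridsearchcv on top:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# toy training data: [square feet, bedrooms] -> price (invented numbers)
X = np.array([[1000, 2], [1500, 3], [2000, 3], [2500, 4], [3000, 4]])
y = np.array([50.0, 75.0, 95.0, 120.0, 140.0])

# fit an ordinary least squares linear regression
model = LinearRegression().fit(X, y)

# predict the price of a 1800 sqft, 3 bedroom home
predicted = model.predict(np.array([[1800, 3]]))[0]
print(round(predicted, 1))
```

in the project this fitted model gets pickled and the flask server loads it to answer prediction requests from the website.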
cloud
dumb
dumb with the massive daily increase of useless scripts on genius s web frontend and having to download megabytes of clutter dumb https github com rramiachraf dumb tries to make reading lyrics from genius a pleasant experience and as lightweight as possible a href https codeberg org rramiachraf dumb img src https img shields io badge codeberg 232185d0 a screenshot https raw githubusercontent com rramiachraf dumb main screenshot png installation usage go 1 18 https go dev dl is required bash git clone https github com rramiachraf dumb cd dumb go build dumb the default port is 5555 you can use other ports by setting the port environment variable public instances url region cdn operator https dm vern cc us no https vern cc https sing whatever social us de yes whatever social https dumb lunar icu de yes maximiliangt500 https dumb privacydev net fr no https privacydev net tor url operator http dm vernccvbvyi5qhfzyqengccj7lkove6bjot2xhh5kajhwvidqafczrad onion https vern cc http dumb g4c3eya4clenolymqbpgwz3q3tawoxw56yhzk4vugqrl6dtu3ejvhjid onion https privacydev net i2p url operator http vernxpcpqi2y4uhu7to4rnjmyjjgzh3x3qxyzpmkhykefchkmleq b32 i2p https vern cc for people who might be capable and interested in hosting a public instance feel free to do so and don t forget to open a pull request so your instance can be included here contributing contributions are welcome license mit https github com rramiachraf dumb blob main licence
alternative-frontends alternative
front_end
FreeRTOS-lwIP-Vivado-2016
freertos lwip vivado 2016 freertos lwip xapp1026 for xilinx zynq devices using vivado 2016 this port is compatible with xilinx vivado 2016 2 and was tested on the boards zedboard redpitaya and z turn but should work on a xilinx zc702 board also this repository is based on the repository posted by don stevenson in the freertos forum called freertoslwip xapp1026 for xilinx zynq devices using vivado 2014 2 to make the previous repository compatible with vivado 2016 the tcl scripts of the application examples were modified the file sw repo bsp lwip140 v2 1 src contrib ports xilinx netifxemacpsif physpeed c was also modified to make it compatible with the redpitaya and z turn boards
os
fieldbus_design_hw
fieldbus design hw robots are cool but are they cool enough the answer is no without gadgets and gizmos that lets them actually do stuff besides dance around those gizmos need to be wired to the robot somehow so that they can eat power and poop data or make things move the standard solution in robotics is a field bus a physical network that connects both digital and analog io complex sensors etc to the robot your job time to design a fieldbus i ve given you a list of sensors and actuators that we need to put on the robot devices csv as well as where each one is located cell map png sheet clamp fixture map png for relevant components i ve included the documentation in docs it s not complete but it should get you pretty far your task is to design and spec a fieldbus system to manage all this stuff the field bus should be compliant with krc2 robots which gives you a few different options you ll need to spec all fieldbus relevant components to get the devices listed to function together specifications the fieldbus should be compliant with kuka krc2 controllers if there is a sensor that provides data that you think shouldn t be on the field bus specify how that data gets handled provide any files you believe are necessary to understand purchase and construct the fieldbus some good suggestions are bill of materials block diagram design document explaining your design choices don t forget about connectors submission in order to submit the homework do the following fork the github repository create a github account if you don t have one clone the repository to your computer and develop your solution extra points for multiple commits good commit messages once you re done push your changes to github and send us a link to your forked repository
os
codinator
codinator alt text http a3 mzstatic com us r30 purple20 v4 d7 48 ea d748ea77 f863 a046 88ce 9dd42bc52520 screen960x960 jpeg screenshot 1 alt text http a1 mzstatic com us r30 purple30 v4 01 63 c4 0163c44a ae16 fa9a d4e4 145e76951df5 screen960x960 jpeg screenshot 2 alt text http a5 mzstatic com us r30 purple49 v4 f9 58 95 f95895c8 d60f cc9a 91c8 b8ab6ddec1a2 screen960x960 jpeg screenshot 3 alt text http a2 mzstatic com us r30 purple60 v4 e9 ba 20 e9ba201c 81d2 1410 b972 32d6d6f210ee screen960x960 jpeg screenshot 4 alt text http a3 mzstatic com us r30 purple1 v4 95 05 e8 9505e8fc a624 1c73 5928 d7490ecd1bb6 screen960x960 jpeg screenshot 5 codinator is pure gold but with a brain as the only code editor you ll need on an ipad or iphone codinator helps you to easily edit almost a dozen different file types from plain text to markdown to php to javascript so that you can build epic websites and write inspiring words with state of the art technologies and your satisfaction in mind codinator sets out to do what has never been done before by a code editor on an ios device it puts the simple efficient and epic into portable code editing codinator 2 0 comes with a stunning new design and improved performance you ll be up and running quicker than a nerve impulse some of the features that make codinator unique customisable syntax highlighting version control and automatic backups multiple servers for webdav uploading files and previewing websites desktop class autocompletion icloud support for syncing your files across devices snippets and much more neuron neuron http vwas cf neuron is not included yet once there is a better foundation for neuron you can expect it to be open sourced meanwhile you can play around with codinator license http creativecommons org licenses by sa 4 0
front_end
image-filter
udagram image filtering microservice this project is part of the udacity cloud engineering nanodegree requirement program the live demo of this project is located at the following url image filter http image filter dev2 eu central 1 elasticbeanstalk com project specification here you may find the project rubric https review udacity com rubrics 2555 view
cloud
women-in-cloud-rvce-api
women in cloud rvce api unofficial a href https render com img alt production src https img shields io badge production up lgreen svg a maintainability https api codeclimate com v1 badges a5688e693a48ff0953ca maintainability https codeclimate com github mssandeepkamath women in cloud rvce api maintainability test coverage https api codeclimate com v1 badges a5688e693a48ff0953ca test coverage https codeclimate com github mssandeepkamath women in cloud rvce api test coverage unofficial api for women in cloud rvce center of excellence management architecture https user images githubusercontent com 90695071 229217757 4a582538 3619 4a4a 9970 b0537a488e50 png table of contents about about android app android app web app web app api documentation api documentation environment variables environment variables contributing contributing license license contact contact about women in cloud rvce api provides an unofficial backend service for the women in cloud rvce android web app it offers various endpoints to manage projects internships events student details staff registration and more android app the android app is available on the play store explore it here https play google com store apps details id com sandeep womenincloudrvce repository women in cloud rvce android https github com mssandeepkamath women in cloud rvce android web app repository women in cloud rvce web https github com mssandeepkamath women in cloud rvce web api documentation base url the base url for api endpoints is https endpoints project applied students project id get students who applied for a project internship applied students internship id get students who applied for an internship event applied students event id get students who applied for an event add project add a new project add internship add a new internship add event add a new event hire project hire a student for a project hire internship hire a student for an internship upload project document project id upload project documents upload 
internship document internship id upload internship documents upload event document event id upload event documents student get student details by usn funds get funds information add fund add a new fund students get registered students all project applied students get all students who applied for projects all internship applied students get all students who applied for internships all event applied students get all students who applied for events register staff register a staff member get staff get a list of staff members studentscsv get student details in csv format refer to the java class diagram https user images githubusercontent com 90695071 232433709 a390d603 3a38 401e 9adb 9799d075d41c png and spring model dependency diagram https user images githubusercontent com 90695071 232434107 1a720fa8 e055 4b83 99f1 8bfc1b52f798 png for an overview of the application s structure environment variables ensure to set the following environment variables sql user name sql password google password google user name contributing contributions are welcome to contribute 1 fork this repository 2 clone the forked repository locally 3 create a new branch for your feature fix 4 make your changes and commit them 5 push the changes to your fork 6 create a pull request in the original repository license this project is licensed under the mit license licence contact for questions or feedback feel free to contact us mailto msandeepcip gmail com disclaimer this project is not affiliated with any official women in cloud rvce platforms it s developed independently for educational and community purposes
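the endpoint catalogue above can be wrapped in a tiny client side helper the base url is elided in the readme so the host below is a placeholder and the exact route spelling is a guess reconstructed from the flattened list

```python
# minimal sketch of a client for the endpoint catalogue above;
# BASE_URL is a placeholder (the readme does not spell out the real host)
# and the hyphenated route names are an assumption, not confirmed routes
BASE_URL = "https://example-host"  # hypothetical

def endpoint(path, **params):
    """Build a request URL for one of the documented routes, e.g.
    endpoint("project-applied-students/{project_id}", project_id=7)."""
    return BASE_URL.rstrip("/") + "/" + path.format(**params)

url = endpoint("internship-applied-students/{internship_id}", internship_id=12)
```

a real client would pass the resulting url to an http library along with any auth headers the service requires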
java mysql rvce spring-boot centers-of-excellence women-in-cloud
cloud
alicevision.github.io
alice vision website alice vision website hosted on github for development to avoid duplicated code across pages we use javascript to load external html files if you are using it locally for development you need to disable the same origin policy https en wikipedia org wiki same origin policy if you use chrome stop all chrome instances and then run bash chromium browser disable web security user data dir
photogrammetry structure-from-motion multi-view-stereo camera-tracking computer-vision 3d-reconstruction
ai
radi
a href http radi js org img src https rawgit com radi js radi gh pages logo radijs github png height 60 alt radi aria label redux js org a radi is a tiny javascript framework it s built quite differently from any other framework it doesn t use any kind of diffing algorithm nor virtual dom which makes it really fast with radi you can create any kind of single page applications or more complex applications npm version https img shields io npm v radi svg style flat square https www npmjs com package radi npm downloads https img shields io npm dm radi svg style flat square https www npmjs com package radi gzip bundle size http img badgesize io https unpkg com radi latest dist radi es min js compression gzip style flat square https unpkg com radi latest dist radi js discord https dcbadge vercel app api server a62gfadw2e style flat square https discord gg a62gfadw2e installation to install the stable version npm install save radi this assumes you are using npm https www npmjs com as your package manager if you re not you can access these files on unpkg https unpkg com radi dist download them or point your package manager to them browser compatibility radi js currently is compatible with browsers that support at least es5 ecosystem project status description radi router radi router status radi router package single page application routing radi fetch radi fetch status radi fetch package http client for radi js radi router https github com radi js radi router radi router status https img shields io npm v radi router svg style flat square radi router package https npmjs com package radi router radi fetch https github com radi js radi fetch radi fetch status https img shields io npm v radi fetch svg style flat square radi fetch package https npmjs com package radi fetch documentation getting started guide docs here are just a few examples to whet our appetite hello world example lets create a component using jsx tho it s not mandatory we can just use hyperscript r h1 hello
this sample i m using jsx for html familiarity and to showcase compatibility jsx jsx radi r class hello extends radi component state return sample world view return h1 hello this state sample h1 radi mount hello document body this example will mount h1 to body like so body h1 hello world h1 body counter example with single file component syntax this will be different as we ll need to update state and use actions only actions can change state and trigger changes in dom also we ll be using our sfc syntax for radi files counter radi jsx class state count 0 action up return count this state count 1 action down return count this state count 1 div h1 this state count h1 button onclick this down disabled this state count 0 button button onclick this up button div architecture radi fully renders page only once initially after that listeners take control they can listen for state changes in any radi component when change in state is caught listener then re renders only that part other frameworks silently re renders whole page over and over again then apply changes but radi only re renders parts that link to changed state values to check out live repl https radi js org fiddle and docs https radi js org docs visit radi js org https radi js org changelog detailed changes for each release are documented in the release notes https github com radi js radi releases stay in touch twitter https twitter com radi js slack https join slack com t radijs shared invite enqtmjk3nte2njyxmti2lwfmmtm5ntgwzdi5nmflyzmzymmxzjbhmgy0mgm2mzy5nmexy2y0odbjndnmyjyxzwyxmjeynjjhnja5otjjnzq license mit http opensource org licenses mit copyright c 2017 present marcis marcisbee bergmanis
radi javascript dom hyperscript radijs
front_end
cloud_engineering_and_devops
cloud engineering and devops the repository focuses on cloud engineering devops and good practices for scalable highly available secure and cost effective cloud systems content costs optimization servers only during business hours costs optimization run only in business hours content roadmap cicd with blue green deployment
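the servers only during business hours cost optimization boils down to a predicate a scheduler evaluates each tick here is a minimal sketch the 09 00 to 17 00 monday to friday window and the function name are illustrative assumptions not taken from the repo

```python
from datetime import datetime

# minimal sketch of the "run only in business hours" cost optimisation:
# a scheduler (cron job, cloud function, etc.) calls this each tick and
# starts or stops the fleet accordingly; the 09:00-17:00 mon-fri window
# is an assumption for illustration, not from the repository
def in_business_hours(now: datetime, start_hour: int = 9, end_hour: int = 17) -> bool:
    # weekday() is 0..4 for monday..friday
    return now.weekday() < 5 and start_hour <= now.hour < end_hour
```

the savings come from the idle hours outside the window roughly two thirds of the week for an 8x5 schedule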
cloud
embedded-schedular-traffic-light-system
engineering design project part i a embedded traffic light system the following traffic light system tls project is a real time system using the freertos real time kernel operating system for microcontrollers the freertos os and the tls project code are being executed on the stm32f4 discovery microcontroller the tls project uses 3 tasks 4 queues 2 helpers functions 2 middleware functions and a timer callback function to manage the system the system is required to move cars as lit leds from left to right across 19 leds between the 7th and 8th led there will be an intersection where cars will be required to stop a traffic light must be configured to allow traffic to pass through on a green light on a yellow or red light traffic must stop before the intersection but continue after in the middle of the intersection the following is a demonstration of the final solution tls https user images githubusercontent com 44009838 163854717 7150245a 7019 4288 9f2a 8b50c85fe2e0 gif b embedded deadline driven schedular the earliest deadline first edf scheduler system project is a real time system using the freertos real time kernel operating system for microcontrollers the freertos os and the edf scheduler system project code is being executed on the stm32f4 discovery microcontroller the system uses 3 tasks along with user defined tasks udt which are generated periodically these udts are generated by the deadline driven task generator which generates tasks based on the period of each task while the deadline driven scheduler task decides which udts execute next via edf the system uses 5 queues and 3 linked lists to communicate and manage tasks where the queues pass data and the linked lists hold active completed and overdue tasks for the system in addition the system uses various helper functions to manage udts and linked lists the system replicates the following dds https user images githubusercontent com 44009838 163859644 55c7eab7 e2ab 4047 b0f6 abe55c4d88be jpg
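the core shifting rule described above one lit led per car and a stop line between the 7th and 8th led can be sketched in plain python in the real project a freertos task performs this each tick after reading the light colour from a queue the function below is an illustrative reconstruction of that behaviour not the project s actual code

```python
# plain-python sketch of the led-shifting rule the tls tasks implement:
# the road is 19 leds, cars are lit leds, and the intersection sits
# between index 6 (7th led) and index 7 (8th led)
def step(leds, light):
    """Advance the road one tick and return the new led states."""
    n = len(leds)
    nxt = [False] * n
    if light == "green":
        for i in range(n - 1):          # every car moves one led right
            nxt[i + 1] = leds[i]
    else:                               # yellow or red
        for i in range(7, n - 1):       # cars past the stop line keep going
            nxt[i + 1] = leds[i]
        for i in range(6, -1, -1):      # cars before it queue behind index 6
            if not leds[i]:
                continue
            if i < 6 and not nxt[i + 1]:
                nxt[i + 1] = True       # cell ahead is free: advance
            else:
                nxt[i] = True           # at the stop line or blocked: wait
    return nxt
```

iterating from the stop line backwards makes queued cars pile up one per led instead of teleporting which matches the led animation in the demo gif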
os
cargochain
alt tag presentation header png cargochain cargochain is a settlement platform that provides a secure trustless and efficient chain of custody with cargochain we intend to improve international trade especially the shipping industry cargochain provides importers exporters shipping companies and customs an easy to use interface to view the necessary information associated to cargo the biggest advantage of cargochain is that it offers a single point of review for all involved parties this way the necessary documents do not have to be sent to all parties but are instead uploaded to the blockchain ipfs for further verification and review not only does this significantly increase efficiency reduce costs and increase speed of shipping but it also saves a lot of trees which makes pandas happy alt tag http res cloudinary com hqmmvj8vi image upload v1452393110 nickkilla fea447c8cdfe0ae3f8366e3249c5aef7 11113 jpg the stack cargochain is built with meteor using ethereum and ipfs as the underlying technologies to handle contractual relations and the eternification of documents in the future we intend to include factom as an additional source to store document hashes at for further security verification file structure the project is structured in multiple folders app main folder that contains all the meteor related files contract contains the smart contract presentation contains our presentation and the related documents for shipping prerequisites ethereum you need to have ethereum node with rpc running and unlock your main account to run it enter geth rpc rpccorsdomain http localhost 3000 unlock 0 console ipfs run the daemon ipfs daemon meteor you obviously need to have meteor installed just go to meteor com to get it how to run inside the app folder with an ethereum node running simply type meteor and the app should automatically be available at http localhost 3000 feel free to play around with it and test all the functions
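the chain of custody idea above rests on content addressing each party fingerprints the shipping document and only the fingerprints need to agree here is a minimal sketch with sha-256 note that ipfs derives its content ids with its own multihash format so this is an analogy for the verification step not the actual cid computation

```python
import hashlib

# toy illustration of the chain-of-custody idea: every party hashes the
# shipping document and compares fingerprints instead of couriering paper;
# ipfs computes CIDs differently (multihash), so this is an analogy only
def fingerprint(document: bytes) -> str:
    return hashlib.sha256(document).hexdigest()

# any single-bit change to the document yields a different fingerprint,
# which is what makes the on-chain record tamper-evident
bill_of_lading = b"20ft container, 18000 kg, example route"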
blockchain
aifunclub
gif dealwithitbot gif the dealwithit bot detects all faces in a photo and slides on a pair of pixel shades truly a worthwhile use of the bountiful technological feast made possible by machine learning and artificial intelligence deal with it test it out at aifunclub azurewebsites net http aifunclub azurewebsites net what s under the hood project oxford for node https github com felixrieseberg project oxford calls the microsoft face api http microsoft com cognitive to return angle pitch and coordinates of eyes and eyebrows for each face detected max 64 socket io https github com socketio socket io handles communication between the node server and client html jquery https jquery com takes care of a few things like the generation and animation of glasses if you want to play with this code on your own you ll need to add your own api key http microsoft com cognitive for the face api be sure to update the api region e g westus at the top of the code as well the ai fun club makes ai fun club
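the client side placement the readme mentions boils down to simple geometry from the two eye centres returned by the face detection you get the angle width and position of the glasses sprite the real face api response is a richer json object this sketch assumes plain x y tuples for illustration

```python
import math

# sketch of the geometry behind sliding glasses onto a detected face:
# given the two eye centres (plain (x, y) tuples here -- the real face
# api response is a richer json object), compute where the sprite goes
def glasses_pose(left_eye, right_eye, scale=2.0):
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))   # roll of the face
    width = scale * math.hypot(dx, dy)         # sprite width vs eye distance
    centre = ((left_eye[0] + right_eye[0]) / 2,
              (left_eye[1] + right_eye[1]) / 2)
    return angle, width, centre

angle, width, centre = glasses_pose((100, 200), (160, 200))
```

the animation then just tweens the sprite down the image to the computed centre which is what produces the deal-with-it slide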
ai
Flowchart-and-Designs-of-Electric_water_Heater
flowchart and designs of electric water heater creating well tested reliable embedded software is important especially in safety critical applications a great design starts with a well thought out system design following the system design embedded hardware design and software design will take place with high clarity in this article we mainly focus on embedded system design embedded system design is based on software development processes and life cycle models such as the waterfall model and the v model nowadays most software houses use agile methodologies
os
DSA_Webd_Project_BlockChain_hacktoberfest2023
welcome to hacktober fest everyone is welcome contribute anything in dsa rules not accepting cpp c dsa codes alternative 1 create a page using html css add dsa problem statement and solution in that 2 you can contribute in python and java or ask for an issue to be assigned 3 make sure to follow me and star the repo why contribute to this repository beginner friendly create your first pull request on github start with any problem of your choice on leetcode https leetcode com mention the question link and also mention time and space complexity of your solution chance of receiving a t shirt for participating in the hacktoberfest https hacktoberfest digitalocean com how to participate register anytime between september 26 and october 31 pull requests can be made in any github or gitlab hosted project that s participating in hacktoberfest look for the hacktoberfest topic project maintainers must accept your pull merge requests for them to count toward your total have 4 pull merge requests accepted between october 1 and october 31 to complete hacktoberfest the first 40 000 participants maintainers and contributors who complete hacktoberfest can elect to receive one of two prizes a tree planted in their name or the hacktoberfest 2022 t shirt for more info head over to https hacktoberfest com participation license mit https github com naveen3011 dsa blob main license make sure to star the repo
hacktoberfest hacktoberfest-accepted
blockchain
surpriver
p align center img width 350 src figures logo custom png p surpriver find high moving stocks before they move find high moving stocks before they move using anomaly detection and machine learning surpriver uses machine learning to look at volume price action and infer unusual patterns which can result in big moves in stocks files description path description surpriver main folder boxur nbsp dictionaries folder to save data dictionaries for later use boxur nbsp figures figures for this github repository boxur nbsp stocks list of all the stocks that you want to analyze data loader py module for loading data from yahoo finance detection engine py main module for running anomaly detection on data and finding stocks with most unusual price and volume patterns feature generator py generates price and volume return features as well as plenty of technical indicators usage packages you will need to install the following packages to train and test the models scikit learn https scikit learn org numpy https numpy org tqdm https github com tqdm tqdm yfinance https github com ranaroussi yfinance pandas https pandas pydata org scipy https www scipy org install html ta https github com bukosabino ta you can install all packages using the following command please note that the script was written using python3 pip install r requirements txt running with docker you can also use docker if you know what it is and have some knowledge on how to use it here are the steps to run the tool with docker first you must build the container docker build t surpriver then you need to copy the contents of docker compose yml template to a new file called docker compose yml replace c path to this dir with the directory you are working in run the container by executing docker compose up d execute any of the commands below by prepending docker exec it surpriver to your command line predictions for today if you want to go ahead and directly get the most anomalous stocks for today you can simply run the
following command to get the stocks with the most unusual patterns we will dive deeper into the command in the following sections get most anomalous stocks for today when you do not have the data dictionary saved and you are running it for the first time python detection engine py top n 25 min volume 5000 data granularity minutes 60 history to use 14 is load from dictionary 0 data dictionary path dictionaries data dict npy is save dictionary 1 is test 0 future bars 0 this command will give you the top 25 stocks that had the highest anomaly score in the last 14 bars of 60 minute candles it will also store all the data that it used to make predictions in the dictionaries data dict npy folder below is a more detailed explanation of each parameter top n the total number of most anomalous stocks you want to see min volume filter for volume any stock that has an average of volume lower than this value will be ignored data granularity minutes data granularity to use for analysis the available options are 1min 5min 15min 30min 60min history to use historical bars to use to analyze the unusual and anomalous patterns is save dictionary whether to save the stock data that is used for analysis in a dictionary or not enabling this would save you time if you want to do some further analysis on the data data dictionary path dictionary path where data would be stored is load from dictionary whether to load the data from dictionary or download it from yahoo finance directly you can use the dictionary you saved above here for multiple runs is test you can actually test the predictions by leaving some of the recent data as future data and analyzing whether the most anomalous stocks moved the most after their predictions if this value is 1 the value of future bars should be greater than 5 future bars these number of bars will be saved from the recent history for testing purposes output format the format for results if you pass cli the results will be printed to the console if you pass 
json a json file will be created with results for today s date the default is cli when you have the data dictionary saved you can just run the following command python detection engine py top n 25 min volume 5000 data granularity minutes 60 history to use 14 is load from dictionary 1 data dictionary path dictionaries data dict npy is save dictionary 0 is test 0 future bars 0 output format cli notice the change in is save dictionary and is load from dictionary here is an output of how a single prediction looks like please note that negative scores indicate higher anomalous and unusual patterns while positive scores indicate normal patterns the lower the better last bar time 2020 08 25 11 30 00 04 00 symbol spi anomaly score 0 029 today volume today date above 313 94k average volume 5d 206 53k average volume 20d 334 14k volatility 5bars 0 013 volatility 20bars 0 038 future absolute sum price changes 72 87 test on historical data if you are suspicious of the use of machine learning and artificial intelligence in trading you can actually test the predictions from this tool on historical data the two most important command line arguments for testing are is test and future bars if the former one is set to 1 and the later one is set to anything more than 5 the tool will actually leave that amount of data for analysis purposes and use the data prior to that for anomalous predictions next it will look at that remaining data to see how well the predictions did here is an example of a scatter plot from the following command find anomalous stocks and test them on historical data python detection engine py top n 25 min volume 5000 data granularity minutes 60 history to use 14 is load from dictionary 0 data dictionary path dictionaries data dict npy is save dictionary 1 is test 1 future bars 25 if you have already generated the data dictionary you can use the following command where we set is load from dictionary to 1 and is save dictionary to 0 python detection engine py top n 
25 min volume 5000 data granularity minutes 60 history to use 14 is load from dictionary 1 data dictionary path dictionaries data dict npy is save dictionary 0 is test 1 future bars 25 p align center img src figures correlation plot png p as you can see in the image above the anomalous stocks score 0 usually have a higher absolute change in the future on average that proves that the predictions are actually for those stocks that moved more than average in the next few hours days one question arises here what if the tool is just picking the highest volatility stocks because those would yield high future absolute change in order to prove that it s not the case here is the more detailed description of stats you get from the above command future performance correlation between future absolute change vs anomalous score lower is better range 1 1 0 23 total absolute change in future for anomalous stocks 89 660 total absolute change in future for normal stocks 43 000 average future volatility of anomalous stocks 0 332 average future volatility of normal stocks 0 585 historical volatility for anomalous stocks 2 528 historical volatility for normal stocks 2 076 you can see that historical volatility for normal vs anomalous stocks is not that different however the difference in total absolute future change is double for anomalous stocks as compared to normal stocks support for crypto currencies you can now specify which data source you would like to use along with which stocks list you would like to use python detection engine py top n 25 min volume 500 data granularity minutes 60 history to use 14 is load from dictionary 0 data dictionary path dictionaries feature dict npy is save dictionary 1 is test 0 future bars 0 data source binance stock list cryptos txt data source specifies where to get data from current supported options are binance and yahoo finance default stocks list which file in the stocks directory contains the list of tickers to analyze default is stocks txt
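the anomaly score itself comes from an unsupervised model over many price and volume features the sketch below fakes that with a simple z-score on the latest volume bar so it runs without sklearn it only illustrates the idea that unusually heavy bars stand out it is not surpriver s actual detector and it does not reproduce its negative-is-anomalous sign convention

```python
# toy stand-in for the anomaly scoring step: surpriver's real detector is
# an unsupervised model over many price/volume features, while this just
# z-scores the most recent volume bar against the window's history
def volume_zscore(volumes):
    n = len(volumes)
    mean = sum(volumes) / n
    var = sum((v - mean) ** 2 for v in volumes) / n
    std = var ** 0.5 or 1.0       # guard against a flat window
    return (volumes[-1] - mean) / std

quiet = [100, 102, 98, 101, 99, 100]   # nothing unusual in the last bar
spike = [100, 102, 98, 101, 99, 400]   # last bar is 4x typical volume
```

a window ending in the 400 bar scores well above 2 standard deviations while the quiet window scores near zero which is the behaviour the cli s ranking builds on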
results we will try to post the top 25 results for a single set of parameters every week august 31 2020 to september 05 2020 https pastebin com l5t2byux limitations the tool only finds stocks that have some unusual behavior in their price and volume action combined it does not predict which direction the stock is going to move that might be a feature that i ll implement in the future but for right now you ll need to look at the charts and do your dd to figure that out license license gpl v3 https img shields io badge license gplv3 blue svg https www gnu org licenses gpl 3 0 a product by tradytics https www tradytics com copyright c 2020 present tradytics com
machine-learning finance-application trading trading-algorithms algotrading anomaly-detection ai investment stock-analysis stock
ai
IkomiaApi
a name readme top a project logo div align center a href https github com ikomia dev ikomiaapi img src https avatars githubusercontent com u 53618017 s 400 u e9c62c77b7c33b6b7f4883b115a0d7d05dcca9ec v 4 alt logo width 100 height 100 a h3 align center ikomiaapi simplifying computer vision deployment h3 div br p align center a href https github com ikomia dev ikomiaapi stargazers img alt stars src https img shields io github stars ikomia dev ikomiaapi a a href https www ikomia ai api img alt website src https img shields io website http ikomia ai en svg down color red down message offline up message online a a href img alt os src https img shields io badge os win 2c 20linux 9cf a a href img alt python src https img shields io badge python 3 7 2c 203 8 2c 203 9 2c 203 10 blueviolet a a href https github com ikomia dev ikomiaapi blob main license md img alt github src https img shields io github license ikomia dev ikomiaapi svg color blue a a href https github com ikomia dev ikomiaapi tags img alt github tags src https img shields io github v release ikomia dev ikomiaapi svg color red a br a href https discord com invite 82tnw9uggc img alt discord community src https img shields io badge discord white style social logo discord a p p align center kbd img src https user images githubusercontent com 42171814 200714085 399b7625 81ae 4c71 bb39 8483bf4e204e gif kbd p welcome to ikomiaapi where we transform intricate research algorithms into user friendly deployable solutions for computer vision enthusiasts and professionals alike why choose ikomiaapi research meets reality we bridge the gap between cutting edge research and real world applications with ikomia you get access to algorithms from renowned sources like opencv detectron2 openmmlab and hugging face unified framework say goodbye to integration complexities craft workflows and blend algorithms seamlessly all under one roof empowerment we re not just about providing tools we re about building a community by 
democratizing ai and computer vision technologies we aim to foster collaboration and innovation getting started installation bash pip install ikomia quick examples object detection python from ikomia dataprocess workflow import workflow from ikomia utils displayio import display wf workflow yolov7 wf add task name infer yolo v7 auto connect true wf run on url https raw githubusercontent com ikomia dev notebooks main examples img img fireman jpg display yolov7 get image with graphics p float left img src https raw githubusercontent com ikomia dev notebooks main examples img img fireman jpg width 400 img src https raw githubusercontent com ikomia dev notebooks main examples img img fireman bbox png width 400 p pose estimation python similar imports wf workflow pose estimation wf add task name infer mmlab pose estimation auto connect true wf run on url https raw githubusercontent com ikomia dev notebooks main examples img img fireman jpg display pose estimation get image with graphics p float left img src https raw githubusercontent com ikomia dev notebooks main examples img img fireman jpg width 400 img src https raw githubusercontent com ikomia dev notebooks main examples img img fireman pose png width 400 p discover with ik our auto completion system ik is designed to assist developers in discovering available algorithms in ikomia hub dive into our detailed documentation to explore its capabilities python from ikomia dataprocess workflow import workflow from ikomia utils import ik from ikomia utils displayio import display wf workflow yolov7 wf add task ik infer yolo v7 instance segmentation auto connect true wf run on path path to your image png wf run on url https raw githubusercontent com ikomia dev notebooks main examples img img dog png display yolov7 get image with graphics display yolov7 get image with mask display yolov7 get image with mask and graphics https raw githubusercontent com ikomia dev notebooks main examples img display inst seg png exporting 
your workflow with ikomiaapi sharing your crafted workflows is a breeze whether you want to collaborate with peers or integrate with ikomia studio our export feature has got you covered python from ikomia dataprocess workflow import workflow from ikomia utils import ik wf workflow instance segmentation with yolov7 yolov7 wf add task ik infer yolo v7 instance segmentation auto connect true filter task wf add task ik ik instance segmentation filter categories dog confidence 0 90 auto connect true wf save path to your workflow json once you ve exported your workflow you can easily share it with others ensuring reproducibility and collaboration notebooks you can find some notebooks here https github com ikomia dev notebooks we provide some google colab tutorials notebooks google colab how to make a simple workflow https github com ikomia dev notebooks blob main examples howto make a simple workflow with ikomia api ipynb open in colab https colab research google com assets colab badge svg https colab research google com github ikomia dev notebooks blob main examples howto make a simple workflow with ikomia api ipynb how to run neural style transfer https github com ikomia dev notebooks blob main examples howto run neural style transfer with ikomia api ipynb open in colab https colab research google com assets colab badge svg https colab research google com github ikomia dev notebooks blob main examples howto run neural style transfer with ikomia api ipynb how to train and run yolo v7 on your datasets https github com ikomia dev notebooks blob main examples howto train yolo v7 with ikomia api ipynb open in colab https colab research google com assets colab badge svg https colab research google com github ikomia dev notebooks blob main examples howto train yolo v7 with ikomia api ipynb how to use detectron2 object detection https github com ikomia dev notebooks blob main examples howto use detectron2 object detection with ikomia api ipynb open in colab https colab 
research google com assets colab badge svg https colab research google com github ikomia dev notebooks blob main examples howto use detectron2 object detection with ikomia api ipynb comprehensive documentation for those who love details our comprehensive documentation https ikomia dev github io python api documentation is a treasure trove of information from basic setups to advanced configurations we ve got you covered contributing we believe in the power of community if you have suggestions improvements or want to contribute in any way we re all ears stay tuned for our detailed contribution guidelines license we believe in open source ikomiaapi is licensed under the apache 2 0 license promoting collaboration with transparency support feedback your feedback drives our progress if you find ikomia useful give us a star for queries issues or just to say hi drop us an email at team ikomia com or join our discord channel https discord com invite 82tnw9uggc stargazers they like us we love them heart eyes stargazers repo roster for ikomia dev ikomiaapi https reporoster com stars ikomia dev ikomiaapi https github com ikomia dev ikomiaapi stargazers star history star history chart https api star history com svg repos ikomia dev ikomiaapi type date https star history com ikomia dev ikomiaapi date citation citing ikomia if you use ikomia in your research please use the following bibtex entry bibtex misc deba2019ikomia author guillaume demarcq and ludovic barusseau title ikomia howpublished url https github com ikomia dev ikomiaapi year 2019
computer-vision deep-learning image-processing detectron2 opencv openmmlab python pytorch tensorflow computervision computer-vision-tools computer-vision-ai computer-vision-algorithms computer-vision-opencv human-pose-estimation machine-learning object-detection pose-estimation yolo
ai
git-workshop-slide
git kuasitc git workshop slide about us https fb me kuasitc kuasitc http i imgur com rrj2kz8 png slide git git git workshop 1 html version control start from git email username gitignore git init git add git commit git status git mv difference between git reset git rm working with remote server github fix conflict checkout to other branch git blame workshop 2 html git write readme md markdown syntax gitconfig about the license add ssh key and push by ssh case 1 push remote server case 2 remote server commit case 3 commit branch
server
dart-json-server
json server launch json server for web and mobile apps development from cli without complicated backend setup installation you can install the package from the command line sh pub global activate json server usage the cli is named jserver jserver data path to json file available options you can use d data path to json database file required h host specify the ip address to serve default to 127 0 0 1 p port specify the port to use default to 1711 json format typically for json database you need to use following structure json api path 1 response object 1 api path 2 response object 2 api path must start with a slash to mark it as a serving path for api response object should be either array or object examples to start a default server which binds to 127 0 0 1 1711 using database json jserver d database json to start server at localhost 8888 using api json for database jserver h localhost p 8888 d api json license mit license md 2019 pete houston https petehouston com
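the json format described above can be generated and sanity-checked with a short python script; this is a minimal sketch (the database json file name and the example api paths are illustrative, not part of the package):

```python
import json

# build a database in the structure jserver expects: each key is an
# api path starting with a slash, each value is the response object
# (a dict) or array (a list)
database = {
    "/users": [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}],
    "/status": {"ok": True},
}

# validate the two format rules stated above before writing the file:
# paths start with a slash, responses are objects or arrays
for path, response in database.items():
    assert path.startswith("/"), f"api path must start with a slash: {path}"
    assert isinstance(response, (dict, list)), "response must be object or array"

with open("database.json", "w") as f:
    json.dump(database, f, indent=2)
```

starting the server with something like jserver -d database.json would then serve the example responses at /users and /status.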
dart dartlang dart2 dart-package dart-cli flutter dart-server command-line cli cli-app webdev hacktoberfest
server
clarifyingqa
clarifyingqa dataset from the paper assistance with large language models the clarifyingqa dataset consists of four turn dialogues of the form vague input question clarifying question clarification answer as well as clear questions that were intended by the vague input question the first and last turns of the dialogues are provided from ambigqa https nlp cs washington edu ambigqa dataset which itself is based on the natural questions https ai google com research naturalquestions dataset and the second and the third turns we collected ourselves data collection a subset of ambigqa 611 vague 1771 corresponding clear questions is labeled as follows for each vague question which we treat as an input we ask a human labeler to 1 ask a clarifying question aimed to understand what is meant by the vague question and 2 respond to that clarifying question as if the intent behind the vague question was to ask one of the clear questions corresponding to it in ambigqa data structure the dataset can be found at data clarifyingqa csv and has the following columns id id matching that of the corresponding vague question in ambigqa vague question vague question this is from ambigqa clear question one of the clear questions that corresponds to the above vague question this is from ambigqa clarifying question a clarifying question human annotators asked to find out what is meant by the vague question this is labeled by our annotators clarification the response to the clarifying question given that the intent behind the vague question was to ask the clear question this is labeled by our annotators answers semicolon separated possible answers to the clear question this is from ambigqa
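the column layout above can be consumed with python's stdlib csv module; this is a minimal sketch where the underscore column names and the sample row are assumptions for illustration (the real file is at data/clarifyingqa.csv):

```python
import csv
import io

# one invented row in the column layout described above;
# the answers field is semicolon separated
sample = (
    "id,vague_question,clear_question,clarifying_question,clarification,answers\n"
    '1,who won the cup,who won the 2018 world cup,'
    'which cup do you mean,the 2018 world cup,"France;Les Bleus"\n'
)

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    # split the semicolon-separated answers into a list of possible answers
    row["answers"] = row["answers"].split(";")

print(rows[0]["answers"])  # ['France', 'Les Bleus']
```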
ai
gImageReader
translation status https hosted weblate org widgets gimagereader svg badge svg https hosted weblate org engage gimagereader utm source widget github all releases https img shields io github downloads manisandro gimagereader total svg actions status https github com manisandro gimagereader workflows ci 20build badge svg https github com manisandro gimagereader actions gimagereader gimagereader is a simple gtk qt front end to tesseract ocr https github com tesseract ocr tesseract logo https raw githubusercontent com manisandro gimagereader gh pages gimagereader jpg features import pdf documents and images from disk scanning devices clipboard and screenshots process multiple images and documents in one go manual or automatic recognition area definition recognize to plain text or to hocr documents recognized text displayed directly next to the image post process the recognized text including spellchecking generate pdf documents from hocr documents international language support weblate https hosted weblate org projects gimagereader desktop entry data gimagereader appdata xml in installation source https raw githubusercontent com manisandro gimagereader gh pages icons source png source download from the releases page https github com manisandro gimagereader releases windows https raw githubusercontent com manisandro gimagereader gh pages icons windows png windows download from the releases page https github com manisandro gimagereader releases fedora https raw githubusercontent com manisandro gimagereader gh pages icons fedora png fedora available from the official repositories https src fedoraproject org rpms gimagereader debian https raw githubusercontent com manisandro gimagereader gh pages icons debian png debian available from the official repositories https packages debian org unstable main gimagereader ubuntu https raw githubusercontent com manisandro gimagereader gh pages icons ubuntu png ubuntu available from ppa sandromani gimagereader https launchpad net 
sandromani archive ubuntu gimagereader opensuse https raw githubusercontent com manisandro gimagereader gh pages icons opensuse png opensuse available from opensuse build service https build opensuse org project show home sandromani archlinux https raw githubusercontent com manisandro gimagereader gh pages icons arch png archlinux available from the extra repositories gimagereader gtk https archlinux org packages extra x86 64 gimagereader gtk and gimagereader qt https archlinux org packages extra x86 64 gimagereader qt compilation the steps for compiling gimagereader from source are documented in the wiki https github com manisandro gimagereader wiki compiling gimagereader support if you encounter issues please file a ticket in the issue tracker https github com manisandro gimagereader issues or feel free to mail me directly at manisandro at gmail dot com be sure to also consult the faq https github com manisandro gimagereader wiki faq contributing contributions are always welcome ideally in the form of pull requests translating international language support contributions at weblate https hosted weblate org projects gimagereader and desktop entry data gimagereader appdata xml in a href https hosted weblate org engage gimagereader img src https hosted weblate org widgets gimagereader glossary multi auto svg alt translation status a
qt ocr pdf-document c-plus-plus tesseract-ocr gtk hocr-documents hocr scanner
front_end
radix-vue
br p align center a href https github com radix vue radix vue img src https www radix vue com logo svg alt logo width 150 a h1 align center radix vue h1 p align center an unofficial vue port of radix ui br radix is an unstyled customisable ui library with built in accessibility for building top quality design systems p p align center a href https github com radix vue radix vue actions workflows test yml a a href https www npmjs com package radix vue target blank img src https img shields io npm v radix vue style flat colora 002438 colorb 41c399 alt npm version a a href https www npmjs com package radix vue target blank img alt npm downloads src https img shields io npm dm radix vue flat colora 002438 colorb 41c399 a a href https github com radix vue radix vue target blank img alt github stars src https img shields io github stars radix vue radix vue flat colora 002438 colorb 41c399 a p p align center a href https chat radix vue com b get involved b a p p align center a href https radix vue com documentation a a href https www radix vue com overview getting started html getting started a a href https www radix vue com examples a a href https www radix vue com overview introduction html why radix vue a p hero image docs content public og png em design by https twitter com icarusgkx em installation bash pnpm add radix vue bash npm install radix vue bash yarn add radix vue documentation for full documentation visit radix vue com https radix vue com releases for changelog visit releases https github com radix vue radix vue releases contributing we would love to have your contributions all prs all welcomed we need help building the core components docs tests stories join our discord and we will get you up and running dev setup docs 1 go to the docs directory cd docs 2 run pnpm i ignore workspace 3 run pnpm run docs dev package 1 clone the repo 2 run pnpm i 3 run pnpm story dev to run histoire storybook 4 open http localhost 6006 authors khairul haaziq https github com 
khairulhaaziq mujahid anuar https github com mujahidfa zernonia https github com zernonia credits all credits go to these open source works and resources radix ui https radix ui com for doing all the hard work to make sure components are accessible floating ui https floating ui com for creating powerful components that serve as the base of many radix vue components vueuse https vueuse org for providing many useful utilities ark ui https ark ui com for the primitive component radix svelte https radix svelte com headless ui https headlessui com
accessible design-system headless primitives ui ui-kit vue component-library nuxt radix-ui ui-components vue-radix accessibility vue-components
os
College-Predictor
college predictor machine learning based prediction model for college admission the system looks at the academic merits background and criteria for college admission of the student it then predicts whether or not a student will attend a university or college for precise expectations we have prepared an ml model to give results tools technologies used python scikit learn flask html and css objectives 1 help students pursuing engineering identify the best colleges they can get based on their rank and category thus students will not have to make extra efforts on research about different colleges they can take admission into 2 to help students to fill their preferences at the time of option entry process accurately 3 ease the decision making process for students as they would have a ready list of best colleges into which they are eligible to take admission this would help them make better choices of college and branch before allotment problem definition educational organizations have always played an important and vital role in society for development and growth of any individual there are different college prediction apps and websites being maintained contemporarily but using them is tedious to some extent due to the lack of articulate information regarding colleges and the time consumed in searching the best deserving college the problem statement hence being tackled is to design a college prediction system and to provide a probabilistic insight into college administration for overall rating cut offs of the colleges admission intake and preferences of students also it helps students avoid spending time and money on counsellors and stressful research related to finding a suitable college
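one way to picture the prediction step behind objective 1 is a rank and category cutoff lookup; this is an illustrative sketch only, not the trained scikit-learn model the project uses (the college names and closing ranks are invented):

```python
# hypothetical cutoff table: closing rank per (college, category);
# a student is eligible where their rank is within the closing rank
CUTOFFS = {
    ("college a", "general"): 1500,
    ("college a", "obc"): 2500,
    ("college b", "general"): 6000,
}

def eligible_colleges(rank, category):
    """Return colleges whose closing rank admits this rank/category,
    best (lowest closing rank) first -- the 'ready list' of options."""
    matches = [(college, cutoff)
               for (college, cat), cutoff in CUTOFFS.items()
               if cat == category and rank <= cutoff]
    return [college for college, _ in sorted(matches, key=lambda m: m[1])]

print(eligible_colleges(2000, "general"))  # ['college b']
```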
server
doodlin-design-system
doodlin design system url https design doodlin co kr node v15 14 0 node bash nvm use bash yarn storybook bash yarn publish npm master 1 storybook 2 npm xx yy 00 3 npm master push npm 1 npm 1 yarn publish 1 1 git 1 yarn deploy https design doodlin co kr dropdown customselect selectbox tooltip usingportalnode portal dropdown usingportalnode true open uncontrolled dropdown controlled component ref forceclose
design-system frontend library
os
Pewlett-Hackard-Analysis
pewlett hackard analysis overview of the analysis our analysis consists of determining from the data provided by pewlett hackard which employee will retire in the next few years and how many positions the company will need to fill we determine the number of retiring employees per title and identify employees who are eligible to participate in a mentorship program this analysis will prepare pewlett hackard for the future by generating a list of all employees eligible for the retirement plan we build an employee database with sql by applying data modelling engineering and analysis skills results the retirement titles table shows each employee who is the appropriate age to retire and the various titles they have had with the company since their first day img width 1440 alt retirement titles src https user images githubusercontent com 77806210 175451436 c1be632d 87c1 48cd a815 b48abe4fec37 png the unique titles table is a clearer version of the retirement titles in this chart each employee s name is listed along with the most recent title they have held img width 1440 alt unique titles src https user images githubusercontent com 77806210 175451499 395fed3d b0ca 4003 b812 79a3b545588e png the retiring titles table tells us the total number of retirement titles img width 1440 alt retiring titles src https user images githubusercontent com 77806210 175451540 1524eddf dbad 4b2d 8ba2 98c4ba3e0be8 png the mentoring eligibility table tells us the employee number first and last name date of birth title and length of employment of employees who are eligible to participate in a mentoring program img width 1440 alt mentorship eligibility src https user images githubusercontent com 77806210 175451592 7d430008 266d 4903 a0c9 989e70909b9f png summary how many roles will need to be filled as the silver tsunami begins to make an impact the retiring titles table shows us the total number of retirement per titles if we add all those numbers we have a total of 259 184 108 roles that will 
need to be filled are there enough qualified retirement ready employees in the departments to mentor the next generation of pewlett hackard employees based on the mentoring eligibility table there are a total of 1548 retirement ready employees in the departments to mentor the next generation of pewlett hackard employees therefore there are not enough qualified employees to mentor the next generation to have more insight into the upcoming silver tsunami it will be useful to have a query table that shows us the total retirement ready employees in the departments to mentor the next generation per title it would also be useful to have the total retirement ready employees in the departments to mentor the next generation per title and per department
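the per-title count behind the retiring titles table can be sketched against an in-memory sqlite copy of the data; the table and column names here are assumptions modeled on the unique titles table described above, and the three sample rows are invented:

```python
import sqlite3

# in-memory stand-in for the employees database
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE unique_titles "
    "(emp_no INTEGER, first_name TEXT, last_name TEXT, title TEXT)"
)
con.executemany("INSERT INTO unique_titles VALUES (?,?,?,?)", [
    (1, "a", "x", "Senior Engineer"),
    (2, "b", "y", "Senior Engineer"),
    (3, "c", "z", "Senior Staff"),
])

# the retiring-titles style query: retirement-ready employees per title
rows = con.execute("""
    SELECT title, COUNT(title) AS count
    FROM unique_titles
    GROUP BY title
    ORDER BY count DESC
""").fetchall()
print(rows)  # [('Senior Engineer', 2), ('Senior Staff', 1)]
```

summing the count column of such a query is how the total of roles to fill is obtained.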
server
native-mobile-development
native mobile development this is the example code repository for native mobile development by mike dunn and shaun lewis o reilly media
front_end
Puzzlr-iOS
puzzlr ios puzzlr ios app for software engineering 331 at wfu
os
IT
it information technology
server
EmbeddedSecurity
embedded home security system the system is able to scan for movements in designated zones that the user chooses to arm within a household setting the arming and disarming of zones is done through a user interface that includes a keypad for input and an lcd screen for output an audible alarm is implemented to respond to certain triggers in armed zones and leds are used to signal which zones are currently armed ultrasonic distance sensors are used to monitor for movement within armed zones the system is implemented on an msp430fr4133 microcontroller with an attached custom pcb the completed design images field photos paper 1 jpg the completed design the schematic the schematic images technical photos full schematic png the schematic the pcb the pcb images technical photos full pcb png the pcb
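the arming and alarm logic described above can be sketched as a pure function; this is an illustrative model, not the msp430 firmware (the zone names and the trigger distance are invented assumptions):

```python
# hypothetical threshold: an armed zone whose ultrasonic sensor reads
# below this distance (cm) is treated as movement and trips the alarm
TRIGGER_DISTANCE_CM = 50

def check_zones(armed, distances):
    """Given per-zone armed flags and ultrasonic readings in cm,
    return the zones that should sound the audible alarm."""
    return [zone for zone, dist in distances.items()
            if armed.get(zone) and dist < TRIGGER_DISTANCE_CM]

armed = {"front door": True, "garage": False}
readings = {"front door": 30, "garage": 20}
print(check_zones(armed, readings))  # ['front door']
```

note the disarmed garage zone sees movement but stays silent, matching the behavior of arming only designated zones.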
os
Maruf-Project
maruf project ios event based software engineering project that uses firebase storing facebook gmail logins google calendar events and heroku server hosting the events login screen img width 379 alt screen shot 2018 02 24 at 7 23 02 am src https user images githubusercontent com 20143504 36630911 fd095284 1933 11e8 8470 fc47d61d0434 png facebook login img width 395 alt screen shot 2017 12 10 at 8 21 14 am src https user images githubusercontent com 20143504 33805859 c789e1a4 dd84 11e7 9136 7c5c30094163 png home screen img width 393 alt screen shot 2018 02 24 at 7 30 33 am src https user images githubusercontent com 20143504 36630941 cdfb591e 1934 11e8 8726 702e9ffc42bc png project select screen img width 397 alt screen shot 2018 02 24 at 7 30 46 am src https user images githubusercontent com 20143504 36630948 ebe8e342 1934 11e8 8d56 f0c2526661bc png upcoming events img width 396 alt screen shot 2017 12 10 at 8 22 07 am src https user images githubusercontent com 20143504 33805865 d2deb7f0 dd84 11e7 9a25 ace1f8a5a79f png event detail screen img width 400 alt screen shot 2018 03 13 at 3 34 09 pm src https user images githubusercontent com 20143504 37368499 79480cca 26d4 11e8 8cd7 64f676c084b5 png project screen img width 399 alt screen shot 2018 03 13 at 3 34 33 pm src https user images githubusercontent com 20143504 37368511 7ca07bfa 26d4 11e8 9278 5f3817e51411 png switch project img width 395 alt screen shot 2018 03 13 at 3 34 55 pm src https user images githubusercontent com 20143504 37368527 8316455a 26d4 11e8 84c0 a9c70d4de72b png
firebase googlecalendarapi heroku ios swift
os
postmortems
postmortem a repository containing cloud engineering postmortems
cloud
natural-language-translation
natural language translation project this project has the following goals create a machine translation system application based on the recurrent neural network with keras deep learning model implement a haiku generator using character level multi layer recurrent neural network model deploy the application to heroku results language model was trained on 100 000 pairs for each language english spanish and english french and is able to translate short phrases like where is the bathroom give me a fork i like to swim etc haiku model generates haiku that sound close to the haiku rules application was deployed to heroku see sup sup below requirements python 3 6 flask 1 0 2 gunicorn 19 9 0 keras 2 2 2 scikit learn 0 19 2 scipy 1 1 0 tensorflow 1 9 0 the full list of requirements can be found in requirements txt file usage application can be opened locally and deployed to heroku local use install all necessary libraries proceed to local app app folder run app py open localhost http 127 0 0 1 5000 deployment install all necessary libraries proceed to heroku app folder run heroku local in terminal open http 0 0 0 0 5000 random port number if everything works the app is ready to be deployed on heroku on the web instructions for opening 1 click the link https trainslator herokuapp com 2 wait 5 minutes while the app is loading during this time you can see application error on the screen sup sup 3 reload the web page sup sup the reason is that heroku has memory restrictions 500mb per application and this application is larger after importing all the dependencies and loading models we are aware that heroku is not a proper platform for deploying machine learning applications and are working on this issue the project description and web presentation are located at https sonyasha github io nlt presentation team members christina park idea language model training malvica mathur language model training design web presentation ed ali language model training aws machine learning
application sonya smirnova data preparation haiku model training application developing front end back end heroku deployment abubeker ali language model adjusting web presentation tools python tensorflow keras selenium flask javascript css html
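before a keras recurrent model can train on the 100 000 sentence pairs, each sentence has to be indexed and padded to a fixed length; this is a minimal stdlib sketch of that preprocessing step (the function names and the toy pairs are illustrative, not the project's code):

```python
# toy english-spanish pairs standing in for the real training data
pairs = [("i like to swim", "me gusta nadar"),
         ("where is the bathroom", "donde esta el bano")]

def build_vocab(sentences):
    """Assign an integer id to every word, reserving 0 for padding."""
    vocab = {"<pad>": 0}
    for s in sentences:
        for w in s.split():
            vocab.setdefault(w, len(vocab))
    return vocab

def encode(sentence, vocab, length):
    """Map words to ids and right-pad the sequence to a fixed length."""
    ids = [vocab[w] for w in sentence.split()]
    return ids + [vocab["<pad>"]] * (length - len(ids))

src_vocab = build_vocab(s for s, _ in pairs)
max_len = max(len(s.split()) for s, _ in pairs)
encoded = [encode(s, src_vocab, max_len) for s, _ in pairs]
print(encoded[0])  # [1, 2, 3, 4]
```

the resulting integer matrices are what an embedding plus rnn encoder-decoder consumes during training.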
python keras tensorflow neural-networks char-rnn d3js herokuapp deep-learning flask
ai
SQL-Challenge
sql challenge data modeling engineering and analysis of a sql database
server
Continual_Learning_CV
img src https github com cityucompuneurolab continual learning cv blob master lib logo openloris logo png width 80 align left continual learning cv clcv license https img shields io badge license bsd 203 clause blue svg https opensource org licenses bsd 3 clause built with python3 7 https img shields io badge build 20with python3 7 red svg https www python org built with caffe https img shields io badge build 20with pytorch brightgreen svg https pytorch org continual learning toolbox for computer vision tasks this toolbox aims at prototyping current computer vision tasks e g human gesture recognition action localization detection object detection segmentation and person reid in a continual lifelong learning manner it means most of the sotas can be updated with novel data without retraining from scratch and at the same time they are able to mitigate the catastrophic forgetting problem furthermore the models can learn with few shot samples and adapt quickly to the target domains since the cl strategies are quite complex and flexible it has some intersections with recent few shot meta multi task learning work datasets and benchmarks we are testing the performances based on openloris object dataset the basic codes are the implementation of the following paper qi she et al openloris object a robotic vision dataset and benchmark for lifelong deep learning https arxiv org pdf 1911 06487 pdf the paper has been accepted into icra 2020 also permuted mnist and cifar 100 datasets are tested requirements not hard constraints the current version of the code has been tested with following libs pytorch 1 1 0 torchvision 0 2 1 tqdm 4 19 9 visdom 0 1 8 9 pillow 6 2 0 pandas 1 0 3 experimental platforms intel core i9 cpu nvidia rtx 2080 ti gpu cuda toolkit 10 install the required packages inside the virtual environment conda create n yourenvname python 3 7 anaconda source activate yourenvname pip install r requirements txt data preparation openloris object for mnist and cifar
100 datasets please refer to benchmarks readme md step 1 download data including rgb d images masks and bounding boxes following this instruction https drive google com open id 1klgjtismd5qrjmjhlxk4tshir0wo9u6xi5puf8jdjco step 2 run following scripts python3 benchmark1 py python3 benchmark2 py step 3 put train test validation files under benchmarks data openloris object for more details please follow the note file under each sub directory in img step 4 generate the pkl files of data python3 pk gene py python3 pk gene sequence py quick start you can directly use scripts on 9 algorithms and 2 benchmarks stated in the paper may need to modify arguments parameters in bash files if necessary xxx bash indicates the factor changes with object images provided bash clutter bash bash illumination bash bash pixel bash bash occlusion bash bash sequence bash running benchmark 1 individual experiments can be run with main py main option is python3 main py factor which kind of experiment clutter illumination occlusion pixel running benchmark 2 the main option to run benchmark2 is python3 main py factor sequence running specific baseline methods elastic weight consolidation ewc main py ewc savepath ewc online ewc main py ewc online savepath ewconline synaptic intelligence si main py si savepath si learning without forgetting lwf main py replay current distill savepath lwf deep generative replay dgr main py replay generative savepath dgr dgr with distillation main py replay generative distill savepath distilldgr replay through feedback rtf main py replay generative distill feedback savepath rtf cumulative main py cumulative 1 savepath cumulative naive main py savepath naive repository structure openloriscode img lib callbacks py continual learner py encoder py exemplars py replayer py train py vae models py visual plt py compare py compare replay py compare taskid py data py evaluate py excitability modules py main py linear nets py param stamp py pk gene py visual visdom py utils py
readme md citation please consider citing our papers if you use this code in your research article she2020iros title iros 2019 lifelong robotic vision challenge lifelong object recognition report author she qi and feng fan and liu qi and chan rosa hm and hao xinyue and lan chuanlin and yang qihan and lomonaco vincenzo and parisi german i and bae heechul and others journal arxiv preprint arxiv 2004 14774 year 2020 article she2019openlorisobject title openlorisobject a robotic vision dataset and benchmark for lifelong deep learning author she qi and feng fan and hao xinyue and yang qihan and lan chuanlin and lomonaco vincenzo and shi xuesong and wang zhengwei and guo yao and zhang yimin and others journal international conference on robotics and automation icra year 2020 acknowledgements parts of code were borrowed from here https github com gmvandeven continual learning issue want to contribute open a new issue or do a pull request in case you are facing any difficulty with the code base or if you want to contribute features pending openloris object base x openloris object dataset configuration files x openloris object sample codes cl baseline x sota cl methods cl benchmarks for image classification cl benchmarks mnist and cifar 100 datasets visualization x visualization tools for 4 cl metrics x dl backbones vgg 16 resnet 18 50 101 efficientnet applications ego gesture recognition online action recognition contrastive learning for self supervised object segmentation few shot learning with object recognition algorithms robust adversarial training with transfer learning
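the baseline commands listed above share a common option surface; this is a sketch of how such a cli could be declared with argparse (the flag names follow the commands above, but the defaults and choice lists are guesses, not the project's actual main py):

```python
import argparse

# option surface mirroring the quick-start commands: --factor picks the
# benchmark, the method flags pick the baseline, --savepath names the run
parser = argparse.ArgumentParser(description="continual learning runner")
parser.add_argument("--factor", choices=["clutter", "illumination",
                                         "occlusion", "pixel", "sequence"])
parser.add_argument("--ewc", action="store_true")
parser.add_argument("--ewc-online", action="store_true")
parser.add_argument("--si", action="store_true")
parser.add_argument("--replay", choices=["current", "generative"])
parser.add_argument("--distill", action="store_true")
parser.add_argument("--savepath", default="naive")

# e.g. the "dgr with distillation" baseline from the list above
args = parser.parse_args(["--factor", "clutter", "--replay", "generative",
                          "--distill", "--savepath", "distilldgr"])
print(args.replay, args.distill)  # generative True
```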
ai
ReactNative-MAD-
reactnative mad resources for mobile application development using react native important links javascript https developer mozilla org en us docs web javascript https www youtube com watch v qqx wzmmfea video tutorial by freecodecamp using github desktop https www youtube com watch v rpagoaux2sq https www youtube com watch v goy9wmyr7pu
front_end
rtos-views
rtos views rtos views for micro controllers that work with any debugger this was initially part of cortex debug and has been re factored into its own extension and is now debugger agnostic currently cortex debug cppdbg and cspy are supported but others can easily be added upon request the debugger has to support the debug adapter protocol https microsoft github io debug adapter protocol this protocol specifies how to format and make requests and responses most of the supported rtoses have been contributed by users and if yours is not supported or you want features to be added to existing ones please feel free to contribute here is the guide for getting started todo add link to the guide md each rtos has its own personality and its own set of global variables we look for the existence of certain global variables to decide if there is an rtos sometimes there may be optional features that are not in use and that is okay some of the format may also be a bit different between rtoses but we hope to keep the same look and feel the detection mechanism starts the first time the program is in stopped state if an rtos is detected we start tracking it each time the program is in stopped state most debuggers use gdb as the backend and as such they will not allow probing while program is running this is also an expensive operation that takes about a second we also have support for multiple cores boards where each core could be running an rtos same or different if in the future a debugger allows non intrusive background queries then we might consider updating views while program is running the following is an example of uc os ii rtos view uc os ii images ucos ii png here is an example of a freertos view freertos images freertos full png here is an example of a freertos view with some information missing we generally try to provide help to tell you what is missing and how you can change that freertos with partial info images freertos png note the tab name in the
screenshots is called xrtos so it does not conflict with cortex debug once the migration is complete it will be called rtos and cortex debug itself will not have this functionality contributors and maintainers rtos contributor freertos haneefdm uc os ii philipphaefele mayjs embos philipphaefele zephyr beta philipphaefele chibios beta vrepetenko vr
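the detection mechanism described above, probing for rtos-specific globals, can be sketched as a lookup over the symbols present in the firmware; the signature variable names here are illustrative examples, and the real extension evaluates such expressions through the debug adapter protocol rather than a plain set:

```python
# each rtos personality is identified by globals expected to exist in
# the firmware's symbol table (illustrative names only)
SIGNATURES = {
    "FreeRTOS": {"pxCurrentTCB", "xSuspendedTaskList"},
    "uC/OS-II": {"OSTCBCur", "OSTCBList"},
}

def detect_rtos(symbols):
    """Return the first rtos whose signature globals are all present,
    or None when no known rtos is detected."""
    for name, required in SIGNATURES.items():
        if required <= symbols:  # subset test: all required globals exist
            return name
    return None

print(detect_rtos({"pxCurrentTCB", "xSuspendedTaskList", "main"}))  # FreeRTOS
```

this also shows why optional features can be missing without breaking detection: only the signature globals are required, and anything beyond them is probed opportunistically.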
os
blockchain-for-healthcare
blockchain for healthcare a proof of concept note a lot has changed in the ethereum world since this project was completed and the setup instructions might not work now if you being a good samaritan run into an error during setup and are able to fix it please raise a pull request to help others big big thanks from me installation the projects requires nodejs and npm to work instructions to install all other dependencies are given below note the instructions given below are for linux specifically ubuntu 18 04 you should be able to find similar instructions for macos and windows although support is available for windows i recommend using linux or macos windows has some difficulty playing with npm node modules 1 move to the project directory and open it in your terminal 2 run npm install ganache 1 go to ganache homepage https truffleframework com ganache and download 2 if you are on linux you must have received an appimage file follow installation instructions available here https itsfoss com use appimage linux ipfs 1 go to the download page https docs ipfs io introduction install of ipfs and follow the instructions given local server 1 you can use any local server to deploy the web application 2 i used php but feel free to choose anything of your liking 3 to install php on your linux machine run sudo apt get install php detailed instructions available here https thishosting rocks install php on ubuntu 4 one more great option is lite server which is available as a node module 5 install lite server by running the following command on your terminal npm install g lite server metamask 1 metamask is a browser extension available for google chrome mozilla firefox and brave browser 2 go to the this link http metamask io and add metamask to your browser getting the dapp running configuration 1 ganache open ganache and click on settings in the top right corner under server tab set hostname to 127 0 0 1 lo set port number to 8545 enable automine under accounts keys tab enable 
autogenerate hd mnemonic 2 ipfs fire up your terminal and run ipfs init then run ipfs config json api httpheaders access control allow origin ipfs config json api httpheaders access control allow credentials true ipfs config json api httpheaders access control allow methods put post get 3 metamask after installing metamask click on the metamask icon on your browser click on try it now if there is an announcement saying a new version of metamask is available click on continue and accept all the terms and conditions after reading them stop when metamask asks you to create a new password we will come back to this after deploying the contract in the next section deploying the contract i purposely haven t used any development framework so as to keep the code as raw as possible this will also be easier to understand for any newcomer who is already having a tough time understanding the many technologies the application is built on 1 starting your local development blockchain open ganache make sure to configure it the way mentioned above moving on to deploy the contract on the blockchain you have two options use any available development framework for dapps i recommend the truffle https truffleframework com truffle framework embark https embark status im is another great alternative go full on geek mode and deploy it yourself with a few lines of code i ll be explaining the second method here 2 deploying the contract and linking it to the frontend fire up your terminal and move to the project directory now open up your project directory src js run js in your favourite text editor you have to make two changes 1 make sure the address in line number 3 is the same as your rpc server address on ganache if you have configured ganache as instructed above the code should look like this var web3 new web3 new web3 providers httpprovider http localhost 8545 2 the path in this line should point to where your solidity contract is located var code fs readfilesync your project directory 
contracts agent sol tostring go back to your terminal type node and hit enter copy and paste all the contents of run js to the terminal if all goes well you should see a few lines as output of the command console log compiledcode contracts agent interface this is the abi of the contract copy and paste these lines in line number 10 of app js the code should look like abi json parse paste your abi here go back to the terminal and type deployedcontract address which is also the last command of your run js file the output is the address where the contract is deployed on the blockchain copy the output and paste it on line number 13 of app js the code should look like contractinstance agentcontract at paste your address here that s it for this part now let s set up metamask running the dapp 1 connecting metamask to our local blockchain let s go back to the configuration section of metamask if done correctly you would have stopped at the part where metamask asks you to create a new password just below the create button click on the import with seed phrase a form should open up asking you to enter wallet seed open ganache copy the twelve words that make up the mnemonic on the accounts tab paste the twelve words in wallet seed create a new password and click import 2 starting ipfs open a new terminal window make sure you have configured ipfs as mentioned above run ipfs daemon 3 start a local server open a new terminal window and navigate to your project directory src run php s localhost 3000 open localhost 3000 register html on your browser that s it the dapp is up and running locally
blockchain
AB-Demo
ab demo simple front end a b experiment view it live https lambdaschool github io ab demo
front_end
SQL-Database-Management-System
sql database management system
sql-database-management sql database-management database-schema
os
Maxim
maxim ok so this was a quickly hacked together thing we did for a mooc in 2013 i really don t think you want to use this anymore but i ll leave it here for posterity cross platform javascript java audio dsp and mobile web development library compatible with processing maxim is designed to make it easier to program cross platform audio for desktops and mobile platforms it provides a single api for building complex audio applications on android ios and the desktop using the webaudioapi in combination with traditional java approaches for compatibility it s a work in progress but vastly simplifies the process of getting started writing audio and music software for mobile platforms some notes if you are using javascript mode make sure your browser supports webaudioapi properly see here for a list of browsers that support webaudio http caniuse com audio api
front_end
Alpha-Co-Vision
alpha co vision https github com xissax alpha co vision assets 86708276 736978ab 5c66 4335 a2cf 2daaa64250a0 a real time video to text bot that captures frames generates captions and creates conversational responses using a large language models base to create interactive video descriptions powered by blip bootstrapping language image pre training and cohere ai this bot is capable of unified vision language understanding and generation using transformers description alpha co vision is the first step in a series of upcoming projects focused on real time generations to ultimately create a pet toy robot capable of understanding its environment to better interact with humans the main goal of this project was to efficiently run a video frames to text multimodal esque model capable of understanding the world while combining it with the power of cutting edge large language models to better interact with the natural environment running blip in half precision float16 on macbook m1 to gain maximum performance the project is currently under development and will improve over time with more support for other chat models such as gpt 4 and gpt 3 5 turbo and locally running llms like llama and alpaca this was hacked in a couple of nights and may be optimized incorrectly or poorly moreover this project is for educational purposes only future updates with growing community support will include cuda support voice input output support gpt 3 5 and gpt 4 for extended generations with chat support and much more requirements python 3 7 or higher cohere opencv python pillow torch transformers openai optional recent updates added an experimental feature that allows you to see responses directly on the video display recommended use main2exp py reduced repetition by maintaining a list of previous responses and checking the similarity between new responses and past responses the bot is less likely to repeat itself resulting in a more engaging and natural conversation improved conversation quality the 
updated prompt with more examples and clearer instructions helps the model understand the task better leading to more relevant and context aware responses mirrored video display flipping the frame horizontally provides a mirrored display for the user making it more comfortable for them to view their own video feed without affecting the input to the model added up to full hd 4k support you can install the required packages using the following command pip install cohere opencv python pillow torch transformers openai project structure usage and customization blip https github com salesforce blip blip on hugging face https huggingface co spaces salesforce blip cohere ai https cohere ai get your cohere ai api key here https dashboard cohere ai api keys try cohere s playground here https dashboard cohere ai playground generate for support more info join cohere s incredible discord community https discord com invite co mmunity project structure config py contains api keys and other configurations image processing py contains functions related to image processing caption generation py contains functions related to caption generation using the blip model response generation py contains functions related to response generation using the cohere ai api main py the main file that runs the program usage 1 set up your api keys in the config py file cohere api key your cohere api key 2 cohere api key your cohere api key in config py 1 run the main py file python main py 2 press q on the camera window to quit optional tweaks tweak llms outputs def process frame frame current time last generation time 3 for more or less llm generations optimal captions 2 tweak captions outputs def main loop current time last process time 2 to generate more or less image processing captions 2 optimal 0 realtime have fun make sure to do some activity for the camera for maximum fun show your surroundings more objects people or pets also over time it increases its understanding of your surroundings 
and would keep generating better and better outputs use your iphone as a webcam on mac https support apple com en ca guide mac help mchl77879b8a mac on macos ventura 13 connects to your iphone first 1 should you not wish to use it please turn off your bluetooth either on your iphone or mac and disconnect your iphone from your mac via cable 2 if it fails on your first try restart python main py macos cpu gpu support install pytorch for m1 pt tutorial is live follow these instructions to install pytorch on apple silicon https medium com vkkvben10 how to install pytorch on apple silicon mac m1 m2 easiest guide d31a7c683367 pre macos version pytorch is supported on macos 10 15 catalina or above visit the link https pytorch org get started locally select preview nightly in pytorch build navigate to the macos version https pytorch org get started locally macos version section follow the instructions pt mps is only supported on mac install tensorflow for m1 tensorflow model was recently added to hugging face tf update coming soon meanwhile https developer apple com metal tensorflow plugin follow the instructions to install tensorflow on your own currently optional option to switch between mac cpu gpu soon how it works 1 the program captures webcam frames 2 frames are converted to pil images 3 captions are generated using the blip captioning model 4 conversational responses are generated based on the captions using the cohere ai s api 5 captions and responses are displayed on the webcam feed in real time example the bot captures an image of a person working on their computer caption a person working on a computer with code alpha co i see you re multitasking while we chat keep up the great work remember hard work beats talent when talent doesn t work hard customizing the bot you can customize the bot by modifying the prompt in the response generation py file or adjusting the settings such as max tokens and temperature when calling the cohere api notes this bot uses the webcam so 
grant permission to access the camera press q to quit the program while displaying the webcam feed this project was built on a mac m1 efficiently running half precision float16 future updates to include support for cuda credits this project utilizes the blip model for generating image captions special thanks to salesforce s research team for their work on blip bootstrapping language image pre training for unified vision language understanding and generation using transformers their research and model have greatly contributed to developing this video caption to interaction bot special thanks thank you to cohere ai for their unwavering support and motivation throughout this project your encouragement and cutting edge technology have played a crucial role in our success and i m grateful for the opportunity to collaborate and innovate together here s to pushing boundaries and shaping the future of ai future updates 1 an api rate limiter 2 gpt 3 5 and gpt 4 for more extended generations and chat support 3 llama alpaca and other llms support for running everything locally 4 chat input messages to have a conversation 5 voice input output support 6 ability to fine tune blip caption model 7 ability to fine tune llms 8 cpu cuda support 9 ability to switch between full precision half precision
deep-learning real-time-video-captioning video-to-text video-to-caption video-to-llm
ai
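The repetition-reduction update described in the alpha co vision readme above (maintain a list of previous responses and reject new ones that are too similar to any of them) can be sketched in plain Python with the standard library. The function names and the 0.8 similarity threshold here are illustrative assumptions, not the project's actual code.

```python
import difflib

def is_repetitive(new_response, past_responses, threshold=0.8):
    """Return True if new_response is too similar to any earlier response.

    Similarity is difflib.SequenceMatcher's ratio (0.0 to 1.0); the 0.8
    threshold is an illustrative choice, not the project's value.
    """
    for past in past_responses:
        ratio = difflib.SequenceMatcher(
            None, new_response.lower(), past.lower()
        ).ratio()
        if ratio >= threshold:
            return True
    return False

def accept_response(new_response, history):
    """Append the response to history only if it is not repetitive."""
    if not is_repetitive(new_response, history):
        history.append(new_response)
        return True
    return False
```

Each candidate reply from the language model would pass through `accept_response` before being shown, so near-duplicate replies are silently dropped and the conversation stays varied.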
just-large-models
just large models hackable with as little abstraction as possible done for my own purposes feel free to rip every model should have its own runnable logic separate not shared each file does a thing the adaptability of huggingface s code is incredibly bad due to over abstraction and incidental complexity therefore diy right now i m still improving it as i have time this is designed to rely on hf for the files and loading of the files otherwise that is where the dependency ends rules the code is the tool edit it as you see fit all h ggingface imports will be placed in quarantine model pass files will contain no references i will not be addressing issues not a single kwargs shall be observed one model forward pass one function call simple as current models python llama py
ai
WhiteChapel
white chapel password auditing framework this project is meant to be run internally since i haven t really seen any open source projects that do all the things i think a password auditing framework should do i m creating my own here are the features that i intend to have please feel free to create bug reports or feature requests outside of the items stipulated here 1 search for hashes quickly 2 upload password dumps for cracking hashes 3 upload hash lists for cracking 4 generate hash tables for all popular hash types based on searched password uploaded dictionaries and cracked hashes pre installation elastic search whitechapel requires you to have elasticsearch running you can download it here http www elasticsearch org download once you have it downloaded if you are using the tar just cd into the bin directory and do a elasticsearch f to start elastic search up elastic search doesn t have to run on the same machine as you are running whitechapel just make a config file called elastic conf copying the example provided elastic example conf with the url usually http 127 0 0 1 9200 if you are running es locally elastic search has clustering built into it and running another elastic search server on another system in the same broadcast area will automatically join the cluster and decrease the load elastic search on osx thanks to mandreko here is how you install es on osx w brew brew install elasticsearch then to get it to launch at startup launchctl load library launchagents homebrew mxcl elasticsearch plist redis server for queue management you can download it here http redis io download most package managers apt get yum osx ports brew have redis server as a package and it s really easy to get set up there is also a redis ip port configuration in the rakefile if you want to run redis on another server this makes it seamless to upload dictionaries worth of passwords and have the server not flinch at 100mb files obviously the upload might take a minute but the db will 
process it very fast you can have more than one queue redis server if you want as pretty much every action is compartmentalized installation git clone https github com mubix whitechapel git cd whitechapel bundle install starting workers you can start additional workers to handle the password import processing usually only an issue when importing big wordlists by issuing the following command term child 1 queue rake resque work from inside the whitechapel directory you can also start multiple workers at once as so count 25 term child 1 queue rake resque workers execution foreman start importing dictionaries from the command line for most cases file upload via the web interface is adding a hurdle http upload that doesn t need to be there so running the ruby file dictionaryimport cli rb from within the whitechapel directory will directly import the wordlist into the password processing queue dictionaryimport cli rb path to wordlist rockyou txt should simply output how many lines it imported when it s done todo list see the file todo list or github issues notes it s all kinds of fun using a ton of different tools to crack passwords and then having to sort and go through and maintain or delete them right this project will hopefully be a very modular front end to cracking passwords the idea is you tell it a tool to use and how to use it and what to expect in results then the overlying framework should swallow that up and allow you to upload crack and manage passwords hashes and dictionary collections allowing you to look back historically at what was cracked and with what tool resend a group through the engines again have as many engines as you want etc giving you more time to concentrate on using the passwords instead of figuring out the tools to break them if i can keep the idea as scalable as possible i think it would fit really well plugged into any pentester red teamer or firm s toolkit crossed fingers also i picked the name based on where jack the ripper was 
performing his murders seems a bit dark now that i think about it but oh well blame section twitter bootstrap for the prettifying of the interface http twitter github com bootstrap used https github com pokle sinatra bootstrap for sinatra jasny bootstrap specifically used to pretty up the upload dialog since tb doesn t mysql hash generation https gist github com 1290541 change sublimetext 2 to use rvm instead of system ruby http rubenlaguna com wp 2012 12 07 sublime text 2 rvm rspec take 2
front_end
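Feature 4 of the whitechapel readme above (generate hash tables for popular hash types from uploaded dictionaries, so that feature 1's fast hash search becomes a lookup) can be sketched with Python's hashlib. The function names and the md5/sha1/sha256 subset of "all popular hash types" are illustrative assumptions, not the project's actual Ruby code.

```python
import hashlib

def build_hash_table(words, algorithms=("md5", "sha1", "sha256")):
    """Precompute digests for every dictionary word.

    Returns {algorithm: {hexdigest: plaintext}} so a captured hash can be
    reversed with a dictionary lookup instead of a cracking run.
    """
    table = {alg: {} for alg in algorithms}
    for word in words:
        data = word.encode("utf-8")
        for alg in algorithms:
            table[alg][hashlib.new(alg, data).hexdigest()] = word
    return table

def lookup(table, digest):
    """Search every algorithm's table for a hex digest; return (alg, word) or None."""
    for alg, entries in table.items():
        if digest in entries:
            return alg, entries[digest]
    return None
```

In a real deployment the tables would live in elasticsearch rather than in-process dicts, but the shape of the data (digest mapped back to plaintext, per algorithm) is the same.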
TagGPT
taggpt large language models are zero shot multimodal taggers taggpt is a fully automated system capable of tag extraction and multimodal tagging in a completely zero shot fashion produced by qq arc joint lab at tencent pcg a href https huggingface co spaces tencentarc taggpt img src https img shields io badge f0 9f a4 97 open 20in 20spaces blue a a href https arxiv org abs 2304 03022 img src https img shields io badge arxiv tech 20report green a dependencies python 3 7 pytorch 2 0 0 transformers 4 27 4 bash pip install r requirements txt how to use taggpt step 1 tagging system construction you need a batch of data to build your tagging system here we can use the kuaishou open source data which you can download here https pan baidu com s 1v6x14o5k9ium3a is29uoa pwd ihc2 list path 2f password ihc2 first you can place the data in the data folder and format it with the following command bash python scripts main py data path data 222k kw ft func data format then you can use the following command to generate candidate tags based on llms bash python scripts main py data path data sentences txt func tag gen openai key put your own key here gen feq 5 next the tagging system can be obtained by post processing bash python scripts main py data path data tag gen txt func posterior process step 2 data tagging taggpt can assign tags to the given samples based on the built tagging system and you can adapt your data to the format of data examples csv and taggpt provides two different tagging paradigms 1 generative tagger bash python main py data path data examples csv tag path data final tags csv func generative tagger openai key put your own key here 2 selective tagger bash python main py data path data examples csv tag path data final tags csv func selective tagger openai key put your own key here acknowledgements we appreciate the open source of the following projects kuaishou hugging face langchain citation if you find this work useful for your research or applications 
please cite our technical report article li2023taggpt title taggpt large language models are zero shot multimodal taggers author li chen and ge yixiao and mao jiayong and li dian and shan ying journal arxiv preprint arxiv 2304 03022 year 2023 contact information for help or issues using the taggpt please submit a github issue
chatgpt tagger
ai
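The taggpt pipeline above first generates candidate tags with an llm (the tag gen step, where gen feq 5 suggests multiple generation passes) and then builds the tagging system by post-processing (the posterior process step). A plausible sketch of such a post-processing step, normalizing tags and keeping only those that recur across passes, is below; the normalization and the recurrence threshold are guesses at what the step might do, not the paper's actual method.

```python
from collections import Counter

def postprocess_tags(generated, min_count=2):
    """Reduce noisy LLM tag generations to a stable tagging system.

    generated: list of tag lists, one per generation pass.
    Tags are stripped and lowercased, counted once per pass (so a single
    verbose pass cannot dominate), and kept only if they appear in at
    least min_count passes. Threshold and normalization are illustrative.
    """
    counts = Counter()
    for tags in generated:
        counts.update({t.strip().lower() for t in tags})
    return sorted(tag for tag, n in counts.items() if n >= min_count)
```

The surviving tags would then form the tagging system against which the generative or selective tagger assigns labels to new samples.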
ComputerVision-Projects
computervision projects install this project requires python 3 and the following python libraries installed opencv https opencv org numpy http numpy org dlib https github com davisking dlib following are some links to install opencv and dlib on mac windows and linux opencv https github com opencv opencv mac https www learnopencv com install opencv3 on macos windows https www learnopencv com install opencv3 on windows ubuntu https www learnopencv com install opencv3 on ubuntu dlib https github com davisking dlib mac https www learnopencv com install dlib on macos windows https www learnopencv com install dlib on windows ubuntu https www pyimagesearch com 2017 03 27 how to install dlib run bash python file name py about some simple computer vision implementations using opencv such as extracting facial landmarks for facial analysis by applying filters and face swaps approximating contours contour filtering and ordering segmenting images by understanding contours circle and line detection feature detection sift and orb to do object detection and implementing object detection for faces and cars generative adversarial networks gans applied to images
opencv python computer-vision
ai
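The contour filtering and ordering mentioned in the computervision projects readme above is usually done in OpenCV by computing `cv2.contourArea` for each contour and sorting. A dependency-free sketch of the same idea, using the shoelace formula for polygon area, is below; the function names and the min-area threshold are illustrative, not taken from the project's code.

```python
def contour_area(points):
    """Polygon area via the shoelace formula (what cv2.contourArea computes
    for a simple closed contour); points is a list of (x, y) vertices."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def filter_and_order(contours, min_area=10.0):
    """Drop tiny contours (typically noise) and order the rest largest-first,
    mirroring a common contour filtering and ordering step."""
    kept = [c for c in contours if contour_area(c) >= min_area]
    return sorted(kept, key=contour_area, reverse=True)
```

With OpenCV the same two lines become a filter on `cv2.contourArea(c)` and a `sorted(...)` call over the contours returned by `cv2.findContours`.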
IDVerification
id verification by librax ai this is the first free identity verification in the market librax ai https www librax ai is an identity verification platform for developers this solution is to verify user id image for name matching age and basic fraud detection for id librax is providing this service for free as long as we are in business and you can reach out to hello librax ai for any add on features or customized requirement example for id verification https www librax ai wp content uploads 2021 03 screen shot 2021 03 12 at 11 45 22 am e1615567678450 png how to use the api 1 for id verification use the rest api https api librax ai id verify 2 create a json payload for the request having firstname lastname and base64 encoding of id on which you wish to run id verification on firstname first name lastname last name idphoto base64 encoded string 3 set the content type header as application json 4 using the subscription key obtained in previous step set the ocp apim subscription key header as follows ocp apim subscription key your subscription key 5 send your request to the server the response will be in json format note detailed response status code is documented here dev librax ai https dev librax ai api details api id verification api operation post id verify example code can be found here id verification example py how to get the subscription key 1 you need to register an account in https dev librax ai 2 you will get a confirmation email it may land in your spam folder 3 you need to log in and go to the product page you can name your subscription and click the subscribe button there 4 you can see your subscription and key token in your profile page future roadmap we are looking forward to hearing your feedback about the future roadmap please reach out to us
identity verification machine-learning inference-engine fraud-detection fraud-prevention fraud risk computer-vision data-science driverlicense passport risk-analysis risk-management risk-scores agedetection
ai
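Steps 2 through 4 of the api instructions above (a json payload with firstname, lastname, and a base64-encoded idphoto, plus the content-type and ocp-apim-subscription-key headers) can be sketched in Python. Only request preparation is shown, with the standard library; actually sending it is left to any HTTP client, and the helper name is an illustration, not part of a librax sdk.

```python
import base64
import json

def build_id_verify_request(first_name, last_name, id_image_bytes, subscription_key):
    """Assemble the JSON body and headers for POST https://api.librax.ai/id/verify
    as described in the readme. Returns (body, headers)."""
    payload = {
        "firstname": first_name,
        "lastname": last_name,
        # the API expects the ID image as a base64-encoded string
        "idphoto": base64.b64encode(id_image_bytes).decode("ascii"),
    }
    headers = {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": subscription_key,
    }
    return json.dumps(payload), headers
```

The returned body and headers can be passed straight to `urllib.request.Request` or `requests.post`; the server replies with a json document whose status codes are listed at dev librax ai.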
esp-ir
esp ir library for esp open rtos https github com superhouse esp open rtos to send and receive ir commands receiving ir codes can be done on arbitrary pin which supports gpio mode and pin change interrupts receiver wiring resources ir decoder wiring png big black thing being ir decoder e g tsop38238 consult datasheet on particular part pinout transmission though can only be done on gpio14 transmitter wiring resources ir led wiring png pretty much any npn transistor will do e g 2n2222 transistor base resistor could be 10k om led resistor is calculated based on led parameters but you probably safe by assuming it can handle 10 20ma and go with 220 om example sending command c include ir ir h include ir raw h static int16 t command1 3291 1611 443 370 425 421 421 1185 424 422 421 1185 425 421 421 370 424 392 448 1188 423 1214 444 372 422 395 447 397 420 1186 449 1185 424 423 419 375 441 372 423 422 420 372 444 370 424 422 420 372 421 393 424 421 421 371 422 392 449 398 420 1185 450 396 421 370 422 423 ir tx init ir raw send command1 sizeof command1 sizeof command1 example receiving nec like command c include ir ir h include ir generic h define ir rx gpio 12 static ir generic config t my protocol config header mark 3200 header space 1600 bit1 mark 400 bit1 space 1200 bit0 mark 400 bit0 space 400 footer mark 400 footer space 8000 tolerance 10 ir rx init ir rx gpio 1024 ir decoder t generic decoder ir generic make decoder my protocol config uint8 t buffer 32 while 1 uint16 t size ir recv generic decoder 0 buffer sizeof buffer if size 0 continue printf decoded packet size d size for int i 0 i size i printf 0x 02x buffer i if i 16 15 newline after every 16 bytes of packet data printf n if size 16 print final newline unless packet size is multiple of 16 and newline was printed inside of loop printf n license mit licensed see the bundled license https github com maximkulkin esp ir blob master license file for more details
esp8266 ir remote
os
embeddedsystem
embeddedsystem embedded system design a repository of the lab work for the embedded system design course
os
LLM-Planner
llm planner few shot grounded planning for embodied agents with large language models code for llm planner https arxiv org abs 2212 04088 check project website https dki lab github io llm planner for an overview and a demo news jul 23 llm planner has been accepted to iccv 2023 catch us in paris this october jul 23 we will release the code soon thanks for your interest release process x high level planner x knn dataset x knn retriever low level planner few shot trained models citation information if you find this code useful please consider citing our paper inproceedings song2023llmplanner author song chan hee and wu jiaman and washington clayton and sadler brian m and chao wei lun and su yu title llm planner few shot grounded planning for embodied agents with large language models booktitle proceedings of the ieee cvf international conference on computer vision iccv month october year 2023
ai
HackMentor
readme zh md english readme md hackmentor fine tuning large language models for cybersecurity hackmentor pdf hackmentor logo assets hackmentor png hackmentor is a cybersecurity llms large language models focused on domain specific data fine tuning this project consists of three main parts data construction model training and model evaluation also you can get more detailed information by reading paper interpretation hackmentor https mp weixin qq com s engdem0p6cxrdk42yrb90w features data construction methods and tools for creating domain specific datasets instructions conversations for fine tuning llms model training techniques and processes for training llms on the constructed datasets model evaluation metrics and evaluation methodologies to assess the performance of the fine tuned models lora weights model we release the lora weights model which is available for download download lora weights all hackmentor weights are accessible here https drive google com drive folders 1 woz0dsfkq8qyu x3q0pgoygodn a20t usp drive link and the specified lora weights can be accessed in the table below hackmentor lora weights llama 7b lora iio download https drive google com drive folders 13xbcqmizfwbtlaj7oeyrco9qbvn0zdcm usp drive link llama 13b lora iio download https drive google com drive folders 17i3a1uuckppujo3dglvxmvzwqjviqcoe usp drive link vicuna 7b lora iio download https drive google com drive folders 1loen7qh153qqz10sfykdb9afcskiinmk usp drive link vicuna 13b lora iio download https drive google com drive folders 1sf51j4kdygm356vlx kukini7xywnbtf usp drive link llama 7b lora turn download https drive google com drive folders 1e hb3yhlo25y6cl rhrnrqlturhgf1af usp drive link llama 13b lora turn download https drive google com drive folders 1lell6wh1muwtqzge5utmnmh7auniihek usp drive link notes 1 the naming convention for hackmentor is as follows base model model size fine tuning method fine tuning data here the base model can be llama vicuna the model size can be 7b 13b 
the fine tuning method can be lora with plans to include full parameters fine tuning and the fine tuning data can be iio or turn where iio represents instruction input output data and turn represents conversation data 2 in our testing the best performing model was llama 13b lora iio for reference local deployment and usage to deploy and utilize the lora weights model locally follow the steps below 1 download the base models llama vicuna and the lora weights model provided by this project and place them in the models directory 2 download chat py configure the environment and ensure the following dependencies are installed for running the python file python bitsandbytes 0 39 0 fire 0 5 0 peft git https github com huggingface peft git 3714aa2fff158fdfa637b2b65952580801d890b2 torch 2 0 1 transformers 4 28 1 3 switch to the corresponding directory and run the scripts shown in the table below according to the requirements base model lora weights domain execute command llama 7b general python chat py base model models pretrained llama 7b use lora false vicuna 13b general python chat py base model models pretrained vicuna 13b use lora false llama 13b llama 13b lora iio security python chat py base model models pretrained llama 13b lora model models lora models llama 13b lora iio vicuna 7b vicuna 7b lora iio security python chat py base model models pretrained vicuna 7b lora model models lora models vicuna 7b lora iio llama 7b llama 7b lora turn security python chat py base model models pretrained llama 7b lora model models lora models llama 7b lora turn please note that the above code examples are for illustrative purposes only and you may need to make appropriate adjustments based on your specific situation qa 1 q1 about computing resources and training time a1 computing resources are dependent on model size and training methods for lora fine tuning of the 7 13b model 1 a100 gpu is sufficient for full parameters fine tuning the 7b model requires 2 3 a100 gpus while the 
13b model requires 4 a100 gpus training time is influenced by the amount of data and training methods with the lora method using 30 000 data samples an a100 gpu can complete training in approximately 4 hours for full parameters fine tuning the training time is expected to be around 3 5 or more times longer than the lora method please note that training time may vary slightly and is provided for reference only 2 q do you train and validate the effectiveness of a specific security task with llms such as security information extraction a no the purpose of this work is to enhance and ignite the overall security capabilities of llms for general security skills contribution we welcome contributions to the hackmentor project if you find any issues or have any improvement suggestions please submit an issue or send a pull request your contributions will help make this project better acknowledgements this project refers to the following open source projects and i would like to express my gratitude to the relevant projects and research and development personnel llama by meta https github com facebookresearch llama fastchat by lm sys https github com lm sys fastchat stanford alpaca by tatsu lab https github com tatsu lab stanford alpaca citation if you use the data or code of this project or if our work is helpful to you please state the citation inproceedings hackmentor2023 title hackmentor fine tuning large language models for cybersecurity author jie zhang hui wen liting deng mingfeng xin zhi li lun li hongsong zhu and limin sun booktitle 2023 ieee international conference on trust security and privacy in computing and communications trustcom year 2023 organization ieee
ai