Columns: **names** (string, lengths 1–98) · **readmes** (string, lengths 8–608k) · **topics** (string, lengths 0–442) · **labels** (6 classes)
**thingsvision** · topics: pytorch, neural-networks, computer-vision, tensorflow, deep-learning, representations, alignment, cognitive-science · label: ai

# thingsvision

## :notebook_with_decorative_cover: Table of contents
- :star2: About the project
- :mechanical_arm: Functionality
- :file_cabinet: Model collection
- :running: Getting started
  - :computer: Setting up your environment
  - :mag: Basic usage
- :wave: How to contribute
- :warning: License
- :page_with_curl: Citation
- :gem: Contributions

## :star2: About the project
thingsvision is a Python package that lets you easily extract image representations from many state-of-the-art computer vision models. In a nutshell, you feed thingsvision with a directory of images and tell it which neural network you are interested in. thingsvision will then give you the representation of the indicated neural network for each image, so that you will end up with one feature map (vector or matrix, depending on the layer) per image. You can use these features for further analyses. We use the word "features" for short when we mean "image representation".

:rotating_light: Note: some function calls mentioned in the original paper (https://www.frontiersin.org/articles/10.3389/fninf.2021.679838/full) have been deprecated. To use this package successfully, exclusively follow this README and the documentation (https://vicco-group.github.io/thingsvision/). :rotating_light:

## :mechanical_arm: Functionality
With thingsvision, you can:
- extract features for any imageset from many popular networks,
- extract features for any imageset from your custom networks,
- extract features for >26,000 images from the THINGS image database (https://osf.io/jum2f/),
- optionally turn off the standard center cropping performed by many networks before extracting features,
- extract features from HDF5 datasets directly (e.g., NSD stimuli),
- conduct basic representational similarity analysis (RSA) after feature extraction (see the sketch after this list),
- perform centered kernel alignment (CKA) to compare image features across model-module combinations.
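To give a flavor of the RSA step, here is a small, library-agnostic sketch in plain NumPy/SciPy. It is not thingsvision's own RSA API; the array shapes and the correlation-distance RDM are illustrative assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical feature matrices (n_images x n_units), e.g. extracted from
# two different model/module combinations. Shapes are assumptions.
features_a = np.random.rand(100, 512)
features_b = np.random.rand(100, 2048)

def correlation_rdm(features: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson r between image pairs."""
    return 1.0 - np.corrcoef(features)

rdm_a = correlation_rdm(features_a)
rdm_b = correlation_rdm(features_b)

# Compare the two RDMs on their upper triangles via Spearman rank correlation.
triu = np.triu_indices_from(rdm_a, k=1)
rho, _ = spearmanr(rdm_a[triu], rdm_b[triu])
print(f"RDM similarity (Spearman rho): {rho:.3f}")
```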
## :file_cabinet: Model collection
Neural networks come from different sources. With thingsvision, you can extract image representations of all models from:
- torchvision (https://pytorch.org/vision/0.8/models.html),
- Keras (https://www.tensorflow.org/api_docs/python/tf/keras/applications),
- timm (https://github.com/rwightman/pytorch-image-models),
- ssl (self-supervised learning models): simclr-rn50, mocov2-rn50, barlowtwins-rn50, pirl-rn50, jigsaw-rn50, rotnet-rn50, swav-rn50, vicreg-rn50, dino-rn50, dino-xcit-{small/medium}-{12/24}-p{8/16}, dino-vit-{tiny/small/base}-p{8/16}, dinov2-vit-{small/base/large/giant}-p14,
- OpenCLIP (https://github.com/mlfoundations/open_clip) models (CLIP trained on LAION-{400M/2B/5B}),
- CLIP (https://github.com/openai/CLIP) models (CLIP trained on WiT),
- a few custom models (AlexNet, VGG-16, ResNet50, and Inception_v3) trained on Ecoset (https://www.pnas.org/doi/10.1073/pnas.2011417118) rather than ImageNet, and one AlexNet model pretrained on ImageNet and fine-tuned on SalObjSub (https://cs-people.bu.edu/jmzhang/sos.html),
- each of the many CORnet (https://github.com/dicarlolab/CORnet) versions,
- Harmonization (https://arxiv.org/abs/2211.04533) models (see the Harmonization repo, https://github.com/serre-lab/harmonization). The default variant is ViT_B16; other available models are ResNet50, VGG16, EfficientNetB0, tiny_ConvNeXT, tiny_MaxViT, and LeViT_small,
- DreamSim (https://dreamsim-nights.github.io/) models (see the DreamSim repo, https://github.com/ssundaram21/dreamsim). The default variant is open_clip_vitb32; another available model is clip_vitb32. See the docs (https://vicco-group.github.io/thingsvision/AvailableModels.html#dreamsim) for more information.

## :running: Getting started

### :computer: Setting up your environment

#### Working locally
First, create a new conda environment with Python version 3.8, 3.9, or 3.10, e.g. by using conda:

```bash
conda create -n thingsvision python=3.9
conda activate thingsvision
```

Then, activate the environment and simply install thingsvision by running the following pip command in your terminal:

```bash
pip install --upgrade thingsvision
pip install git+https://github.com/openai/CLIP.git
```

If you want to extract features for harmonized models (https://vicco-group.github.io/thingsvision/AvailableModels.html#harmonization) from the Harmonization repo (https://github.com/serre-lab/harmonization), you have to additionally run the following pip command in your thingsvision environment (FYI: as of now, this seems to be working smoothly on Ubuntu only, but not on macOS):

```bash
pip install git+https://github.com/serre-lab/Harmonization.git
pip install keras-cv-attention-models>=1.3.5
```

If you want to extract features for DreamSim (https://dreamsim-nights.github.io/) from the DreamSim repo (https://github.com/ssundaram21/dreamsim), you have to additionally run the following pip command in your thingsvision environment:

```bash
pip install dreamsim==0.1.2
```

See the docs (https://vicco-group.github.io/thingsvision/AvailableModels.html) for which DreamSim models are available in thingsvision.

#### Google Colab
Alternatively, you can use Google Colab to play around with thingsvision by uploading your image data to Google Drive (via directory mounting). You can find the Jupyter notebook using PyTorch here (https://colab.research.google.com/github/vicco-group/thingsvision/blob/master/notebooks/pytorch.ipynb) and the TensorFlow example here (https://colab.research.google.com/github/vicco-group/thingsvision/blob/master/notebooks/tensorflow.ipynb).

### :mag: Basic usage

#### Command line interface (CLI)
thingsvision was designed to simplify feature extraction. If you have some folder of images (e.g., `./images`) and want to extract features for each of these images without opening a Jupyter notebook instance or writing a Python script, it's probably easiest to use our CLI. The interface includes two options:

- `thingsvision show-model`
- `thingsvision extract-features`

Example calls might look as follows:

```bash
thingsvision show-model --model-name alexnet --source torchvision
thingsvision extract-features --image-root data --model-name alexnet --module-name features.10 --batch-size 32 --device cuda --source torchvision --file-format npy --out-path features
```

See `thingsvision show-model -h` and `thingsvision extract-features -h` for a list of all possible arguments. Note that the CLI provides just the basic extraction functionalities, but it is probably enough for most users that don't want to dive too deep into various models and modules. If you need more fine-grained control over the extraction itself, we recommend using the Python package directly and writing your own Python script.

#### Python commands
To do this, start by importing all the necessary components and instantiating a thingsvision extractor. Here we're using CLIP from the official CLIP repo as the model to extract features from, and also load the model to GPU for faster inference:

```python
import torch
from thingsvision import get_extractor
from thingsvision.utils.storing import save_features
from thingsvision.utils.data import ImageDataset, DataLoader

model_name = 'clip'
source = 'custom'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model_parameters = {
    'variant': 'ViT-B/32'
}

extractor = get_extractor(
    model_name=model_name,
    source=source,
    device=device,
    pretrained=True,
    model_parameters=model_parameters,
)
```

As a next step, create both dataset and dataloader for your images. We assume that all of your images are in a single root directory, which can contain subfolders (e.g., for individual classes). Therefore, we leverage the ImageDataset class:

```python
root = 'path/to/root/image/directory'  # e.g., './images'
batch_size = 32

dataset = ImageDataset(
    root=root,
    out_path='path/to/features',
    backend=extractor.get_backend(),  # backend framework of model
    transforms=extractor.get_transformations(resize_dim=256, crop_dim=224),  # set the input dimensionality to whichever values are required for your pretrained model
)

batches = DataLoader(
    dataset=dataset,
    batch_size=batch_size,
    backend=extractor.get_backend(),  # backend framework of model
)
```

Now, all that is left is to extract the image features and store them on disk. Here we're extracting features from the image encoder module of CLIP (`visual`), but if you don't know which modules are available for a given model, just call `extractor.show_model()` to print all the modules:

```python
module_name = 'visual'

features = extractor.extract_features(
    batches=batches,
    module_name=module_name,
    flatten_acts=True,
    output_type='ndarray',  # or 'tensor' (only applicable to PyTorch models, of which CLIP is one)
)

save_features(features, out_path='path/to/features', file_format='npy')  # file_format can be set to "npy", "txt", "mat", "pt", or "hdf5"
```

For more examples on the many models available in thingsvision, and explanations of additional functionality like how to optionally turn off center cropping, how to use HDF5 datasets (e.g., NSD stimuli), how to perform RSA or CKA, or how to easily extract features for the THINGS image database (https://osf.io/jum2f/), please refer to the documentation (https://vicco-group.github.io/thingsvision/).
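To sanity-check the result, the stored array can be read back with NumPy. A minimal sketch, assuming a `features.npy` file under the chosen out_path (the exact filename is an assumption, not something this README pins down):

```python
import numpy as np

# Path and filename are illustrative assumptions.
features = np.load('path/to/features/features.npy')
print(features.shape)  # expected: (n_images, n_units) when flatten_acts=True
```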
## :wave: How to contribute
If you come across problems or have suggestions, please submit an issue.

## :warning: License
This GitHub repository is licensed under the MIT License; see the LICENSE.md file for details.

## :page_with_curl: Citation
If you use this GitHub repository (or any modules associated with it), please cite our paper (https://www.frontiersin.org/articles/10.3389/fninf.2021.679838/full) for the initial version of thingsvision as follows:

```latex
@article{Muttenthaler_2021,
    author = {Muttenthaler, Lukas and Hebart, Martin N.},
    title = {THINGSvision: A Python Toolbox for Streamlining the Extraction of Activations From Deep Neural Networks},
    journal = {Frontiers in Neuroinformatics},
    volume = {15},
    pages = {45},
    year = {2021},
    url = {https://www.frontiersin.org/article/10.3389/fninf.2021.679838},
    doi = {10.3389/fninf.2021.679838},
    issn = {1662-5196}
}
```

## :gem: Contributions
This library is based on the groundwork laid by Lukas Muttenthaler (https://lukasmut.github.io/) and Martin N. Hebart (http://martin-hebart.de/), who are both still actively involved, but has been extended and refined into its current form with the help of our many contributors:

- Alex Murphy (https://github.com/alxmrphi): software dev.
- Florian Mahner (https://www.cbs.mpg.de/person/mahner/1483114): software dev.
- Hannes Hansen (https://github.com/hahahannes): software dev.
- Johannes Roth (https://jroth.space/): software dev., design, docs
- Jonas Dippel (https://github.com/jonasd4): software dev.
- Lukas Muttenthaler (https://lukasmut.github.io/): software dev., design, docs, general responsibility
- Martin N. Hebart (http://martin-hebart.de/): design
- Philipp Kaniuth (https://www.cbs.mpg.de/person/kaniuth/1483114): design, docs
- Roman Leipe (https://github.com/rleipe): software dev., docs

(sorted alphabetically)

This is a joint open-source project between the Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, and the Machine Learning Group at Technische Universität Berlin. Correspondence and requests for contributing should be addressed to Lukas Muttenthaler (https://lukasmut.github.io/). Feel free to contact us if you want to become a contributor or have any suggestions or feedback. For the latter, you could also just post an issue or engage in discussions. We'll try to respond as fast as we can.
**nlp_qa_project** · topics: 11411, 11-411, question-answering, nlp, cmu · label: ai

# NLP Question Answering Project
CMU 11-411 NLP question answering project.

Last modified 2019-07-28:
- Upgraded to Python 3.7 (Python 2 is officially deprecated).
- Added requirement.txt file, as requested by many people on YouTube and personally.
- Added setup.sh to get StanfordCoreNLP and Java.
- Cleaned up README.md and removed some junk files to keep things tidy.

## Requirements
Try the following:

```
pip install -r requirement.txt
setup.sh  # please open and read this file before running
```

## How to use

```
python ask.py einstein.txt 20
python answer.py einstein.txt questions.txt
```

## Big picture
I have explained how we approached the problem from scratch here: https://youtu.be/ohm7d21c_8q

![YouTube](https://github.com/gzhami/nlp_qa_project/blob/master/youtube_img.png)

## Credits
I collaborated with Angela Liang during the oldest version (prior to 2018). Now I am only maintaining it per requests. Please drop issues if you have trouble running the code; usually I will get back to you within 48 hours as a courtesy.
**ML-Playground** · topics: playground, machine-learning · label: ai

# Machine Learning Playground
Playing with data. Live version here: http://ml-playground.com

ML Playground is an educational sandbox for beginners learning fundamental machine learning principles from scratch, or for those who want to understand ML models from a more intuitive perspective. We currently support 5 models: kNN, perceptron, SVMs, neural networks, and decision trees. Feel free to contribute to the project, whether it's in terms of explanations or adding new models.
**Showcase-** · label: server

# Showcase
Cayden Taylor, BS Information Technology w/ Cyber Security concentration.

This repository contains code from Python, PHP, HTML, and CSS courses taken for my degree at George Mason University: https://helios.vse.gmu.edu/ctaylo50/it207

All work is entirely original and full academic copyright is retained. This also complies with the Mason Honor Code.
**web-200** · label: front_end

# WEB 200: Fundamentals of Web Development

Bellevue University (http://bellevue.edu) is a private, non-profit university located in Bellevue, Nebraska, United States.

Address: 1000 Galvin Rd S, Bellevue, Nebraska 68005 (directions: https://www.google.com/maps/dir/Bellevue+University/@41.1509562,-95.9896355,12z/data=!4m8!4m7!1m0!1m5!1m1!1s0x8793886a86ca807f:0x838e857240d175eb!2m2!1d-95.9195956!2d41.1509774)

Web development degree: http://www.bellevue.edu/degrees/bachelor/web-development-bs/ ("designed by developers, for developers")

## Course description
This course examines the fundamentals specific to web development. Topics include web standards and design considerations. Course work is based on foundational HTML and CSS. Assignments will enforce learning the essential structures, coding conventions, and best practices associated with effective, modern web development environments.

## Course prerequisites
None.

## Course objectives
Students who successfully complete this course should be able to:
1. Write semantic HTML and CSS
2. Discuss various web development tools
3. Explain the processes used in, and topics related to, web development
4. Apply critical thinking to issues associated with web development

## Topic outline
1. HTML/CSS: HTML5 semantic elements, page layouts, page styling, media queries
2. Web development tools: text editors and extensions, browser tools, version control
3. Web development processes and topics: website planning, sitemaps and wireframes, responsive design, debugging and validation

## Repository overview
The data files for the fitness and rescue websites can be found in this repository. It is highly recommended that you bookmark this repository and visit often. Every programming course in the web development cohort has an associated GitHub repository where you will find code snippets, examples, and starter projects.

## Clone this repository to get started
Click on the green "Code" button and copy the provided URL. Next, open a new terminal window on your computer and use the cd (change directory) command to move to the buwebdev directory. Finally, run the following commands to clone the repository:

```bash
git clone https://github.com/buwebdev/web-200.git
cd web-200
```
**it-hit-haldia** · label: server

# About the department
The Department of Information Technology was established in 1998. It has an intake capacity of 60 (B.Tech). The department has several well-equipped laboratories for the students as well as for researchers. It has a departmental library which contains books of more than 500 titles.

## Vision
To become a front runner in preparing graduates to be effective problem solvers, researchers, innovators, and entrepreneurs, and making them competent professionals by enabling them to take up any kind of challenges in the information technology industry or the research organizations they serve.

## Mission
Offer high-quality undergraduate and postgraduate programs so that they become leaders in their profession.

## Program outcomes (POs)
At the end of the four-year undergraduate programme in information technology engineering, a graduate will have:

- PO 01. Engineering knowledge: apply the knowledge of mathematics, science, engineering fundamentals, and electronics & communication engineering to the solution of complex engineering problems.
- PO 02. Problem analysis: identify, formulate, review research literature, and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
- PO 03. Design/development of solutions: design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.
- PO 04. Conduct investigations of complex problems: use research-based knowledge and research methods, including design of experiments, analysis and interpretation of data, and synthesis of the information, to provide valid conclusions.
- PO 05. Modern tool usage: create, select, and apply appropriate techniques, resources, and modern engineering and IT tools, including prediction and modelling, to complex engineering activities with an understanding of the limitations.
- PO 06. The engineer and society: apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal, and cultural issues and the consequent responsibilities relevant to professional engineering practice.
- PO 07. Environment and sustainability: understand the impact of professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.
- PO 08. Ethics: apply ethical principles and commit to professional ethics and responsibilities and norms of engineering practice.
- PO 09. Individual and teamwork: function effectively as an individual, and as a member or leader in diverse teams and in multidisciplinary settings.
- PO 10. Communication: communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.
- PO 11. Project management and finance: demonstrate knowledge and understanding of engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.
- PO 12. Life-long learning: recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.

## Faculty details

| Sl. no | Name | Designation | Academic qualification |
|---|---|---|---|
| 1 | Dr. Soumen Paul | Professor & HOD | M.Tech, Ph.D |
| 2 | Dr. Sabyasachi Samanta | Assoc. Professor | M.Tech, Ph.D |
| 3 | Sri Susmit Maity | Assoc. Professor (IT) and in-charge, Training & Placement | MCA, M.Tech |
| 4 | Ms. Moumita Mantri | Asst. Professor | M.Tech |
| 5 | Sri Bidyut Das | Asst. Professor | M.Tech |
| 6 | Sri Pranab Goswami | Asst. Professor | M.Tech |
| 7 | Sri Manasija Bhattacharya | Asst. Professor | M.Tech |
| 8 | Sri Ramkrishna Ghosh | Asst. Professor | M.Tech |
| 9 | Mrs. Ruma Munian | Asst. Professor | M.Tech |
| 10 | Mrs. Tamosa Chakraborty | Asst. Professor | M.Tech |
| 11 | Ms. Banani Ghosh | Asst. Professor | MCA, M.Tech |
**long_llama** · label: ai

![LongLLaMA](assets/longllama.png)

# LongLLaMA: Focused Transformer Training for Context Scaling

Models:
- LongLLaMA-Code 7B Instruct: https://huggingface.co/syzymon/long_llama_code_7b_instruct (Colab: https://colab.research.google.com/github/cstankonrad/long_llama/blob/main/long_llama_code_instruct_colab.ipynb; learn more: instruction_fine_tuning/LongLLaMACode7BInstruct.md)
- LongLLaMA-Code 7B: https://huggingface.co/syzymon/long_llama_code_7b
- LongLLaMA-Instruct 3Bv1.1: https://huggingface.co/syzymon/long_llama_3b_instruct (Colab: https://colab.research.google.com/github/cstankonrad/long_llama/blob/main/long_llama_instruct_colab.ipynb)
- LongLLaMA 3Bv1.1: https://huggingface.co/syzymon/long_llama_3b_v1_1 (Colab: https://colab.research.google.com/github/cstankonrad/long_llama/blob/main/long_llama_colab.ipynb)

Contents: TLDR · Overview · Usage · LongLLaMA performance · Authors · Citation · License · Acknowledgments · FoT continued pretraining · Instruction fine-tuning

## TLDR
This repository contains the research preview of LongLLaMA, a large language model capable of handling long contexts of 256k tokens or even more.

LongLLaMA is built upon the foundation of OpenLLaMA (https://github.com/openlm-research/open_llama) and fine-tuned using the Focused Transformer (FoT) method (https://arxiv.org/abs/2307.03170). LongLLaMA-Code is built upon the foundation of Code Llama (https://huggingface.co/codellama/CodeLlama-7b-hf).

We release a smaller 3B base variant (not instruction-tuned) of the LongLLaMA model on a permissive license (Apache 2.0), and inference code supporting longer contexts, on Hugging Face (https://huggingface.co/syzymon/long_llama_3b). Our model weights can serve as the drop-in replacement of LLaMA in existing implementations (for short context, up to 2048 tokens). Additionally, we provide evaluation results and comparisons against the original OpenLLaMA models. In addition to this, we release code for instruction tuning (PyTorch) and FoT continued pretraining (JAX).

## Overview

### Base models
Focused Transformer: Contrastive Training for Context Scaling (https://arxiv.org/abs/2307.03170) (FoT) presents a simple method for endowing language models with the ability to handle context consisting possibly of millions of tokens while training on significantly shorter input. FoT permits a subset of attention layers to access a memory cache of (key, value) pairs to extend the context length. The distinctive aspect of FoT is its training procedure, drawing from contrastive learning. Specifically, we deliberately expose the memory attention layers to both relevant and irrelevant keys (like negative samples from unrelated documents). This strategy incentivizes the model to differentiate keys connected with semantically diverse values, thereby enhancing their structure. This, in turn, makes it possible to extrapolate the effective context length much beyond what is seen in training.
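To make the cross-batch idea concrete, here is a toy PyTorch sketch of one memory-attention step in which queries attend over local keys plus a memory that mixes the current document's earlier keys with negatives from unrelated documents. This is an illustration of the idea only, not the repository's implementation; all shapes and tensors are made-up assumptions:

```python
import torch
import torch.nn.functional as F

d = 64                                  # head dimension (assumption)
q = torch.randn(16, d)                  # queries for the current window
local_k, local_v = torch.randn(16, d), torch.randn(16, d)

# Memory cache: keys/values from earlier windows of the SAME document,
# plus "negative" keys/values taken from OTHER documents in the batch.
own_k, own_v = torch.randn(128, d), torch.randn(128, d)
neg_k, neg_v = torch.randn(128, d), torch.randn(128, d)

k = torch.cat([local_k, own_k, neg_k])
v = torch.cat([local_v, own_v, neg_v])

# Standard scaled dot-product attention over the extended key set.
attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)
out = attn @ v  # shape (16, d)

# During FoT training, the language-modeling loss pushes the model to
# assign low attention weight to the unrelated (negative) keys.
```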
LongLLaMA is an OpenLLaMA model fine-tuned with the FoT method, with three layers used for context extension. Crucially, LongLLaMA is able to extrapolate much beyond the context length seen in training (8k): e.g., in the passkey retrieval task it can handle inputs of length 256k. LongLLaMA-Code is a Code Llama model fine-tuned with the FoT method.

| | LongLLaMA-3B | LongLLaMA-3Bv1.1 | LongLLaMA-Code 7B |
|---|---|---|---|
| Source model | OpenLLaMA-3B | OpenLLaMA-3Bv2 | CodeLlama-7b-hf |
| Source model tokens | 1T | 1T | 2T + 0.5T |
| Fine-tuning tokens | 10B | 5B | 35B |
| Memory layers | 6, 12, 18 | 6, 12, 18 | 8, 16, 24 |

### FoT continued pretraining
In the fot_continued_pretraining subfolder, we provide the code that can be used to tune LLaMA models with FoT. This code is written in JAX (https://jax.readthedocs.io/en/latest/notebooks/quickstart.html) and Flax (https://flax.readthedocs.io/en/latest/guides/flax_basics.html), and based on EasyLM (https://github.com/young-geng/EasyLM).

### Instruction/chat tuning
In the instruction_fine_tuning subfolder, we provide the code that was used to create LongLLaMA-Instruct-3Bv1.1 (https://huggingface.co/syzymon/long_llama_3b_instruct), an instruction-tuned version of LongLLaMA-3Bv1.1 (https://huggingface.co/syzymon/long_llama_3b_v1_1). We used the OpenOrca (https://huggingface.co/datasets/Open-Orca/OpenOrca) (instructions) and zetavg/ShareGPT-Processed (https://huggingface.co/datasets/zetavg/ShareGPT-Processed) (chat) datasets for tuning. This code utilizes PyTorch (https://pytorch.org/) and the Hugging Face Trainer (https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/trainer).

### Inference code
In the src subfolder, we provide inference code for FoT models. The code is written in PyTorch (https://pytorch.org/) and based on the Hugging Face implementation of LLaMA (https://huggingface.co/docs/transformers/main/model_doc/llama). The code should support the standard Hugging Face API. For more details, see the Usage section.

## Usage
See also:
- Colab with LongLLaMA-Instruct-3Bv1.1: https://colab.research.google.com/github/cstankonrad/long_llama/blob/main/long_llama_instruct_colab.ipynb
- Colab with an example usage of base LongLLaMA: https://colab.research.google.com/github/cstankonrad/long_llama/blob/main/long_llama_colab.ipynb

### Requirements

```
pip install --upgrade pip
pip install transformers==4.33.2 sentencepiece accelerate
```

### Loading model

```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_v1_1")
model = AutoModelForCausalLM.from_pretrained(
    "syzymon/long_llama_3b_v1_1",
    torch_dtype=torch.float32,
    trust_remote_code=True,
)
```

### Input handling and generation
LongLLaMA uses the Hugging Face interface. The long input given to the model will be split into context windows and loaded into the memory cache.

```python
prompt = "My name is Julien and I like to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model(input_ids=input_ids)
```
During the model call, one can provide the parameter last_context_length (default 1024), which specifies the number of tokens left in the last context window. Tuning this parameter can improve generation, as the first layers do not have access to memory. See details in "How LongLLaMA handles long inputs" below.

```python
generation_output = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    num_beams=1,
    last_context_length=1792,
    do_sample=True,
    temperature=1.0,
)
print(tokenizer.decode(generation_output[0]))
```

### Additional configuration
LongLLaMA has several other parameters:
- mem_layers specifies layers endowed with memory (should be either an empty list or a list of all memory layers specified in the description of the checkpoint),
- mem_dtype allows changing the type of memory cache,
- mem_attention_grouping can trade off speed for reduced memory usage: when equal to (4, 2048), the memory layers will process at most 4*2048 queries at once (4 heads and 2048 queries for each head).

```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_v1_1")
model = AutoModelForCausalLM.from_pretrained(
    "syzymon/long_llama_3b_v1_1",
    torch_dtype=torch.float32,
    mem_layers=[],
    mem_dtype="bfloat16",
    trust_remote_code=True,
    mem_attention_grouping=(4, 2048),
)
```

### Drop-in use with LLaMA code
LongLLaMA checkpoints can also be used as a drop-in replacement for LLaMA checkpoints in the Hugging Face implementation of LLaMA (https://huggingface.co/docs/transformers/main/model_doc/llama), but in this case they will be limited to the original context length of 2048.

```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_v1_1")
model = LlamaForCausalLM.from_pretrained("syzymon/long_llama_3b_v1_1", torch_dtype=torch.float32)
```

### How LongLLaMA handles long inputs
Inputs over lctx = 2048 tokens (lctx = 4096 for LongLLaMA-Code) are automatically split into windows $w_1, \ldots, w_m$. The first $m-2$ windows contain lctx tokens each, $w_{m-1}$ has no more than lctx tokens, and $w_m$ contains the number of tokens specified by last_context_length. The model processes the windows one by one, extending the memory cache after each. If use_cache is True, then the last window will not be loaded to the memory cache, but to the local (generation) cache.

The memory cache stores (key, value) pairs for each head of the specified memory layers mem_layers. In addition to this, it stores attention masks.

If use_cache=True (which is the case in generation), LongLLaMA will use two caches: the memory cache for the specified layers, and the local (generation) cache for all layers. When the local cache exceeds lctx elements, its content is moved to the memory cache for the memory layers.

For simplicity, the context extension is realized with a memory cache and full attention in this repo. Replacing this simple mechanism with a KNN search over an external database is possible with systems like Faiss (https://github.com/facebookresearch/faiss). This potentially would enable further context length scaling. We leave this as future work.
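For intuition, the splitting rule described above can be mimicked outside the model. A minimal sketch of the documented rule (an illustration only, not the code in src):

```python
import torch

def split_into_windows(input_ids: torch.Tensor, lctx: int = 2048,
                       last_context_length: int = 1024):
    """Split a (1, seq_len) tensor as described above: the last window gets
    last_context_length tokens; earlier tokens are chunked into lctx-sized windows."""
    seq_len = input_ids.shape[1]
    cut = max(seq_len - last_context_length, 0)
    rest, last = input_ids[:, :cut], input_ids[:, cut:]
    return list(torch.split(rest, lctx, dim=1)) + [last]

ids = torch.arange(10_000).unsqueeze(0)
for w in split_into_windows(ids):
    # Prints four (1, 2048) windows, one shorter (1, 784) window,
    # and the final (1, 1024) window.
    print(w.shape)
```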
## LongLLaMA performance
We present some illustrative examples of LongLLaMA results. Refer to our paper, Focused Transformer: Contrastive Training for Context Scaling (https://arxiv.org/abs/2307.03170), for more details.

We manage to achieve good performance on the passkey retrieval task from Landmark Attention: Random-Access Infinite Context Length for Transformers (https://arxiv.org/abs/2305.16300). The code for generating the prompt and running the model is located in examples/passkey.py.

![Passkey retrieval results](assets/plot_passkey.png)

Our LongLLaMA 3B model also shows improvements when using long context on two downstream tasks, TREC question classification and WebQS question answering:

| Context | TREC | WebQS |
|---|---|---|
| 2K | 67.0 | 21.2 |
| 4K | 71.6 | 21.4 |
| 6K | 72.9 | 22.2 |
| 8K | 73.3 | 22.4 |

LongLLaMA retains performance on tasks that do not require long context. In particular, LongLLaMA-Code 7B improves reasoning (GSM8K) and knowledge (MMLU) due to code fine-tuning.

![Full results](assets/full_results.png)

We provide a comparison with OpenLLaMA on lm-evaluation-harness (https://github.com/EleutherAI/lm-evaluation-harness) in the zero-shot setting:

| Task | Metric | OpenLLaMA-3B | LongLLaMA-3B |
|---|---|---|---|
| anli_r1 | acc | 0.33 | 0.32 |
| anli_r2 | acc | 0.32 | 0.33 |
| anli_r3 | acc | 0.35 | 0.35 |
| arc_challenge | acc | 0.34 | 0.34 |
| arc_challenge | acc_norm | 0.37 | 0.37 |
| arc_easy | acc | 0.69 | 0.68 |
| arc_easy | acc_norm | 0.65 | 0.63 |
| boolq | acc | 0.68 | 0.68 |
| hellaswag | acc | 0.49 | 0.48 |
| hellaswag | acc_norm | 0.67 | 0.65 |
| openbookqa | acc | 0.27 | 0.28 |
| openbookqa | acc_norm | 0.40 | 0.38 |
| piqa | acc | 0.75 | 0.73 |
| piqa | acc_norm | 0.76 | 0.75 |
| record | em | 0.88 | 0.87 |
| record | f1 | 0.89 | 0.87 |
| rte | acc | 0.58 | 0.60 |
| truthfulqa_mc | mc1 | 0.22 | 0.24 |
| truthfulqa_mc | mc2 | 0.35 | 0.38 |
| wic | acc | 0.48 | 0.50 |
| winogrande | acc | 0.62 | 0.60 |
| avg score | | 0.53 | 0.53 |

Starting with the v1.1 models, we have decided to use EleutherAI's (https://github.com/EleutherAI) implementation of lm-evaluation-harness (https://github.com/EleutherAI/lm-evaluation-harness), with a slight modification that adds a BOS token at the beginning of the input sequence. The results are provided in the table below:

| Description | LongLLaMA-3B | OpenLLaMA-3Bv2 | LongLLaMA-3Bv1.1 | LongLLaMA-Instruct-3Bv1.1 |
|---|---|---|---|---|
| anli_r1 (acc) | 0.32 | 0.33 | 0.31 | 0.33 |
| anli_r2 (acc) | 0.33 | 0.35 | 0.33 | 0.35 |
| anli_r3 (acc) | 0.35 | 0.38 | 0.35 | 0.38 |
| arc_challenge (acc) | 0.34 | 0.33 | 0.32 | 0.36 |
| arc_challenge (acc_norm) | 0.37 | 0.36 | 0.36 | 0.37 |
| arc_easy (acc) | 0.67 | 0.68 | 0.68 | 0.70 |
| arc_easy (acc_norm) | 0.63 | 0.63 | 0.63 | 0.63 |
| boolq (acc) | 0.68 | 0.67 | 0.66 | 0.77 |
| hellaswag (acc) | 0.48 | 0.53 | 0.52 | 0.52 |
| hellaswag (acc_norm) | 0.65 | 0.70 | 0.69 | 0.68 |
| openbookqa (acc) | 0.28 | 0.28 | 0.28 | 0.28 |
| openbookqa (acc_norm) | 0.38 | 0.39 | 0.37 | 0.41 |
| piqa (acc) | 0.73 | 0.77 | 0.77 | 0.78 |
| piqa (acc_norm) | 0.75 | 0.78 | 0.77 | 0.77 |
| record (em) | 0.87 | 0.87 | 0.86 | 0.85 |
| record (f1) | 0.88 | 0.88 | 0.87 | 0.86 |
| rte (acc) | 0.60 | 0.53 | 0.62 | 0.70 |
| truthfulqa_mc (mc1) | 0.24 | 0.22 | 0.21 | 0.25 |
| truthfulqa_mc (mc2) | 0.38 | 0.35 | 0.35 | 0.40 |
| wic (acc) | 0.50 | 0.50 | 0.50 | 0.54 |
| winogrande (acc) | 0.60 | 0.66 | 0.63 | 0.65 |
| avg score | 0.53 | 0.53 | 0.53 | 0.55 |

We also provide the results on HumanEval. We cut the generated text after either "\ndef", "\nclass", or "\nif __name__":

| | OpenLLaMA-3Bv2 | LongLLaMA-3Bv1.1 | LongLLaMA-Instruct-3Bv1.1 |
|---|---|---|---|
| pass@1 | 0.09 | 0.12 | 0.12 |

## Authors
- Szymon Tworkowski (https://scholar.google.com/citations?user=1v8aexyaaaaj&hl=en)
- Konrad Staniszewski (https://scholar.google.com/citations?user=cm6pcbyaaaaj)
- Mikołaj Pacek (https://scholar.google.com/citations?user=eh6iebqaaaaj&hl=en&oi=ao)
- Henryk Michalewski (https://scholar.google.com/citations?user=ydhw1ycaaaaj&hl=en)
- Yuhuai Wu (https://scholar.google.com/citations?user=boqgffiaaaaj&hl=en)
- Piotr Miłoś (https://scholar.google.pl/citations?user=se68xecaaaaj&hl=pl&oi=ao)

## Citation
To cite this work, please use:

```bibtex
@misc{tworkowski2023focused,
    title={Focused Transformer: Contrastive Training for Context Scaling},
    author={Szymon Tworkowski and Konrad Staniszewski and Miko{\l}aj Pacek and Yuhuai Wu and Henryk Michalewski and Piotr Mi{\l}o{\'s}},
    year={2023},
    eprint={2307.03170},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

## License
The source code and base LongLLaMA 3B model checkpoints are licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0). The instruction/chat-tuned models are for research purposes only. For LongLLaMA-Code 7B, see the codellama/CodeLlama-7b-hf license (https://huggingface.co/codellama/CodeLlama-7b-hf/blob/main/LICENSE). LongLLaMA-Code 7B Instruct is LongLLaMA-Code 7B tuned on the TIGER-Lab/MathInstruct (https://huggingface.co/datasets/TIGER-Lab/MathInstruct), OpenOrca (https://huggingface.co/datasets/Open-Orca/OpenOrca), and ShareGPT-Processed (https://huggingface.co/datasets/zetavg/ShareGPT-Processed) datasets. Some of the examples use external code; see the headers of files for copyright notices and licenses.

## Acknowledgments
We gratefully acknowledge the TPU Research Cloud program, which was instrumental to our research by providing significant computational resources. We are also grateful to Xinyang Geng and Hao Liu for releasing the OpenLLaMA (https://github.com/openlm-research/open_llama) checkpoints and the EasyLM (https://github.com/young-geng/EasyLM) library. Special thanks to Keiran Paster (https://twitter.com/keirp1) for providing immensely valuable suggestions about the pre-training data for LongLLaMA-Code. We would like to thank Xiaosong He (https://github.com/hxs91) for suggestions on how to improve the explanations of cross-batch code.
**ui-core** · topics: react, design-system · label: os

[![Concrete](https://habx.github.io/concrete-docs/img/concrete-cover.svg)](https://habx.github.io/concrete-docs/)

# @habx/ui-core

Design system used on all the applications developed by habx.

## Installation

```shell
npm i @habx/ui-core
```

## Add providers (App.ts)

```tsx
import { Provider as DesignSystemProvider, ThemeProvider } from '@habx/ui-core'

const App: React.FunctionComponent = ({ children }) => {
  return (
    <ThemeProvider theme={theme}>
      <DesignSystemProvider>{children}</DesignSystemProvider>
    </ThemeProvider>
  )
}
```

## Documentation
Test all our components in our Storybook: https://habx.github.io/ui-core/
**lcd_space_runner** · label: os

# lcd_space_runner

![lcd_space_runner](https://github.com/sezer1/lcd_space_runner/assets/57090119/5385b228-b42b-40e5-8022-6b7224334847)
**ExaBGPmon** · label: front_end

# ExaBGPmon
An HTTP frontend to the glorious ExaBGP.

## Features
- Configure ExaBGP and auto-generate a config file
- Monitor and manage peers
- Control routes advertised; automatically re-advertise when ExaBGP or a peer comes back online
- View prefixes received from peers

![Dashboard](docs/dashboard.png)
![Config](docs/config.png)

## Running ExaBGPmon
Install and start MongoDB, then install dependencies (this will install ExaBGP and supervisord):

```
pip install -r requirements
```

Initialize the configuration:

```
python manage.py init_config
```

Start supervisor and ExaBGPmon:

```
supervisord
supervisorctl start exabgpmon
supervisorctl start exabgp
```

Configure and restart ExaBGP through the GUI; the configuration file will automatically be updated.
**design-tokens** · topics: design, tokens, design-tokens, sage, sage-design, npm, css, less, amazon-style-dictionary · label: os

# Sage design tokens

This repository contains the design tokens from the Sage design system. These are maintained by the Sage DS team. This library is for distributing these tokens across multiple platforms.

## What are design tokens?
Design tokens are a design system's most basic, lowest-level element; in atomic design terminology, those would be the protons or electrons. Basically, those are key-value records, named and organized the same way regardless of the platform (e.g., web, Android, iOS, Figma). They can define various properties, such as colors, paddings, margins, sizes, font sizes, font families, transitions, animations, and others. They represent certain design decisions.

Design tokens' purpose is to:
- release developers from taking design decisions: often, while developing a component, a developer needs to decide what tint of what color should be used; this decision should be taken by a designer, not a developer,
- improve the handover process and communication between designers and developers: both developers and designers are going to use the same token name for a given property (color, background-color, border, padding, margin, transition, and so on),
- narrow the value set to only needed values: a design system uses a narrow set of values (spacings, colors, typography properties, and others); those are the only values that are needed for the visual description of a component,
- keep visual consistency across all components of the library.

## Docs
- Figma tokens GitHub workflow: docs/figma-github-workflow.md
- Pre-transform phase: docs/pretransform-phase.md
- Generating icons: docs/icons.md
- Generating tokens documentation: docs/tokens-documentation.md

## Using the design tokens

### Web
To make use of these tokens in your application, import the correct variable definitions based on your styling technology.

#### Install
To add to a project using npm:

```bash
# if you're using npm
npm install --save @sage/design-tokens

# or if you're using yarn
yarn add @sage/design-tokens
```

You can also add the files directly by downloading from the releases page on GitHub: https://github.com/sage/design-tokens/releases

#### CSS
To make use of the CSS variables, import them into your code like so:

```css
/* inside CSS */
@import "@sage/design-tokens/css/theme.css";
```

```js
// for projects where you can import CSS files into JS
import "@sage/design-tokens/css/theme.css";
```

This will add the variables to the :root element of the page.

#### SCSS
To make use of the SCSS variables, import them into your SCSS files like so:

```scss
@use "@sage/design-tokens/scss/theme.scss";
```

You can also use @import, but for SCSS this is being deprecated (https://sass-lang.com/documentation/at-rules/import) in favour of @use.

#### Common JS module

```js
const tokens = require("@sage/design-tokens/js/theme-common");

// then use in code
element.style.color = tokens.colorsBase500;
```

#### ES6 module

```js
import tokens from "@sage/design-tokens/js/theme-es6";

// then use in code
element.style.color = tokens.colorsBase500;
```

A type definition file is also included to work in projects with TypeScript installed.

#### Other formats
It is possible to export design tokens to any format or language. If you need to use design tokens in your technology, please contact us and describe your needs.

## Contributing
If you would like to help contribute to this library, please read our contributing documentation (docs/contributing.md).

## Licence
Licensed under the Apache License, Version 2.0 (the "License"); you may not use these files except in compliance with the License. You may obtain a copy of the License in the LICENSE file (Apache 2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Copyright (c) 2021 Sage Group Plc. All rights reserved.
**CS6476-Computer-Vision-Projects** · label: ai

# GaTech CS6476 Computer Vision course projects
1. Image filtering and hybrid images
2. Local feature matching
3. Camera calibration and fundamental matrix estimation with RANSAC
4. Scene recognition with bag of words
5. Face detection with a sliding window
6. Deep learning
**te-emprego** · topics: te-emprego, front-end, react, airbnb · label: front_end

# Te Emprego

![Te Emprego](https://i.imgur.com/wlg3n6g.png)

Te Emprego is a project created by danielbonifacio (https://github.com/danielbonifacio) consisting of a 100% free and functional platform for finding and advertising any kind of job or service.

## Knowledge needed to contribute
- ES6
- React
- styled-components

We do not intend to support Internet Explorer, regardless of version.

## Design
Below are some design specifications. Download the visual identity package as PDF (https://drive.google.com/file/d/1hc-dwuxgzio7qdttidpkke44epuv94ec) or AI (https://drive.google.com/file/d/14lywz-82ia-ombhrrz3x-ap6ve7lezfe).

### Colors
- Primary: #673AB7 (available at config/colors.primary)
- Dark text: #646464 (available at config/colors.darkText)
- Light text: #707070 (available at config/colors.lightText)
- White: #707070 (available at config/colors.white)

### Spacing (by scale)
1. 20px (default)
2. 40px
3. 60px

### Border radius
1. 10px (default)
2. 5px

This project was bootstrapped with Create React App (https://github.com/facebook/create-react-app).

## Available scripts
In the project directory, you can run:

### `npm start`
Runs the app in the development mode. Open http://localhost:3000 to view it in the browser. The page will reload if you make edits. You will also see any lint errors in the console.

### `npm test`
Launches the test runner in the interactive watch mode. See the section about running tests (https://facebook.github.io/create-react-app/docs/running-tests) for more information.

### `npm run build`
Builds the app for production to the build folder. It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes. Your app is ready to be deployed! See the section about deployment (https://facebook.github.io/create-react-app/docs/deployment) for more information.

### `npm run eject`
Note: this is a one-way operation. Once you eject, you can't go back!

If you aren't satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc.) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you're on your own.

You don't have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However, we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it.

## Learn more
You can learn more in the Create React App documentation (https://facebook.github.io/create-react-app/docs/getting-started). To learn React, check out the React documentation (https://reactjs.org/).

These sections have moved to the Create React App docs:
- Code splitting: https://facebook.github.io/create-react-app/docs/code-splitting
- Analyzing the bundle size: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size
- Making a progressive web app: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app
- Advanced configuration: https://facebook.github.io/create-react-app/docs/advanced-configuration
- Deployment: https://facebook.github.io/create-react-app/docs/deployment
- "npm run build fails to minify": https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify
**made-computer-vision** · label: ai

# Computer Vision (MADE, S02E02)

This repository contains materials for the computer vision course.

- Tip 1: loading the entire repository can take a considerable amount of time. A single folder can be downloaded via DownGit (https://downgit.github.io/).
- Tip 2: sometimes GitHub fails to render a notebook. In that case, use nbviewer (https://nbviewer.jupyter.org/); it works like a charm.
- Tip 3: in those cases when nbviewer fails to find a notebook whereas GitHub finds it just fine, try to add `?flush_cache=false` at the end of the nbviewer link.

## Lectures
Legend: slides · code · video.

| # | What | Materials | When |
|---|---|---|---|
| [1](https://data.mail.ru/curriculum/program/lesson/16106/) | CV intro, augmentation | [slides](https://github.com/illumaria/made-computer-vision/blob/master/01_intro_augmentation/01_intro_augmentation.pdf) · [code](https://nbviewer.jupyter.org/github/illumaria/made-computer-vision/blob/master/01_intro_augmentation/01_intro_augmentation.ipynb) · [video](https://youtu.be/zzeydhlw3z8) | 04.03.2021 |
| [2](https://data.mail.ru/curriculum/program/lesson/16108/) | Pooling, batch normalization, LeNet-5, ImageNet, AlexNet, VGG, Inception, ResNet, ResNeXt, SENet, EfficientNet, NFNet, transfer learning | [slides](https://github.com/illumaria/made-computer-vision/blob/master/02_modern_cnn_architectures/02_modern_cnn_architectures.pdf) · [code](https://nbviewer.jupyter.org/github/illumaria/made-computer-vision/blob/master/02_modern_cnn_architectures/02_transfer_learning.ipynb) · [video](https://youtu.be/iwm6gvqe6ui) | 11.03.2021 |
| [3](https://data.mail.ru/curriculum/program/lesson/16110/) | HOG, Haar cascades, SIFT, RCNN, Fast RCNN, Faster RCNN, MTCNN | [slides](https://github.com/illumaria/made-computer-vision/blob/master/03_detection_rcnn_mtcnn/03_detection_rcnn_mtcnn.pdf) · [code](https://nbviewer.jupyter.org/github/illumaria/made-computer-vision/blob/master/03_detection_rcnn_mtcnn/03_object_detection.ipynb) · [video](https://youtu.be/gaucpvpbuvy) | 18.03.2021 |
| [4](https://data.mail.ru/curriculum/program/lesson/16112/) | Single-shot detection: SSD, YOLO, RetinaNet, EfficientDet, focal loss, feature pyramid network | [slides](https://github.com/illumaria/made-computer-vision/blob/master/03_detection_rcnn_mtcnn/03_detection_rcnn_mtcnn.pdf) · [code](https://nbviewer.jupyter.org/github/illumaria/made-computer-vision/blob/master/04_detection_single_shot_retinanet/04_retinanet.ipynb) · [video](https://youtu.be/kdxzsunctey) | 25.03.2021 |
| [5](https://data.mail.ru/curriculum/program/lesson/16113/) | Semantic segmentation, instance segmentation, Mask R-CNN, panoptic segmentation, FCN, SegNet, U-Net, FPN, BCELoss, focal loss, IoU (Jaccard index), Jaccard loss, Dice loss, BatchNorm | [slides](https://github.com/illumaria/made-computer-vision/blob/master/05_segmentation/05_segmentation.pdf) · [code](https://nbviewer.jupyter.org/github/illumaria/made-computer-vision/blob/master/05_segmentation/05_segmentation.ipynb) · [video](https://youtu.be/x0tdvy0ryg) | 01.04.2021 |
| [6](https://data.mail.ru/curriculum/program/lesson/16114/) | OCR, CRNN, CTC loss, CER, WER | [slides](https://github.com/illumaria/made-computer-vision/blob/master/06_ocr/06_ocr.pdf) · [code](https://nbviewer.jupyter.org/github/illumaria/made-computer-vision/blob/master/06_ocr/06_ocr.ipynb) · [video](https://youtu.be/ihz_r6ctfbc) | 08.04.2021 |
| [7](https://data.mail.ru/curriculum/program/lesson/16115/) | Metric learning: definition and types, Minkowski distance, Mahalanobis distance, cosine similarity; similarity-based: siamese networks, triplet loss; softmax-based: center loss, angular softmax, CosFace, ArcFace; aKNN, HNSW, Faiss, landmarks recognition | [slides](https://github.com/illumaria/made-computer-vision/blob/master/07_metric_learning/07_metric_learning.pdf) · [code](https://nbviewer.jupyter.org/github/illumaria/made-computer-vision/blob/master/07_metric_learning/07_metric_learning.ipynb) · [video](https://youtu.be/i5b1wgvhh5k) | 15.04.2021 |
| [8](https://data.mail.ru/curriculum/program/lesson/16117/) | Object tracking, Kalman filters, SORT, DeepSORT; metrics: FAF, MT, ML, FP, FN, ID sw., Frag., MOTA, MOTP; datasets and benchmarks: MOT17, KITTI, UA-DETRAC, ImageNet VID, YTVIS, TAO | [slides](https://github.com/illumaria/made-computer-vision/blob/master/08_tracking/08_tracking.pdf) · [code](https://nbviewer.jupyter.org/github/illumaria/made-computer-vision/blob/master/08_tracking/08_tracking.ipynb) · [video](https://youtu.be/wr5mjgyshiu) | 22.04.2021 |
| [9](https://data.mail.ru/curriculum/program/lesson/16119/) | GANs, part 1: mode collapse, Wasserstein GAN, earth mover's distance | [slides](https://github.com/illumaria/made-computer-vision/blob/master/09_intro_gan/09_intro_gan.pdf) · [video](https://youtu.be/ok2laxyhcly) | 29.04.2021 |
| [10](https://data.mail.ru/curriculum/program/lesson/16116/) | FlowNet, multi-domain net, GOTURN, 2D CNN + RNN, 3D CNN, video2gif | [slides](https://github.com/illumaria/made-computer-vision/blob/master/10_video_analysis/10_video_analysis.pdf) · [video](https://youtu.be/dre_eii6atq) | 13.05.2021 |
| [11](https://data.mail.ru/curriculum/program/lesson/16118/) | Meta-learning, semi-supervised learning: meta-learning (learning to learn), few-shot learning, Omniglot dataset, black-box / metric-based / optimization-based approaches, semi-supervised learning, SimCLR | [slides](https://github.com/illumaria/made-computer-vision/blob/master/11_meta_learning/11_meta_learning.pdf) · [video](https://youtu.be/4vkya0gosaw) | 20.05.2021 |
| [12](https://data.mail.ru/curriculum/program/lesson/16120/) | GANs, part 2: 1D GANs, Inception score, Fréchet inception distance, progressive growing of GANs, BigGAN, hinge loss function, self-attention, StyleGAN, StyleGAN2 | [slides](https://github.com/illumaria/made-computer-vision/blob/master/12_gan/12_gan.pdf) · [video](https://youtu.be/syyfmdidxyk) | 27.05.2021 |
| [13](https://data.mail.ru/curriculum/program/lesson/16121/) | MobileNet (v1, v2), 1x1 convs, depthwise separable convolution, QAT, PyTorch JIT, TorchScript, JIT compiler, ONNX, ONNX Runtime | [slides](https://github.com/illumaria/made-computer-vision/blob/master/13_mobile_nets/13_mobile_nets.pdf) · [video](https://youtu.be/yrn80qa37ye) | 03.06.2021 |
**machine-learning-systems-design** · label: os

# Machine Learning Systems Design
Interview preparation material for machine learning engineer / SDE roles, based on chiphuyen's work (github.com/chiphuyen): 27 open-ended questions.

"Machine learning methods change every year; solving problems stays the same."

![ML project flow](images/ml_sys_design/ml_project_flow.png)

"Machine learning is driven more by data than by algorithms." Baseline, baseline, baseline. Deep learning needs data; you might first need users. Debugging and serving example: Flask + Vue ML (https://kuhungio.me/2019/flask-vue-ml/?utm_source=website&utm_campaign=ml_sys_design).

"All models are wrong, but some are useful."

Case studies (10) with PDF downloads on GitHub (https://github.com/kuhung/machine-learning-systems-design/tree/master/PDF), e.g. Airbnb CTR (images/ml_sys_design/airbnb.jpg) and Instacart (images/ml_sys_design/visual.jpg).

The 27 questions: https://kuhungio.me/machine-learning-systems-design/?utm_source=website&utm_campaign=ml_sys_design

![Website](images/ml_sys_design/website.jpg)

## About
Based on chiphuyen/machine-learning-systems-design (https://github.com/chiphuyen/machine-learning-systems-design), maintained by kuhung. See also DSQA (https://github.com/kuhung/DSQA).

![Social](https://kuhungio.me/images/post_social.jpg)
|
Introduction-to-Image-Processing-and-Computer-Vision

# Introduction to Image Processing and Computer Vision
Labs based on *Computer Vision: Algorithms and Applications* by Richard Szeliski.

1. Image processing and introduction to computer vision: read, save, and display images with Matplotlib; NumPy for images; image channels; arithmetic operations; image histograms with NumPy and Matplotlib.
2. Images as functions and Gaussian filters: color images as functions; blending images; differences; generating Gaussian noise; smoothing images.
3. Correlation and template matching: correlation vs. convolution kernels; in depth with filters; edges; median filter with noise; template matching.
4. Edge detection and the Hough transform: things to recap; derivatives and edges; image gradient; from gradient to edges; Hough space; application.
5. Circle Hough transform and modern techniques: things to recap; Hough transform for circles; what if we don't know the radius; generalized Hough transform; modern object detection; application on line detection; application on circle detection.

A short OpenCV sketch covering two of these topics appears below.
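The labs are built on standard OpenCV/NumPy primitives. As a rough illustration of labs 2 and 5, here is a minimal sketch that smooths an image with a Gaussian filter and then runs the circle Hough transform; it is not taken from the course materials, and the file names are placeholders:

```python
# Minimal sketch of Gaussian smoothing (lab 2) and the circle Hough
# transform (lab 5) with OpenCV. "coins.png" is a placeholder path.
import cv2
import numpy as np

img = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)

# Suppress noise with a Gaussian filter before gradient-based processing.
smoothed = cv2.GaussianBlur(img, ksize=(5, 5), sigmaX=1.5)

# Detect circles: param1 is the Canny edge threshold, param2 the
# accumulator threshold; lower values return more (possibly false) circles.
circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=40, minRadius=5, maxRadius=60)
if circles is not None:
    for x, y, r in np.uint16(np.around(circles))[0]:
        cv2.circle(img, (x, y), r, 255, 2)  # draw each detected circle
cv2.imwrite("detected.png", img)
```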
| ai |
Weather_App

# Weather App
Development build of a weather app using React Native.

Screenshots: [src/assets/ss/ss.png](https://github.com/nimeshpiyumantha/Weather_App/blob/master/src/assets/ss/ss.png), [src/assets/ss/lv_0_20230618205545.gif](https://github.com/nimeshpiyumantha/Weather_App/blob/master/src/assets/ss/lv_0_20230618205545.gif)

## Setup
In the project (Expo Go):
```
npm install
npx expo start
```
Clone this repository:
```md
git clone https://github.com/nimeshpiyumantha/Weather_App.git
```

## Connect with me
If you have any bugs or issues, or if you want me to explain my code, please contact me:
- Mail: nimeshpiyumantha11@gmail.com
- [Twitter](https://twitter.com/npiyumantha60), [LinkedIn](https://www.linkedin.com/in/nimesh-piyumantha-33736a222), [Facebook](https://www.facebook.com/profile.php?id=100025931563090), [Instagram](https://www.instagram.com/nimmaa_)

Repo badges: license, repo size, forks, stars, last commit.

(c) 2023 Nimesh Piyumantha, Inc. All rights reserved. | front_end
|
nlp-editor Copyright 2022 Elyra Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied. See the License for the specific language governing permissions and limitations under the License.

# Visual Editor for NLP rules
A visual editor for creating NLP rules. ([Interface screenshot](https://user-images.githubusercontent.com/81634386/158040876-7bb94cbd-7c4a-4b2c-b50f-7524985801c0.png))

## How to use
1. Watch a live demo of the NLP Editor and learn more about our future plans in our recent [IBM Data Science Community presentation](https://community.ibm.com/community/user/ai-datascience/blogs/tim-bonnemann1/2022/06/30/replay-available-learn-about-elyra-visual-nlp-edit).
2. Walk through our [tutorial](tutorial.md).
3. Try the editor by following the instructions below.

## Try the editor
1. Clone the repository: `git clone git@github.com:codait/nlp-editor.git`
2. Navigate to the source code: `cd nlp-editor`. The application uses a Node.js server file as a proxy; this makes it easy to replace and embed the UI with any other server (WebSphere, nginx, etc.).

### Prerequisites
On a terminal window, install the Node Version Manager (nvm) as follows:
```
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
```
(Reference: https://github.com/nvm-sh/nvm#installing-and-updating.) Verify nvm installed properly (`nvm -v` should print 0.39.2). Next, install the required Node.js version (currently 18.12.0): `nvm install v18.12.0`. Verify node and npm installed properly: `node -v` prints v18.12.0 and `npm -v` prints 8.19.2.

### Run the editor locally without a backend runtime
1. Install the dependencies: `npm install`
2. Build the app: `npm run build`
3. Run the app: `npm run serve`
4. Open http://localhost:8080 in a web browser.

### Run the editor locally using the IBM Watson Discovery backend container
Additional prerequisites: Docker, and the IBM Watson Discovery backend container archive `01-ibm-watson-discovery-web-nlp-tool-backend-<date>.tar.gz` supplied to you.
1. Follow the steps above to run the editor locally. (The README marks the next step as commented out, since these directories have been added to git.)
2. Add the following folders to the nlp-editor `seer-core` folder at root level, then an `aql-processor` folder underneath, and finally the two folders `user-data-in` and `run-aql-result`. The folder structure should reflect: `nlp-editor/seer-core/aql-processor/user-data-in` and `nlp-editor/seer-core/aql-processor/run-aql-result`.
3. Extract `01-ibm-watson-discovery-web-nlp-tool-backend-<date>.tar.gz` into a folder of your choice, say `watson-nlp-web-tool`.
4. Build the container image: `cd watson-nlp-web-tool && docker build -t watson-nlp-web-tool:1.0 .`
5. Run the container image with volumes mapped (note that `<path-to-nlp-editor>` is the absolute path to the nlp-editor repository from step 1):
```
docker run -d \
  -v <path-to-nlp-editor>/seer-core/aql-processor/user-data-in:/app/seer-core/aql-processor/user-data-in \
  -v <path-to-nlp-editor>/seer-core/aql-processor/run-aql-result:/app/seer-core/aql-processor/run-aql-result \
  --name watson-nlp-web-tool watson-nlp-web-tool:1.0
```
6. Open http://localhost:8080 in a web browser, or reuse the session from step 1.
7. Create your NLP model; use the [tutorial](tutorial.md) for guidance.
8. When you are satisfied with your model, click Export. A zip file is generated on your local file system.
9. In Watson Discovery on Cloud Pak for Data, apply the model by following the steps in [Advanced rules models](https://cloud.ibm.com/docs/discovery-data?topic=discovery-data-domain-advanced-rules-models).
## Getting help
We welcome your questions, ideas, and feedback. Please create an [issue](https://github.com/codait/nlp-editor/issues) or a [discussion thread](https://github.com/codait/nlp-editor/discussions).

## Contributing to the NLP Editor
If you are interested in helping make the NLP Editor better, we encourage you to take a look at our [Contributing](CONTRIBUTING.md) page. | editor natural-language-processing | ai |
blockshell README.md only includes the installation guide; you can find a detailed guide on the [Get started with Blockshell](https://github.com/daxeel/blockshell/wiki/Get-started-with-Blockshell) wiki page.

# Blockshell
A command line utility for learning blockchain technical concepts like chaining, mining, proof of work, etc. (Demo GIF and logo images accompany the original README.)

## About
For anyone who wants to understand how blockchain technology works, Blockshell should be a great start, because Blockshell was created with blockchain fundamentals at the center of its development. With Blockshell you actually create a tiny blockchain on your system, where you can create blocks with data, explore blocks, and so on. By using Blockshell anyone can learn the following blockchain concepts: block chaining, hashing, mining, proof of work. (A toy sketch of these ideas appears after the installation steps below.)

## Blockshell Web Explorer
Blockshell comes with a built-in blockchain explorer, through which you can actually see how blocks are mined and what is stored where: latest mined blocks and block details (screenshots in the original README).

## Installation
1. Create a project directory: `mkdir project_name && cd project_name`
2. Create a new virtual environment with Python 2.7: `virtualenv venv`
3. Activate the virtual environment: `source venv/bin/activate` (or `source venv/Scripts/activate` on Windows)
4. Clone this repo: `git clone https://github.com/daxeel/blockshell.git`
5. Change directory to the cloned one: `cd blockshell`
6. Install Blockshell: `pip install --editable .`
7. Try the `blockshell` command to test the installation (output appears in the terminal after calling `blockshell`).
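For illustration only, here is a toy proof-of-work in the spirit of the concepts Blockshell teaches (chaining, hashing, mining). This is not Blockshell's actual implementation, and the difficulty value is an arbitrary choice:

```python
# Toy proof-of-work sketch: not Blockshell's code; difficulty is arbitrary.
import hashlib

def mine(index, prev_hash, data, difficulty=4):
    """Find a nonce so the block hash starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        payload = f"{index}{prev_hash}{data}{nonce}".encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

genesis_nonce, genesis_hash = mine(0, "0" * 64, "genesis")
# The second block is chained to its parent through the parent's hash.
nonce, block_hash = mine(1, genesis_hash, "Hello Blockshell")
print(nonce, block_hash)
```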
| blockchain blockchain-demos command-line-tool cli blockchain-technology blockchain-concepts blockchain-platform blockchain-explorer | blockchain |
developer-test

# Purplebricks developer test
The aim of this test is to give us an idea of how you approach the development and maintenance of web applications. You will work from a GitHub repository which contains an existing web application. The UI should be functional, but there is no expectation that you modify the brand theme. We are looking for a solution that shows how you build maintainable, scalable, and secure software. The test is based on an overly simplified version of our business domain. The existing web application supports two types of customer: sellers are able to upload information about a property and list the property for sale; buyers can search for a property and make an offer. When an offer has been placed on a property, the seller should be able to accept or reject the offer.

## Test objectives
### Objective 1: extend an existing feature
We need you to extend the offer functionality of the web application so that when a seller accepts an offer, the buyer that placed the offer can see that their offer has been accepted. User story: as a buyer, I want to see when my offer has been accepted, so that I can proceed with the property purchase.

### Objective 2: add a new feature
We need you to add the ability for a buyer to book a viewing. It's unlikely a customer would want to make an offer on a property without booking a viewing. User story: as a buyer, I want to book a viewing appointment at a property, so that I can determine whether I would like to make an offer.

### Objective 3: review the existing code
Write a short review of the existing sample codebase. Let us know what you think is good or bad about it. Feel free to fix any problems and commit these changes to the solution.

## Deliverables
Your submission should be delivered as a Visual Studio solution compatible with Visual Studio 2015. The solution should compile. For data persistence, please use the existing Entity Framework model with SQL; feel free to add migrations if you need to. We would prefer that the solution is delivered via GitHub. If you are not able to fork this original repository publicly, then please fork to a private repository and provide us with the zip file from the download option in GitHub. Good luck!

## FAQs
- Should I show my commit history? Showing your commit history is recommended so that we can see your approach.
- How long should I spend working on the assignment? You can take as long as you need to complete the assignment, but remember that this is throwaway code and the aim is to demonstrate your approach rather than build a complete system.
- Do I need to deploy the application? If you wish to demonstrate your working app, you may deploy it to Azure on a free trial account. This is not mandatory. | front_end
|
zag [![Zag hero image](https://repository-images.githubusercontent.com/383777434/87c5d462-1c65-45d7-9561-3f3f64d814f4)](https://zagjs.com)

# Zag
Finite state machines for accessible JavaScript components.

- **Write once, use everywhere.** The component interactions are modelled in a framework-agnostic way. We provide adapters for JS frameworks like React, Solid, or Vue.
- **Focus on accessibility.** Zag is built with accessibility in mind. We handle many details related to keyboard interactions, focus management, aria roles and attributes.
- **Headless.** The machine APIs are completely unstyled and give you the control to use any styling solution you prefer.
- **Powered by state machines.** Zag is built on top of the latest ideas in statecharts. We don't follow the SCXML specifications, but we've created an API that we think will help us build more complex components fast.

## Documentation
To see the documentation, visit [zagjs.com](https://zagjs.com).

## Releases
For the changelog, check [CHANGELOG.md](CHANGELOG.md).

## Problem
With the rise of design systems and component-driven development, there's an endless re-implementation of common component patterns (tabs, menu, modal, etc.) in multiple frameworks. Most of these implementations seem fairly similar in spirit, the differences being around the reactivity and effects systems of the framework (e.g. `useState`, `useEffect` in React.js). Framework-specific solutions tend to grow in complexity over time and often become hard to understand, debug, improve, or test.

## Solution
Zag is a JavaScript API that implements common component patterns using the state machine methodology.

## Installation
```sh
npm i --save @zag-js/<component>
```
or
```sh
yarn add @zag-js/<component>
```
`<component>` represents any component machine, like dialog (`@zag-js/dialog`), tooltip (`@zag-js/tooltip`), etc. For framework-specific solutions, we provide simple wrappers to help you consume the component state machines: `@zag-js/react` (React hooks), `@zag-js/vue` (Vue composition), `@zag-js/solid` (Solid.js utilities).

## Usage
```jsx
import * as toggle from "@zag-js/toggle"
import { normalizeProps, useMachine } from "@zag-js/react"

function Example() {
  // if you need access to `state` or `send` from machine
  const [state, send] = useMachine(toggle.machine({ id: "2" }))

  // convert machine details into DOM props
  const api = toggle.connect(state, send, normalizeProps)

  // consume into components
  return <button {...api.buttonProps}>Toggle me</button>
}
```

## Guiding principles
- All component machines and tests are modelled according to the [WAI-ARIA authoring practices](https://www.w3.org/TR/wai-aria-practices/).
- Write end-to-end tests for every component based on the WAI-ARIA spec. Regardless of the framework, users expect component patterns to work the same way.
- All machines should be lightweight, simple, and easy to understand. Avoid complex machine concepts like spawn, nested states, etc.

## Fun facts
"Zag" means to take a sharp change in direction. This clearly describes our approach of using state machines to power the logic behind UI components.

### Teasers
- When you see someone using classic React, Vue, or Solid to build an interactive UI component that exists in Zag, tell them to "zag it!"
- Anyone using Zag will be called a "zagger"
- The feeling you get when you use Zag will be called "zagadat!"
- The Zag community will be called "zag nation"

## Commands
### Build commands
Our build is managed with esbuild and Turborepo to provide fast, concurrent builds across the packages.
- `build`: build the CJS, ESM and DTS files. This is the actual production build that we run in the CI.
### Examples
Since Zag is framework-agnostic, we need a way to test it within a framework. The examples directory includes starter projects for the frameworks we support.
- `start-react`: starts the Next.js TypeScript project
- `start-vue`: starts the Vue 3 TypeScript project
- `start-solid`: starts the Solid TypeScript project

### E2E tests
We've set up end-to-end tests for every machine we built. We use [Playwright](https://playwright.dev) for testing, and we ensure that the component works the same way regardless of the framework.
- `e2e-react`: starts the e2e tests for the React project
- `e2e-vue`: starts the e2e tests for the Vue project
- `e2e-solid`: starts the e2e tests for the Solid project

### Contributing (new machines/features)
- `generate-machine`: generates a new machine package in the `packages` directory. It sets up the required files and structure for a new machine.
- `generate-util`: generates a new utility package in the `packages/utilities` directory.

### Other commands
- `test`: run the tests for all packages
- `lint`: lint all packages

## Inspirations
- Duplicate code in Chakra UI [React](https://chakra-ui.com) and [Vue](https://vue.chakra-ui.com)
- [Thoughts on Pure UI](https://rauchg.com/2015/pure-ui) by Guillermo Rauch
- [Pure UI Control](https://asolove.medium.com/pure-ui-control-ac8d1be97a8d) by Adam Solve
- [Material Components Web](https://github.com/material-components/material-components-web) for inspiring my first prototype
- [Radix UI](https://radix-ui.com) for inspiring the dismissable and presence pattern
- [XState](https://xstate.js.org) for inspiring the base implementation of the state machine
- [Vue.js](https://vuejs.org) and [Lit](https://lit-element.polymer-project.org) for inspiring new patterns in the machine (computed and watch)

## Contributions
Looking to contribute? Look for the "good first issue" label. Bugs: please file an issue for bugs, missing documentation, or unexpected behavior. Feature requests: please file an issue to suggest new features; vote on feature requests by adding a 👍. This helps maintainers prioritize what to work on.

## License
MIT, [Segun Adebayo](https://github.com/segunadebayo)
Info | bernie 2016 technology welcome this is the official github organization for the bernie 2016 https berniesanders com presidential campaign many of our projects that power bernie s campaign are open source and we welcome contributions so we put together this guide on how to get involved where do we hang out we sync up on slack you can request an invite here http organize berniesanders com slack berniebuilders and then head over to tech team what should i work on to get a sense of which projects are most active you can take a look at https github com bernie 2016 the projects will be listed in order of most to least recent activity there s also a list of projects in projects md https github com bernie 2016 info blob master projects md although it may sometimes be a little out of date the current priorities for the campaign will be set as pinned messages in tech team in slack these are things we often need to get done very soon so help is always appreciated if the top priorities seem too complicated and difficult to get into they probably often will be for newcomers feel free to instead trawl through some of our projects and try to find issues marked with the newbie friendly tag pick a newbie friendly task even better if it is a newbie friendly task with a high priority tag as well and give it a shot how do i commit a change to the project bernie 2016 projects use gitflow to accept pull requests to submit a change to a project 1 fork the project to your own github account click fork on the project page in github 2 create your feature branch git checkout b my new feature 3 commit your changes git commit am add some feature 4 push to the branch git push origin my new feature 5 create a new pull request each project has a readme that hopefully explains how to get started and how to run any tests or code validation you should ensure that your changes use good code style are appropriately commented and don t break any tests if applicable feel free to request code review or discuss your ideas over in our slack why are you doing everything in oss isn t that crazy for a campaign other than the ideals based reasons which we do really believe in we honestly think open sourcing our software is the best way to get as many people involved as possible we will win this campaign if we are able to make it as easy as possible for massive amounts of people to organize this is true just as much for our software as it is for every other part of the campaign how are your projects licensed all of our open source projects are licensed with the gnu affero public license version 3 agpl http www gnu org licenses agpl 3 0 en html a short summary of the agpl can be found here https tldrlegal com license gnu affero general public license v3 agpl 3 0 it is similar to many other open source licenses in that it permits you to use modify and distribute code for your own purposes including commercial purposes but if you make significant changes to the software you must make your copy open source including the original license and stating significant changes that you ve made a few projects are licensed differently mostly for dependency related reasons check the license txt file in any repository for specifics | server |
|
plasma

# Plasma
Plasma is a design system for building [Canvas Apps](https://bit.ly/3mx0uqq) and B2B/B2C web products for the Salute family of surfaces. Packages: @salutejs/plasma-ui, @salutejs/plasma-web, @salutejs/plasma-b2c, @salutejs/plasma-tokens, @salutejs/plasma-tokens-web, @salutejs/plasma-tokens-b2c, @salutejs/plasma-icons.

- **plasma-ui**: UI kit for Canvas Apps on [React](https://reactjs.org) ([Storybook](https://bit.ly/3xratfg), docs: https://bit.ly/3hwggy3)
- **plasma-web**: for B2B products (https://bit.ly/3otwx5v, [Storybook](https://bit.ly/3eh1x7b))
- **plasma-b2c**: for B2C products ([Storybook](https://bit.ly/44cjwib))
- **plasma-tokens / plasma-tokens-web / plasma-tokens-b2c**: design tokens as [CSS custom properties](https://developer.mozilla.org/en-US/docs/Web/CSS/--*) and [JavaScript](https://bit.ly/3kflkes)
- **plasma-icons**: icon set (https://bit.ly/42hgvsf, [Storybook](https://bit.ly/3lhwbwc), https://bit.ly/3xqmjum)

To build a Canvas App for the web you need [Node.js and npm](https://nodejs.org/ru/) and, for example, [Create React App (CRA)](https://create-react-app.dev/docs/getting-started#quick-start) for a React web project.

```sh
npm i -S styled-components@5.1.1 @salutejs/plasma-ui @salutejs/plasma-tokens @salutejs/plasma-icons
```
See [styled-components](http://styled-components.com). NB: Create React App already includes react and react-dom; otherwise install them:
```sh
npm i -S react react-dom
```
See the setup docs: https://bit.ly/3hwggy3.

Example `src/App.jsx`:
```jsx
import React from 'react';
import { Button } from '@salutejs/plasma-ui';

function App() {
    return (
        <div className="App">
            <p>
                <Button view="primary">Hello, Plasma!</Button>
            </p>
        </div>
    );
}

export default App;
```
Usage docs: https://bit.ly/3hwggy3. Contributing: see [CONTRIBUTING.md](https://github.com/salute-developers/plasma/blob/master/CONTRIBUTING.md). Found a problem? [Open an issue](https://github.com/salute-developers/plasma/issues/new). | os
|
Adlibre-DMS

# Adlibre DMS
[![Build Status](https://travis-ci.org/adlibre/Adlibre-DMS.svg?branch=master)](https://travis-ci.org/adlibre/Adlibre-DMS)

Adlibre DMS is a web-based media and document management system (DMS / EDMS). It liberates you from hard-copy paper systems by creating a digital vault for your documents that can be accessed from anywhere. The core philosophy is that your documents are secure yet accessible from wherever they are required. With Adlibre DMS you can make your documents universally available within your business and across all your applications and systems. It is the perfect system for archiving and transaction-oriented document workflows, and because it has a flexible architecture it is ideal for government and enterprise users, or smaller businesses looking for a document management platform to build on. Adlibre DMS is also 100% commercially supported open source.

## Key features
- Web services API (REST) for all core functionality, easily integrated with 3rd-party applications (a hypothetical usage sketch follows below)
- Extensible plugin architecture: easily add new features via plugins
- Cross-platform, mobile-device ready
- Scalable: designed to store 100s of millions of documents
- Unlimited document indexes and metadata

## Technical features
Built with Python, Django, and CouchDB (metadata backend). Very minimal hardware requirements.

## Components
Adlibre DMS is built in a modular fashion and comprises the following components:
- API
- Plugin manager
- Metadata template UI (MUI): configuration-driven indexing and retrieval interface
- Admin UI: decoupled jQuery/JSON graphical UI for administrative functions
- Browser: provides a wholistic interface for all components

## Status
The source code here on master is fast-moving and evolving quickly. If you are deploying in a production environment, we recommend forking and stabilising the codebase to your requirements, or purchasing a support agreement from Adlibre. The code here can be considered the community version of Adlibre DMS.

## Commercial support
Adlibre DMS is developed and commercially supported by Adlibre Pty Ltd. More information is available from the [Adlibre DMS website](http://www.adlibre.com.au).
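Since the README advertises a REST API but does not show calls to it, here is a purely hypothetical sketch of uploading a document to such an API. The endpoint path, host, and credentials below are invented for illustration; consult the actual API documentation before use:

```python
# Hypothetical REST upload sketch; endpoint and field names are invented.
import requests

BASE = "https://dms.example.com/api"  # placeholder host

with open("invoice-0001.pdf", "rb") as fh:
    resp = requests.post(f"{BASE}/documents/",
                         files={"file": fh},
                         auth=("user", "password"))  # placeholder credentials
resp.raise_for_status()
print(resp.status_code)
```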
| python document-management document-archiving | os |
CECS447 Embedded system design using ARM Cortex-M4 microcontrollers. | c | os |
web-dev

# Web Applications Development, 2017
The original Russian course title translates to "modern technologies for web application development," so during this course we practice developing web applications as it is done in 2017.

## Lecture slides
Lecture slides are accessible to everyone from Google Drive: [lecture slides](https://drive.google.com/folderview?id=0b3itoci-o3udsjhlcunhswc5mke&usp=sharing)

## Textbook
[Textbook](https://github.com/grsu/web-dev/tree/master/textbook) | front_end
|
AI_Music_Generator (Cover image: music.png)

# AI Music Generator
Badges: made by matakshay, open source, contributions welcome, Python 3.7+, TensorFlow 2.1.0.

This is a deep learning / natural language processing model which can generate piano music.

### Table of contents
1. Introduction
2. Dataset
3. Music theory
4. Model
5. Frameworks, libraries, languages
6. Usage
7. Acknowledgement

## Introduction
The usage of neural networks has been steadily increasing over time, with a multitude of papers being published every year. Deep learning has found its applications in many fields of our daily lives, ranging from recommendation systems and personalization to medical diagnosis and healthcare. A recently popularised area of applying these techniques is content generation.

Text generation is the most commonly seen form of this and has become a ubiquitous feature in recent years. Auto-complete features in our message apps, emails, and even Google searches are a common and helpful application of this. The model on the backend inputs and processes the initial few words typed by us and predicts the next most probable word from its vocabulary. The user has the option to use this word or continue typing, either of which further trains the model as it learns from the actual next word.

An attempt along a similar philosophy can be made to train a neural network to generate music, and this is indeed becoming popular in recent years. Here I build a Long Short-Term Memory (LSTM) neural network in Python using Keras to generate piano music.

## Dataset
The dataset, contained in the songs directory, consists of around 90 MIDI (Musical Instrument Digital Interface) audio files. Each of these files is a couple of minutes in duration and consists of piano music. Most of these files contain music from the Final Fantasy series of games, since the music is very distinct and has beautiful melodies. For playing the music of a file, follow the steps in the Usage section below.

## Music theory
The Concise Oxford Dictionary defines music as "the art of combining vocal or instrumental sounds (or both) to produce beauty of form, harmony, and expression of emotion." In simpler terms, music can be thought of as comprising a basic element: the note. A note essentially represents the pitch of the music at that point in time. Notes are a discretization of musical phenomena and are often regarded as the building blocks of music. Pitch can be roughly realised as correlated with the frequency of the sound, but in essence it is more of an abstract property which depends on the perception of the person hearing it. It is often represented with the capital letters A, B, C, D, E, F, G. These letter names can also be modified by two accidentals: the sharp sign, which raises a note by a half step, and the flat sign, which lowers it by a half step.

Each note also has certain other characteristics, namely offset (the length of time from the start of a piece when the note is played) and duration (the time for which the note is held). If there are no periods of silence in the music and no occurrences of two notes being played together, then the offset of a note is effectively the sum of the previous durations. A chord in music is a set of multiple notes (pitches) that are heard sounding simultaneously. A piano normally contains many spans, or sets of eight white keys, called octaves.
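Although the README does not reproduce the preprocessing code, MIDI files like these are typically flattened into a sequence of note and chord tokens with music21 before training. Here is a minimal sketch of that common pattern; it is not necessarily the repository's exact code, and the file path is a placeholder:

```python
# Sketch of MIDI -> token sequence preprocessing with music21.
# Not the repository's exact code; "songs/example.mid" is a placeholder.
from music21 import converter, note, chord

midi = converter.parse("songs/example.mid")
tokens = []
for element in midi.flat.notes:  # notes and chords in offset order
    if isinstance(element, note.Note):
        tokens.append(str(element.pitch))  # e.g. "E4"
    elif isinstance(element, chord.Chord):
        # encode a chord as dot-joined pitch classes, e.g. "4.7.11"
        tokens.append(".".join(str(n) for n in element.normalOrder))
print(tokens[:20])
```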
## Model
(Model plot: model_plot.png, "a plot of the model and its layers".) I built an LSTM model using the Keras Sequential API, which inputs sequences of notes of fixed length and learns to predict the next note in the sequence. A plot of the model layers is given above.

## Frameworks, libraries, languages
Keras, TensorFlow, NumPy, Python 3, TiMidity, pickle-mixin, glob, music21.

## Usage
1. Install all dependencies. Install Python 3 and TiMidity via your system package manager (they are not pip packages, although the original instructions listed them under pip), then:
```
pip install numpy
pip install tensorflow
pip install keras
pip install pickle-mixin
pip install music21
```
(glob ships with the Python standard library.)
2. Clone the repository to your system and head over to it:
```
git clone https://github.com/matakshay/AI_Music_Generator
cd AI_Music_Generator
```
3. To listen to a music file:
```
cd songs
timidity filename
```
Replace `filename` with the complete name of the file you wish to listen to.
4. To generate piano music from a random sequence from the songs directory:
```
python3 generate.py
```
This will create a MIDI music file named `output.midi` in the same directory. To listen to it, type:
```
timidity output.midi
```
This step can be repeated any number of times, and at each iteration a random music file will be generated.

## Acknowledgement
I referred to many articles, blogs, and websites while building this project. Some of them are mentioned below:
- https://colah.github.io/posts/2015-08-Understanding-LSTMs/
- https://towardsdatascience.com/how-to-generate-music-using-a-lstm-neural-network-in-keras-68786834d4c5
- https://en.wikipedia.org/wiki/Musicology
- https://en.wikipedia.org/wiki/Music_theory#Fundamentals_of_music
- https://en.wikipedia.org/wiki/Elements_of_music
- https://en.wikipedia.org/wiki/Definition_of_music

The cover picture at the beginning of this document, just above the title, has been taken from [here](https://emerj.com/ai-sector-overviews/musical-artificial-intelligence-6-applications-of-ai-for-audio/).
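The exact layer configuration is only shown in the model plot, but an LSTM next-note predictor of the kind the Model section describes is commonly wired up like the following. The layer sizes, dropout rate, and sequence length here are illustrative assumptions, not the repository's verified values:

```python
# Representative Keras next-note model; hyperparameters are assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

seq_len, n_vocab = 100, 358  # e.g. 100-token windows over a 358-token vocabulary

model = Sequential([
    LSTM(256, input_shape=(seq_len, 1), return_sequences=True),
    Dropout(0.3),
    LSTM(256),
    Dense(n_vocab, activation="softmax"),  # probability of each next token
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()
```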
caffeineengine | caffeineengine an engine built in php designed to be embedded in systems that are created this is a old project and its freezed only exists today to me improve my personal technics with php | os |
|
nlp-tool (Banner image: images/peacock-3.png)

# NLP Tool
1. **Character break** (regular expression, O(n) time / O(1) space): can be used for any language.
2. **Syllable tokenization, Unicode** (regular expression, O(n) / O(1)): can be used for Unicode data of the Myanmar (Burmese), Rakhine, Pali, and Pa-Oh languages.
3. **Syllable tokenization, Zawgyi** (regular expression, O(n) / O(1)): can be used for Zawgyi-encoded Myanmar (Burmese).
4. **Multilingual semi-syllable tokenization, Unicode** (regular expression, O(n) / O(1)): can be used for Unicode-encoded Lao, Kannada, Oriya, Gujarati, Malayalam, Khmer, Bengali, Sinhala, Tamil, Mon, Pali and Sanskrit, Sagaw Karen, Western Poh Karen, Eastern Poh Karen, Geba Karen, Kayah, Rumai Palaung, Khamathi Shan, Aiton and Phake, Burmese (Myanmar), Pa-Oh, and Rakhine. It can also be used as a word break for English and a character break for any other language. I got this idea while working on keyword detection in Burmese and two other languages: regarding keyword detection, a word can be found embedded in an unrelated sentence, making the match irrelevant, and luckily I found an alternative that is helpful for all three languages. "Semi-syllable" here does not refer to the minor syllable in phonology; instead it is a new tokenization that does not break into full syllables. I found that it is useful in keyword detection to reduce false-positive errors. The beauty of this tokenization is that you don't need to know much about the nature of the specific language; it works for similar scripts, like the Brahmic scripts. Since it is in an initial state, it may have some errors.
5. **Burmese sentence-level Zawgyi/Unicode detection** (machine learning).
6. **Burmese to Braille (Muu Haung) converter** (regular expression, O(n) / O(1)): can be used to convert Burmese to Burmese Braille (Muu Haung). The Braille-to-Burmese dictionary may need to be updated; the dictionary data was prepared by Phyo Thu Htet, Naing Linn Phyo, and Thiha Nyein.
7. **Keywords detection** (regular expression).
8. **Email detection** (regular expression, O(n) / O(1)): can be used to detect emails in text. E.g. input: `phyothuhtet39@gmail.com mail microsoft mail phyothuhtet@studentambassadors.com ayethida89.young@utycc.edu.mm`; output: `ayethida89.young@utycc.edu.mm`, `phyothuhtet39@gmail.com`, `phyothuhtet@studentambassadors.com`. (A minimal sketch of this follows below.)
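Here is a minimal regex-based email detector reproducing the behaviour described in item 8; the repository's exact pattern may differ, and the reconstructed addresses in the test string are taken from the example above:

```python
# Minimal email-detection sketch; the repo's actual regex may differ.
import re

EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

text = ("phyothuhtet39@gmail.com mail microsoft mail "
        "phyothuhtet@studentambassadors.com ayethida89.young@utycc.edu.mm")
print(EMAIL.findall(text))
# ['phyothuhtet39@gmail.com', 'phyothuhtet@studentambassadors.com',
#  'ayethida89.young@utycc.edu.mm']
```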
## Streamlit
(Screenshot: https://github.com/SaPhyoThuHtet/nlp-tools/blob/main/images/screenshot-from-2021-07-27.png) Current version:
```
pip3 install -r requirements.txt
streamlit run nlptools.py
```

## GDG 2022: Text Classification with Zero-Shot and Few-Shot Learning
Slides (PDF): https://github.com/SaPhyoThuHtet/nlp-tool/tree/main/gdg-2022. Zero-shot example notebook (the original notebook provided by Hugging Face): https://colab.research.google.com/drive/1jocvilorbwwiktxkwxcov9hltaddgcaw?usp=sharing

## Acknowledgment
I would like to thank Dr. Ye Kyaw Thu, Dr. Hnin Aye Thant, Ma Aye Hninn Khine, and Ma Yi Yi Chan Myae Win Shein for their guidance, support, and suggestions. The skills acquired from Dr. Ye Kyaw Thu's NLP class helped me a lot in developing new ideas in the NLP field and this repo. A shoutout to the creators of Rabbit Converter and jrgraphix.net's Unicode Character Table; these tools were super helpful for developing NLP concepts, especially for the Burmese language. Thanks!

## License
MIT License. Copyright (c) 2021 Sa Phyo Thu Htet.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

## BibTeX
```
@article{saphyothuhtet,
  title   = {NLP Tool},
  author  = {Phyo Thu Htet},
  journal = {https://github.com/SaPhyoThuHtet/nlp-tool},
  volume  = {1},
  year    = {2019-2023}
}
```

## References
1. Unicode Character Table: https://jrgraphix.net/r/Unicode/1000-109F
2. Rabbit Converter: http://www.rabbit-converter.org
3. NLP Class UTYCC: https://github.com/ye-kyaw-thu/nlp-class
4. Y. K. Thu et al., "sylbreak4all: Regular Expressions for Syllable Breaking of Nine Major Ethnic Languages of Myanmar," 2021 16th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP 2021), pp. 1-6, doi: 10.1109/iSAI-NLP54397.2021.9678188.
| nlp myanmar-language burmese tokenization mulilingual zawgyi-unicode | ai |
embeddedvision | computer vision for embedded systems an introductory graduate course on efficient computer vision software designed for deployment on edge devices how to upload a notebook to google colab 1 clone this github repository or download its contents as a zip file 2 go to https colab research google com 3 navigate to file upload notebook 4 choose the notebook you want to upload google colab useful information 1 in google colab code is run in cells if you want to evaluate your entire homework before you submit you can navigate to runtime run all on colab 2 for the assignments ensure you are using a gpu to evaluate your homework this can be done by navigating to runtime change runtime type gpu on colab how to submit homework to upload a homework notebook you can fill out the following google form https docs google com forms d e 1faipqlsfos1g36tiq36r tlhdz5p8t9z avyyiko9askoo16i9u4vq viewform usp sf link we require you to convert your google colab to a pdf to do so 1 navigate to the notebook colabtohtml ipynb on the github and open it on google colab 2 follow the instructions on the notebook to convert your colab to an html file 3 then open the html version and navigate to file print print to pdf | ai |
|
Hands-On-Machine-Learning-for-Algorithmic-Trading

# Hands-On Machine Learning for Algorithmic Trading
Published by [Packt](https://www.packtpub.com/big-data-and-business-intelligence/hands-machine-learning-algorithmic-trading?utm_source=github&utm_medium=repository&utm_campaign=9781789346411). This is the code repository for the book: design and implement investment strategies based on smart algorithms that learn from data, using Python.

## What is this book about?
The explosive growth of digital data has boosted the demand for expertise in trading strategies that use machine learning (ML). This book enables you to use a broad range of supervised and unsupervised algorithms to extract signals from a wide variety of data sources and create powerful investment strategies. This book covers the following exciting features:
- Implement machine learning techniques to solve investment and trading problems
- Leverage market, fundamental, and alternative data to research alpha factors
- Design and fine-tune supervised, unsupervised, and reinforcement learning models
- Optimize portfolio risk and performance using pandas, NumPy, and scikit-learn
- Integrate machine learning models into a live trading strategy on Quantopian

If you feel this book is for you, get your [copy](https://www.amazon.com/dp/178934641X) today!

## Instructions and navigations
All of the code is organized into folders, for example, Chapter02. The code will look like the following:
```python
interesting_times = extract_interesting_date_ranges(returns=returns)
(interesting_times['Fall2015'].to_frame('pf').join(benchmark_rets)
 .add(1).cumprod().sub(1)
 .plot(lw=2, figsize=(14, 6), title='Post-Brexit turmoil'))
```

**Following is what you need for this book:** Hands-On Machine Learning for Algorithmic Trading is for data analysts, data scientists, and Python developers, as well as investment analysts and portfolio managers working within the finance and investment industry. If you want to perform efficient algorithmic trading by developing smart investigating strategies using machine learning algorithms, this is the book for you. Some understanding of Python and machine learning techniques is mandatory.

With the following software and hardware list you can run all code files present in the book (Chapters 1-15):

| Chapter | Software required | OS required |
| ------- | ----------------- | ----------- |
| 2-20 | Python 2.7/3.5, SciPy 0.18, NumPy 1.11, Matplotlib 2.0, scikit-learn 0.18, gensim, Keras 2 | Windows, Mac OS X, and Linux (any) |

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. [Click here to download it](https://www.packtpub.com/sites/default/files/downloads/9781789346411_ColorImages.pdf).

## Related products
Other books you may enjoy:
- Machine Learning Algorithms, Second Edition ([Packt](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-algorithms-second-edition?utm_source=github&utm_medium=repository&utm_campaign=9781789347999), [Amazon](https://www.amazon.com/dp/1789347998))
- Building Machine Learning Systems with Python, Third Edition ([Packt](https://www.packtpub.com/big-data-and-business-intelligence/building-machine-learning-systems-python-third-edition?utm_source=github&utm_medium=repository&utm_campaign=9781788623223), [Amazon](https://www.amazon.com/dp/1788623223))

## Get to know the author
**Stefan Jansen, CFA** is founder and lead data scientist at Applied AI, where he advises Fortune 500 companies and startups across industries on translating business goals into a data and AI strategy, builds data science teams, and develops ML solutions. Before his current venture he was managing partner and lead data scientist at an international investment firm, where he built the predictive analytics and investment research practice. He was also an executive at a global fintech startup operating in 15 markets, worked for the World Bank, advised central banks in emerging markets, and has worked in 6 languages on four continents. Stefan holds master's degrees from Harvard and Berlin University and teaches data science at General Assembly and DataCamp.

## Suggestions and feedback
[Click here](https://docs.google.com/forms/d/e/1faipqlsdy7datc6qmel81fiuuymz0wy9vh1jhkvpy57oimekgqib-ow/viewform) if you have any feedback or suggestions.

## Download a free PDF
If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost. Simply click on the link to claim your free PDF: https://packt.link/free-ebook/9781789346411 | ai
|
runLLM

# runLLM
Repository for running a large language model (e.g. OPT, BLOOM) using the RunAI EPFL cluster.

## Basic setup for RunAI
1. Download the config file and move it to the directory `.kube` below your root:
```
cd
mkdir .kube
mv config .kube/config
export KUBECONFIG=~/.kube/config
```
2. Download the RunAI CLI and log in:
```
wget https://github.com/run-ai/runai-cli/releases/download/v2.4.1/runai-cli-v2.4.1-linux-amd64.tar.gz
tar xvf runai-cli-v2.4.1-linux-amd64.tar.gz
chmod +x runai
sudo ./install-runai.sh
runai login
```
Check the existing list to verify a valid installation:
```
runai config project nlp-<your gaspar id>
runai whoami
runai list jobs
```
3. Docker build (this can be omitted: there is no need to push to the Harbor if you are just using `runai submit`, e.g. `submit -i ubuntu`; cf. `docker ps` to check whether you have Docker installed). Build a Docker image on your local disk:
```
docker build -t ic-registry.epfl.ch/nlp/<your-tag> .
```
Log in to Docker with your EPFL credentials:
```
docker login ic-registry.epfl.ch
```
Push the Docker image to the Harbor, where you can find all the Docker images:
```
docker push ic-registry.epfl.ch/nlp/<your-tag>
```
4. Now submit the job:
```
runai submit -i ic-registry.epfl.ch/nlp/<docker-image>
```
If you want to watch the changes every 2 seconds, use the command below (if you cannot use the watch command and are using a Mac, just do `brew install watch`):
```
watch runai list jobs
```
You can interact through the terminal by bashing:
```
runai bash <project-name>
```
4.1 Submit a Dockerfile to use with VSCode (interactive mode):
```
runai submit test -i ic-registry.epfl.ch/nlp/sooh-test -g 1 --interactive --service-type nodeport --port 30022:22
```
Then you can access it through SSH (here `pwd` will be `/root`; you should specify lines in the Dockerfile regarding SSH access and the port number, see [this Dockerfile](https://github.com/run-ai/docs/blob/master/quickstart/python%2Bssh/Dockerfile)):
```
ssh -p 30022 root@iccluster<mapped iccluster number>.iccluster.epfl.ch
```
If you want to mount a dataset from your lab cluster, use the submit command below:
```
runai submit <job-name> -i ic-registry.epfl.ch/nlp/<your-tag> -g 1 --cpu 1 --pvc runai-nlp-sooh-nlpdata1:/nlpdata1
```
For train mode, your outputs will be saved in `/scratch` if you submit with the command below. Note: if you are using a Mac with an M1 chip, you should build your image this way for training mode; otherwise you cannot mount your volume into the RunAI cluster (permission denied):
```
docker buildx build --push --platform linux/amd64 -f Dockerfile -t ic-registry.epfl.ch/nlp/<your-tag> .
```
Then you can access it in interactive mode by giving the same `--pvc runai-nlp-sooh-nlpdata1:/nlpdata1` option.

4.2 Instead, you can use `bash runai_interactive.sh` after correcting the user id / user name.

5. Delete the job after you are done:
```
runai delete <job-name>
```

## BLOOM: accelerate with DeepSpeed
For BLOOM I used the accelerate package. You can use the saved weights located in `/nlpdata1/home/sooh/bloom/bloom`:
```
python bloom-accelerate-inference.py --name /nlpdata1/home/sooh/bloom/bloom --batch_size 1
```
See https://huggingface.co/blog/bloom-inference-pytorch-scripts.
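For reference, loading a locally saved checkpoint with accelerate-backed sharding usually looks like the following. This is a generic sketch, not the `bloom-accelerate-inference.py` script itself; the path is the cluster-specific one mentioned above, and the prompt and generation settings are examples:

```python
# Generic big-model inference sketch with transformers + accelerate.
# Not the repository's script; settings below are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "/nlpdata1/home/sooh/bloom/bloom"  # converted weights mentioned above
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, device_map="auto", torch_dtype=torch.bfloat16)  # shard across GPUs

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```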
## OPT-175B: Alpa
For the OPT-175B model I used the [Alpa package](https://alpa.ai/tutorials/opt_serving.html), which allows you to use the LLM for inference.

### Install Alpa
Prerequisites: if you want to convert the weight format of the 175B OPT, you'll need 700GB of RAM memory, 350GB of disk space (singleton), and another 350GB of disk space (Alpa/NumPy). If you are an NLP lab member, you can use the converted OPT weights for the Alpa package located in `/nlpdata1/share/models/opt-175b/opt-175b-np`.

Check your CUDA version with `nvidia-smi`, update pip (`pip3 install --upgrade pip`), and install CuPy for your own CUDA version (here `cupy-cuda113` means CUDA 11.3):
```
pip3 install cupy-cuda113
```
Instead, you can use the following to fix [this error](https://github.com/cupy/cupy/issues/5211):
```
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch -c nvidia -c conda-forge
```
Then check whether your system already has NCCL installed with the command below:
```
python3 -c "from cupy.cuda import nccl"
```
Highly likely you'll get a "CuPy is not in the path"-related error; then follow the process below:
```
pip install -U setuptools pip
pip install cupy -vvvv
CUDA_PATH=/opt/nvidia/cuda pip install cupy
```
Now move on to installing Alpa with Python wheels. In this case, the wheel is compatible with CUDA 11.1 and cuDNN 8.0.5:
```
pip3 install alpa
pip3 install jaxlib==0.3.22+cuda111.cudnn805 -f https://alpa-projects.github.io/wheels.html
```
(cf. Alpa modified the original jaxlib at version jaxlib==0.3.22; Alpa regularly rebases the official jaxlib repository to catch up with the upstream. Keep checking the [Alpa install docs](https://alpa.ai/install.html).)

Let's check the installation:
```
ray start --head
python3 -m alpa.test_install
```
Now install the Alpa requirements:
```
pip3 install transformers==4.23.1 fastapi uvicorn omegaconf jinja2
```
Install torch corresponding to your CUDA version (e.g. for CUDA 11.3):
```
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
```
Clone the Alpa repo from git (if your machine does not have git, do `apt-get install git`):
```
git clone https://github.com/alpa-projects/alpa.git
```
Then install the llm_serving package:
```
cd alpa/examples
pip3 install -e .
```
Whoooa! Finally, let's start inference with the converted model weights for the Alpa package that I converted from the original weights into the singleton NumPy format, saved in `/nlpdata1/share/models/opt-175b`. You need at least 8 GPUs (40GB) for inference:
```
cd llm_serving
python3 textgen.py --model alpa/opt-175b
```

## References
- https://github.com/epfml/kubernetes-setup
- https://github.com/cvlab-epfl/cvlab-kubernetes-guide | ai
|
Deep-Learning-Notebooks

# Deep Learning Notebooks
Deep learning IPython notebooks. Each notebook in the repository carries an "Open in Colab" badge, a download badge, and a framework tag.

## Computer vision and image processing
- Training a custom object detector using YOLOv3 (Darknet)
- Training a transfer-learning-based classifier to classify benign/malignant tumors on the [Warwick-QU dataset](https://warwick.ac.uk/fac/sci/dcs/research/tia/glascontest/download) with MobileNetV2 (Keras)
- Using Otsu's method for segmentation: generating data for training deep learning image segmentation models (OpenCV)
- Multi-class classifier to recognize sign language (Keras)
- Horses vs. humans classification using a pre-trained InceptionV3 network (Keras)
- Simple GAN trained on the MNIST dataset (Keras)
- Simple GAN trained on the MNIST dataset (PyTorch)
- Cats vs. dogs classifier (Keras)
- Blurring an image using OpenCV
- Thresholding an image using OpenCV
- Handwritten digit recognition on MNIST without convolutions (Keras)
- Intel image classification from the Kaggle dataset: classification of natural scenes (Keras)
- Implementation of UNet in Keras
- Using Mask R-CNN to segment images (Keras)

## Data preprocessing
- Downloading a dataset from Kaggle to the local system (Kaggle)

## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

## License
[MIT](https://choosealicense.com/licenses/mit/) | ai
|
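Several of the image-processing notebooks above (blurring, thresholding, Otsu segmentation) boil down to a few OpenCV calls. Below is a minimal sketch of Otsu thresholding, assuming OpenCV (`cv2`) is installed; the image path is a placeholder of my own, not a file from the repository, and this is an illustration rather than the notebooks' exact code:

```python
import cv2

# Load an image in grayscale; Otsu's method works on single-channel input.
# "input.png" is a placeholder path, not a file shipped with the repo.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Optional blur to suppress noise before thresholding
# (compare the separate blurring-image notebook).
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# The threshold value 0 is ignored when THRESH_OTSU is set;
# OpenCV picks it automatically by maximizing between-class variance.
otsu_t, mask = cv2.threshold(blurred, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print(f"Otsu threshold chosen: {otsu_t}")
cv2.imwrite("mask.png", mask)  # binary mask usable as segmentation training data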
fuel-core | fuel client build https github com fuellabs fuel core actions workflows ci yml badge svg https github com fuellabs fuel core actions workflows ci yml crates io https img shields io crates v fuel core label latest https crates io crates fuel core docs https docs rs fuel core badge svg https docs rs fuel core discord https img shields io badge chat 20on discord orange logo discord logocolor ffffff color 7389d8 labelcolor 6a7ec2 https discord gg xfpk4pe fuel client implementation contributing if you are interested in contributing to fuel see our contributing md contributing md guidelines for coding standards and review process before pushing any changes or creating pull request please run source ci checks sh building system requirements there are several system requirements including clang macos bash brew update brew install cmake brew install protobuf debian bash apt update apt install y cmake pkg config build essential git clang libclang dev protobuf compiler arch bash pacman syu needed noconfirm cmake gcc pkgconf git clang protobuf compiler compiling we recommend using xtask to build fuel core sh cargo xtask build this will run cargo build as well as any other custom build processes we have such as re generating a graphql schema for the client testing the ci checks sh ci checks sh script file can be used to run all ci checks including the running of tests shell source ci checks sh the script requires pre installed tools for more information run shell cat ci checks sh running the service can be launched by executing fuel core run the list of options for running can be accessed via the help option console target debug fuel core run help usage fuel core run options options chain chain config specify either an alias to a built in configuration or filepath to a json file default local testnet for many development purposes it is useful to have a state that won t persist and the db type option can be set to in memory as in the following example example console target debug fuel core run db type in memory 2023 06 13t12 45 22 860536z info fuel core cli run 230 block production mode instant 2023 06 13t12 38 47 059783z info fuel core cli run 310 fuel core version v0 18 1 2023 06 13t12 38 47 078969z info new name fuel core commit result block id b1807ca9f2eec7e459b866ecf69b68679fc6b205a9a85c16bd4943d1bfc6fb2a height 0 tx status fuel core importer importer 231 committed block 2023 06 13t12 38 47 097777z info new name fuel core fuel core graphql api service 208 binding graphql provider to 127 0 0 1 4000 to disable block production on your local node set poa instant false example console target debug fuel core run poa instant false 2023 06 13t12 44 12 857763z info fuel core cli run 232 block production disabled troubleshooting publishing we use publish crates https github com katyo publish crates action for automatic publishing of all crates if you have problems with publishing you can troubleshoot it locally with act https github com nektos act shell act release s github token your github token j publish crates check container architecture linux amd64 reuse it requires githubtoken to do request to the github you can create it with this https docs github com en enterprise server 3 4 authentication keeping your account and data secure creating a personal access token instruction outdated database if you encounter an error such as console thread main panicked at unable to open database databaseerror error message invalid argument column families not opened column 11 column 10 column 9 column 8 
column 7 column 6 column 5 column 4 column 3 column 2 column 1 column 0 fuel core src main rs 23 66 clear your local database using rm rf fuel db file descriptor limits on some macos versions the default file descriptor limit is quite low which can lead to io errors with messages like too many open files or even fatal runtime error rust cannot catch foreign exceptions when rocksdb encounters these issues use the following command to increase the open file limit note that this only affects the current shell session so consider adding it to zshrc bash ulimit n 10240 log level the service relies on the environment variable rust log for more information check the envfilter examples https docs rs tracing subscriber latest tracing subscriber struct envfilter html examples crate human logging can be disabled with the environment variable human logging false debugging see the guide on debugging docs developers debugging md for an overview on running a debug build of a local node docker kubernetes sh create docker image docker build t fuel core f deployment dockerfile delete docker image docker image rm fuel core create kubernetes volume deployment service kubectl create f deployment fuel core yml delete kubernetes volume deployment service kubectl delete f deployment fuel core yml graphql service the client functionality is available through a service endpoint that expect graphql queries transaction executor the transaction executor currently performs instant block production changes are persisted to rocksdb by default service endpoint graphql schema available after building crates client assets schema sdl the service expects a mutation defined as submit that receives a transaction https github com fuellabs fuel vm tree master fuel tx in hex encoded binary format as specified here https github com fuellabs fuel specs blob master src tx format transaction md curl example this example will execute a script that represents the following sequence of asm https github com fuellabs fuel vm tree master fuel asm rs addi 0x10 regid zero 0xca addi 0x11 regid zero 0xba log 0x10 0x11 regid zero regid zero ret regid one console cargo run bin fuel core client transaction submit script gas price 0 gas limit 1000000 maturity 0 script 80 64 0 202 80 68 0 186 51 65 16 0 36 4 0 0 script data inputs coinsigned utxo id tx id c49d65de61cf04588a764b557d25cc6c6b4bc0d7429227e2a21e61c213b3a3e2 output index 0 owner f1e92c42b90934aa6372e30bc568a326f6e66a1a0288595e6e3fbd392a4f3e6e amount 10599410012256088338 asset id 2cafad611543e0265d89f1c2b60d9ebf5d56ad7e23d9827d6b522fd4d6e44bc3 tx pointer block height 0 tx index 0 witness index 0 maturity 0 predicate gas used null predicate null predicate data null outputs witnesses data 150 31 98 51 6 239 255 243 45 35 182 26 129 152 46 95 45 211 114 58 51 64 129 194 97 14 181 70 190 37 106 223 170 174 221 230 87 239 67 224 100 137 25 249 193 14 184 195 15 85 156 82 91 78 91 80 126 168 215 170 139 48 19 5 receipts root 0x6114142d12e0f58cfb8c72c270cd0535944fb1ba763dce83c17e882c482224a2 | blockchain fuel | blockchain |
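Since the client functionality above is exposed over a GraphQL endpoint, any HTTP client can talk to a running node. Here is a minimal sketch in Python, assuming a local node started as described and listening on 127.0.0.1:4000 (the default binding shown in the log output); the `health` field is used only as a smoke-test query, and field names should be verified against the generated schema in crates client assets schema sdl for your build:

```python
import json
import urllib.request

# Default GraphQL endpoint of a locally running node (see the log line
# "binding graphql provider to 127 0 0 1 4000" above).
URL = "http://127.0.0.1:4000/graphql"

# Assumed smoke-test query; check the generated schema.sdl for the
# exact fields available in your build.
payload = json.dumps({"query": "{ health }"}).encode()

req = urllib.request.Request(
    URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # e.g. {"data": {"health": true}}
```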
web-development-resources | awesome web development resources awesome awesome badge this is an awesome project about web development resources resources are added frequently enjoy if you like this repo be sure to it please read contributing guidelines contributing md before submitting new resources initially created by marko https markodenic com at web development resources https markodenic com free web development resources table of contents hosting hosting learning platforms learning platforms coding challenge platforms coding challenge platforms freelancing platforms freelancing platforms remote jobs remote jobs photos photos videos videos illustrations illustrations icons icons fonts fonts youtube channels youtube channels podcasts podcasts code editors code editors color palettes color palettes ui inspiration ui inspiration docs docs animation libraries animation libraries charts charts chrome extensions chrome extensions website optimization tools website optimization tools html css javascript templates htmlcssjavascript templates newsletters newsletters css generators css generators css games css games online tools online tools ui components ui components vue ui libraries vue ui libraries react ui libraries react ui libraries angular ui libraries angular ui libraries others others hosting website description https netlify com netlify unites an entire ecosystem of modern tools and services into a single simple workflow for building high performance sites and apps https firebase google com firebase helps you build and run successful apps it is backed by google and loved by app development teams from startups to global enterprises https aws amazon com amazon web services offers a broad set of global cloud based products and services help organizations move faster lower it costs and scale https pages github com github pages are websites for you and your projects it is hosted directly from your github repository you just have to edit push and your changes are live https vercel com vercel combines the best developer experience with an obsessive focus on end user performance it enables frontend teams to do their best work you just have to develop preview and ship https surge sh surge is static web publishing for front end developers it is simple single command web publishing it publishes html css and js for free without leaving the command line https render com render is a unified cloud to build and run all your apps and websites with free tls certificates a global cdn ddos protection private networks and auto deploys from git https docs gitlab com ee user project pages gitlab pages static websites directly from a repository in gitlab to publish a website one can use any static site generator or any plain written html css and javascript https stormkit io stormkit can easily manage your frontend infrastructure it integrates perfectly with your git flow it helps you build deploy and scale your web apps seamlessly https www digitalocean com digitalocean has the cloud computing services you need with predictable pricing robust documentation and scalability to support your growth at any stage it is simpler cloud for happier devs to have better results https www 000webhost com 000webhostapp is zero cost website hosting with php mysql cpanel no ads its servers use advanced firewalls and include ddos protection https infinityfree net infinityfree is fully featured completely free website hosting with php mysql and no ads on site https pages cloudflare com cloudflare pages is a jamstack platform 
for frontend developers to collaborate and deploy websites it offers free unlimited bandwidth https supabase com supabase is an open source firebase alternative start your project with a postgres database authentication instant apis realtime subscriptions and storage https railway app railway is an infrastructure platform where you can provision infrastructure develop with that infrastructure locally and then deploy to the cloud https fly io fly is a platform for running full stack apps and databases close to your users we ve been hammering on this thing since 2017 and we think it s pretty great back to top table of contents learning platforms website https tutoriac com https www freecodecamp org https www lambdatest com learning hub https codecademy com https javascript30 com https www frontendmentor io https testautomationu applitools com https www coursera org https www edx org https khanacademy org https sololearn com https www scaler com topics https www theodinproject com https javascript info https vueschool io https www guru99 com https trailhead salesforce com https ocw mit edu https open appacademy io https web dev https scrimba com https thegymnasium com https www amigoscode com https cssbattle dev https bento io https fullstackopen com en https upskillcourses com courses https www geeksforgeeks org web development https hackdesign org lessons https javatpoint com https learn microsoft com en gb training https www codementor io events https eloquentjavascript net https skillcombo com courses development web development free https code org https www interviewbit com https css tricks com back to top table of contents coding challenge platforms website https www codewars com https topcoder com https www codingame com start https hackerrank com https projecteuler net https coderbyte com https codechef com https exercism org https leetcode com https spoj com https codeforces com https codesignal com https frontendmentor io https devchallenges io https www hackerearth com https www frontendpractice com https www codementor io projects https css challenges com https 100dayscss com https codepip com https www w3schools com codegame https edabit com https www jschallenger com https www codingninjas com https lab reaal me jsrobot https divize io challenges back to top table of contents freelancing platforms website https toptal com https upwork com https www freelancer com https peopleperhour com https simplyhired com https www envato com https guru com https fiverr com https 6nomads com https www truelancer com https gun io https www refrens com back to top table of contents remote jobs website https www showwcase com jobs https www remotefrontendjobs com https jobboardsearch com https www flexjobs com https remote co remote jobs https justremote co https weworkremotely com https remoteok io https himalayas app https jobspresso co https wfh io https 4dayweek io https www hiretechladies com https nowhiteboard org https www coolstartupjobs com https wellfound com https www smartremotejobs com https www remotehub com https startup jobs remote jobs https remotescout ch https jobstache com https jobsinjs com https devemploy com https echojobs io https aijobstracker com remote https www remote io https web3 career remote jobs back to top table of contents photos website https unsplash com https pixabay com https pexels com https reshot com https librestock com https visualhunt com https freephotos cc en https picjumbo com https www pxfuel com https www splitshire com https freeforcommercialuse 
net back to top table of contents videos website https dareful com https www videvo net https www videezy com https pixabay com videos https mixkit co https www vidsplay com https mazwai com https lifeofvids com https www pexels com https coverr co https www splitshire com https photostockeditor com https www clipstill com back to top table of contents illustrations website https icons8 com illustrations https www opendoodles com https undraw co illustrations https www drawkit com https icons8 com ouch https iradesign io https interfacer xyz https blush design https storyset com https themeisle com illustrations https www manypixels co gallery https www artify co illustrations figma https www artify co vector illustrations https cocomaterial com back to top table of contents icons website https free icons github io free icons https fontawesome com https flaticon com https icons8 com https www iconfinder com https fonts google com icons https iconmonstr com https heroicons com https boxicons com https css gg https lineicons com https remixicon com https tabler icons io https simpleicons org https feathericons com https svgrepo com https iconic app https icomoon io https iconscout com unicons https icons holasvg com https fontello com https fontastic me https ionic io ionicons https icons getbootstrap com https react icons github io react icons https www iconspedia com https favicons beaubus com https www 3dicons com https flowbite com icons back to top table of contents fonts website https fonts google com https fontspace com https www 1001fonts com https www fontsquirrel com https ffonts net https www fontfabric com https urbanfonts com https www fontpair co https fonts bunny net back to top table of contents youtube channels website traversy media https www youtube com c traversymedia freecodecamp org https www youtube com c freecodecamp the net ninja https youtube com c thenetninja google chrome developers https www youtube com c googlechromedevelopers derek banas https www youtube com c derekbanas academind https www youtube com c academind codingtech https www youtube com c codingtech cod community https www youtube com channel ucvi5azod4edumpshr00efiw web dev simplified https www youtube com c webdevsimplified dev ed https www youtube com c deved codestackr https youtube com c codestackr coding addict https www youtube com c codingaddict kevin powell https youtube com kepowob code with ania kub w https youtube com c aniakub c3 b3w the coding train https www youtube com c thecodingtrain kudvenkat https www youtube com user kudvenkat program with erik https www youtube com c programwitherik coder coder https www youtube com c thecodercoder clever programmer https www youtube com channel ucqrilqnl5ed9dz6cgmyvmtq javascript mastery https www youtube com c javascriptmastery adrian twarog https www youtube com channel ucvm5yywwflwpcqgbrr68jlq wes bos https www youtube com wesbos designcourse https www youtube com c designcourse codedamn https www youtube com c codedamn programmingwithmosh https www youtube com c programmingwithmosh fireship https www youtube com c fireship codevolution https www youtube com c codevolution buddy https youtube com c buddyworks leon noel https youtube com channel ucgirshbdwucgjgmppz 13xw css weekly https www youtube com c cssweekly dave gray https www youtube com davegrayteachescode sonny sangha https www youtube com c sonnysangha learncode academy https www youtube com learncodeacademy videos corey schafer https www youtube com coreyms back to top table of 
contents podcasts website syntax https syntax fm fullstack radio https fullstackradio com the changelog https changelog com the laracasts snippet https laracasts com podcast front end happy hour https frontendhappyhour com javascript jabber https javascriptjabber com commit your code https anchor fm commityourcode shop talk https shoptalkshow com ladybug podcast https www ladybug dev codepen radio https blog codepen io radio jamstack radio https www heavybit com library podcasts jamstack radio devdiscuss https dev to devdiscuss devnews https dev to devnews react native radio https reactnativeradio com html all the things https podcast htmlallthethings com the css podcast https thecsspodcast libsyn com the stack overflow podcast https stackoverflow blog podcast back to top table of contents code editors website vs code https code visualstudio com sublime text https www sublimetext com brackets http brackets io emacs https www gnu org software emacs vim https www vim org spacemacs https www spacemacs org emacs https www gnu org software emacs neovim https neovim io fleet https www jetbrains com fleet back to top table of contents color palettes website https coolors co https colorhunt co https paletton com https color hex com https mycolor space https flatuicolors com https color adobe com https htmlcolorcodes com https colorsinspo com https uigradients com https www colorion co https www gradientos app https www eggradients com https cssgradient io https www 0to255 com https branition com colors https materialui co colors back to top table of contents ui inspiration website https landingexam com https uigarage net https httpster net https www awwwards com https dribbble com https onepagelove com https www behance net https tympanus net codrops https landings dev back to top table of contents docs website https developer mozilla org en us https w3schools com https w3docs com https devdocs io back to top table of contents animation libraries website csshake https elrumordelaluz github io csshake animate css https animate style animejs https animejs com greensock gsap https greensock com gsap magic animations https www minimamente com project magic hover css https ianlunn github io hover anijs https anijs github io wicked css https kristofferandreasen github io wickedcss tuesday http shakrmedia github io tuesday mo js https mojs github io aos https michalsnik github io aos velocity js http velocityjs org popmotion https popmotion io snap svg http snapsvg io animista https animista net lottie player https lottiefiles com web player framer motion https www framer com motion swiperjs https swiperjs com motion one https motion dev back to top table of contents charts website chart js https www chartjs org d3 js https d3js org three js https threejs org amcharts https www amcharts com charts css https chartscss org echarts https echarts apache org back to top table of contents chrome extensions website web developer https chrome google com webstore detail web developer bfbameneiokkgbdmiekhjnmfkcnldhhm cssviewer https chrome google com webstore detail cssviewer ggfgijbpiheegefliciemofobhmofgce wappalyzer https chrome google com webstore detail wappalyzer gppongmhjkpfnbhagpmjfkannfbllamg jsonview https chrome google com webstore detail jsonview chklaanhfefbnpoihckbnefhakgolnmc daily dev news for busy developers https chrome google com webstore detail dailydev news for busy de jlmpjdjjbgclbocgajdjefcidcncaied hl en lighthouse https chrome google com webstore detail lighthouse 
blipmdconlkpinefehnmjammfjpmpbjk hl en checkbot seo speed security checker https chrome google com webstore detail checkbot seo web speed se dagohlmlhagincbfilmkadjgmdnkjinl performance analyser https chrome google com webstore detail performance analyser djgfmlohefpomchfabngccpbaflcahjf whatfont https chrome google com webstore detail whatfont jabopobgcpjmedljpbcaablpmlmfcogm hl en visbug https chrome google com webstore detail visbug cdockenadnadldjbbgcallicgledbeoc related colorzilla https chrome google com webstore detail colorzilla bhlhnicpbhignbdhedgjhgdocnmhomnp hl en us window resizer https chrome google com webstore detail window resizer kkelicaakdanhinjdeammmilcgefonfh hl en githunt https chrome google com webstore detail githunt khpcnaokfebphakjgdgpinmglconplhp hl en react developer tools https chrome google com webstore detail react developer tools fmkadmapgofadopljbjfkapdkoienihi hl en hackertab dev all developer news in 1 tab https chrome google com webstore detail hackertabdev developer ne ocoipcahhaedjhnpoanfflhbdcpmalmp vue developer tools https chrome google com webstore detail vuejs devtools nhdogjmejiglipccpnnnanhbledajbpd back to top table of contents website optimization tools website google pagespeed insights https pagespeed web dev gtmetrix https gtmetrix com webpagetest https www webpagetest org yslow https yslow org web dev https web dev measure optimizilla https imagecompressor com seotester https seotest me back to top table of contents html css javascript templates website https htmlrev com https www tooplate com https html5up net https templatemo com https uideck com https freehtml5 co https www zerotheme com https bootstrapmade com https graygrids com https tailwindtemplates co https themeselection com https builtatlightspeed com https web3templates com back to top table of contents newsletters website topics marko tech tips https markodenic com newsletter useful tech tips directly to your inbox smashing newsletter https www smashingmagazine com the smashing newsletter front end and ux frontend focus https frontendfoc us html css webgl canvas browser tech and more css weekly https css weekly com css javascript weekly https javascriptweekly com javascript accessibility weekly https a11yweekly com accessibility jamstacked https jamstack email jamstack ecosystem ui dev newsletter https www silvestar codes side projects ui dev mentoring reads user interface development go make things https gomakethings com daily vanilla javascript back to top table of contents css generators website description glassmorphism generators https markodenic com tools glassmorphism css generator use glassmorphism generator to create a stunning effect for your projects buttons generator https markodenic com tools buttons generator an online gallery of 100 button designs you can easily copy and use in your projects layoutit grid https grid layoutit com quickly design web layouts and get html and css code learn css grid visually and build web layouts with our interactive css grid generator css gradient editor https cssgradienteditor com you may only need this tool for creating css gradients and patterns hola svg loaders generator https holasvg com loaders svg loaders generator with sass and smil options shape divider https www shapedivider app a free tool to make it easier for designers and developers to export a beautiful svg shape divider for their latest project beaubus patterns https patterns beaubus com 150 free svg patterns and css background images generator 9elements fancy border 
radius https 9elements github io fancy border radius 9elements is a little tool that helps you create your very own organic shape when you use four eight values specifying border radius in css you can create organic looking shapes blobmaker https www blobmaker app blobmaker is a free generative design tool made with by z creative labs to help you quickly create random unique and organic looking svg shapes toptal css3 generator https www toptal com developers css3maker a free online tool for quickly generating css3 snippets such as for effects gradients and animations neumorphism https neumorphism io a free online tool for designing attractive ui with colors gradients and shadows scrollbar app https scrollbar app a simple online tool for creating custom scrollbars for the web code magic https code magic vercel app a free tool to make css easier by generating tailwind and css code for effects gradients and inputs back to top table of contents css games website description flexbox froggy https flexboxfroggy com flexbox froggy is an interactive game that helps users learn and practice the css flexbox layout knights of the flexbox table https knightsoftheflexboxtable com set of challenges where we must use flexbox properties to position and align elements on the screen to complete a medieval themed game layout grid garden https cssgridgarden com a grid garden is an interactive game that helps users learn and practice the css grid layout grid attack https codingfantasy com games css grid attack play learn and practice the css grid layout css diner https flukeout github io fun and interactive way to learn how to select elements on a web page using css guess css https www guess css app another fun and interactive way to learn css css speedrun https css speedrun netlify app a css speedrun is a challenge to see how quickly a developer can complete a task using only css cascading style sheets back to top table of contents online tools website description prm https prm pushkaryadav in project profile readme maker easiest way to create amazing readme s for your github projects and profile qr code generator https markodenic com tools qr code generator use qr code generator to easily create a qr code for your project google analytics checker https www statsglitch com google analytics checker scan your project to ensure that google analytcs tag is properly set gradient art https gra dient art an advanced css gradient editor with layering design tools and free cloud storage jsont https www jsont run a simple and powerful json formatting tool json crack https jsoncrack com a simple tool to visualize json code in a neat tree structure codepng https codepng app convert your source code into awesome shareable images yuyu ai https yuyu ai is a frontend ai tool to generate html and css instantly from a jpg or png file tablebackend com https tablebackend com a backend for your simple projects using oowerful canvas based data grid for handling millions of rows back to top table of contents ui components website description flowbite https flowbite com open source ui component library based on tailwind css featuring dark mode and interactive elements csslayout https csslayout io a site with multiple css code snippets for very frequently used components in day to day web pages tailgrids https tailgrids com 300 free and premium tailwind css ui components and sections ayro ui bootstrap https ayroui com bootstrap ui components snippets and sections for modern web apps ui hut https www uihut com home free and premium ui 
compontents or templates for bootstrap figma xd psd etc uiverse https uiverse io open source ui elements made with css html where anyone can contribute back to top table of contents vue ui libraries website description vuetify https vuetifyjs com vuetify is a semantic component framework for vue it aims to provide clean semantic and reusable components that make building your application a breeze build amazing applications with the power of vue material design and a massive library of beautifully crafted components and features vue material https www creative tim com vuematerial simple lightweight and built exactly according to the google material design specs bootstrapvue https bootstrap vue org bootstrapvue provides one of the most comprehensive implementations of bootstrap v4 for vue js with extensive and automated wai aria accessibility markup quasar framework https quasar dev quasar is an mit licensed open source vue js based framework it enables web developers to create responsive websites apps in various formats spas ssr optional pwa client takeover pwas bex mobile apps android ios etc and multi platform desktop apps using electron quasar s motto is write code once deploy it as a website mobile app and or electron app it provides a state of the art cli and efficient quasar web components eliminating the need for additional heavy libraries like hammer js moment js or bootstrap back to top table of contents react ui libraries website description material ui https mui com core simple and customizable component library to build faster beautiful and more accessible react applications ant design https ant design an enterprise class ui design language and react ui library react bootstrap https react bootstrap github io bootstrap components built with react semantic ui react https react semantic ui com semantic ui react is the official react integration for semantic ui chakra ui https chakra ui com a simple modular and accessible component library that gives you the building blocks you need to build your react applications nativebase https nativebase io nativebase is an accessible utility first component library that helps you build consistent ui across android ios and web prime react https primereact org the ultimate collection of design agnostic flexible and accessible react ui components back to top table of contents angular ui libraries website description taiga ui https taiga ui dev a powerful set of open source components for angular primeng https www primefaces org primeng angular ui component library featuring elegant high performance accessible and fully customizable ui components back to top table of contents others website description mdb markdwon badges https mdb pushkaryadav in generate amazing svg markdown badges within few clicks markdown preview https freecodetools org markdown preview markdown editor with instant preview using github css javascript quiz https javascriptquiz com javascript quiz check your knowledge by having fun cookiebubble https cookiebubble netlify app the easy way to inform users that your website is using cookies developer updates https www developerupdates com keeps you updated one everything going on in the software developement world boxy svg editor https boxy svg com svg editing tool here you can easily edit and save any svg file cssrepo https cssrepo com a curated list of awesome frameworks style guides and other cool nuggets for writing amazing css web searcher https websearcher vercel app create open graph twitter and basic meta tags easily 
makemeta https makemeta app effortlessly generate meta tags for your website back to top table of contents made by marko https markodenic com similar amazing projects public apis https publicapis dev dev resources https devresourc es awesome badge https cdn rawgit com sindresorhus awesome d7305f38d29fed78fa85652e3a63e154dd8e8829 media badge svg | webdevelopment chrome-extensions website-optimization freelancing-platforms color-palettes youtube-channels | front_end |
CSC401 | csc401 natural language processing assignments for csc401 | ai |
|
MicrogameDevKit | mobile microgame dev kit: a development kit for microgames. what is a microgame? (original definition: https www mariowiki com microgame) a microgame is a very short (roughly 8 seconds) game that has a single objective. a microgame usually has a single input method and supplies at most 1 to 2 words of instruction. a simple example is the party popper microgame from warioware: touched (https www mariowiki com party popper). since a microgame only lasts 8 seconds, it should be very easy to figure out how to beat. the idea: all templates provide basic gameplay logic to hack on; all templates are derived from the timertemplate, which provides easy communication with the overall game; the templates demonstrate different ways one might use the touchscreen to make a game; everything is royalty free or original. structure: the top level of the repo is a list of folders, one per template, and each template is a standalone unity solution: a basic swipe game (ninja star throw), a basic rub game (erasing something with an eraser), a basic scribble game (connect point a to point b with a line), and a basic drag game (drag a puzzle piece into a puzzle). all scripts and prefabs are exported to a shared unitypackage without the example scenes. instructions for making a new microgame from scratch: first import the frametemplate unity package; set the build type to android or ios; set the package name to com gamejam yournamehere; disable any rotation besides portrait; handle the 3 different difficulties of play (0 easy, 1 normal, 2 hard; see the sketch after this entry); and remember to set the correct values in the mockcoremanager
|
octahack-zig | octahack an embeddable precise and efficient modular music or anything else you want system as the wip name implies it s heavily inspired by elektron s octatrack a ridiculously efficient hardware sampler music workstation this library is designed quite differently however essentially it s a digital modular rack but designed to be usable in a performance setting the north star for this is to be efficient enough for a performer to be able to start from a completely blank slate and play an entire electronic music set it should be controllable 100 with a midi controller like the apc or launchpad or even better to compile as a custom os for an arduino or raspberry pi based custom piece of hardware this readme is basically just a way for me to get all my thoughts out and organised so it might get out of date as the project evolves but it will give a pretty decent overview of what i m aiming for here concepts rack this is the world of the octahack each project is a single rack which has a maximum size determined when the project is compiled you only have one track as with many rack based daws you implement multiple tracks by passing different subsets of components into different elements of a mixer you start with just the track inputs and outputs which are defined by whatever platform you re running on for example if you compiled the octahack software to run on a real octatrack which is unlikely but theoretically possible the inputs would be the individual a b c d audio inputs and the midi input the outputs would be the midi out cue l cue r main l main r and perhaps the headphone out would be separate too depending on how the octatrack is wired internally the precise input mapping is still up for debate but the current image in my head is a component button which is the super key for component related activities pressing and releasing component on its own allows you to replace the current component component left adds a component before the current one and component right adds a component after the current one probably this new edit component menu would be where you would find the option to add a new group too where adding a new group would just create a group containing only the current component no matter which of edit add before add after was selected component a component is a mapping from inputs to outputs and acts in a push pull manner every tick 44100 times a second or whatever the output frequency is set to each component is updated and can change internal state then each output of the rack is calculated by querying the components that it is wired to these components can then query the value of any of their inputs which query the outputs that they re wired to and so forth the reason to have this dual system is that some components need to constantly update whereas others can avoid calculating a lot of the time a delay reverb component wants to consume input even when it s not outputting anything because it s stateful and only updating it when its outputs are being output to the outside world will lead to weird and surprising behaviour with an audio recorder component the situation would be even worse conversely some other components could be made far simpler and more efficient by only calculating inputs that need to be calculated i o the data that flows through components is typed where inputs can only be wired to outputs of corresponding types the only types i m intending to support to begin with are midi audio and gate gate is any trigger input and is used for things like 
the input of adsr generators audio is just a number representing the instantaneous amplitude of the audio signal and so non audio continuous data like lfos should also just be audio output parameters unlike in a traditional modular rack where any parameters that should be controlled need a corresonding cv voltage input instead parameters are first class and can be wired to an output of the correct type parameters being first class means we can implement things like the following octatrack style scenes with crossfading locking parameters together so changing one will always change another adding parameters to groups that can control one or more parameters of subcomponents plus it means that automating parameters is more lightweight where any parameter can be automated but you don t have to clutter up the list of inputs with control inputs probably when a parameter is wired to an output turning the knob corresponding to the parameter should change the multiplier for that parameter instead of requiring a separate attenuverter file editors in order to allow components working with midi to interact better instead of having a sequencer which emits midi it would be better to only have a midi player component type but include a midi file editor with access to this editor for any midi file currently in use to never be more than a couple button presses away sequencing arbitrary parameters like the octatrack s parameter locks is done by wiring the midi player into a splitter which gives you gate note cc as separate outputs and then wiring those cc params to the desired parameters this means that saving and loading sequences is no different from saving and loading midi files and you can easily swap a live programmed sequence out for a pre made midi file save a live programmed sequence for later import into a desktop daw etc building our whole sequencing system around midi also forces us to treat midi sequenced within the system no differently to midi input from an external device which makes us play nicer with hardware sequencers recording since this is a system built for performance recording has to be a first class citizen i think that similar to how in octatrack flex machines can play recordings just like any other flex slot recordings should just be files and you should be able to work with them the same as any file i don t know yet whether you should be able to have arbitrarily expandable i e infinite until you run out of memory recording buffers or whether you should have to specify maximum recording time upfront like the octatrack certainly i think that you should be able to create an arbitrarily high number of recording buffers as long as you don t run out of memory instead of having both the size and number of the buffers fixed since any ui for recording is going to need to have support for wiring the recording input to an arbitrary output of an arbitrary component anyway it makes sense to just make all recording work be done by components the record start stop should be controlled by gate inputs so it can be automated while audio recording is obviously the most immediately clear use of this midi recording should use the exact same system grouping groups are a special kind of component which are a way to collect components together and abstract away details you create a group with a single component but can expand its size to hold any number of components groups have any number of inputs and any number of outputs todo perhaps with a maximum dictated by the number that can be easily 
represented in the ui and unlike component inputs outputs these are polymorphic which means they can be any type any wire from inside to outside the group or vice versa must pass through the inputs and outputs of the group itself but to make things simpler it ll probably be possible to directly wire an output of any component to any other component with new inputs outputs on the group and the path between the two components being generated automatically it should be possible to save a group to a file and load it back into any project allowing users to save and load synths effects or even whole tracks which should allow a dj like workflow where you can have two groups at once both wired into a mixer fade into the second one from the first then delete the first and load in a new group which is the next track in the set because of this system of saving loading groups i think that a slot based system for file access like the octatrack s is undesirable although it s useful for quickly swapping out files in many places at once during a performance i think that you could get the same benefit by combining a few smaller features first any time a component needs a file it references it by path instead of by slot if you want two components to share the same file you lock their file parameters together as mentioned in the earlier parameters section we allow a quick view which shows all the files currently in use in the project with files that are locked together shown as a single entry but files that happen to be the same but are not locked together shown as separate entries when editing a file parameter you are presented with the quick view along with an option to choose from the file system and choosing from the quick view just locks the parameters together | os |
|
MentalLLaMA | [project logo] kailai yang (1,2; https stevekgyang github io), tianlin zhang (1,2; https www zhangtianlin top), shaoxiong ji (3), qianqian xie (1,2), ziyan kuang (6), sophia ananiadou (1,2,4; https research manchester ac uk en persons sophia ananiadou), jimin huang (5; https jimin chancefocus com). affiliations: (1) national centre for text mining; (2) the university of manchester; (3) university of helsinki; (4) artificial intelligence research center (airc), aist; (5) wuhan university; (6) jiangxi normal university.

news: oct 13, 2023: we release the training data for the following datasets: dr, dreaddit, sad, multiwd, and irf. more to come, stay tuned. oct 7, 2023: our evaluation paper "towards interpretable mental health analysis with large language models" has been accepted by the emnlp 2023 main conference as a long paper.

ethical considerations: this repository and its contents are provided for non-clinical research only. none of the material constitutes actual diagnosis or advice, and help-seekers should get assistance from professional psychiatrists or clinical practitioners. no warranties, express or implied, are offered regarding the accuracy, completeness, or utility of the predictions and explanations. the authors and contributors are not responsible for any errors, omissions, or any consequences arising from the use of the information herein. users should exercise their own judgment and consult professionals before making any clinical-related decisions. the use of the software and information contained in this repository is entirely at the user's own risk. the raw datasets collected to build our imhi dataset are from public social media platforms such as reddit and twitter; we strictly follow the privacy protocols and ethical principles to protect user privacy and guarantee that anonymity is properly applied in all the mental-health-related texts. in addition, to minimize misuse, all examples provided in our paper are paraphrased and obfuscated, utilizing the moderate disguising scheme. recent studies have also indicated that llms may introduce some potential bias, such as gender gaps; meanwhile, some incorrect prediction results, inappropriate explanations, and over-generalization illustrate the potential risks of current llms. therefore, there are still many challenges in applying the model to real-scenario mental health monitoring systems. by using or accessing the information in this repository, you agree to indemnify, defend, and hold harmless the authors, contributors, and any affiliated organizations or persons from any and all claims or damages.
introduction: this project presents our efforts towards interpretable mental health analysis with large language models (llms). in early works we comprehensively evaluated the zero-shot/few-shot performance of the latest llms, such as chatgpt and gpt-4, on generating explanations for mental health analysis. based on the findings, we built the interpretable mental health instruction (imhi) dataset with 105k instruction samples, the first multi-task and multi-source instruction-tuning dataset for interpretable mental health analysis on social media. based on the imhi dataset, we propose mentallama, the first open-source instruction-following llm for interpretable mental health analysis. mentallama can perform mental health analysis on social media data and generate high-quality explanations for its predictions. we also introduce the first holistic evaluation benchmark for interpretable mental health analysis, with 19k test samples covering 8 tasks and 10 test sets. our contributions are presented in these 2 papers: the mentallama paper (https arxiv org abs 2309 13567) and the evaluation paper (https arxiv org abs 2304 03347).

mentallama model: we provide 4 model checkpoints evaluated in the mentallama paper:
- mentallama-chat-13b (https huggingface co klyang mentallama chat 13b): fine-tuned from the meta llama2-chat-13b foundation model on the full imhi instruction-tuning data, which covers 8 mental health analysis tasks. the model can follow instructions to make accurate mental health analyses and generate high-quality explanations for the predictions. due to the model size, inference is relatively slow.
- mentallama-chat-7b (https huggingface co klyang mentallama chat 7b): fine-tuned from the meta llama2-chat-7b foundation model on the full imhi instruction-tuning data, which covers 8 mental health analysis tasks. the model can follow instructions to make mental health analyses and generate explanations for the predictions.
- mentalbart (https huggingface co tianlin668 mentalbart): fine-tuned from the bart-large foundation model on the full imhi completion data, which covers 8 mental health analysis tasks. the model cannot follow instructions, but can make mental health analyses and generate explanations in a completion-based manner. its smaller size allows faster inference and easier deployment.
- mentalt5 (https huggingface co tianlin668 mentalt5): fine-tuned from the t5-large foundation model on the full imhi completion data. the model cannot follow instructions, but can make mental health analyses and generate explanations in a completion-based manner. its smaller size allows faster inference and easier deployment.

you can use the mentallama models in your python project with the hugging face transformers library. here is a simple example of how to load the model:

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained(MODEL_PATH)
model = LlamaForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
```

in this example, LlamaTokenizer is used to load the tokenizer and LlamaForCausalLM is used to load the model. the device_map="auto" argument automatically uses the gpu if one is available. MODEL_PATH denotes your model save path. after loading the model, you can generate a response. here is an example:

```python
prompt = (
    "Consider this post: work, it has been a stressful week! hope it gets better. "
    "Question: What is the stress cause of this post?"
)
inputs = tokenizer(prompt, return_tensors="pt")

# generate
generate_ids = model.generate(inputs.input_ids, max_length=2048)
print(tokenizer.batch_decode(
    generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0])
```

running this code on mentallama-chat-13b gives the following response: "answer: this post shows the stress cause related to work. reasoning: the post explicitly mentions work as being stressful and expresses a hope that it gets better. this indicates that the poster is experiencing stress in relation to their work, suggesting that work is the primary cause of their stress in this instance."

the imhi dataset: we collect raw data from 10 existing datasets covering 8 mental health analysis tasks and transfer them into test data for interpretable mental health analysis. statistics about the 10 test sets (name: task; data split; data source; annotation; released) are as follows:
- dr (https aclanthology org w18 5903): depression detection; 1,003/430/405; reddit; weak labels; yes
- clp (https aclanthology org w15 1204): depression detection; 456/196/299; reddit; human annotations; not yet
- dreaddit (https aclanthology org d19 6213): stress detection; 2,837/300/414; reddit; human annotations; yes
- swmh (https arxiv org abs 2004 07601): mental disorders detection; 34,822/8,705/10,882; reddit; weak labels; not yet
- t-sid (https arxiv org abs 2004 07601): mental disorders detection; 3,071/767/959; twitter; weak labels; not yet
- sad (https dl acm org doi 10 1145 3411763 3451799): stress cause detection; 5,547/616/684; sms; human annotations; yes
- cams (https aclanthology org 2022 lrec 1 686): depression/suicide cause detection; 2,207/320/625; reddit; human annotations; not yet
- loneliness: loneliness detection; 2,463/527/531; reddit; human annotations; not yet
- multiwd (https github com drmuskangarg multiwd): wellness dimensions detection; 15,744/1,500/2,441; reddit; human annotations; yes
- irf (https aclanthology org 2023 findings acl 757): interpersonal risk factors detection; 3,943/985/2,113; reddit; human annotations; yes

training data: we introduce imhi, the first multi-task and multi-source instruction-tuning dataset for interpretable mental health analysis on social media. we currently release the training and evaluation data from the following sets: dr, dreaddit, sad, multiwd, and irf. the instruction data is put under train data instruction data. the items are easy to follow: the query row denotes the question, and the gpt-3.5-turbo row denotes our modified and evaluated predictions and explanations from chatgpt (gpt-3.5-turbo is used as the golden response for evaluation). to facilitate training on models with no instruction-following ability, we also release part of the test data for imhi completion. the data is put under train data complete data; the file layouts are the same as the instruction-tuning data.

evaluation benchmark: we introduce the first holistic evaluation benchmark for interpretable mental health analysis, with 19k test samples. we currently release the test data from the following sets: dr, dreaddit, sad, multiwd, and irf. the instruction data is put under test data test instruction. the items are easy to follow: the query row denotes the question, and the gpt-3.5-turbo row denotes our modified and evaluated predictions and explanations from chatgpt (gpt-3.5-turbo is used as the golden response for evaluation). to facilitate testing on models with no instruction-following ability, we also release part of the test data for imhi completion. the data is put under test data test complete; the file layouts are the same as the instruction-tuning data.

model evaluation: response generation: to evaluate your trained model on the imhi
benchmark first load your model and generate responses for all test items we use the hugging face transformers library to load the model for llama based models you can generate the responses with the following commands cd src python imhi py model path model path batch size 8 model output path output path test dataset imhi llama cuda model path and output path denote the model save path and the save path for generated responses all generated responses will be put under model output some generated examples are shown in examples response generation examples you can also evaluate with the imhi completion test set with the following commands cd src python imhi py model path model path batch size 8 model output path output path test dataset imhi completion llama cuda you can also load models that are not based on llama by removing the llama argument in the generated examples the goldens row denotes the reference explanations and the generated text row denotes the generated responses from your model correctness evaluation the first evaluation metric for our imhi benchmark is to evaluate the classification correctness of the model generations if your model can generate very regular responses a rule based classifier can do well to assign a label to each response we provide a rule based classifier in imhi py and you can use it during the response generation process by adding the argument rule calculate to your command the classifier requires the following template label reasoning explanation however as most llms are trained to generate diverse responses a rule based label classifier is impractical for example mentallama can have the following response for an sad query this post indicates that the poster s sister has tested positive for ovarian cancer and that the family is devastated this suggests that the cause of stress in this situation is health issues specifically the sister s diagnosis of ovarian cancer the post does not mention any other potential stress causes making health issues the most appropriate label in this case to solve this problem in our mentallama paper https arxiv org abs 2309 13567 we train 10 neural network classifiers based on mentalbert https arxiv org abs 2110 15621 one for each collected raw dataset the classifiers are trained to assign a classification label given the explanation we release these 10 classifiers to facilitate future evaluations on imhi benchmark all trained models achieve over 95 accuracy on the imhi test data before you assign the labels make sure you have transferred your output files in the format of exmaples response generation examples and named as dataset csv put all the output files you want to label under the same data path dir then download the corresponding classifier models from the following links the models download links cams https huggingface co tianlin668 cams clp https huggingface co tianlin668 clp dr https huggingface co tianlin668 dr dreaddit https huggingface co tianlin668 dreaddit irf https huggingface co tianlin668 irf loneliness https huggingface co tianlin668 loneliness multiwd https huggingface co tianlin668 multiwd sad https huggingface co tianlin668 sad swmh https huggingface co tianlin668 swmh t sid https huggingface co tianlin668 t sid put all downloaded models under a model path dir and name each model with its dataset for example the model for dr dataset should be put under model path dr now you can obtain the labels using these models with the following commands cd src python label inference py model path model path data 
path data path data output path output path cuda where model path and data path denote your specified model and data dirs and output path denotes your output path after processing the output files should have the same format as the examples in examples label data examples if you hope to calculate metrics such as the weighted f1 score and accuracy add the argument calculate to the above command explanation quality evaluation the second evaluation metric for the imhi benchmark is to evaluate the quality of the generated explanations the results in our evaluation paper https arxiv org abs 2304 03347 show that bart score https arxiv org abs 2106 11520 is moderately correlated with human annotations in 4 human evaluation aspects and outperforms other automatic evaluation metrics therefore we utilize bart score to evaluate the quality of the generated explanations specifically you should first generate responses using the imhi py script and obtain the response dir as in examples response generation examples firstly download the bart score https github com neulab bartscore directory and put it under src then download the bart score checkpoint https drive google com file d 1 7jff7koinb7zrxkhiigtmr4chvet01m view usp sharing then score your responses with bart score using the following commands cd src python score py gen dir name dir name score method bart score cuda dir name denotes the dir name of your generated responses and should be put under model output we also provide other scoring methods you can change score method to gpt3 score bert score bleu rouge to use these metrics for gpt score https github com jinlanfu gptscore you need to first download the project and put it under src human annotations we release our human annotations on ai generated explanations to facilitate future research on aligning automatic evaluation tools for interpretable mental health analysis based on these human evaluation results we tested various existing automatic evaluation metrics for correlation with human preferences the results in our evaluation paper https arxiv org abs 2304 03347 show that bart score is moderately correlated with human annotations in all 4 aspects quality evaluation in our evaluation paper https arxiv org abs 2304 03347 we manually labeled a subset of the aigc results for the dr dataset in 4 aspects fluency completeness reliability and overall the annotations are released in this dir human evaluation dr annotation where we labeled 163 chatgpt generated explanations for the depression detection dataset dr the file chatgpt data csv includes 121 explanations that were correctly classified by chatgpt chatgpt false data csv includes 42 explanations that were incorrectly classified by chatgpt we also include 121 explanations that were correctly classified by instructiongpt 3 in gpt3 data csv expert written golden explanations in our mentallama paper https arxiv org abs 2309 13567 we invited one domain expert majoring in quantitative psychology to write an explanation for 350 selected posts 35 posts for each raw dataset the golden set is used to accurately evaluate the explanation generation ability of llms in an automatic manner to facilitate future research we release the expert written explanations for the following datasets dr dreaddit swmh t sid sad cams loneliness multiwd and irf 35 samples each the data is released in this dir human evaluation test instruction expert the expert written explanations are processed to follow the same format as other test datasets to facilitate model evaluations you can test your model on the
expert written golden explanations with similar commands as in response generation for example you can test llama based models as follows cd src python imhi py model path model path batch size 8 model output path output path test dataset expert llama cuda citation if you use the human annotations or analysis in the evaluation paper please cite

@misc{yang2023interpretable,
  title={Towards Interpretable Mental Health Analysis with Large Language Models},
  author={Kailai Yang and Shaoxiong Ji and Tianlin Zhang and Qianqian Xie and Ziyan Kuang and Sophia Ananiadou},
  year={2023},
  eprint={2304.03347},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

if you use mentallama in your work please cite

@article{yang2023mentalllama,
  title={MentalLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models},
  author={Yang, Kailai and Zhang, Tianlin and Kuang, Ziyan and Xie, Qianqian and Ananiadou, Sophia},
  journal={arXiv preprint arXiv:2309.13567},
  year={2023}
}

license mentallama is licensed under mit please find more details in the mit license file | chatgpt gpt4 interpretability large-language-models llama2 mental-health language-model natural-language-processing natural-language-understanding social-media | ai |
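a minimal end-to-end inference sketch to accompany the mentallama readme above: the checkpoint id klyang/MentaLLaMA-chat-7B and the prompt wording are illustrative assumptions, while the generation settings mirror the snippet shown in the readme.

```python
# Minimal MentalLLaMA-style inference sketch. Assumptions: the checkpoint id and
# the prompt text are illustrative; generation settings follow the readme snippet.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "klyang/MentaLLaMA-chat-7B"  # assumed checkpoint id, substitute your own
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

post = "Work has been so stressful this week. I hope it gets better."
prompt = f'Consider this post: "{post}" Question: What is the stress cause of this post?'

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    generate_ids = model.generate(inputs.input_ids, max_length=2048)
print(tokenizer.batch_decode(
    generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0])
```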
tutorman | tutorman tutoring management system project for database engineering university class built in django how to run python3 manage py runserver python3 manage py migrate when running the project for the first time initializes database structure server is available at http 127 0 0 1 8000 | server |
|
prompt_engineering | prompt engineering table of contents prompt engineering in context learning with gpt 3 and other large language models prompt engineering about about objectives objectives data data repository overview repository overview contributors contributors license license about this week s challenge is to systematically explore strategies that help generate prompts for llms to extract relevant entities from job descriptions and also to classify web pages given only a few examples of human scores you will also be required to compare responses and accuracies of multiple llm models for given prompts objectives understand the algorithms and techniques that go into building large language models design a pipeline that takes a news item e g title description body or a job description and returns a score for the news item and a list of entities and potentially their relationship for the job description according to stored examples consider the following while designing your pipeline think about in what format you want to receive the news item to be processed think about how to select the best samples for the given news item think about how to pre process the incoming item as well as the pre defined samples think about how to compose a prompt that gives the best result for the given item think about the post processing step you need to do to increase the accuracy as well as return in the format required write a flask or fastapi backend the api should have at least two endpoints bnewscore for scoring breaking news that may lead to public unrest jdentities for extracting entities from job descriptions a sketch of such a backend follows this section data the 1st dataset used for this project can be found here https docs google com spreadsheets d 19n k6snim0fyld2tbs 5y3wesgdveb3j edit usp sharing ouid 108085860825615283789 rtpof true sd true and the 2nd dataset development and training https github com walidamamou relation extraction transformer blob main relations dev txt and testing and final reporting https github com walidamamou relation extraction transformer blob main relations test txt repository overview structure of the repository models contains trained model github github workflows for ci cd cml screenshots model versioning screenshots data contains data versioning metadata scripts contains the main script logger py logger for the project plot py handles plots preprocessing py dataset preprocessing notebooks job description entity extraction ipynb extraction of job description entities document score ipynb score for the news item tests test preprocessing py test for the preprocessing script readme md contains the project description requirements txt contains the required packages license license of the project dvc contains the dvc configuration contributor s yohans samuel https www linkedin com in yohanssamuel license mit https choosealicense com licenses mit | gpt-3 in-context-learning language-model large prompt-engineering | ai |
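a minimal fastapi sketch of the two-endpoint backend described above; the request schemas and the stubbed scoring/extraction logic are assumptions, since the repository's actual implementation is not shown here.

```python
# Hypothetical FastAPI backend with the two endpoints named in the readme.
# Request field names and the stubbed logic are assumptions for illustration.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class NewsItem(BaseModel):        # assumed request shape
    title: str
    description: str = ""
    body: str = ""

class JobDescription(BaseModel):  # assumed request shape
    text: str

@app.post("/bnewscore")
def score_breaking_news(item: NewsItem):
    # placeholder: the real pipeline would select similar labeled examples,
    # compose a few-shot prompt, and ask an LLM for a score
    return {"score": 0.0}

@app.post("/jdentities")
def extract_entities(jd: JobDescription):
    # placeholder: the real pipeline would prompt an LLM to list entities
    return {"entities": []}
```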
esp-led-status | esp led status library for esp open rtos https github com superhouse esp open rtos to communicate device status through different led blink patterns patterns are defined as a list of delays in milliseconds with positive values being periods when the led is on and negative values periods when the led is off

```c
// 1000ms on, 1000ms off
led_status_pattern_t waiting_wifi = led_status_pattern(1000, -1000);

// one short blink every 3 seconds
led_status_pattern_t normal_mode = led_status_pattern(100, -2900);

// three short blinks
led_status_pattern_t three_short_blinks = led_status_pattern(100, -100, 100, -100, 100, -700);

#define STATUS_LED_PIN 13

static led_status_t status = led_status_init(STATUS_LED_PIN, 1);
led_status_set(status, normal_mode);

// execute one-time signal
led_status_signal(status, three_short_blinks);
```

license mit licensed see the bundled license https github com maximkulkin esp led status blob master license file for more details | os |
|
Data-Engineering-Evolution | data engineering evolution this repository is an attempt to show the data engineering evolution from basic analytics on your local machine to orchestrated etl pipelines in the cloud the goal is to show how the data engineering landscape has evolved over the years and how it is still evolving this repository is a work in progress and will be updated to include more recent technologies and tools v1 local analysis no separation of etl and analysis the first version of the data engineering evolution is a simple data pipeline that scrapes job data from seek and stores it in a local csv file the data is then loaded into a pandas dataframe and analysed and the analysis is visualised using matplotlib everything is done locally on your machine v2 local analysis with separation of etl and analysis the second version of the data engineering evolution is a simple data pipeline that scrapes job data from seek and stores it in a local csv file the data is then loaded into a pandas dataframe and analysed using a separate processing script the analysis is then visualised using a web application that is hosted locally the web application is built using streamlit and visualisation is done using plotly everything is done locally on your machine | cloud |
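a small sketch of the v1 stage described above, loading the scraped csv into pandas and plotting with matplotlib; the file name and column name are assumptions, since the repository's actual schema is not shown here.

```python
# Hypothetical v1 analysis step: CSV produced by the scraper -> pandas -> matplotlib.
# "seek_jobs.csv" and the "job_title" column are assumed names for illustration.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("seek_jobs.csv")
top_titles = df["job_title"].value_counts().head(10)

top_titles.plot(kind="barh")
plt.title("Top job titles scraped from Seek")
plt.tight_layout()
plt.show()
```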
|
homography | homography estimation homography py estimates the homography matrix and maps five images onto the target surface original image img https raw githubusercontent com w181496 homography master input times square jpg result img https github com w181496 homography blob master homo png qrcode py unwarps a qr code from a screen photo using backward warping original image img https raw githubusercontent com w181496 homography master input screen jpg result img https github com w181496 homography blob master recover png | ai |
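a short worked sketch of the same idea using opencv: estimate a homography from four point correspondences and warp a source image onto a target quadrilateral; the file paths and corner coordinates are made-up values for illustration.

```python
# Homography estimation sketch: four source-image corners mapped onto an
# assumed quadrilateral in the target image (paths/coordinates are made up).
import cv2
import numpy as np

src = cv2.imread("input/source.jpg")
dst = cv2.imread("input/times_square.jpg")

h, w = src.shape[:2]
src_pts = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
dst_pts = np.float32([[320, 120], [520, 140], [510, 360], [310, 340]])

H, _ = cv2.findHomography(src_pts, dst_pts)          # 3x3 homography matrix
warped = cv2.warpPerspective(src, H, (dst.shape[1], dst.shape[0]))
composite = np.where(warped > 0, warped, dst)        # naive paste onto target
cv2.imwrite("result.png", composite)
```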
|
nlplot | nlplot nlplot analysis and visualization module for natural language processing description facilitates the visualization of natural language processing and provides quicker analysis you can draw the following graph 1 n gram bar chart https htmlpreview github io https github com takapy0210 takapy blog blob master nlp twitter analytics using nlplot 2020 05 17 uni gram html 2 n gram tree map https htmlpreview github io https github com takapy0210 takapy blog blob master nlp twitter analytics using nlplot 2020 05 17 tree 20of 20most 20common 20words html 3 histogram of the word count https htmlpreview github io https github com takapy0210 takapy blog blob master nlp twitter analytics using nlplot 2020 05 17 number 20of 20words 20distribution html 4 wordcloud https github com takapy0210 takapy blog blob master nlp twitter analytics using nlplot wordcloud png 5 co occurrence networks https htmlpreview github io https github com takapy0210 takapy blog blob master nlp twitter analytics using nlplot 2020 05 17 co occurrence 20network html 6 sunburst chart https htmlpreview github io https github com takapy0210 takapy blog blob master nlp twitter analytics using nlplot 2020 05 17 sunburst 20chart html tested in english and japanese requirement python package https github com takapy0210 nlplot blob master requirements txt installation sh pip install nlplot i ve posted on this blog https www takapy work entry 2020 05 17 192947 about the specific use japanese and the sample code is also available in the kernel of kaggle https www kaggle com takanobu0210 twitter sentiment eda using nlplot english quick start data preparation the column to be analyzed must be a space delimited string python sample data target col text texts think rich look poor when you come to a roadblock take a detour when it is dark enough you can see the stars never let your memories be greater than your dreams victory is sweetest when you ve known defeat df pd dataframe target col texts df head text 0 think rich look poor 1 when you come to a roadblock take a detour 2 when it is dark enough you can see the stars 3 never let your memories be greater than your dreams 4 victory is sweetest when you ve known defeat quick start python api python import nlplot import pandas as pd import plotly from plotly subplots import make subplots from plotly offline import iplot import matplotlib pyplot as plt matplotlib inline target col as a list type or a string separated by a space npt nlplot nlplot df target col text stopword calculations can be performed stopwords npt get stopword top n 30 min freq 0 1 n gram bar chart fig unigram npt bar ngram title uni gram xaxis label word count yaxis label word ngram 1 top n 50 width 800 height 1100 color none horizon true stopwords stopwords verbose false save false fig unigram show fig bigram npt bar ngram title bi gram xaxis label word count yaxis label word ngram 2 top n 50 width 800 height 1100 color none horizon true stopwords stopwords verbose false save false fig bigram show 2 n gram tree map fig treemap npt treemap title tree map ngram 1 top n 50 width 1300 height 600 stopwords stopwords verbose false save false fig treemap show 3 histogram of the word count fig histgram npt word distribution title word distribution xaxis label count yaxis label width 1000 height 500 color none template plotly bins none save false fig histgram show 4 wordcloud fig wc npt wordcloud width 1000 height 600 max words 100 max font size 100 colormap tab20 r stopwords stopwords mask file none save false plt 
figure figsize 15 25 plt imshow fig wc interpolation bilinear plt axis off plt show 5 co occurrence networks npt build graph stopwords stopwords min edge frequency 10 building the graph prints the number of nodes and edges that will be plotted if this number is too large plotting will take a long time so adjust min edge frequency accordingly node size 70 edge size 166 fig co network npt co network title co occurrence network sizing 100 node size adjacency frequency color palette hls width 1100 height 700 save false iplot fig co network 6 sunburst chart fig sunburst npt sunburst title sunburst chart colorscale true color continuous scale oryel width 1000 height 800 save false fig sunburst show other the original data frame of the co occurrence network can also be accessed display npt node df head npt node df shape npt edge df head npt edge df shape document tbd test sh cd tests pytest other plotly https plotly com python is used to plot the figures networkx https networkx github io documentation stable tutorial html is used to calculate the co occurrence network wordcloud uses the following fonts https mplus fonts osdn jp about html | visualization nlp plotly wordcloud analytics python | ai |
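the nlplot readme above assumes the target column is already a space-delimited string; here is a small hedged sketch of that preprocessing step (the cleaning rule is an assumption, and japanese text would need a tokenizer such as mecab instead).

```python
# Preparing a space-delimited text column for nlplot (cleaning rule is illustrative;
# Japanese text would need a proper tokenizer such as MeCab instead).
import pandas as pd

raw = pd.DataFrame({"text": [
    "Think rich, look poor!",
    "Victory is sweetest when you've known defeat.",
]})
raw["text"] = (
    raw["text"]
    .str.lower()
    .str.replace(r"[^\w\s]", "", regex=True)  # drop punctuation, keep word tokens
)
# raw is now ready for: npt = nlplot.NLPlot(raw, target_col="text")
```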
issues | issues github issues client built with cappuccino atlas nativehost and node on heroku try it at http githubissues heroku com http githubissues heroku com read more on the cappuccino blog http cappuccino org discuss 2010 05 13 github issues cappuccino app desktop and web | front_end |
|
yoyow-core | yoyow core getting started we recommend building on ubuntu 16 04 lts 64 bit building on other systems note yoyow requires an openssl https www openssl org version in the 1 0 x series openssl 1 1 0 and newer are not supported if your system openssl version is newer then you will need to manually provide an older version of openssl and specify it to cmake using -DOPENSSL_INCLUDE_DIR -DOPENSSL_SSL_LIBRARY and -DOPENSSL_CRYPTO_LIBRARY note yoyow requires a boost http www boost org version in the range 1 57 1 60 versions earlier than 1 57 or newer than 1 60 are not supported if your system boost version is newer then you will need to manually build an older version of boost and specify it to cmake using -DBOOST_ROOT build dependencies sudo apt get update sudo apt get install autoconf cmake make automake libtool git libboost all dev libssl dev g++ libcurl4 openssl dev build script

git clone https github com yoyow org yoyow core git
cd yoyow core
git checkout yy mainnet (may substitute yy mainnet with the current release tag)
git submodule update --init --recursive
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make yoyow_node
make yoyow_client

launch programs/yoyow_node/yoyow_node the node will automatically create a data directory including a config file it may take several hours to fully synchronize the blockchain after syncing you can exit the node using ctrl c and set up the command line wallet by editing witness_node_data_dir/config.ini as follows rpc-endpoint = 127.0.0.1:9000 after starting the witness node again in a separate terminal you can run programs/yoyow_client/yoyow_client set your initial password set_password password unlock password docs https github com yoyow org yoyow core wiki more info https yoyow org https wallet yoyow org | blockchain |
|
ulaskelas-frontend | ulas kelas generic badge https img shields io badge flutter v3 0 5 blue https flutter dev docs generic badge https img shields io badge dart v2 13 4 blue https dart dev guides test https github com ristekcsui ulaskelas frontend actions workflows config yml badge svg https github com ristekcsui ulaskelas frontend actions workflows config yml deploy web https github com ristekcsui ulaskelas frontend actions workflows firebase hosting merge yml badge svg https github com ristekcsui ulaskelas frontend actions workflows firebase hosting merge yml codecov https codecov io gh ristekcsui ulaskelas frontend branch main graph badge svg token shfalbjg9u https codecov io gh ristekcsui ulaskelas frontend generic badge https img shields io badge development v0 0 1 brightgreen https play google com store generic badge https img shields io badge style very good analysis b22c89 svg https pub dev packages very good analysis generic badge https img shields io badge component ristek material component 9932cc svg https pub dev packages ristek material component ulas kelas app getting started how to run drive and build apk example how to run release development app flutter clean flutter pub get flutter run t lib main development dart release flavor development example how to build release development app flutter clean flutter pub get flutter build apk t lib main development dart release no shrink flavor development split per abi example how to build bundle release production app flutter clean flutter pub get flutter build appbundle t lib main production dart release no shrink flavor production example how to run flutter web app flutter run t lib main development dart d chrome example how to drive automation test on development environment note that automation test doesn t support release mode flutter drive t test driver app dart flavor development supported flavor 1 development 2 production architecture pattern reso coder s fllutter clean architecture how to communicate with data back end lib documentation data and backend md alt text https i0 wp com resocoder com wp content uploads 2019 08 clean architecture flutter diagram png ssl 1 state management using state rebuilder for zero boilerplate state management visit https pub dev packages states rebuilder api documentation postman https www getpostman com collections 682bf27acd4b0fc9010c how to import collection here https developer ft com portal docs start install postman and import request collection more postman tutorial here https www postman com postman workspace postman answers request 9215231 f3a24076 e530 4858 b872 b028446f6fc6 versioning major minor patch given a version number major minor patch increment the 1 major version when you make incompatible api changes 2 minor version when you add functionality in a backwards compatible manner and 3 patch version when you make backwards compatible bug fixes additional labels for pre release and build metadata are available as extensions to the major minor patch format and also supported by cider https pub dev packages cider to patch cider bump patch bump build or make patch to minor cider bump minor bump build or make minor to major cider bump major bump build or make major capital abcd naming convention snake case for file and folder capital abcd git flow commit rules feat fix docs style refactor perf test build ci feat a new feature fix a bug fix docs documentation only changes style changes that do not affect the meaning of the code white space formatting missing semi colons etc 
refactor a code change that neither fixes a bug nor adds a feature perf a code change that improves performance test adding missing tests build changes to the build compilation packaging process or auxiliary tools such as documentation generation ci changes in the continuous integration delivery setup examples feat form login ci refactor analysis job before push 1 flutter analyze 2 flutter test branch rules feature hotfix coldfix service integration ui how to contribute to help work on this project please refer to contributing md contributing md | hacktoberfest student-project universitas-indonesia | front_end |
SolarOS_STM32 | solaros stm32 this is a rtos running on stm32 it targets cortex m3 m4 and is developed with keil uvision5 star fork | os |
|
EdgarAnalytics | table of contents 1 understanding the challenge readme md understanding the challenge 2 introduction readme md introduction 3 challenge summary readme md challenge summary 4 details of challenge readme md details of challenge 5 implementation details readme md implementation details 6 input files readme md input files 7 output file readme md output file 8 example readme md example 9 writing clean scalable and well tested code readme md writing clean scalable and well tested code 10 repo directory structure readme md repo directory structure 11 testing your directory structure and output format readme md testing your directory structure and output format 12 instructions to submit your solution readme md instructions to submit your solution 13 faq readme md faq understanding the challenge we highly recommend that you take a few dedicated minutes to read this readme in its entirety before starting to think about potential solutions you ll probably find it useful to review the examples and understand the problem at a high level before digging into the specific details many of which are covered in the faq introduction many investors researchers journalists and others use the securities and exchange commission s electronic data gathering analysis and retrieval edgar system to retrieve financial documents whether they are doing a deep dive into a particular company s financials or learning new information that a company has revealed through their filings the sec maintains edgar weblogs showing which ip addresses have accessed which documents for what company and at what day and time this occurred imagine the sec has asked you to take the data and produce a dashboard that would provide a real time view into how users are accessing edgar including how long they stay and the number of documents they access during the visit while the sec usually makes its edgar weblogs publicly available after a six month delay imagine that for this challenge the government entity has promised it would stream the data into your program in real time and with no delay your job as a data engineer is to build a pipeline to ingest that stream of data and calculate how long a particular user spends on edgar during a visit and how many documents that user requests during the session challenge summary for this challenge we re asking you to take existing publicly available edgar weblogs and assume that each line represents a single web request for an edgar document that would be streamed into your program in real time using the data identify when a user visits calculate the duration of and number of documents requested during that visit and then write the output to a file your role on the project is to work on the data pipeline to hand off the information to the front end as the backend data engineer you do not need to display the data or work on the dashboard but you do need to provide the information you can assume there is another process that takes what is written to the output file and sends it to the front end if we were building this pipeline in real life we d probably have another mechanism to send the output to the gui rather than writing to a file however for the purposes of grading this challenge we just want you to write the output to files details of challenge for the purposes of this challenge an ip address uniquely identifies a single user a user is defined to have visited the edgar system if during the visit the ip address requested one or more documents also for the purposes of this challenge
the amount of time that elapses between document requests should be used to determine when a visit also referred to as a session begins and ends a single user session is defined to have started when the ip address first requests a document from the edgar system and continues as long as the same user continues to make requests the session is over after a certain period of time has elapsed we ll provide you that value and the user makes no requests for documents in other words this period of inactivity helps to determine when the session is over and the user is assumed to have left the system the duration of any particular session is defined to be the time between the ip address first request and the last one in the same session prior to the period of inactivity if the user returns later to access another document requests that subsequent request would be considered the start of a new session implementation details your program should expect two input files be sure to read the section repo directory structure for details on where these files should be located log csv edgar weblog data inactivity period txt holds a single value denoting the period of inactivity that should be used to identify when a user session is over as you process the edgar weblogs line by line the moment you detect a user session has ended your program should write a line to an output file sessionization txt listing the ip address duration of the session and number of documents accessed the value found in inactivity period txt should be used to determine when a session has ended and when a new session has possibly started however once you reach the end of the log csv that last timestamp should signal the end of all current sessions regardless of whether the period of inactivity has been met input files log csv the sec provides weblogs stretching back years and is regularly updated although with a six month delay https www sec gov dera data edgar log file data set html for the purposes of this challenge you can assume that the data is being streamed into your program in the same order that it appears in the file with the first line after the header being the first request and the last line being the latest you also can assume the data is listed in chronological order for the purposes of this challenge while you re welcome to run your program using a subset of the data files found at the sec s website you should not assume that we ll be testing your program on any of those data files also while we won t expect your program to be able to process all of the sec s weblogs there is over 1tb of data you should be prepared to talk about how you might design or redesign your program should the challenge be changed to require you to process hundreds of gigabytes or even a terabyte for the purposes of this challenge below are the data fields you ll want to pay attention to from the sec weblogs ip identifies the ip address of the device requesting the data while the sec anonymizes the last three digits it uses a consistent formula that allows you to assume that any two ip fields with the duplicate values are referring to the same ip address date date of the request yyyy mm dd time time of the request hh mm ss cik sec central index key accession sec document accession number extention value that helps determine the document being requested there are other fields that can be found in the weblogs for the purposes of this challenge your program can ignore those other fields unlike other weblogs that contain the actual http web request 
the sec s files use a different but deterministic convention for the purposes of this challenge you can assume the combination of cik accession and extention fields uniquely identifies a single web page document request don t assume any particular format for any of those three fields e g the fields can consist of numbers letters hyphens periods and other characters the first line of log csv will be a header denoting the names of the fields in each web request each field is separated by a comma your program should only use this header to determine the order in which the fields will appear in the rest of the other lines in the same file inactivity period txt this file will hold a single integer value denoting the period of inactivity in seconds that your program should use to identify a user session the value will range from 1 to 86 400 i e one second to 24 hours output file once your program identifies the start and end of a session it should gather the following fields and write them out to a line in the output file sessionization txt the fields on each line must be separated by a ip address of the user exactly as found in log csv date and time of the first webpage request in the session yyyy mm dd hh mm ss date and time of the last webpage request in the session yyyy mm dd hh mm ss duration of the session in seconds count of webpage requests during the session unlike the input weblog data file and for the purposes of this challenge your program should not write a header line to the output file but instead write just the results each line should have the fields in the exact order detailed above fields must be separated by a comma if your program is able to detect multiple user sessions ending at the same time it should write the results to the sessionization txt output file in the same order as the user s first request for that session appeared in the input log csv file example suppose your input files contained only the following few lines note that the fields we are interested in are in bold below but will not be like that in the input file there s also an extra newline between records below but the input file won t have that inactivity period txt 2 log csv ip date time zone cik accession extention code size idx norefer noagent find crawler browser 101 81 133 jja 2017 06 30 00 00 00 0 0 1608552 0 0001047469 17 004337 index htm 200 0 80251 0 1 0 0 0 0 0 9 0 0 0 107 23 85 jfd 2017 06 30 00 00 00 0 0 1027281 0 0000898430 02 001167 index htm 200 0 2825 0 1 0 0 0 0 0 10 0 0 0 107 23 85 jfd 2017 06 30 00 00 00 0 0 1136894 0 0000905148 07 003827 index htm 200 0 3021 0 1 0 0 0 0 0 10 0 0 0 107 23 85 jfd 2017 06 30 00 00 01 0 0 841535 0 0000841535 98 000002 index html 200 0 2699 0 1 0 0 0 0 0 10 0 0 0 108 91 91 hbc 2017 06 30 00 00 01 0 0 1295391 0 0001209784 17 000052 txt 200 0 19884 0 0 0 0 0 0 0 10 0 0 0 106 120 173 jie 2017 06 30 00 00 02 0 0 1470683 0 0001144204 14 046448 v385454 20fa htm 301 0 663 0 0 0 0 0 0 0 10 0 0 0 107 178 195 aag 2017 06 30 00 00 02 0 0 1068124 0 0000350001 15 000854 xbrl zip 404 0 784 0 0 0 0 0 0 0 10 0 1 0 107 23 85 jfd 2017 06 30 00 00 03 0 0 842814 0 0000842814 98 000001 index html 200 0 2690 0 1 0 0 0 0 0 10 0 0 0 107 178 195 aag 2017 06 30 00 00 04 0 0 1068124 0 0000350001 15 000731 xbrl zip 404 0 784 0 0 0 0 0 0 0 10 0 1 0 108 91 91 hbc 2017 06 30 00 00 04 0 0 1618174 0 0001140361 17 026711 txt 301 0 674 0 0 0 0 0 0 0 10 0 0 0 the single line on inactivity period txt tells us that once two seconds have elapsed since a user made a document request we can assume 
that user s particular visit has ended any subsequent requests would be considered a new session the first day and time listed in the input file is 2017 06 30 and the time is 00 00 00 that means at that date and time the following ip addresses initiated a visit to edgar 101 81 133 jja made a request for cik 1608552 0 accession 0001047469 17 004337 and extention index htm 107 23 85 jfd made a request for cik 1027281 0 accession 0000898430 02 001167 and extention index htm 107 23 85 jfd made a request for cik 1136894 0 accession 0000905148 07 003827 and extention index htm so for the first second of data that your program has encountered it knows one user has accessed one document and a second user has requested two first second illustration images first second png when your program reads in the input file s fourth line it should detect that the day and time has advanced by one second so now this is what we know second second illustration images second second png then when it reaches the sixth and seventh lines third second illustration images third second png when it first reads the eighth line it should detect that the time is now 2017 06 30 00 00 03 for one user 101 81 133 jja its session has ended because two seconds of inactivity have passed for that user because there was only one request only one web page document was accessed end of third second illustration images end of third png at that point the output file sessionization txt should contain the following line 101 81 133 jja 2017 06 30 00 00 00 2017 06 30 00 00 00 1 1 after processing the eighth line of the input file and as we examine the timestamp in the ninth line of the input file we detect that the time has progressed to 2017 06 30 00 00 04 for a second user 108 91 91 hbc we now see that two seconds of inactivity have elapsed and we can identify a second session fourth second illustration images fourth second png
tested code as a data engineer it s important that you write clean well documented code that scales for large amounts of data for this reason it s important to ensure that your solution works well for a large number of records rather than just the above example it s also important to use software engineering best practices like unit tests especially since data is not always clean and predictable for more details about the implementation please refer to the faq below if further clarification is necessary email us at cc insightdataengineering com but please do so only after you have read through the readme and faq one more time and cannot find the answer to your question before submitting your solution you should summarize your approach dependencies and run instructions if any in your readme you may write your solution in any mainstream programming language such as c c c clojure erlang go haskell java python ruby or scala once completed submit a link to a github repo with your source code in addition to the source code the top most directory of your repo must include the input and output directories and a shell script named run sh that compiles and runs the program s that implement the required features if your solution requires additional libraries environments or dependencies you must specify these in your readme documentation see the figure below for the required structure of the top most directory in your repo or simply clone this repo repo directory structure the directory structure for your repo should look like this readme md run sh src sessionization py input inactivity period txt log csv output sessionization txt insight testsuite run tests sh tests test 1 input inactivity period txt log csv output sessionization txt your own test 1 input your own inputs output sessionization txt don t fork this repo and don t use this readme instead of your own the content of src does not need to be a single file called sessionization py which is only an example instead you should include your own source files and give them expressive names testing your directory structure and output format to make sure that your code has the correct directory structure and the format of the output files are correct we have included a test script called run tests sh in the insight testsuite folder the tests are stored simply as text files under the insight testsuite tests folder each test should have a separate folder with an input folder for inactivity period txt and log csv and an output folder for sessionization txt you can run the test with the following command from within the insight testsuite folder insight testsuite run tests sh on a failed test the output of run tests sh should look like fail test 1 thu mar 30 16 28 01 pdt 2017 0 of 1 tests passed on success pass test 1 thu mar 30 16 25 57 pdt 2017 1 of 1 tests passed one test has been provided as a way to check your formatting and simulate how we will be running tests when you submit your solution we urge you to write your own additional tests test 1 is only intended to alert you if the directory structure or the output for this test is incorrect your submission must pass at least the provided test in order to pass the coding challenge instructions to submit your solution to submit your entry please use the link you received in your coding challenge invite email you will only be able to submit through the link one time do not attach a file we will not admit solutions which are attached files use the submission box to enter the link to your github repo or 
bitbucket only link to the specific repo for this project not your general profile put any comments in the readme inside your project repo not in the submission box we are unable to accept coding challenges that are emailed to us faq here are some common questions we ve received if you have additional questions please email us at cc insightdataengineering com and we ll answer your questions as quickly as we can during pst business hours and update this faq again only contact us after you have read through the readme and faq one more time and cannot find the answer to your question which github link should i submit you should submit the url for the top level root of your repository for example this repo would be submitted by copying the url https github com insightdatascience edgar analytics into the appropriate field on the application do not try to submit your coding challenge using a pull request which would make your source code publicly available do i need a private github repo no you may use a public repo there is no need to purchase a private repo you may also submit a link to a bitbucket repo if you prefer are the session durations inclusive or exclusive as shown in the above example the duration is inclusive in other words if the timestamps for the session start is 00 00 01 and session end is 00 00 03 the duration is 3 seconds what if there is a single request in a session as shown in the above example the minimum duration for a session is 1 second if a user requests the same document more than once during a session how many webpage requests is that every time a user accesses an edgar document that request should be counted even if the user is requesting the same document multiple times for instance if within a session there are two requests once for cik 1608552 0 accession 0001047469 17 004337 and extention index htm and then a second time for the same exact combination the count of webpage requests for that session would be 2 how do you know when a session is over as shown in the above example the session is over when the end of the file is reached or after a period of inactivity has elapsed with no requests from that user for example if the inactivity period is 2 seconds and the session start is 00 00 01 and there are no further requests from that user by 00 00 04 then the session is considered over at 00 00 01 where can i get obtain the input file log csv we ve provided one example as shown above in this readme for you to better understand the challenge but you should create your own data to test your program you can obtain other data directly from the sec https www sec gov dera data edgar log file data set html but be aware that the weblog files are quite large and you also may have problems decompressing the archive file unzip may not work on the edgar zip file and you may have to use open source software such as 7zip if you are unable to decompress the zip file revert to creating your own data for the challenge do not spend too long on trying to decompress the archive file may i use r matlab or other analytics programming languages to solve the challenge it s important that your implementation scales to handle large amounts of data while many of our fellows have experience with r and matlab applicants have found that these languages are unable to process data in a scalable fashion so you must consider another language may i use distributed technologies like hadoop or spark your code will be tested on a single machine so using these technologies will negatively impact your 
solution we re not testing your knowledge on distributed computing but rather on computer science fundamentals and software engineering best practices what sort of system should i use to run my program on windows linux mac you may write your solution on any system but your source code should be portable and work on all systems additionally your run sh must be able to run on either unix or linux as that s the system that will be used for testing linux machines are the industry standard for most data engineering teams so it is helpful to be familiar with this if you re currently using windows we recommend installing a virtual unix environment such as virtualbox or vmware and using that to develop your code otherwise you also could use tools such as cygwin or docker or a free online ide such as cloud9 how fast should my program run while there are no strict performance guidelines to this coding challenge we will consider the amount of time your program takes when grading the challenge therefore you should design and develop your program in the optimal way i e think about time and space complexity instead of trying to hit a specific run time value can i use pre built packages modules or libraries this coding challenge can be completed without any exotic packages while you may use publicly available packages modules or libraries you must document any dependencies in your accompanying readme file when we review your submission we will download these libraries and attempt to run your program if you do use a package you should always ensure that the module you re using works efficiently for the specific use case in the challenge since many libraries are not designed for large amounts of data should i use the pandas library for python while the pandas library is useful for many problems related to small batches of data it is not scalable at dealing with streaming data problems like this challenge as a result you should strongly consider alternative algorithms and data structus that scale with larger streaming data will you email me if my code doesn t run unfortunately we receive hundreds of submissions in a very short time and are unable to email individuals if their code doesn t compile or run this is why it s so important to document any dependencies you have as described in the previous question we will do everything we can to properly test your code but this requires good documentation more so we have provided a test suite so you can confirm that your directory structure and format are correct can i use a database engine this coding challenge can be completed without the use of a database however if you use one it must be a publicly available one that can be easily installed with minimal configuration do i need to use multi threading no your solution doesn t necessarily need to include multi threading there are many solutions that don t require multiple threads cores or any distributed systems but instead use efficient data structures what should the format of the output be in order to be tested correctly you must use the format described above you can ensure that you have the correct format by using the testing suite we ve included should i check if the files in the input directory are text files or non text files binary no for simplicity you may assume that all of the files in the input directory are text files with the format as described above can i use an ide like eclipse or intellij to write my program yes you can use whatever tools you want as long as your run sh script correctly runs 
the relevant target files and creates the sessionization txt file in the output directory what should be in the input directory you can put any text file you want in the directory since our testing suite will replace it indeed using your own input files would be quite useful for testing the file size limit on github is 100 mb so you won t be able to include the larger sample input files in your input directory how will the coding challenge be evaluated generally we will evaluate your coding challenge with a testing suite that provides a variety of inputs and checks the corresponding output this suite will attempt to use your run sh and is fairly tolerant of different runtime environments of course there are many aspects e g clean code documentation that cannot be tested by our suite so each submission will also be reviewed manually by a data engineer how long will it take for me to hear back from you about my submission we receive hundreds of submissions and try to evaluate them all in a timely manner we try to get back to all applicants within two or three weeks of submission but if you have a specific deadline that requires expedited review please email us at cc insightdataengineering com | server |
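the edgar readme above fully specifies the sessionization algorithm, so a compact reference sketch is easy to give; this is one possible python implementation, not the challenge's official solution, and it assumes the file layout described above.

```python
# Streaming sessionization sketch per the EdgarAnalytics spec above (one possible
# solution, not the official one). Tracks (first_ts, last_ts, count) per IP,
# flushes sessions after the inactivity window, and flushes all at end of file.
import csv
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def sessionize(log_path="input/log.csv",
               inact_path="input/inactivity_period.txt",
               out_path="output/sessionization.txt"):
    inactivity = int(open(inact_path).read().strip())
    sessions = {}  # ip -> [first_ts, last_ts, count]; dict keeps first-request order

    with open(log_path) as f, open(out_path, "w") as out:
        def flush(ip):
            first, last, count = sessions.pop(ip)
            duration = int((last - first).total_seconds()) + 1  # inclusive duration
            out.write(f"{ip},{first.strftime(FMT)},{last.strftime(FMT)},"
                      f"{duration},{count}\n")

        for row in csv.DictReader(f):
            now = datetime.strptime(row["date"] + " " + row["time"], FMT)
            # close every session whose inactivity period has fully elapsed
            for ip in [ip for ip, (_, last, _) in sessions.items()
                       if (now - last).total_seconds() > inactivity]:
                flush(ip)
            entry = sessions.setdefault(row["ip"], [now, now, 0])
            entry[1], entry[2] = now, entry[2] + 1
        for ip in list(sessions):  # end of file ends all remaining sessions
            flush(ip)
```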
|
MLND | mlnd machine learning nano degree udacity machine learning courses x project 0 titanic survival exploration https github com mtyylx mlnd blob master p0 titanic titanic survival exploration ipynb x project 1 housing price prediction https github com mtyylx mlnd blob master p1 boston housing boston housing ipynb x project 2 finding donors for charityml https github com mtyylx mlnd blob master p2 finding donors finding donors ipynb x project 3 create customer segments https github com mtyylx mlnd blob master p3 create customer segments customer segments ipynb x project 4 smart cab with q learning https github com mtyylx mlnd blob master p4 smart cab smartcab ipynb x project 5 image classification of cifar 10 using tensorflow https github com mtyylx mlnd blob master p5 image classification image classification zh cn ipynb project 6 cats vs dogs using transfer learning https github com mtyylx mlnd blob master p6 dogs vs cats dog 20vs 20cat 20 20experimenting 20with 20transfer 20learning ipynb side projects class activation maps cam dynamic visualization https github com mtyylx mlnd blob master p6 dogs vs cats class 20activation 20map 20visualizations ipynb cifar10 image classification using keras 89 test accuracy https github com mtyylx mlnd blob master p5 image classification cifar10 20image 20classification 20using 20keras 20 concise ipynb mnist image classification using keras 99 5 test accuracy https github com mtyylx mlnd blob master p5 image classification mnist 20image 20classification 20using 20keras ipynb | ai |
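the smartcab project listed above is built on tabular q-learning; as a quick illustration, the core value update is a single line (the learning rate, discount factor, and action set here are illustrative assumptions).

```python
# Tabular Q-learning update, the core of the smartcab project above
# (alpha/gamma values and the action set are illustrative).
from collections import defaultdict

Q = defaultdict(float)                     # (state, action) -> value
alpha, gamma = 0.5, 0.9                    # learning rate, discount factor
actions = (None, "forward", "left", "right")

def q_update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```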
|
RMDHMern | rmdhmern this website was built using full stack technologies such as react node express html and css both the backend and the frontend of the project are hosted on mongodb stitch this project came about through the mern stack course offered by freecodecamp org you can access the link to the actual website here https restaurant reviews icpdy mongodbstitch com it will be available for a short amount of time a huge thanks to freecodecamp for this free resource | server |
|
scaling_sentemb | scaling sentence embeddings with large language models overview large language models llms have recently garnered significant interest with in context learning llms achieve impressive results in various natural language tasks however the application of llms to sentence embeddings remains an area of ongoing research in this work we propose an in context learning based method aimed at improving sentence embeddings performance our approach involves adapting the previous prompt based representation method for autoregressive models constructing a demonstration set that enables llms to perform in context learning and scaling up the llms to different model sizes through extensive experiments in context learning enables llms to generate high quality sentence embeddings without any fine tuning it helps llms achieve performance comparable to current contrastive learning methods by scaling model size we find scaling to more than tens of billions of parameters harms the performance on semantic textual similarity sts tasks however the largest model outperforms other counterparts and achieves the new state of the art result on transfer tasks we also fine tune llms with the current contrastive learning approach and the 2.7b opt model incorporating our prompt based method surpasses the performance of 4.8b st5 achieving the new state of the art results on sts tasks results on sts tasks with in context learning without fine tuning

model     sts12  sts13  sts14  sts15  sts16  stsb   sick-r  avg
opt-125m  62.22  73.10  61.84  71.09  72.08  67.80  64.10   67.46
opt-350m  63.87  73.85  63.41  72.45  73.13  70.84  65.61   69.02
opt-1.3b  72.78  83.77  73.61  83.42  80.60  78.80  69.69   77.52
opt-2.7b  68.49  84.72  75.15  83.62  81.34  80.94  72.97   78.18
opt-6.7b  70.65  84.51  75.01  83.51  82.00  81.12  76.77   79.08
opt-13b   71.99  85.22  76.04  82.23  81.38  81.42  75.00   79.04
opt-30b   69.99  83.35  74.75  83.14  82.42  81.45  77.46   78.94
opt-66b   69.93  83.29  74.88  80.10  81.11  81.76  76.26   78.19

to evaluate the above results please run the following script sh bash run
icl sh opt 125m opt 350m opt 1 3b opt 2 7b opt 6 7b opt 13b opt 30b opt 66b results on sts tasks with contrastive learning with fine tuning

model                         sts12  sts13  sts14  sts15  sts16  stsb   sick-r  avg
royokong/prompteol-opt-1.3b   79.01  89.26  84.10  88.30  84.62  87.71  80.52   84.79
royokong/prompteol-opt-2.7b   79.49  89.64  84.80  89.51  85.91  88.33  81.64   85.62
royokong/prompteol-opt-6.7b   80.14  90.02  84.94  89.78  85.84  88.75  81.29   85.82
royokong/prompteol-opt-13b    80.20  90.24  85.34  89.52  85.90  88.56  82.06   85.97
royokong/prompteol-llama-7b   79.16  90.22  85.40  88.99  86.25  88.37  81.51   85.70
royokong/prompteol-llama-13b  78.63  90.03  85.46  89.48  86.18  88.45  82.69   85.85

all checkpoints are hosted on hugging face e g https huggingface co royokong prompteol opt 1 3b

to evaluate the above results please run the following script sh model path facebook opt 2 7b or decapoda research llama x hf x model size 7b 13b lora royokong prompteol opt 2 7b or royokong prompteol llama x x model size 7b 13b template this sentence sent 0 means in one word python evaluation py model name or path model path mode test mask embedding sentence mask embedding sentence template template lora weight lora load kbit 16 examples 1 loading base model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# import our models; the package will take care of downloading the models automatically
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b")
tokenizer.pad_token_id = 0
tokenizer.padding_side = "left"

texts = [
    "there s a kid on a skateboard",
    "a kid is skateboarding",
    "a kid is inside the house",
]
```
in context learning to generate embeddings directly using in context learning to get embeddings python template this sentence a jockey riding a horse means in one word equestrian this sentence sent 0 means in one word inputs tokenizer template replace sent 0 i replace for i in texts padding true return tensors pt with torch no grad embeddings model inputs output hidden states true return dict true hidden states 1 1 use contrastive learning models to generate embeddings using trained lora to get embeddings python from peft import peftmodel peft model peftmodel from pretrained model royokong prompteol opt 2 7b torch dtype torch float16 template this sentence sent 0 means in one word inputs tokenizer template replace sent 0 i replace for i in texts padding true return tensors pt with torch no grad embeddings peft model inputs output hidden states true return dict true hidden states 1 1 setup install dependencies sh pip install r requirements txt download data sh cd senteval data downstream bash download dataset sh cd cd data bash download nli sh cd in context learning we provide in context learning examples in icl examples txt to evaluate examples on sts b development set sh base model facebook opt 2 7b python evaluation py model name or path base model mode dev mask embedding sentence load kbit 4 icl examples file 274 templates txt contrastive learning train sh bash train llm sh opt 2 7b can be other models test sh bash eval checkpoints sh opt 2 7b lora first evaluate checkpoint on sts b dev and evaluate best checkpoint on sts tasks acknowledgement our code is based on simcse and alpaca lora | ai |
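as a quick sanity check of the extracted embeddings (a sketch of my own, not from the repository), the three example sentences can be scored against each other with cosine similarity; the block continues from the snippet above and assumes its `texts` list and `embeddings` tensor (last layer, last-token hidden states)

```python
# illustrative follow-up, not part of the original repo: compare the three
# example sentences with cosine similarity. reuses `texts` and `embeddings`
# exactly as produced by the extraction snippet above.
import torch.nn.functional as F

vectors = F.normalize(embeddings.float(), dim=-1)  # unit-length sentence vectors
similarity = vectors @ vectors.T                   # pairwise cosine similarities
for i in range(len(texts)):
    for j in range(i + 1, len(texts)):
        print(round(similarity[i, j].item(), 3), texts[i], '<->', texts[j])
# the two skateboarding sentences should score higher with each other than
# either does with "a kid is inside the house"
```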
|
cs5272-project | cs5272 project embedded system design project | os |
|
Design-interviews | design interviews system design interview asked questions | os |
|
healthcareai-py | healthcareai code health https landscape io github healthcatalyst healthcareai py master landscape svg style flat https landscape io github healthcatalyst healthcareai py master appveyor build status https ci appveyor com api projects status github healthcatalyst healthcareai py branch master svg true https ci appveyor com project catalystadmin healthcareai py branch master build status https travis ci org healthcatalyst healthcareai py svg branch master https travis ci org healthcatalyst healthcareai py anaconda server badge https anaconda org catalyst healthcareai badges version svg https anaconda org catalyst healthcareai anaconda server badge https anaconda org catalyst healthcareai badges installer conda svg https conda anaconda org catalyst pypi version https badge fury io py healthcareai svg https badge fury io py healthcareai doi https zenodo org badge doi 10 5281 zenodo 999010 svg https doi org 10 5281 zenodo 999010 github license https img shields io badge license mit blue svg https raw githubusercontent com healthcatalyst healthcareai py master license the aim of healthcareai is to streamline machine learning in healthcare the package has two main goals allow one to easily create models based on tabular data and deploy a best model that pushes predictions to a database such as mssql mysql sqlite or csv flat file provide tools related to data cleaning manipulation and imputation installation windows if you haven t install 64 bit python 3 5 via the anaconda distribution https repo continuum io archive anaconda3 4 2 0 windows x86 64 exe important when prompted for the installation type select just me recommended this makes permissions later in the process much simpler open the terminal i e cmd or powershell if using windows run conda install pyodbc upgrade to latest scipy note that upgrade command took forever run conda remove scipy run conda install scipy run conda install scikit learn install healthcareai using one and only one of these three methods ordered from easiest to hardest 1 recommended install the latest release with conda by running conda install c catalyst healthcareai 2 recommended install the latest release with pip run pip install healthcareai 3 if you know what you re doing and instead want the bleeding edge version direct from our github repo run pip install https github com healthcatalyst healthcareai py zipball master why anaconda we recommend using the anaconda python distribution when working on windows there are a number of reasons when running anaconda and installing packages using the conda command you don t need to worry about dependency hell https en wikipedia org wiki dependency hell particularly because packages aren t compiled on your machine conda installs pre compiled binaries a great example of the pain the using conda saves you is with the python package scipy which by their own admission http www scipy org scipylib building windows html is difficult linux you may need to install the following dependencies sudo apt get install python tk sudo pip install pyodbc note you ll might run into trouble with the pyodbc dependency you may first need to run sudo apt get install unixodbc dev then retry sudo pip install pyodbc credit stackoverflow http stackoverflow com questions 2960339 unable to install pyodbc on linux once you have the dependencies satisfied run pip install healthcareai or sudo pip install healthcareai macos pip install healthcareai or sudo pip install healthcareai linux and macos via docker install docker https docs 
docker com engine installation clone this repo look for the green button on the repo main page cd into the cloned directory run docker build t healthcareai run the docker instance with docker run p 8888 8888 healthcareai you should then have a jupyter notebook available on http localhost 8888 verify installation to verify that healthcareai installed correctly open a terminal and run python this opens an interactive python console also known as a repl https en wikipedia org wiki read e2 80 93eval e2 80 93print loop then enter this command from healthcareai import supervisedmodeltrainer and hit enter if no error is thrown you are ready to rock if you did get an error or run into other installation issues please let us know http healthcare ai contact html or better yet post on stack overflow http stackoverflow com questions tagged healthcare ai with the healthcare ai tag so we can help others along this process getting started 1 read through the getting started http healthcareai py readthedocs io en latest getting started section of the healthcareai py http healthcareai py readthedocs io en latest documentation 2 read through the example files to learn how to use the healthcareai py api for examples of how to train and evaluate a supervised model inspect and run either example regression 1 py or example classification 1 py using our sample diabetes dataset for examples of how to use a model to make predictions inspect and run either example regression 2 py or example classification 2 py after running one of the first examples for examples of more advanced use cases inspect and run example advanced py 3 to train and evaluate your own model modify the queries and parameters in either example regression 1 py or example classification 1 py to match your own data 4 decide what type of prediction output you want see choosing a prediction output type http healthcareai py readthedocs io en latest prediction types for details 5 set up your database tables to match the schema of the output type you chose if you are working in a health catalyst edw ecosystem primarily mssql please see the health catalyst edw instructions http healthcareai py readthedocs io en latest catalyst edw instructions for setup otherwise please see working with other databases http healthcareai py readthedocs io en latest databases for details about writing to different databases mssql mysql sqlite csv 6 congratulations after running one of the example files with your own data you should have a trained model to use your model to make predictions modify either example regression 2 py or example classification 2 py to use your new model you can then run it to see the results for issues double check that the code follows the examples here http healthcareai py readthedocs io en latest if you re still seeing an error create a post in stack overflow http stackoverflow com questions tagged healthcare ai with the healthcare ai tag that contains details on your environment os database type r vs py goals ie what are you trying to accomplish crystal clear steps for reproducing the error you can also log a new issue in the github repo by clicking here https github com healthcatalyst healthcareai py issues new | python machine-learning healthcare | ai |
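for orientation, the block below sketches what step 3 ("modify the queries and parameters") tends to look like in code; it is a hedged reconstruction from memory of the example scripts the readme points to (example_classification_1.py), so treat every keyword argument, path, and column name as an assumption to verify against the repo's actual examples

```python
# hedged sketch only: reconstructed from memory of example_classification_1.py;
# every keyword argument and column name below is an assumption, not a
# verified api reference -- defer to the repo's example files
import pandas as pd
import healthcareai

# assumption: path of the bundled sample diabetes dataset
df = pd.read_csv('healthcareai/datasets/data/diabetes.csv')

trainer = healthcareai.SupervisedModelTrainer(
    dataframe=df,
    predicted_column='ThirtyDayReadmitFLG',  # assumed sample label column
    model_type='classification',
    grain_column='PatientEncounterID',       # assumed row-id column
    impute=True,                             # the readme highlights built-in imputation
    verbose=False,
)
trained_model = trainer.random_forest()      # assumed algorithm method name
```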
natural-language-processing-tensorflow | natural language processing tensorflow natural language processing in tensorflow word encoding image images 2 png same with ascii code with ascii analysis the words listen and silent have the same values but the two words are very different in meaning image images 3 png how sentiment analysis works image images 4 png how we can observe the similarity between two words image images 5 png now if we look at the two sentences to determine the difference between two sentences image images 6 png how to analyse the syntax image images 1 png creating the list of sequences image images 7 png complete analysis corpus image images 8 png padding sequences image images 9 png result of padding sequence image images 10 png personalize padding image images 11 png sarcasm in news headlines dataset by rishabh misra https rishabhmisra github io publications how to load sarcasm dataset image images 12 png how to analyze sarcasm dataset image images 13 png sarcasm detection https www kaggle com datasets rmisra news headlines dataset for sarcasm detection built in dataset in tensorflow image images 14 png dataset image images 15 png verify tensorflow version image images 16 png import tensorflow dataset image images 17 png split data image images 18 png image images 19 png image images 20 png image images 21 png tokenizer image images 22 png model image images 23 png or image images 24 png image images 25 png training model image images 26 png expect layer image images 27 png reverse word index image images 28 png vector in embedded data image images 29 png download in colab image images 30 png model for sarcasm dataset importing tokenizer and pad sequence image images 31 png hyper parameters image images 32 png download sarcasm dataset image images 33 png loading sarcasm dataset image images 34 png building a classifier for the sarcasm dataset image images 35 png sequence dataset image images 36 png create a model image images 37 png summary of model image images 38 png training the model image images 39 png plotting the result of training image images 40 png image images 41 png tensorflow datasets https github com tensorflow datasets tree master docs catalog https www tensorflow org datasets catalog overview subwords text encoder https www tensorflow org datasets api docs python tfds deprecated text subwordtextencoder diving into the code encode and decode image images 42 png image images 43 png classify sub word image images 44 png image images 45 png image images 46 png rnn the neural network is kind of a function where we give it data and labels and it gives us back the rules image images 47 png image images 48 png image images 49 png how rnns work image images 52 png visualize a sequence image images 50 png image images 51 png more about rnn https www coursera org lecture nlp sequence models deep rnns ehs0s how to understand the context of a word image images 53 png uni directional cell state image images 54 png bi directional cell state image images 55 png how to implement lstm in tensorflow image images 56 png how to stack lstm image images 57 png more about lstms https www coursera org lecture nlp sequence models long short term memory lstm kxoay generating a new text process image images 58 png preparing the training data image images 59 png image images 60 png image images 61 png image images 62 png image images 63 png image images 64 png image images 65 png more on the training data image images 66 png image images 67 png image images 68 png finding what the next word should be image images 69 png example of text generated image images 70 png use bidirectional lstm image images 71 png example after training with bidirectional lstm image images 72 png predicting a word image images 73 png tokenize the word to predict image images 74 png image images 75 png padding a sequence image images 76 png image images 77 png passed to model for prediction image images 78 png reverse lookup image images 79 png generation by doing it ten times image images 80 png result image images 81 png download a corpus image images 82 png looking into the code image images 83 png image images 84 png useful links https www tensorflow org api docs python tf keras preprocessing text tokenizer https ai stanford edu amaas data sentiment https github com tensorflow datasets tree master docs catalog https www tensorflow org datasets catalog overview https www tensorflow org text tutorials text generation | nlp nlp-machine-learning nlp-keywords-extraction tensorflow dataset deep-learning lstm-neural-networks lstm-sentiment-analysis machine-learning natural-language-processing sentiment-analysis rnn rnn-tensorflow text-classification | ai |
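since the screenshots above do not survive as text, here is a compact self-contained sketch of the pipeline they walk through (tokenizer, padding, then an embedding + bidirectional lstm classifier); the sentences and hyperparameters are illustrative placeholders of mine, not the course's exact values

```python
# minimal sketch of the depicted pipeline: tokenize, pad, then train an
# embedding + bidirectional-lstm classifier. values are placeholders.
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentences = ["granny starting to fear spiders in the garden might be real",
             "the weather today is bright and sunny"]
labels = [1, 0]                                    # 1 = sarcastic, 0 = not

tokenizer = Tokenizer(num_words=10000, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
padded = pad_sequences(tokenizer.texts_to_sequences(sentences),
                       maxlen=16, padding="post", truncating="post")

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 16),          # learned word vectors
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(24, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(padded, tf.constant(labels), epochs=10, verbose=0)
```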
awesome-assistive-technology | awesome assistive technology collecting assistive technology information | server |
|
CV_course_py | cv course py this repository contains the jupyter notebooks and related files used in the python programming exercise for the cs e4850 computer vision course at the aalto university including 1 the actual weekly homework assignments and 2 all test images please see the notebooks for more details on e g submission guideline installation b clone the repository b br cd your directory br git clone https github com aaltovision cv course py git br b load install anaconda2 3 b br aalto linux workstations module load anaconda2 3 br your home workstation install e g miniconda https conda io miniconda html br b create a virtual environment for the course programming assignments b br conda create n cv course python 2 3 numpy scipy matplotlib scikit image scikit learn jupyter br source activate cv course br pip install opencv python br b start jupyter and find the notebooks b br jupyter notebook br br all notebooks and assignments should be python 2 3 compatible for more information or bug reporting feel free to contact at juha ylioinas aalto fi santiago cortes aalto fi | ai |
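a minimal environment check of my own (not part of the course material) to confirm the freshly created cv_course environment can import every dependency listed above

```python
# quick sanity check for the cv_course environment: every package the
# exercises rely on should import and report a version
from __future__ import print_function  # keeps the check python 2/3 compatible
import numpy, scipy, matplotlib, skimage, sklearn, cv2

for module in (numpy, scipy, matplotlib, skimage, sklearn, cv2):
    print(module.__name__, getattr(module, '__version__', 'unknown'))
```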
|
Data-Structures-and-Algorithms-DSA | data structures and algorithms dsa it003 n27 | server |
|
wade | wade go wade go is an upcoming brand new way to develop web sites and applications it s a client centric web development library but not for javascript isomorphic javascript is cool but what could be better than that isomorphic go advantages isomorphism write ui client once in go and html render seamlessly on both client and server no seo problems go code is transpiled to javascript on browser pleasure modern react like development model in go strict types ftw maintainability no more maintainability headaches like with javascript and we can go easy on tests it helps tremendously to have strict typing and a nice compiler especially for large projects convenience easy collaboration between client and server since they use the same great programming language development status mar 12 2015 iteration 5 starts may 03 2015 core rendering and template component functionalities working still early stage no end to end tests yet for the dom diff engine run the test app make sure you have a working go installation and gopherjs https github com gopherjs gopherjs then 1 go get u github com gowade wade 2 install fuel the code generator go install github com gowade wade fuel 3 go to browser tests worklog main run fuel build then run run gopherjs 4 use browser to open the file browser tests worklog main public index html license wade go is bsd licensed https github com gowade wade blob master license | front_end |
|
simulator | design and key technologies of multi agv logistics sorting system system architecture diagram comgraph https user gold cdn xitu io 2020 1 12 16f9916c9960ef7e w 1282 h 716 f jpeg s 64952 paper link https kns cnki net kcms detail detail aspx dbcode cmfd dbname cmfd201802 filename 1018872311 nh v mtg3mtr6z1vyek9wrji2rnj1l0hote5yceviuelsogvymux1efltn0romvqzcvryv00xrnjdvvi3cwzadwr0rnk if it s not convenient to download leave an email in issues and i ll send it to you when i have time abstract the sorting process of china s logistics industry is still at the stage of manual sorting which has the problems of low sorting efficiency high labor cost and high error rate automated logistics sorting is developing in the direction of intelligence the agv automated guided vehicle has the characteristics of high intelligence and high flexibility and using a large number of agvs to pick up express packages can greatly improve sorting efficiency reduce labor cost and reduce the error rate the object of this paper is a multi agv logistics sorting system on the basis of the design of the multi agv logistics sorting system two key technologies visual navigation and positioning and multi agv path planning are studied the main research work of this paper consists of three parts 1 research and implementation of agv vision navigation and positioning technology this paper analyzes the shortcomings of existing visual navigation and positioning techniques such as expensive cameras and inaccurate localization and proposes a visual navigation and positioning method based on coded landmarks with this method an agv with an ordinary camera can navigate and position itself accurately even when running at high speed 2 research on the multi agv path planning method the multi agv logistics sorting system has the characteristics of a complex path network and a large number of agvs the traditional multi agv path planning method based on a static deterministic network is not suitable for a multi agv logistics sorting system in this paper based on the characteristics of the multi agv logistics sorting system a path planning model based on a dynamic random network is established and the a* algorithm is improved by introducing time variables considering the cost of turning and optimizing the open table finally the two path planning methods are analyzed and compared through experiments 3 design and implementation of the multi agv logistics sorting system this paper focuses on the overall design of the multi agv logistics sorting system according to the design requirements the express sorting robot is built and the upper computer system software is developed in java the visual navigation and positioning method based on coded landmarks and the path planning method based on the dynamic random network are realized using c++ and opencv programming module introduction the research of this paper is divided into three parts express smartcar vision simulator express github https github com xxxtai express github the scheduling system is the brain of the whole multi agv logistics sorting system responsible for information coordination scheduling and control of the express sorting robots based on java swing spring boot netty and other technologies this project implements three path planning algorithms improved a* algorithm dijkstra algorithm and greedy algorithm plus a conflict prevention algorithm among multiple robots combined with the simulator project it can simulate the scheduling of large scale robots large scale robot scheduling simulation https v qq com x page c3050fw4ria html smartcar vision github https github com xxxtai smartcar vision github demo robot video https xxxtai arthas hot swap oss cn beijing aliyuncs com moda d4427af0b2261e1b2fe12b7d5fced3b6 mp4 this project is the brain of the sorting robot realizing visual navigation machine control and scheduling based on c++ opencv pid control and other technologies it runs on a jetson tk1 embedded board which is in fact a ubuntu system and arduino is used for the low level motor control the project depends on physical hardware so the code alone is difficult to actually run but it helps to study the implementation of visual navigation simulator github https github com xxxtai simulator github large scale robot scheduling simulation https xxxtai arthas hot swap oss cn beijing aliyuncs com moda 7560e2d688531d315c58816c94053f59 mp4 a sorting robot is not cheap so there was no large scale robot sorting experiment in order to verify the effectiveness of the algorithms proposed in this paper the simulation software was developed this project simulates the physical characteristics of robot operation and fully adapts to the scheduling system with no special customization how to run the smartcar vision project relies on physical hardware so it is hard to run but the code inside can be used to learn machine vision here we mainly talk about how the large scale robot scheduling simulation works express and simulator are spring based swing projects with netty communication between them find the main function and run it first run express then run simulator both projects need a map metadata excel file and the repository has prepared a graph xls for you when you run you need to modify the file path in the comgraph file | path-planning java spring-boot dispatching-system machine-vision graduation-project | os |
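as a rough illustration of the improved a* described above (a unit time step per move plus an explicit turning cost), the sketch below compresses the idea into a tiny 4-connected grid search in python; the real planner is the java code in the express project, and every name here is mine

```python
# illustrative sketch only, not the repository's java planner: a* on a
# 4-connected grid where each move costs one time step and changing heading
# adds a turning penalty. grid values: 0 = free cell, 1 = blocked.
import heapq
import itertools

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def plan(grid, start, goal, turn_cost=2):
    rows, cols = len(grid), len(grid[0])
    manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                        # avoids comparing states in the heap
    frontier = [(manhattan(start), next(tie), 0, start, None, [start])]
    settled = {}                                   # (cell, heading) -> best g so far
    while frontier:
        _, _, g, cell, heading, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if settled.get((cell, heading), float("inf")) <= g:
            continue
        settled[(cell, heading)] = g
        for move in MOVES:
            nxt = (cell[0] + move[0], cell[1] + move[1])
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            # one time step per move, plus a penalty whenever the agv turns
            step = 1 + (turn_cost if heading is not None and move != heading else 0)
            heapq.heappush(frontier, (g + step + manhattan(nxt), next(tie),
                                      g + step, nxt, move, path + [nxt]))
    return None                                    # no route found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(plan(grid, start=(0, 0), goal=(2, 0)))
```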
Remote-Cloud-Engineering-Repo | remote cloud engineering repo welcome to my cloud engineering repo this github repo contains cloud engineering projects i have worked on | cloud |
|
CHP013-Unity-step-by-step- | chp013 unity step by step unity in embedded system design and robotics a step by step guide | os |
|
RPTQ4LLM | rptq reorder based post training quantization for large language models large scale language models llms have shown exceptional performance on various tasks however the deployment of llms is challenging due to their enormous size one of the main challenges in quantizing llms is the different ranges between the channels which affects the accuracy and compression ratio of the quantized model in our paper https arxiv org abs 2304 01089 we propose a novel reorder based quantization approach called rptq the rptq approach involves rearranging the channels in the activations and then quantizing them in clusters thereby reducing the impact of the range difference between channels by implementing the rptq approach we achieved a significant breakthrough by pushing llm models to 3 bit activation for the first time overview ims cover png update 2023 4 23 an bug in the calculation of the reorder index was identified in qkt matmul r2 this bug has been fixed and the results have been updated accordingly requirements python packages torch 2 0 0 transformers 4 28 0 omegaconf pycountry sqlitedict lm eval usage the rptq approach can be applied to opt models python main py opt 1 3b wbits 4 abits 4 eval ppl tasks lambada openai piqa arc easy arc challenge openbookqa boolq only quantize k v cache python main py opt 1 3b wbits 4 abits 4 only quant kv eval ppl tasks lambada openai piqa arc easy arc challenge openbookqa boolq to quantize larger network please use multigpu python main py opt 66b wbits 4 abits 4 only quant kv eval ppl tasks lambada openai piqa arc easy arc challenge openbookqa boolq multigpu results perplexity model opt 1 3b opt 6 7b opt 13b opt 30b opt 66b opt 175b task wiki pt c4 wiki pt c4 wiki pt c4 wiki pt c4 wiki pt c4 wiki pt c4 fp16 14 63 16 96 14 72 10 86 13 09 11 74 10 13 12 34 11 20 9 56 11 84 10 69 9 34 11 36 10 28 8 34 12 01 10 13 w4a16 14 78 17 21 14 92 11 18 13 62 12 07 10 29 12 45 11 27 9 55 11 91 10 74 9 30 11 42 10 31 8 37 12 31 10 26 w4a8 15 39 17 79 15 48 11 21 13 74 12 11 10 90 13 40 11 62 10 22 12 41 11 01 9 46 11 73 10 57 8 43 12 24 10 49 w4a4 16 88 19 23 16 55 12 00 15 17 12 85 12 74 15 76 14 71 11 15 14 11 13 48 12 23 18 87 15 93 10 60 15 59 12 28 w4a4kv 15 26 17 65 15 37 11 26 13 44 12 03 10 59 12 80 11 54 9 99 12 18 11 01 9 75 11 64 10 61 8 40 12 38 10 54 w4a3kv 17 22 19 94 16 92 11 92 14 13 12 61 11 15 13 90 12 04 11 62 14 95 11 96 10 88 14 69 11 36 9 39 13 45 11 27 w3a3kv 18 45 21 33 18 26 12 42 14 48 13 13 11 47 14 08 12 41 11 76 14 98 12 22 11 47 15 03 11 75 10 03 13 82 11 30 zero shot tasks task lambada openai piqa model 1 3b 6 7b 13b 30b 66b 1 3b 6 7b 13b 30b 66b fp16 57 98 61 84 68 60 71 41 67 14 72 47 74 53 76 87 78 01 78 12 w4a16 57 46 60 78 68 50 71 37 67 06 71 59 74 80 76 93 78 29 78 18 w4a8 52 39 67 35 62 44 64 99 67 02 69 69 75 89 75 46 76 93 77 52 w4a4 49 34 64 93 60 23 63 92 68 50 68 66 75 40 73 55 76 16 77 14 w4a4kv 52 90 67 39 62 77 64 89 69 99 69 26 76 00 74 42 76 65 76 98 w4a3kv 47 02 64 97 61 05 59 20 66 23 68 22 75 73 73 23 67 46 74 21 w3a3kv 42 84 64 11 60 02 58 33 65 28 68 22 74 64 74 10 67 51 75 13 task arc easy arc challenge model 1 3b 6 7b 13b 30b 66b 1 3b 6 7b 13b 30b 66b fp16 51 05 58 03 61 91 65 31 64 68 29 69 33 61 35 66 38 05 38 99 w4a16 51 17 57 02 61 82 65 10 64 89 30 03 32 59 35 49 37 96 38 99 w4a8 48 35 60 18 60 94 63 46 64 60 26 36 34 04 35 58 37 45 38 82 w4a4 47 55 56 90 58 41 62 12 63 76 25 85 34 30 33 95 36 17 37 20 w4a4kv 47 76 57 74 58 54 63 59 63 67 27 64 33 95 34 21 37 37 37 71 w4a3kv 46 29 56 69 56 10 48 44 59 00 26 02 
33 95 33 95 30 71 36 77 w3a3kv 44 02 55 59 53 74 50 42 57 65 26 53 32 16 32 50 30 71 34 98 task openbookqa boolq model 1 3b 6 7b 13b 30b 66b 1 3b 6 7b 13b 30b 66b fp16 33 00 38 00 39 00 40 20 41 60 57 73 67 03 65 90 70 45 70 85 w4a16 31 80 37 40 39 20 40 60 42 00 58 99 59 72 66 66 70 70 70 55 w4a8 32 40 38 00 38 60 39 40 41 80 46 88 65 93 66 57 70 64 71 07 w4a4 32 60 38 40 38 00 38 60 42 00 41 37 65 44 58 47 67 70 70 24 w4a4kv 32 60 38 40 38 00 39 80 41 60 43 33 62 11 62 47 68 22 70 79 w4a3kv 32 80 36 80 37 00 34 00 39 40 42 84 61 31 57 76 61 74 67 06 w3a3kv 28 40 35 20 37 20 32 40 38 60 46 23 60 79 65 07 63 08 67 49 citation if you use our rptq approach in your research please cite our paper misc yuan2023rptq title rptq reorder based post training quantization for large language models author zhihang yuan and lin niu and jiawei liu and wenyu liu and xinggang wang and yuzhang shang and guangyu sun and qiang wu and jiaxiang wu and bingzhe wu year 2023 eprint 2304 01089 archiveprefix arxiv primaryclass cs cl | ai |
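to make the reorder-then-cluster idea concrete, here is a toy python sketch of mine, not the repository's implementation; note that the paper clusters channels by k-means over per-channel min/max values, whereas sorting channels by dynamic range below is a deliberate simplification

```python
# toy illustration of reorder-based quantization: group channels with
# similar ranges next to each other, then quantize each cluster with its
# own scale and zero-point. all names here are mine.
import torch

def rptq_style_quant(x, n_clusters=4, n_bits=4):
    """x: [tokens, channels] activation matrix; returns (dequantized, order)."""
    cmin = x.min(dim=0).values
    cmax = x.max(dim=0).values
    order = torch.argsort(cmax - cmin)             # reorder channels by range
    xr = x[:, order]
    qmax = 2 ** n_bits - 1
    out = torch.empty_like(xr)
    for idx in torch.chunk(torch.arange(xr.shape[1]), n_clusters):
        block = xr[:, idx]
        lo, hi = block.min(), block.max()
        scale = (hi - lo).clamp(min=1e-8) / qmax   # one scale per cluster
        q = ((block - lo) / scale).round().clamp(0, qmax)
        out[:, idx] = q * scale + lo               # dequantize to measure error
    return out, order

x = torch.randn(128, 32) * torch.logspace(-1, 1, 32)  # channels with unequal ranges
xq, order = rptq_style_quant(x)
print('mean abs error:', (x[:, order] - xq).abs().mean().item())
```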
|
CMPE245-Embedded-Wireless | cmpe245 embedded wireless for embedded wireless systems design prototyping testing and verification | os |
|
ml-workshop-1-of-4 | introduction to machine learning with scikit learn part 1 of 4 other parts part 2 https github com amueller ml workshop 2 of 4 part 3 https github com amueller ml workshop 3 of 4 part 4 https github com amueller ml workshop 4 of 4 content what is machine learning and what can it do for you https amueller github io ml workshop 1 of 4 slides 01 introduction html data loading and basic api of scikit learn https amueller github io ml workshop 1 of 4 slides 02 supervised learning html fundamentals of data preprocessing scaling and categorical data https amueller github io ml workshop 1 of 4 slides 03 preprocessing html imputation dealing with missing values https amueller github io ml workshop 1 of 4 slides 04 missing values html instructor andreas mueller http amuller github io amuellerml https twitter com amuellerml columbia university book introduction to machine learning with python http shop oreilly com product 0636920030515 do this repository will contain the teaching material and other info associated with the introduction to machine learning with scikit learn course about the workshop machine learning has become an indispensable tool across many areas of research and commercial applications from text to speech for your phone to detecting the higgs boson machine learning excels at extracting knowledge from large amounts of data this talk will give a general introduction to machine learning as well as introduce practical tools for you to apply machine learning in your research we will focus on one particularly important subfield of machine learning supervised learning the goal of supervised learning is to learn a function that maps inputs x to an output y by using a collection of training data consisting of input output pairs we will walk through formulating a problem as a supervised machine learning problem creating the necessary training data and applying and evaluating a machine learning algorithm this workshop should give you all the necessary background to start using machine learning yourself prerequisites this workshop assumes familiarity with jupyter notebooks and basics of pandas matplotlib and numpy obtaining the tutorial material if you are familiar with git it is most convenient if you clone the github repository this is highly encouraged as it allows you to easily synchronize any changes to the material git clone https github com amueller ml workshop 1 of 4 git if you are not familiar with git you can download the repository as a zip file by heading over to the github repository https github com amueller ml workshop 1 of 4 in your browser and click the green download button in the upper right images download repo png please note that i may add and improve the material until shortly before the tutorial session and we recommend you to update your copy of the materials one day before the tutorials if you have an github account and forked cloned the repository via github you can sync your existing fork with via the following commands git pull origin master installation notes this tutorial will require recent installations of numpy http www numpy org scipy http www scipy org matplotlib http matplotlib org pillow https python pillow org pandas http pandas pydata org scikit learn http scikit learn org stable 0 22 1 ipython http ipython readthedocs org en stable jupyter notebook http jupyter org the last one is important you should be able to type jupyter notebook in your terminal window and see the notebook panel load in your web browser try opening and running 
a notebook from the material to check that it works for users who do not yet have these packages installed a relatively painless way to install all the requirements is to use a python distribution such as anaconda https www continuum io downloads which includes the most relevant python packages for science math engineering and data analysis anaconda can be downloaded and installed for free including commercial use and redistribution the code examples in this tutorial require python 3 5 or later after obtaining the material we strongly recommend you open and execute the jupyter notebook check env ipynb that is located at the top level of this repository inside the repository you can open the notebook by executing bash jupyter notebook check env ipynb inside this repository inside the notebook you can run the code cell by clicking on the run cells button as illustrated in the figure below images check env 1 png finally if your environment satisfies the requirements for the tutorials the executed code cell will produce an output message as shown below images check env 2 png | ai |
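as a taste of the workshop's topics (the basic fit/predict api, scaling, and imputation of missing values), here is a small self-contained example of my own; it is not taken from the course notebooks

```python
# end-to-end illustration of the listed topics: imputation + scaling in a
# pipeline around a scikit-learn estimator, using the basic fit/score api
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.RandomState(0)
X[rng.rand(*X.shape) < 0.05] = np.nan   # knock out 5% of values to simulate missing data

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(SimpleImputer(strategy='median'),
                      StandardScaler(),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print('test accuracy:', model.score(X_test, y_test))
```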
|
RTX | keil rtx cmsis api please read on https github com flxo lpckit for usage instructions and purpose | os |
|
flasky-first-edition | flasky this repository contains the archived source code examples for my o reilly book flask web development http www flaskbook com first edition for the code examples for the current edition of the book go to https github com miguelgrinberg flasky https github com miguelgrinberg flasky the commits and tags in this repository were carefully created to match the sequence in which concepts are presented in the book please read the section titled how to work with the example code in the book s preface for instructions | front_end |
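for readers who just want a smoke test before diving into the chapter-by-chapter examples, below is the canonical minimal flask app (my own illustration of the framework the book teaches, not a file copied from this repository)

```python
# the classic minimal flask application: one route, run with the built-in
# development server
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return '<h1>Hello World!</h1>'

if __name__ == '__main__':
    app.run(debug=True)
```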
|
reactide | p align center a href http reactide io img alt reactide src https i imgur com hrnmujs png width 30 a p github license https img shields io github license reactide reactide https github com reactide reactide blob master license txt prs welcome https img shields io badge prs welcome brightgreen svg https github com reactide reactide pulls reactide is the first dedicated ide for react web application development reactide is a cross platform desktop application that offers a simulator made for live reloading and quick react component prototyping react brings an integrated suite of development tools to streamline react development the days of flipping between browser ide and server are over reactide is in active development please follow this repo for contribution guidelines and our development road map p align center img alt reactide screenshot src https i imgur com a29j8fs jpg p get right to coding reactide runs an integrated node server and custom browser simulator as projects evolve the developer can continually track changes through live reloading directly in the development environment without the need for constant flipping to the browser reactide also offers integration with create react app for faster project boilerplate configuration the simulator and component tree are both functioning for all react applications state flow visualization managing state across a complex react application is the biggest pain point of developing react apps reactide offers a visual component tree that dynamically loads and changes based on components within the working directory while giving information about props and state at every component by navigating through a live representation of the architecture of a project developers can quickly identify and pinpoint the parent child relationships of even the most complex applications the component tree works out of the box by finding the entry point to your react application that you provide inside the reactide config js file integrated terminal for powerful commands and workflows the terminal is the life and blood of any ide allowing for complex manipulation of the file system node and even build tools reactide offers a compatible terminal for running commands in bin bash for unix and cmd for windows to provide powerful workflows to even seasoned developers getting started with reactide the reactide ide can be set up in two ways the first is to bundle the electron app and run it as a native desktop app the instructions are as follows 1 go to your terminal and type the following git checkout 3 0 release npm install npm run webpack production npm run electron packager 2 in your reactide folder navigate to the release builds folder and double click on reactide executable to check out reactide in developer mode follow these instructions 1 go to your terminal and type the following git checkout 3 0 release npm install npm run webpack production npm start setting up the simulator in order to take advantage of the live simulator please follow the below steps in your project directory 1 go to the reactide config js file and change the html and js entry points to the relative path of your respective files 2 in the terminal run npm run reactide server for any questions please look at the example project in the example folder for how to set up webpack and dev server contributors jin choi https github com jinihendrix mark marcelo https github com markmarcelo bita djaghouri https github com bitadj pablo lee https github com pablytolee ryan yang https github 
com ryany1819 oscar chan https github com chanoscar0 juan hart https github com juanhart1 eric pham https github com ep36 khalid umar https github com khalid050 rocky liao https github com seemsrocky | react redux devtools nodejs javascript electron node desktop-app reactide | front_end |
cheat | workflow status https github com cheat cheat actions workflows build yml badge svg cheat cheat allows you to create and view interactive cheatsheets on the command line it was designed to help remind nix system administrators of options for commands that they use frequently but not frequently enough to remember the obligatory xkcd http imgs xkcd com comics tar png the obligatory xkcd use cheat with cheatsheets example the next time you re forced to disarm a nuclear weapon without consulting google you may run sh cheat tar you will be presented with a cheatsheet resembling the following sh to extract an uncompressed archive tar xvf path to foo tar to extract a gz archive tar xzvf path to foo tgz to create a gz archive tar czvf path to foo tgz path to foo to extract a bz2 archive tar xjvf path to foo tgz to create a bz2 archive tar cjvf path to foo tgz path to foo usage to view a cheatsheet sh cheat tar a top level cheatsheet cheat foo bar a nested cheatsheet to edit a cheatsheet sh cheat e tar opens the tar cheatsheet for editing or creates it if it does not exist cheat e foo bar nested cheatsheets are accessed like this to view the configured cheatpaths sh cheat d to list all available cheatsheets sh cheat l to list all cheatsheets that are tagged with networking sh cheat l t networking to list all cheatsheets on the personal path sh cheat l p personal to search for the phrase ssh among cheatsheets sh cheat s ssh to search by regex for cheatsheets that contain an ip address sh cheat r s 0 9 1 3 3 0 9 1 3 flags may be combined in intuitive ways example to search sheets on the personal cheatpath that are tagged with networking and match a regex sh cheat p personal t networking regex s 0 9 1 3 3 0 9 1 3 installing for installation and configuration instructions see installing md cheatsheets cheatsheets are plain text files with no file extension and are named according to the command used to view them sh cheat tar file is named tar cheat foo bar file is named bar in a foo subdirectory cheatsheet text may optionally be preceeded by a yaml frontmatter header that assigns tags and specifies syntax syntax javascript tags array map to map over an array const squares 1 2 3 4 map x x x the cheat executable includes no cheatsheets but community sourced cheatsheets are available cheatsheets you will be asked if you would like to install the community sourced cheatsheets the first time you run cheat cheatpaths cheatsheets are stored on cheatpaths which are directories that contain cheatsheets cheatpaths are specified in the conf yml file it can be useful to configure cheat against multiple cheatpaths a common pattern is to store cheatsheets from multiple repositories on individual cheatpaths yaml conf yml cheatpaths name community a name for the cheatpath path documents cheat community the path s location on the filesystem tags community these tags will be applied to all sheets on the path readonly true if true cheat will not create new cheatsheets here name personal path documents cheat personal this is a separate directory and repository than above tags personal readonly false new sheets may be written here the readonly option instructs cheat not to edit or create any cheatsheets on the path this is useful to prevent merge conflicts from arising on upstream cheatsheet repositories if a user attempts to edit a cheatsheet on a read only cheatpath cheat will transparently copy that sheet to a writeable directory before opening it for editing directory scoped cheatpaths at times it can be useful 
to closely associate cheatsheets with a directory on your filesystem cheat facilitates this by searching for a cheat folder in the current working directory if found the cheat directory will temporarily be added to the cheatpaths autocompletion shell autocompletion is currently available for bash fish and zsh copy the relevant completion script completions into the appropriate directory on your filesystem to enable autocompletion this directory will vary depending on operating system and shell specifics additionally cheat supports enhanced autocompletion via integration with fzf to enable fzf integration 1 ensure that fzf is available on your path 2 set an envvar export cheat use fzf true installing md installing md releases https github com cheat cheat releases cheatsheets https github com cheat cheatsheets completions https github com cheat cheat tree master scripts fzf https github com junegunn fzf go https golang org | cheatsheets interactive-cheatsheets man-page documentation help cheat bash cheatsheet | os |
snarkOS | p align center img alt snarkos width 1412 src https cdn aleo org snarkos banner png p p align center a href https circleci com gh aleohq snarkos img src https circleci com gh aleohq snarkos svg style svg circle token 6e9ad6d39d95350544f352d34e0e5c62ef54db26 a a href https codecov io gh aleohq snarkos img src https codecov io gh aleohq snarkos branch master graph badge svg token cck8ts9hpo a a href https www aleo org discord img src https img shields io discord 700454073459015690 logo discord a a href https github com aleohq snarkos img src https img shields io badge contributors 50 ee8449 a p a name tableofcontents a table of contents 1 overview 1 overview 2 build guide 2 build guide 2 1 requirements 21 requirements 2 2 installation 22 installation 3 run an aleo node 3 run an aleo node 3a run an aleo client 3a run an aleo client 3b run an aleo prover 3b run an aleo prover 4 faqs 4 faqs 5 command line interface 5 command line interface 6 development guide 6 development guide 6 1 quick start 61 quick start 6 2 operations 62 operations 7 contributors 7 contributors 8 license 8 license comment 4 json rpc interface 40 4 json rpc interface 41 comment 5 additional information 40 5 additional information 41 1 overview snarkos is a decentralized operating system for zero knowledge applications this code forms the backbone of aleo https aleo org network which verifies transactions and stores the encrypted state applications in a publicly verifiable manner 2 build guide 2 1 requirements the following are minimum requirements to run an aleo node os 64 bit architectures only latest up to date for security clients ubuntu 22 04 lts macos ventura or later windows 11 or later provers ubuntu 22 04 lts macos ventura or later validators ubuntu 22 04 lts cpu 64 bit architectures only clients 16 cores provers 32 cores 64 cores preferred validators 32 cores 64 cores preferred ram ddr4 or better clients 16gb of memory provers 32gb of memory 64gb or larger preferred validators 64gb of memory 128gb or larger preferred storage pcie gen 3 x4 pcie gen 4 x2 nvme ssd or better clients 64gb of disk space provers 128gb of disk space validators 2tb of disk space 4tb or larger preferred network symmetric commercial always on clients 100mbps of upload and download bandwidth provers 250mbps of upload and download bandwidth validators 500mbps of upload and download bandwidth gpu clients not required at this time provers cuda enabled gpu optional validators not required at this time please note to run an aleo prover that is competitive the machine will require more than these requirements 2 2 installation before beginning please ensure your machine has rust v1 66 installed instructions to install rust can be found here https www rust lang org tools install start by cloning this github repository git clone https github com aleohq snarkos git depth 1 next move into the snarkos directory cd snarkos for ubuntu users a helper script to install dependencies is available from the snarkos directory run build ubuntu sh lastly install snarkos cargo install path please ensure ports 4133 tcp and 3033 tcp are open on your router and os firewall 3 run an aleo node 3a run an aleo client start by following the instructions in the build guide 2 build guide next to start a client node from the snarkos directory run run client sh 3b run an aleo prover start by following the instructions in the build guide 2 build guide next generate an aleo account address snarkos account new this will output a new aleo account in the terminal please 
remember to save the account private key and view key the following is an example output attention remember to store this account private key and view key private key aprivatekey1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx save me and use in the next step view key aviewkey1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx save me address aleo1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx save me next to start a proving node from the snarkos directory run run prover sh when prompted enter your aleo private key enter the aleo prover account private key aprivatekey1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 4 faqs 1 my node is unable to compile ensure your machine has rust v1 66 installed instructions to install rust can be found here https www rust lang org tools install if large errors appear during compilation try running cargo clean ensure snarkos is started using run client sh or run prover sh 2 my node is unable to connect to peers on the network ensure ports 4133 tcp and 3033 tcp are open on your router and os firewall ensure snarkos is started using run client sh or run prover sh 3 i can t generate a new address before running the command above snarkos account new try source bashrc also double check the spelling of snarkos note the directory is snarkos the command is snarkos 5 command line interface to run a node with custom settings refer to the full list of options and flags available in the snarkos cli the full list of cli flags and options can be viewed with snarkos help snarkos the aleo team hello aleo org usage snarkos options subcommand options h help print help information v verbosity verbosity specify the verbosity options 0 1 2 3 default 2 subcommands account commands to manage aleo accounts clean cleans the snarkos node storage help print this message or the help of the given subcommand s start starts the snarkos node update update snarkos the following are the options for the snarkos start command usage snarkos start options options network network id specify the network id of this node default 3 validator specify this node as a validator prover specify this node as a prover client specify this node as a client private key private key specify the node s account private key private key file private key file specify the path to a file containing the node s account private key node ip port specify the ip address and port for the node server default 0 0 0 0 4133 connect ip port specify the ip address and port of a peer to connect to rest rest specify the ip address and port for the rest server default 0 0 0 0 3033 norest if the flag is set the node will not initialize the rest server nodisplay if the flag is set the node will not render the display verbosity verbosity level specify the verbosity of the node options 0 1 2 3 default 2 logfile path specify the path to the file where logs will be stored default tmp snarkos log dev node id enables development mode specify a unique id for this node 6 development guide 6 1 quick start in the first terminal start the first validator by running cargo run release start nodisplay dev 0 validator in the second terminal start the second validator by running cargo run release start nodisplay dev 1 validator in the third terminal start the third validator by running cargo run release start nodisplay dev 2 validator in the fourth terminal start the fourth validator by running cargo run release start nodisplay dev 3 validator from here this procedure can be used to further start up provers and clients 6 2 operations it is important to initialize 
the nodes starting from 0 and incrementing by 1 for each new node the following is a list of options to initialize a node replace node id with a number starting from 0 cargo run release start nodisplay dev node id validator cargo run release start nodisplay dev node id prover cargo run release start nodisplay dev node id client cargo run release start nodisplay dev node id when no node type is specified the node will default to client 6 3 local devnet 6 3 1 install tmux to run a local devnet with the script start by installing tmux details summary macos summary to install tmux on macos you can use the homebrew package manager if you haven t installed homebrew yet you can find instructions at their website https brew sh bash once homebrew is installed run brew install tmux details details summary ubuntu summary on ubuntu and other debian based systems you can use the apt package manager bash sudo apt update sudo apt install tmux details details summary windows summary there are a couple of ways to use tmux on windows using windows subsystem for linux wsl 1 first install windows subsystem for linux https docs microsoft com en us windows wsl install 2 once wsl is set up and you have a linux distribution installed e g ubuntu open your wsl terminal and install tmux as you would on a native linux system bash sudo apt update sudo apt install tmux details 6 3 2 start a local devnet to start a local devnet run devnet sh follow the instructions in the terminal to start the devnet 6 3 3 view a local devnet switch nodes forward to toggle to the next node in a local devnet run ctrl b n switch nodes backwards to toggle to the previous node in a local devnet run ctrl b p select a node choose tree to select a node in a local devnet run ctrl b w select a node manually to select a node manually in a local devnet run ctrl b select window t node id 6 3 4 stop a local devnet to stop a local devnet run ctrl b kill session then press enter clean up to clean up the node storage run cargo run release clean dev node id 7 contributors thank you for helping make snarkos better what do the emojis mean https allcontributors org docs en emoji key all contributors list start do not remove or modify this section prettier ignore start markdownlint disable table tbody tr td align center valign top width 14 28 a href https github com howardwu img src https avatars githubusercontent com u 9260812 v 4 s 100 width 100px alt howard wu br sub b howard wu b sub a br a href https github com aleohq snarkos commits author howardwu title code a a href maintenance howardwu title maintenance a a href ideas howardwu title ideas planning feedback a a href https github com aleohq snarkos pulls q is 3apr reviewed by 3ahowardwu title reviewed pull requests a td td align center valign top width 14 28 a href https github com raychu86 img src https avatars githubusercontent com u 14917648 v 4 s 100 width 100px alt raymond chu br sub b raymond chu b sub a br a href https github com aleohq snarkos commits author raychu86 title code a a href maintenance raychu86 title maintenance a a href ideas raychu86 title ideas planning feedback a a href https github com aleohq snarkos pulls q is 3apr reviewed by 3araychu86 title reviewed pull requests a td td align center valign top width 14 28 a href https github com ljedrz img src https avatars githubusercontent com u 3750347 v 4 s 100 width 100px alt ljedrz br sub b ljedrz b sub a br a href https github com aleohq snarkos commits author ljedrz title code a a href maintenance ljedrz title maintenance a a 
href ideas ljedrz title ideas planning feedback a a href https github com aleohq snarkos pulls q is 3apr reviewed by 3aljedrz title reviewed pull requests a td td align center valign top width 14 28 a href https github com niklaslong img src https avatars githubusercontent com u 13221615 v 4 s 100 width 100px alt niklas long br sub b niklas long b sub a br a href https github com aleohq snarkos commits author niklaslong title code a a href maintenance niklaslong title maintenance a a href ideas niklaslong title ideas planning feedback a a href https github com aleohq snarkos pulls q is 3apr reviewed by 3aniklaslong title reviewed pull requests a td td align center valign top width 14 28 a href https github com collinc97 img src https avatars githubusercontent com u 16715212 v 4 s 100 width 100px alt collin chin br sub b collin chin b sub a br a href https github com aleohq snarkos commits author collinc97 title code a a href https github com aleohq snarkos commits author collinc97 title documentation a a href https github com aleohq snarkos pulls q is 3apr reviewed by 3acollinc97 title reviewed pull requests a td td align center valign top width 14 28 a href https github com iamalwaysuncomfortable img src https avatars githubusercontent com u 26438809 v 4 s 100 width 100px alt mike turner br sub b mike turner b sub a br a href https github com aleohq snarkos commits author iamalwaysuncomfortable title code a a href https github com aleohq snarkos commits author iamalwaysuncomfortable title documentation a a href https github com aleohq snarkos pulls q is 3apr reviewed by 3aiamalwaysuncomfortable title reviewed pull requests a td td align center valign top width 14 28 a href https gakonst com img src https avatars githubusercontent com u 17802178 v 4 s 100 width 100px alt georgios konstantopoulos br sub b georgios konstantopoulos b sub a br a href https github com aleohq snarkos commits author gakonst title code a td tr tr td align center valign top width 14 28 a href https github com kobigurk img src https avatars githubusercontent com u 3520024 v 4 s 100 width 100px alt kobi gurkan br sub b kobi gurkan b sub a br a href https github com aleohq snarkos commits author kobigurk title code a td td align center valign top width 14 28 a href https github com jules img src https avatars githubusercontent com u 30194392 v 4 s 100 width 100px alt jules br sub b jules b sub a br a href https github com aleohq snarkos commits author jules title code a td td align center valign top width 14 28 a href https github com protryon img src https avatars githubusercontent com u 8600837 v 4 s 100 width 100px alt max bruce br sub b max bruce b sub a br a href https github com aleohq snarkos commits author protryon title code a td td align center valign top width 14 28 a href https github com daniilr img src https avatars githubusercontent com u 1212355 v 4 s 100 width 100px alt daniil br sub b daniil b sub a br a href https github com aleohq snarkos commits author daniilr title code a td td align center valign top width 14 28 a href https github com akattis img src https avatars githubusercontent com u 4978114 v 4 s 100 width 100px alt akattis br sub b akattis b sub a br a href https github com aleohq snarkos commits author akattis title code a td td align center valign top width 14 28 a href https github com wcannon img src https avatars githubusercontent com u 910589 v 4 s 100 width 100px alt william cannon br sub b william cannon b sub a br a href https github com aleohq snarkos commits author wcannon 
[contributor avatar table condensed to names] sam de roeck, wcannon aleo, soft2dev, ali mousa, pyk, belsy, apruden2008, fabiano prestes, haruka, e4m7he6g, gregório granado magalhães, evgeny garanin, macro hoober, code pangolin, kaola526, clarenous, kostyan, austin abell, youssef el housni, ghostant 1017, miguel gargallo, chines wang, ayush goswami, tim o2stake, liu sen, palamar, swift mx, caesar wang, paul ip, philip glazman, ruslan nigmatulin, françois garillot, aolcr, maciej zwoliński, nacho avecilla, dependabot (add your contributions: https all contributors js org docs en bot usage). this project follows the all contributors https github com all contributors all contributors specification, contributions of any kind welcome. 8 license: we welcome all contributions to snarkos, please refer to the license (7 license) for the terms of contributions. license gpl v3 license md | aleo blockchain cryptography zero-knowledge zksnarks rust | blockchain |
Healing-Land | healing land. group members (member name / student id / github profile): wilfred p p s / it20601256 / poornasankalana; wanni arachchige h s / it20606060 / heshani9918; pallewatta u d p / it20620820 / udul d; jayakody j a m g / it20150648 / malsha2000 | front_end |
|
job-hunt-interview-questions-2020 | div align center img height 60 src https raw githubusercontent com devabhijeet devabhijeet master assets javascript svg h1 interview questions for front end h1 div span this readme is a compilation of all the question asked during my recent covid 19 job hunt i ve also attached a list of resources that i d referred for the preparations br br the questions are divided into following sections ul align left li js li li coding li li assignments li li miscellaneous li ul span js 1 given a multidimensional array with depth of n flatten it once flattened make it available as a method on array instance details summary b answer b summary p javascript 1 2 3 4 1 2 3 4 let arr 1 2 3 4 5 6 7 8 9 10 function flatten arr return arr reduce function acc next let isarray array isarray next return acc concat isarray flatten next next if array prototype flatten array prototype flatten function return flatten this console log arr flatten p details 2 create a promise from scratch details summary b answer b summary p javascript class custompromise state pending value undefined thencallbacks errorcallbacks constructor action action this resolver bind this this reject bind this resolver value this state resolved this value value this thencallbacks foreach callback callback this value reject value this state rejected this value value this errorcallbacks foreach callback callback this value then callback this thencallbacks push callback return this catch callback this errorcallbacks push callback return this let promise new custompromise resolver reject settimeout const rand math ceil math random 1 1 6 6 if rand 2 resolver success else reject error 1000 promise then function response console log response catch function error console log error p details 3 filter movie list by average rating name sort filtered list by any field inside movie object details summary b answer b summary p javascript o m function getmovies return id name year o r function getratings return id movie id rating 0 rating 10 e g 9 3 minavgrating avgrating minavgrating sort name ascending order movies by name name descending avgrating search ave avengers avengers avengers avengersinfinitywar avengers const tolower str str tolocalelowercase const getavrgrating movie movingwithratings let count 0 return movingwithratings reduce acc value index const moviematch movie id value movie id if moviematch acc value rating count if index movingwithratings length 1 acc acc count return acc 0 const issubstring str1 str2 str1 tolower str1 split join str2 tolower str2 split join if str1 length str2 length return str1 startwith str2 else return str2 startwith str1 const movieslist getmovies const movingwithratings getratings function querymovies search sort minavgrating let filteredmovies movingwithratings filter movie getavrgrating movie movingwithratings minavgrating filteredmovies filteredmovies map movie movieslist filter listitem listitem id movie movie id pop filteredmovies filteredmovies filter movie issubstring tolower movie name tolower search filteredmovies filteredmovies sort a b const isdescending sort 0 true false let sortcopy isdescending sort slice 1 sort const value1 a sortcopy const value2 b sortcopy if isdescending return value1 value2 1 1 else return value1 value2 1 1 filteredmovies filteredmovies map movie movie avgrating movingwithratings filter ratedmovie ratedmovie movie id movie id 0 rating return filteredmovies p details 4 given an end point url to fetch all the posts and comments do the following map 
all the comments to the posts it belongs to the resultant data after mapping should be of below structure details summary b answer b summary p answer https github com devabhijeet cure fit interview challenge tree master javascript service js const posts url https jsonplaceholder typicode com posts const comments url https jsonplaceholder typicode com comments export const fetchallposts return fetch posts url then res res json export const fetchallcomments return fetch comments url then res res json import fetchallposts fetchallcomments from service const fetchdata async const posts comments await promise all fetchallposts fetchallcomments const graballcommentsforpost postid comments filter comment comment postid postid const mappedpostwithcomment posts reduce acc post const allcomments graballcommentsforpost post id acc post id allcomments return acc console log mappedpostwithcomment mappedpostwithcomment fetchdata p details 5 implement a method gethashcode on string instance the method should be available on all strings details summary b answer b summary p javascript let s1 sample if string prototype gethashcode string prototype gethashcode function console log string instance this return this p details 6 what does the below expressions evaluate to javascript 1 true true true 1 true 2 3 two three details summary b answer b summary p javascript 2 2 1true false true p details 7 implement bind and reduce details summary b answer b summary p javascript bind if function prototype bind function prototype bind function arg const func this const context arg 0 const params arg slice 1 return function innerparam func apply context params innerparam reduce array prototype reduce function func initstate const arr this const callback func let init initstate arr foreach function value index init callback init value return init p details 8 implement debounce function details summary b answer b summary p javascript const debounce function func interval let timerid return function e clearinterval timerid timer settimeout function func apply interval debounce apicall 3000 p details 9 implement throtlling function details summary b answer b summary p javascript const throttle callback interval let timerid let allowevents true return function let context this let args arguments if allowevents callback apply context args allowevents false timerid settimeout function allowevents true interval p details 10 design api polling mechanism the api is called after a fixed interval the api is a stock api that fetches the latest price of stock upon fetching the results render the ui the question demands the design aspect of the solution and not the code it was open ended question details summary b answer b summary p javascript with setinterval throttling and flags setinterval endpoint render with the inversion of control endpoint render settimeout endpoint render settimeout p details 11 convert class based inheritance code given below to es5 code javascript class parent name constructor name this name name getname return this name class children extends parent constructor props super props details summary b answer b summary p javascript function parent name this name name parent prototype getname function return this name function children name parent call this name children prototype new parent p details 12 what does following code evaluates to javascript q 1 var x 1 var y x x 0 console log x y q 2 var x 1 var y x x console log x y q 3 function abc console log this abc new abc q 4 var x 1 var obj x 2 getx function 
return console log this x obj getx let a obj getx console log a q 5 how to get the a to log 2 in the above code q 6 console log a settimeout console log b 0 settimeout console log c 0 console log d q 7 settimeout function console log a 0 promise resolve then function console log b then function console log c console log d q 8 let obj1 a 1 b 2 function mutate obj obj a 4 c 6 console log obj1 mutate obj1 console log obj1 details summary b answer b summary p javascript a 1 0 1 a 2 1 a 3 window object is logged a 4 logs 2 and 1 a 5 a call obj a 6 a d b c a 7 d b c a a 8 a 1 b 2 a 1 b 2 p details 13 given an array of numbers implement the following javascript const list 1 2 3 4 5 6 7 8 const filteredarray list filter between 3 6 4 5 details summary b answer b summary p javascript function between start end return function value index return value start value end p details algorithms 1 consider the following series javascript a 1 b a 2 2 c b 2 3 and so on write a program that outputs the number corresponding to a given letter given a string of letters like grep computes the sum of the numbers corresponding to all the letters in the string i e g r e p as given by the above series and given a large number that would fit into a standard 32 bit integer finds the shortest string of letters corresponding to it you may use a greedy approach for the last part compute the values of the numbers corresponding to letters as and when required and do not pre compute beforehand and store them in a data structure details summary b answer b summary p javascript a 1 b a 2 2 c b 2 3 d c 2 3 var genchararray function chara charz var a i chara charcodeat 0 j charz charcodeat 0 for i j i a push string fromcharcode i return a var charmap var chararray genchararray a z chararray foreach function char index charmap char number index 1 var charsequence function char if typeof char string char charmap char if char 1 return 1 else return char 2 charsequence char 1 var input process argv 2 if input length 1 console log charsequence charmap input else if input length 1 var chartotalsequence input split reduce function acc curr return acc charsequence charmap curr 0 console log chartotalsequence p details 2 given an array find a pair such that it sums to a given number details summary b answer b summary p javascript let nums 2 7 10 1 11 15 9 let target 11 let numsmap new map let pairs nums reduce acc num let numtofind target num if numsmap get numtofind return acc num numtofind else numsmap set num true return acc console log pairs pairs p details 3 find the local maxima in a given array a local maxima is a element that is greater than it s left and right neighbours i provided a o n solution which was quite straight forward before going for optimisation details summary b answer b summary p javascript let x 1 2 3 5 4 outputs 5 if x length 1 return x 0 else let i 1 for i x length 1 i if x i 1 x i and x i x i 1 return x i if x length 1 i return x i p details 4 rotate a matrix clockwise by 90 degree the solution should be in place leetcode https leetcode com problems rotate image details summary b answer b summary p javascript 1 2 3 4 5 6 7 8 9 the solution is to first take the transpose of the matrix after taking the transpose the resulting matrix is as follows 1 4 7 2 5 8 3 6 9 after the transpose step all we have to do is to reverse the array each entry the resulting matrix after after reversal is as follows 7 4 1 8 5 2 9 6 3 the above matrix is rotated 90 degree p details 5 maximum subarray sum modulo m details summary b 
answer b summary p answer https www geeksforgeeks org maximum subarray sum modulo m p details 6 given an array find three element in array that sum to a given target details summary b answer b summary p javascript let x 1 2 3 4 5 let target 7 let found const twopointer l r current while l r const totalsum current x l x r if totalsum target found push current x l x r return else if totalsum target r else l const threesum x target for let i 0 i x length i const current x i let leftpointer i 1 let rightpointer x length 1 if current x leftpointer x rightpointer target found push current x leftpointer x rightpointer else twopointer leftpointer rightpointer current return found p details 7 given a string and an integer k find number of substrings in which all the different characters occurs exactly k times link https www geeksforgeeks org number substrings count character k details summary b answer b summary p javascript const substrhassamecharcount str startindex endindex totalhop let charmap for let k startindex k endindex k let currentchar str k if charmap currentchar charmap currentchar else charmap currentchar 1 let totalcount object values charmap length 0 return totalcount object values charmap every item item totalhop false const characterwithcountk str k if k 0 return let count 0 let initialhop k while initialhop str length for let j 0 j str length j let startindex j let endindex j initialhop if endindex str length continue count substrhassamecharcount str startindex endindex k count 1 count initialhop k count substrhassamecharcount str 0 initialhop k count 1 count return count let str aabbcc let k 2 console log characterwithcountk str k p details 8 given two input strings s1 and s2 containing characters from a z in different orders find if rearranging string in s1 results in a string that is equal to s2 details summary b answer b summary p javascript let s1 dadbcbc let s2 ccbbdad let charmap const canberearranged s1 s2 if s1 length s2 length return false for let i 0 i s1 length i const charfromstring1 s1 i const charfromstring2 s2 i if charfromstring1 in charmap charmap charfromstring1 else charmap charfromstring1 1 if charfromstring2 in charmap charmap charfromstring2 else charmap charfromstring2 1 for let x in charmap if charmap x 0 return false return true canberearranged s1 s2 p details 9 given an array or variable input size write a function to shuffle the array details summary b answer b summary p javascript const swap index1 index2 arr let temp arr index1 arr index1 arr index2 arr index2 temp const shuffle arr let totallength arr length while totallength 0 let random math floor math random totallength totallength swap totallength random arr return arr let arr 1 2 3 4 5 6 7 8 9 10 arr shuffle arr p details 10 calculate the sum of all elements in a multidimensional array of infinite depth details summary b answer b summary p javascript let arr 4 5 7 8 5 7 9 3 5 7 let sum 0 const calculatesum arr arr reduce function acc currentval const isentryarray array isarray currentval if isentryarray return acc concat calculatesum currentval else sum currentval return acc concat currentval calculatesum arr console log sum p details 11 flatten a nested object of varying debt details summary b answer b summary p javascript const obj level1 level2 level3 more stuff other otherz level4 the end level2still last one am bored more what ipsum lorem latin var removenesting function obj parent for let key in obj if typeof obj key object removenesting obj key parent key else flattenedobj parent key obj 
key let flattenedobj const sample removenesting obj console log flattenedobj p details 12 given a json input where each entry represents a directory such that each directory in turn can have a nested entry of it s own create the resulting directory structure details summary b answer b summary p answer https jsbin com gajiweq 1 edit js console p details 13 given an array of object containing list of employee data such that each employee has list of reportee use this information to construct a hirerachy of employees details summary b answer b summary p javascript const employeesdata id 2 name abhishek cto reportees 6 id 3 name abhiram coo reportees id 6 name abhimanyu engineering manager reportees 9 id 9 name abhinav senior engineer reportees id 10 name abhijeet ceo reportees 2 3 a ceo b cto d engineering manager e senior software engineer c coo const findceo currentemp let parentemployee employeesdata filter emp emp reportees indexof currentemp id 1 if parentemployee parentemployee length 0 return findceo parentemployee 0 else return currentemp const loghierarchy currentemp indent console log repeat indent currentemp name indent 4 for let i 0 i currentemp reportees length i let employee employeesdata filter emp emp id currentemp reportees i loghierarchy employee 0 indent const traverse employee let ceo findceo employee loghierarchy ceo 0 traverse employeesdata 0 p details 14 print a given matrix in spiral form javascript const inputmatrix 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 const exprectoutput 1 2 3 4 5 10 15 20 19 18 17 16 11 6 7 8 9 14 13 12 details summary b answer b summary p javascript function spiralparser inputmatrix const output let rows inputmatrix length let cols rows 0 inputmatrix 0 length 0 singleemptyrow edge case 1 if rows 0 return if rows 1 singleelementrownocol edge case 2 if cols 0 return else if cols 1 singleelementrow edge case 3 1 output push inputmatrix 0 0 return output let top 0 let bottom rows 1 let left 0 let right cols 1 let direction 0 0 left right 1 top bottom 2 right left 3 bottom top while left right top bottom if direction 0 left right for let i left i right i output push inputmatrix top i top else if direction 1 top bottom for let i top i bottom i output push inputmatrix i right right else if direction 2 right left for let i right i left i output push inputmatrix bottom i bottom else if direction 3 bottom top for let i bottom i top i output push inputmatrix i left left direction direction 1 4 return output console log spiralparser inputmatrix2 p details 15 find maximum consecutive repeating char in a give string javascript let str bbbaaaaccadd max repeating char is a with count 4 details summary b answer b summary p javascript sudo code maxnow if input string length is 1 or greater than 1 1 0 maxoverall if input string length is 1 or greater than 1 1 0 for char in inputstring starting from index 1 if char equals prevchar maxnow maxoverall max maxoverall maxnow else if char not equals prevchar maxnow 1 p details 16 given a input array of varying length segregate all the 2 s at the end of the array javascript let inputarr 2 9 1 5 2 3 1 2 7 4 3 8 29 2 4 6 54 32 2 100 ouput 9 1 5 3 1 7 4 3 8 29 4 6 54 32 100 2 2 2 2 2 details summary b answer b summary p javascript let slowrunner 0 for let fastrunner 0 fastrunner arr length fastrunner if arr fastrunner 2 arr slow 2 arr fastrunner arr slow arr slow arr fastrunner slowrunner p details 17 reverse a linked list javascript input 1 2 3 4 5 6 output 1 2 3 4 5 6 details summary b answer b summary p 
javascript sudo code let current head let prev null let next null while current next current next current next prev prev current current next p details 18 preorder tree traversal using iteration no recurssion details summary b answer b summary p javascript sudo code const preorder root let stack stack push root while there is element in stack let current stack pop console log current value if current right stack push current right if current left stack push current left p details assignments 1 design a parking lot system with following requirements it can hold up to n vehicles handle availability of parking lot entry and exit log of vehicles automated ticketing system for every vehicle entering exiting the parking lot will have vehicle registration with vehicle details like registration no color allocated parking slot i should be able to query registration no of all vehicles of a particular color parking slot of a vehicle given registration no parking slots for vehicles given a color list of available slots in the parking lot requirements can use anything to structure the code classes structs your solution should be extendable for future use cases few code design principles modularity of code naming conventions solid principles details summary b answer b summary p answer https github com devabhijeet parking lot design js p details 2 create a react component ping that makes an api call to a given url if the api calls returns status code 200 that means the user is online however if the api call receives status code other than 200 it means the user is offline try changing status form dev tools network panel details summary b answer b summary p answer https codesandbox io s admiring davinci xnjef p details 3 create a dynamic form builder from a json input the form can be grouped based on id each group can have a nested group of it s own details summary b answer b summary p answer https codesandbox io s great noyce 75kup p details 4 create a minimal excel sheet in pure javascript that supports adding and removing rows columns there was time limit on this question of 40 minutes details summary b answer b summary p answer https codesandbox io s smoosh thunder krv8m p details 5 you have to make a search input box which will search over a list of users the user object has the following fields javascript id a unique id name user s name items list of items ordered by user address address of the user pincode user address pin code you have to implement search on all of the fields the search results will show up as a list of user cards to summarize on typing in the search input box the search results list opens up the search could be just a string matching search the list of cards can be navigated through keyboard or mouse only one card should highlight at a time if both mouse and keyboard are used for navigation keyboard will take preference if mouse is kept hovered on the list similarly mouse will take preference if keyboard navigation is not used this behaviour is similar to how youtube search works when no search results are found an empty card is displayed the card list would be scrollable the highlighted card via keyboard mouse will scroll into view details summary b answer b summary p answer https codesandbox io s silly moon 31m7u p details miscellaneous 1 how would you architect a front end application click https dev to vycke how to create a scalable and maintainable front end architecture 4f47 2 implement lazy loading click https css tricks com the complete guide to lazy loading images 3 what 
is server side rendering 4 how to deploy a react app to production 5 what are service worker web worker 6 how to optimise a web app and make it more performant 7 explain different type of client side cache strategies 8 what is cors 9 what are higher order component in react 10 how does connect function work in redux 11 what are pure components in react 12 difference between proto and prototype 13 difference between inline vs inline block vs block 14 difference between flex and grid layout 15 different positioning system in css 16 specificity and selectors priority in css 17 difference between display none vs visibility hidden vs opacity resources link https github com devabhijeet learning resources | front_end |
|
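For the API-polling design question above, the source only sketches the idea ("setTimeout, endpoint, render, setTimeout"). A minimal runnable version of that recursive-setTimeout approach might look like the sketch below; the endpoint URL and render function are hypothetical placeholders, not part of the original answer.

```javascript
// Minimal sketch of the recursive-setTimeout polling design hinted at in the
// answer above; STOCK_URL and render() are hypothetical placeholders.
const STOCK_URL = 'https://example.com/api/stock-price';

function render(price) {
  console.log('latest price:', price); // stand-in for a real UI update
}

function poll(interval) {
  fetch(STOCK_URL)
    .then((res) => res.json())
    .then((data) => render(data.price))
    .catch((err) => console.error('poll failed:', err))
    .finally(() => {
      // Schedule the next request only after the current one settles, so a
      // slow response can never overlap the next request (unlike setInterval).
      setTimeout(() => poll(interval), interval);
    });
}

poll(3000); // refresh the stock price every 3 seconds
```

The advantage over a naive setInterval is that the polling cadence adapts to the response time, which is the inversion-of-control point the original answer gestures at.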
text-summerization-using-fuzzy-logic | text summarization using fuzzy logic. this is an implementation of text summarization for a txt file using fuzzy logic. language and packages: python 3.6, skfuzzy (from scikit-fuzzy), nltk. preprocessing: lemmatization, stopword removal, sentence tokenizing. fuzzifier: gets features from the extracted sentences. feature extraction: title word, sentence location, sentence length, thematic word, term weight, sentence similarity, proper noun, numerical data. membership function: triangular membership function (auto membership function). inference engine: rule based. defuzzifier: converts the fuzzified values to crisp values for sentence selection. sentence selection: compression rate is 20%, i.e., the top 20% of sentences with the highest scores are selected from all the sentences in the corpus | ai |
|
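To make the triangular membership function named in the readme above concrete, here is a tiny language-neutral sketch. The repository itself uses Python's skfuzzy; the feature value and breakpoints below are made up for illustration.

```javascript
// Triangular membership function over breakpoints a < b < c, as used in
// fuzzy-logic summarizers; returns the degree of membership of x in [0, 1].
function trimf(x, [a, b, c]) {
  if (x <= a || x >= c) return 0;    // outside the triangle
  if (x === b) return 1;             // peak
  return x < b ? (x - a) / (b - a)   // rising edge
               : (c - x) / (c - b);  // falling edge
}

// e.g. grade a normalized "sentence location" feature as "high"
console.log(trimf(0.8, [0.5, 1.0, 1.5])); // 0.6
```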
spaceone-design-system | h1 align center spaceone design system h1 br div align center style display flex flex direction column div img width 245 src https user images githubusercontent com 35549653 76694897 de236300 66bb 11ea 9ace b9edde9c12da png div br div a href https www apache org licenses license 2 0 target blank img alt license apache 2 0 src https img shields io badge license apache 2 0 yellow svg a a href http storybook developer spaceone dev target blank img alt spaceone storybook src https img shields io badge design system spaceone blueviolet svg logo storybook a div div br https sourcerer io fame wanzargen spaceone dev spaceone design system images 0 https sourcerer io fame wanzargen spaceone dev spaceone design system links 0 https sourcerer io fame wanzargen spaceone dev spaceone design system images 1 https sourcerer io fame wanzargen spaceone dev spaceone design system links 1 https sourcerer io fame wanzargen spaceone dev spaceone design system images 2 https sourcerer io fame wanzargen spaceone dev spaceone design system links 2 https sourcerer io fame wanzargen spaceone dev spaceone design system images 3 https sourcerer io fame wanzargen spaceone dev spaceone design system links 3 https sourcerer io fame wanzargen spaceone dev spaceone design system images 4 https sourcerer io fame wanzargen spaceone dev spaceone design system links 4 https sourcerer io fame wanzargen spaceone dev spaceone design system images 5 https sourcerer io fame wanzargen spaceone dev spaceone design system links 5 https sourcerer io fame wanzargen spaceone dev spaceone design system images 6 https sourcerer io fame wanzargen spaceone dev spaceone design system links 6 https sourcerer io fame wanzargen spaceone dev spaceone design system images 7 https sourcerer io fame wanzargen spaceone dev spaceone design system links 7 br spaceone design system spaceone storybook http storybook developer spaceone dev br br author see our owners https github com spaceone dev spaceone design system blob master authors file br br license this project is apache 2 0 https www apache org licenses license 2 0 licensed br br chart license spaceone design system internally uses amcharts for dynamic chart br before using the design system look carefully at amcharts license br if you want to purchase the amcharts license that suits you and use it on your application see the license faq https www amcharts com online store licenses explained how to use 1 install shell npm install spaceone design system vue vue composition api vue router vue i18n vue fragment amcharts 2 set plugin add following lines to main js file br javascript import spacedesignsystem from spaceone design system vue use spacedesignsystem pluginoptions plugin options option description installvuerouter whether to install vue router some components use vue router so don t give this option if you have already installed it in your application installvuei18n whether to install vue i18n some components use vue i18n so don t give this option if you have already installed it in your application installvuecompositionapi whether to install the vue composition api all components use the vue composition api so don t give this option if you have already installed it in your application installfragment whether to install vue fragment some components use vue fragment so don t give this option if you have already installed it in your application amchartslicenses if you use the amcharts library such as dynamic chart license the amcharts as a string array typescript 
interface spaceonedsoptions installvuerouter boolean installvuei18n boolean installvuecompositionapi boolean installfragment boolean amchartslicenses string 3 set components locally example javascript import pbuttontab pdynamiclayout from spaceone design system export default components pbuttontab pdynamiclayout br how to apply styles spaceone design system is based on tailwindcss br global styles case 1 all styles if your project doesn't use tailwindcss add the code below to main ts javascript import spaceone design system dist css style css case 2 without tailwindcss styles if your project uses tailwindcss you don't need to import all styles br in that case add the code below to your tailwind config js javascript const spaceonetailwind require spaceone design system tailwind config js module exports theme spaceonetailwind theme your customized theme variants spaceonetailwind variants your customized variants plugins spaceonetailwind plugins your customized plugins you also need to add the code below to your main js javascript import spaceone design system dist css light style css | os |
|
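Pieced together from the flattened snippets in the readme above, the plugin registration in main.js would look roughly like this under Vue 2. The option names come from the readme's own options table; the amCharts license key is a hypothetical placeholder.

```javascript
// Rough reconstruction of the main.js setup the readme describes (Vue 2).
import Vue from 'vue';
import SpaceDesignSystem from 'spaceone-design-system';
import 'spaceone-design-system/dist/css/style.css'; // case 1: all styles

Vue.use(SpaceDesignSystem, {
  // Skip any install* flag whose library your app already installs itself.
  installVueRouter: true,
  installVueI18n: true,
  installVueCompositionApi: true,
  installFragment: true,
  amchartsLicenses: ['CH00000000000'], // hypothetical amCharts license key
});
```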
Natural-Language-Processing-Projects | apress source code this repository accompanies natural language processing projects https link springer com book 10 1007 978 1 4842 7386 9 by akshay kulkarni adarsha shivananda and anoosh kulkarni apress 2022 download the files as a zip using the green button or clone the repository to your machine using git releases release v1 0 corresponds to the code in the published book without corrections or updates contributions see the file contributing md for more information on how you can contribute to this repository | ai |
|
hardware-aware-transformers | hat hardware aware transformers for efficient natural language processing paper https arxiv org abs 2005 14187 website https hat mit edu video https youtu be n th1jibqcw inproceedings hanruiwang2020hat title hat hardware aware transformers for efficient natural language processing author wang hanrui and wu zhanghao and liu zhijian and cai han and zhu ligeng and gan chuang and han song booktitle annual conference of the association for computational linguistics year 2020 overview we release the pytorch code and 50 pre trained models for hat hardware aware transformers within a transformer supernet supertransformer we efficiently search for a specialized fast model subtransformer for each hardware with latency feedback the search cost is reduced by over 10000 teaser https hanruiwang me project pages hat assets teaser jpg hat framework overview overview https hanruiwang me project pages hat assets overview jpg hat models achieve up to 3 speedup and 3 7 smaller model size with no performance loss results https hanruiwang me project pages hat assets results jpg usage installation to install from source and develop locally bash git clone https github com mit han lab hardware aware transformers git cd hardware aware transformers pip install editable data preparation task task name train valid test wmt 14 en de wmt14 en de wmt 16 https drive google com uc export download id 0b bzck ksdkpm25jrun2x2uxmm8 newstest2013 newstest2014 wmt 14 en fr wmt14 en fr wmt 14 http statmt org wmt14 translation task html download newstest2012 2013 newstest2014 wmt 19 en de wmt19 en de wmt 19 http www statmt org wmt19 translation task html download newstest2017 newstest2018 iwslt 14 de en iwslt14 de en iwslt 14 train set https wit3 fbk eu archive 2014 01 texts de en de en tgz iwslt 14 valid set iwslt14 ted dev2010 br iwslt14 tedx dev2012 br iwslt14 ted tst2010 br iwslt14 ted tst2011 br iwslt14 ted tst2012 to download and preprocess data run bash bash configs task name preprocess sh if you find preprocessing time consuming you can directly download the preprocessed data we provide bash bash configs task name get preprocessed sh testing we provide pre trained models subtransformers on the machine translation tasks for evaluations the params and flops do not count in the embedding lookup table and the last output layers because they are dependent on tasks task hardware latency params br m flops br g bleu sacre br bleu model name link wmt 14 en de raspberry pi arm cortex a72 cpu 3 5s br 4 0s br 4 5s br 5 0s br 6 0s br 6 9s 25 22 br 29 42 br 35 72 br 36 77 br 44 13 br 48 33 1 53 br 1 78 br 2 19 br 2 26 br 2 70 br 3 02 25 8 br 26 9 br 27 6 br 27 8 br 28 2 br 28 4 25 6 br 26 6 br 27 1 br 27 2 br 27 6 br 27 8 hat wmt14ende raspberrypi 3 5s bleu 25 8 br hat wmt14ende raspberrypi 4 0s bleu 26 9 br hat wmt14ende raspberrypi 4 5s bleu 27 6 br hat wmt14ende raspberrypi 5 0s bleu 27 8 br hat wmt14ende raspberrypi 6 0s bleu 28 2 br hat wmt14ende raspberrypi 6 9s bleu 28 4 link https www dropbox com s pmfwwg1d1kmfdh5 hat wmt14ende raspberrypi 3 5s bleu 25 8 pt dl 0 br link https www dropbox com s ko0i65k1664p74u hat wmt14ende raspberrypi 4 0s bleu 26 9 pt dl 0 br link https www dropbox com s f4y6u9cbcdykeha hat wmt14ende raspberrypi 4 5s bleu 27 6 pt dl 0 br link https www dropbox com s av5vycafxo57x6w hat wmt14ende raspberrypi 5 0s bleu 27 8 pt dl 0 br link https www dropbox com s ywedqumq91a4ekn hat wmt14ende raspberrypi 6 0s bleu 28 2 pt dl 0 br link https www dropbox com s x7iucaotbeald3q hat 
wmt14ende raspberrypi 6 9s bleu 28 4 pt dl 0 wmt 14 en de intel xeon e5 2640 cpu 137 9ms br 204 2ms br 278 7ms br 340 2ms br 369 6ms br 450 9ms 30 47 br 35 72 br 40 97 br 46 23 br 51 48 br 56 73 1 87 br 2 19 br 2 54 br 2 86 br 3 21 br 3 53 25 8 br 27 6 br 27 9 br 28 1 br 28 2 br 28 5 25 6 br 27 1 br 27 3 br 27 5 br 27 6 br 27 9 hat wmt14ende xeon 137 9ms bleu 25 8 br hat wmt14ende xeon 204 2ms bleu 27 6 br hat wmt14ende xeon 278 7ms bleu 27 9 br hat wmt14ende xeon 340 2ms bleu 28 1 br hat wmt14ende xeon 369 6ms bleu 28 2 br hat wmt14ende xeon 450 9ms bleu 28 5 link https www dropbox com s bvq3y6igoyxe1t5 hat wmt14ende xeon 137 9ms bleu 25 8 pt dl 0 br link https www dropbox com s yg12xz504uw2g1s hat wmt14ende xeon 204 2ms bleu 27 6 pt dl 0 br link https www dropbox com s l5ljas8zyg9ik65 hat wmt14ende xeon 278 7ms bleu 27 9 pt dl 0 br link https www dropbox com s fkp61h8jbyt524i hat wmt14ende xeon 340 2ms bleu 28 1 pt dl 0 br link https www dropbox com s 3mv3oaddeb132np hat wmt14ende xeon 369 6ms bleu 28 2 pt dl 0 br link https www dropbox com s bjldda9nzj7cpni hat wmt14ende xeon 450 9ms bleu 28 5 pt dl 0 wmt 14 en de nvidia titan xp gpu 57 1ms br 91 2ms br 126 0ms br 146 7ms br 208 1ms 30 47 br 35 72 br 40 97 br 51 20 br 49 38 1 87 br 2 19 br 2 54 br 3 17 br 3 09 br 25 8 br 27 6 br 27 9 br 28 1 br 28 5 25 6 br 27 1 br 27 3 br 27 5 br 27 8 hat wmt14ende titanxp 57 1ms bleu 25 8 br hat wmt14ende titanxp 91 2ms bleu 27 6 br hat wmt14ende titanxp 126 0ms bleu 27 9 br hat wmt14ende titanxp 146 7ms bleu 28 1 br hat wmt14ende titanxp 208 1ms bleu 28 5 link https www dropbox com s 71w5t0qidsxqe1e hat wmt14ende titanxp 57 1ms bleu 25 8 pt dl 0 br link https www dropbox com s j0hnmxw6xz6tskh hat wmt14ende titanxp 91 2ms bleu 27 6 pt dl 0 br link https www dropbox com s pyetdnbz1zvcfg5 hat wmt14ende titanxp 126 0ms bleu 27 9 pt dl 0 br link https www dropbox com s ixn832oai2k44j9 hat wmt14ende titanxp 146 7ms bleu 28 1 pt dl 0 br link https www dropbox com s owpdwmqwpn9jw14 hat wmt14ende titanxp 208 1ms bleu 28 5 pt dl 0 wmt 14 en fr raspberry pi arm cortex a72 cpu 4 3s br 5 3s br 5 8s br 6 9s br 7 8s br 9 1s 25 22 br 35 72 br 36 77 br 44 13 br 49 38 br 56 73 1 53 br 2 23 br 2 26 br 2 70 br 3 09 br 3 57 38 8 br 40 1 br 40 6 br 41 1 br 41 4 br 41 8 36 0 br 37 3 br 37 8 br 38 3 br 38 5 br 38 9 hat wmt14enfr raspberrypi 4 3s bleu 38 8 br hat wmt14enfr raspberrypi 5 3s bleu 40 1 br hat wmt14enfr raspberrypi 5 8s bleu 40 6 br hat wmt14enfr raspberrypi 6 9s bleu 41 1 br hat wmt14enfr raspberrypi 7 8s bleu 41 4 br hat wmt14enfr raspberrypi 9 1s bleu 41 8 link https www dropbox com s ku97fwz1oj1a112 hat wmt14enfr raspberrypi 4 3s bleu 38 8 pt dl 0 br link https www dropbox com s 9noopb605fqmjpl hat wmt14enfr raspberrypi 5 3s bleu 40 1 pt dl 0 br link https www dropbox com s vmdkh0ctpdac7gr hat wmt14enfr raspberrypi 5 8s bleu 40 6 pt dl 0 br link https www dropbox com s dbo9abn5pnb6qgz hat wmt14enfr raspberrypi 6 9s bleu 41 1 pt dl 0 br link https www dropbox com s x8tsbxbwkk64ejg hat wmt14enfr raspberrypi 7 8s bleu 41 4 pt dl 0 br link https www dropbox com s zbsbl5e96y3t5zl hat wmt14enfr raspberrypi 9 1s bleu 41 8 pt dl 0 wmt 14 en fr intel xeon e5 2640 cpu 154 7ms br 208 8ms br 329 4ms br 394 5ms br 442 0ms 30 47 br 35 72 br 44 13 br 51 48 br 56 73 1 84 br 2 23 br 2 70 br 3 28 br 3 57 39 1 br 40 0 br 41 1 br 41 4 br 41 7 36 3 br 37 2 br 38 2 br 38 5 br 38 8 hat wmt14enfr xeon 154 7ms bleu 39 1 br hat wmt14enfr xeon 208 8ms bleu 40 0 br hat wmt14enfr xeon 329 4ms bleu 41 1 br hat wmt14enfr xeon 394 5ms bleu 
41 4 br hat wmt14enfr xeon 442 0ms bleu 41 7 link https www dropbox com s 6xswl0oesuvmqk5 hat wmt14enfr xeon 154 7ms bleu 39 1 pt dl 0 br link https www dropbox com s ot3zt8nenda54j7 hat wmt14enfr xeon 208 8ms bleu 40 0 pt dl 0 br link https www dropbox com s epe2lvus4l40v9o hat wmt14enfr xeon 329 4ms bleu 41 1 pt dl 0 br link https www dropbox com s qnt2qzkb3i054c6 hat wmt14enfr xeon 394 5ms bleu 41 4 pt dl 0 br link https www dropbox com s 79zcolb53jbhchk hat wmt14enfr xeon 442 0ms bleu 41 7 pt dl 0 wmt 14 en fr nvidia titan xp gpu 69 3ms br 94 9ms br 132 9ms br 168 3ms br 208 3ms 30 47 br 35 72 br 40 97 br 46 23 br 51 48 1 84 br 2 23 br 2 51 br 2 90 br 3 25 39 1 br 40 0 br 40 7 br 41 1 br 41 7 36 3 br 37 2 br 37 8 br 38 3 br 38 8 hat wmt14enfr titanxp 69 3ms bleu 39 1 br hat wmt14enfr titanxp 94 9ms bleu 40 0 br hat wmt14enfr titanxp 132 9ms bleu 40 7 br hat wmt14enfr titanxp 168 3ms bleu 41 1 br hat wmt14enfr titanxp 208 3ms bleu 41 7 link https www dropbox com s hvy255ls277onjw hat wmt14enfr titanxp 69 3ms bleu 39 1 pt dl 0 br link https www dropbox com s rvfv99jbh4n7qys hat wmt14enfr titanxp 94 9ms bleu 40 0 pt dl 0 br link https www dropbox com s u6u3u40pr4f5mzh hat wmt14enfr titanxp 132 9ms bleu 40 7 pt dl 0 br link https www dropbox com s wlbbmnrl61dx4z7 hat wmt14enfr titanxp 168 3ms bleu 41 1 pt dl 0 br link https www dropbox com s e41lnsktn5bb2fz hat wmt14enfr titanxp 208 3ms bleu 41 7 pt dl 0 wmt 19 en de nvidia titan xp gpu 55 7ms br 93 2ms br 134 5ms br 176 1ms br 204 5ms br 237 8ms 36 89 br 42 28 br 40 97 br 46 23 br 51 48 br 56 73 2 27 br 2 63 br 2 54 br 2 86 br 3 18 br 3 53 42 4 br 44 4 br 45 4 br 46 2 br 46 5 br 46 7 41 9 br 43 9 br 44 7 br 45 6 br 45 7 br 46 0 hat wmt19ende titanxp 55 7ms bleu 42 4 br hat wmt19ende titanxp 93 2ms bleu 44 4 br hat wmt19ende titanxp 134 5ms bleu 45 4 br hat wmt19ende titanxp 176 1ms bleu 46 2 br hat wmt19ende titanxp 204 5ms bleu 46 5 br hat wmt19ende titanxp 237 8ms bleu 46 7 link https www dropbox com s 6pokem8orb75ldh hat wmt19ende titanxp 55 7ms bleu 42 4 pt dl 0 br link https www dropbox com s zgcd70pzf1sle4z hat wmt19ende titanxp 93 2ms bleu 44 4 pt dl 0 br link https www dropbox com s mm827rst6a144zy hat wmt19ende titanxp 134 5ms bleu 45 4 pt dl 0 br link https www dropbox com s y0vov0n9zt50n9c hat wmt19ende titanxp 176 1ms bleu 46 2 pt dl 0 br link https www dropbox com s w1si4mgf1e3l8oj hat wmt19ende titanxp 204 5ms bleu 46 5 pt dl 0 br link https www dropbox com s rljih3t0hglp39a hat wmt19ende titanxp 237 8ms bleu 46 7 pt dl 0 iwslt 14 de en nvidia titan xp gpu 45 6ms br 74 5ms br 109 0ms br 137 8ms br 168 8ms 16 82 br 19 98 br 23 13 br 27 33 br 31 54 0 78 br 0 93 br 1 13 br 1 32 br 1 52 33 4 br 34 2 br 34 5 br 34 7 br 34 8 32 5 br 33 3 br 33 6 br 33 8 br 33 9 hat iwslt14deen titanxp 45 6ms bleu 33 4 br hat iwslt14deen titanxp 74 5ms bleu 34 2 br hat iwslt14deen titanxp 109 0ms bleu 34 5 br hat iwslt14deen titanxp 137 8ms bleu 34 7 br hat iwslt14deen titanxp 168 8ms bleu 34 8 link https www dropbox com s ntj1gfskish8vz3 hat iwslt14deen titanxp 45 6ms bleu 33 4 pt dl 0 br link https www dropbox com s gjq46181s3xbz0k hat iwslt14deen titanxp 74 5ms bleu 34 2 pt dl 0 br link https www dropbox com s fg3r3tk2vjg0diq hat iwslt14deen titanxp 109 0ms bleu 34 5 pt dl 0 br link https www dropbox com s 3j5vu5jh71xwec1 hat iwslt14deen titanxp 137 8ms bleu 34 7 pt dl 0 br link https www dropbox com s 5xy9hdjuc5c6sw5 hat iwslt14deen titanxp 168 8ms bleu 34 8 pt dl 0 download models bash python download model py model name model name for example 
python download model py model name hat wmt14ende raspberrypi 3 5s bleu 25 8 to download all models python download model py download all test bleu sacrebleu score bash bash configs task name test sh model file configs task name subtransformer model name yml normal sacre for example bash configs wmt14 en de test sh downloaded models hat wmt14ende raspberrypi 3 5s bleu 25 8 pt configs wmt14 en de subtransformer hat wmt14ende raspberrypi 3 5s bleu 25 8 yml normal another example bash configs iwslt14 de en test sh downloaded models hat iwslt14deen titanxp 137 8ms bleu 34 7 pt configs iwslt14 de en subtransformer hat iwslt14deen titanxp 137 8ms bleu 34 7 yml sacre test latency model size and flops to profile the latency model size and flops flops profiling needs torchprofile https github com mit han lab torchprofile git you can run the commands below by default only the model size is profiled bash python train py configs configs task name subtransformer model name yml sub configs configs task name subtransformer common yml latgpu latcpu profile flops for example python train py configs configs wmt14 en de subtransformer hat wmt14ende raspberrypi 3 5s bleu 25 8 yml sub configs configs wmt14 en de subtransformer common yml latcpu another example python train py configs configs iwslt14 de en subtransformer hat iwslt14deen titanxp 137 8ms bleu 34 7 yml sub configs configs iwslt14 de en subtransformer common yml profile flops training 1 train a supertransformer the supertransformer is a supernet that contains many subtransformers with weight sharing by default we train wmt tasks on 8 gpus please adjust update freq according to gpu numbers 128 x for x gpus note that for iwslt we only train on one gpu with update freq 1 bash python train py configs configs task name supertransformer search space yml for example python train py configs configs wmt14 en de supertransformer space0 yml another example cuda visible devices 0 1 2 3 python train py configs configs wmt14 en fr supertransformer space0 yml update freq 32 in the configs file supertransformer model architecture subtransformer search space and training settings are specified we also provide pre trained supertransformers for the four tasks as below to download run python download model py model name model name task search space model name link wmt 14 en de space0 hat wmt14ende super space0 link https www dropbox com s pkdddxvvpw9a4vq hat wmt14ende super space0 pt dl 0 wmt 14 en fr space0 hat wmt14enfr super space0 link https www dropbox com s asegvw9qzpxui6a hat wmt14enfr super space0 pt dl 0 wmt 19 en de space0 hat wmt19ende super space0 link https www dropbox com s uc0lw6jdep1vazc hat wmt19ende super space0 pt dl 0 iwslt 14 de en space1 hat iwslt14deen super space1 link https www dropbox com s yv0mn8ns36gxkhs hat iwslt14deen super space1 pt dl 0 2 evolutionary search the second step of hat is to perform an evolutionary search in the trained supertransformer with a hardware latency constraint in the loop we train a latency predictor to get fast and accurate latency feedback 2 1 generate a latency dataset bash python latency dataset py configs configs task name latency dataset hardware name yml for example python latency dataset py configs configs wmt14 en de latency dataset cpu raspberrypi yml hardware name can be cpu raspberrypi cpu xeon and gpu titanxp the configs file contains the design space in which we sample models to get model architecture real latency data pairs we provide the datasets we collect in the latency dataset latency dataset 
folder 2 2 train a latency predictor then train a predictor with collected dataset bash python latency predictor py configs configs task name latency predictor hardware name yml for example python latency predictor py configs configs wmt14 en de latency predictor cpu raspberrypi yml the configs file contains the predictor s model architecture and training settings we provide pre trained predictors in latency dataset predictors latency dataset predictors folder 2 3 run evolutionary search with a latency constraint bash python evo search py configs supertransformer config file yml evo configs evo settings yml for example python evo search py configs configs wmt14 en de supertransformer space0 yml evo configs configs wmt14 en de evo search wmt14ende titanxp yml the configs file points to the supertransformer training config file evo configs file includes evolutionary search settings and also specifies the desired latency constraint latency constraint note that the feature norm and lat norm here should be the same as those when training the latency predictor write config path specifies the location to write out the searched subtransformer architecture 3 train a searched subtransformer finally we train the search subtransformer from scratch bash python train py configs subtransformer architecture yml sub configs configs task name subtransformer common yml for example python train py configs configs wmt14 en de subtransformer wmt14ende titanxp 200ms yml sub configs configs wmt14 en de subtransformer common yml configs points to the write config path in step 2 3 sub configs contains training settings for the subtransformer after training a subtransformer you can test its performance with the methods in testing testing section dependencies python 3 6 pytorch http pytorch org 1 0 0 configargparse 0 14 new model training requires nvidia gpus and nccl https github com nvidia nccl related works on efficient deep learning micronet for efficient language modeling https arxiv org abs 2005 07877 lite transformer with long short range attention https arxiv org abs 2004 11886 amc automl for model compression and acceleration on mobile devices https arxiv org abs 1802 03494 once for all train one network and specialize it for efficient deployment https arxiv org abs 1908 09791 proxylessnas direct neural architecture search on target task and hardware https arxiv org abs 1812 00332 contact if you have any questions feel free to contact hanrui wang https hanruiwang me through email hanrui mit edu mailto hanrui mit edu or github issues pull requests are highly welcomed licence this repository is released under the mit license see license license for more information acknowledgements we are thankful to fairseq https github com pytorch fairseq as the backbone of this repo | hardware-aware transformer specialization efficient-model natural-language-processing machine-translation | ai |
FanCraterChat | fancrater chat user manual table of contents walkthrough walkthrough problem statement problem statement system requirements system requirements conceptual design conceptual design user interface design user interface design functional requirements functional requirements implementation implementation testing testing walkthrough when a user selects the fancrater app from their device they will be directed to the splash screen where they can choose to login on this screen the user does not have access the the drawer and cannot proceed without logging in or already being logged in login screen https i imgur com tnjfhkal jpg login screen once a user has selected to login from the splash screen they will be directed to an authorization page if the user has an account they can opt to sign in with google or enter the user information once entered the user will select log in at the bottom of the screen on successful login the user will be directed to the map screen auth0 login https i imgur com g1oxhrhl png auth0 login if a user does not have an account they will need to select sign up a user has the option to sign up by using google or entering their email and password once the information has been entered the user will click sign up at the bottom of the screen on successful sign up the user will directed to the map screen auth0 signup https i imgur com mrllhfsl png auth0 signup the map screen will show up with pins for other users in the same area who share the same notable all screens from this point will now have access to the menu drawer to view it slide from the left edge of the screen or click the menu button at the top right on all screens to see more information about a user select a pin map screen https i imgur com of9hazol png map screen once a pin is selected information about that user will pop up on the screen to view their user profile simply click anywhere on the card while the map does show a waypoint for yourself on the map clicking on your information card will not do anything the notable that the users share with be listed underneath the username map screen click on user https i imgur com t5llulzl png map screen click on user the user profile has information about the user s location their notables and a small blurb about themselves you may click the back button at the top left to return to the map to chat with a user select chat at the bottom of the screen other user profile https i imgur com smxt63ml png other user profile a chat screen will appear when chat is selected if you have not previously chatted with the user no messages appear if you have chatted with the user before old messages will appear if the other user is currently typing you will see a user is typing notification above the input bar clicking on the other user s picture will take you to their profile page to see all current chats with other users press the back arrow in the top left hand corner of the screen chat screen https i imgur com r48wejpl png chat screen the chat overview screen shows all open chats with other users to go back to a previous chat simply click on that chat to open it clicking on one of the profile pictures on the left side of the screen will take you to that users profile page a user can navigate to other screens on the app by pressing the menu button in the top right hand corner or by pressing the back arrow in the top left hand corner chat overview screen https i imgur com xm9kuvll png chat overview screen when the menu button is pressed from any screen in the app or 
the user swipes from the left side of the screen a drawer will slide out with options to navigate to other areas of the app from here the user can select to go to the map the chat screen the setting screen or logout which will log you out and return you to the splash screen side drawer https i imgur com sbybse0l png side drawer when the setting screen is selected from the drawer the user will be directed to their own profile page the profile page displays the user id avatar and current location the user s notables are listed below the user information as well as a small blurb about the user a logout button at the bottom of the screen will log the user out of their profile and navigate back to the original login screen own profile screen https i imgur com 2wzmmjol png own profile screen users can select the options tab from the settings screen by either swiping on the screen or selecting the tab once the options tab has been selected the user can choose how they would like to share their location to other users that share the same notables the default is set to use current location but the user can click the arrow to select zip code to navigate to other areas in the app the user can select the menu at the top right hand corner of the screen or swiping from the left side to open the navigation drawer the back arrow located on the top left hand corner of the screen will navigate back to the previous screen options screen https i imgur com sk2pwcpl png options screen if the arrow is selected the user has the option to update their zip code a user must enter a 5 digit zip code otherwise a red circle with an x will appear once a 5 digit zip code has been entered a green check mark will appear indicating that the zip code has been updated options screen zipcode error https i imgur com hadmn1ql png options screen zipcode error options screen zipcode success https imgur com 7ynklu4l png options screen zipcode success user can select the about tab from the settings screen by either swiping on the screen or selecting the tab this screen gives the user information about fancrater and the app itself about screen https i imgur com sqzecjdl png about screen to logout of the app a user will either need to open the drawer by pressing the menu in the top right hand corner on any screen or swiping from the left and selecting logout or navigate back to their profile screen in settings and select logout logging out will return the user back to the splash screen problem statement background the client for this project is a company called fan crater the client has an online presence where they connect celebrities notables with their fan base fan crater provides a portal for notables to provide their fans with direct personalized exclusive content fan users referred to as superfans pay a monthly fee to gain access to the exclusive content https www fancrater com problem fan crater wishes to create a mobile application to be used by superfans to connect with and to chat with other superfans the application should show the user s current location on a map in relation to the location of other superfans the application should have the ability for two fans to chat with each other the user should only be able see and connect with superfans who follow the same notable s when the user touches the location of another superfan on the map the application should give the user information about the other fan and an option to start a chat session with them solution the development team plans on using javascript and react 
native with native base to create this application the application will use geolocation software to get the current user s location and the locations of other superfans with the same notable s the application will have to filter based on notables the application will display a map of the locations of other superfans the user can touch the pin location of another fan to see their profile screen with an option to chat with the other fan a chat application will be incorporated into the app for the superfans to chat with one another the application will also be required to do other basic functions such as log in log out settings etc system requirements our code base relied on two sets of dependencies one set for the app and another for development of the app as the main code for react native applications is javascript these dependencies are all javascript packages management and installation was initially handled with npm and later with yarn where all packages could be installed by running npm install or later yarn install all dependencies are listed and managed by yarn in the package json file see the package json sketch at the end of this entry development dependencies were those that were not actually required to run the bundled code for the application but were required for development and for running a non release version of the app on an emulator simulator or on a mobile device connected to the server that bundles and installs the app on a real device for development purposes application dependencies native base 2 4 1 react 16 3 0 alpha 1 react native 0 54 0 rc 3 react native auth0 1 2 2 react native gifted chat 0 4 3 react native maps 0 20 1 react native permissions 1 1 1 react native vector icons 4 5 0 react navigation 1 5 8 sendbird 3 0 57 development dependencies babel jest 22 4 3 babel preset react native 4 0 0 jest 22 4 3 react test renderer 16 3 0 alpha 1 conceptual design interaction diagram https i imgur com 9o3zmqyl png class diagram https i imgur com jraxhndl png use case diagram https i imgur com ibdqludl png user interface design splash screen with login button profile screen with settings bar login screen https i imgur com syzvardl png profile screen https i imgur com z1xfbifl png map screen with users pinned chat screen map screen https i imgur com l8cdwntl png chat screen https i imgur com bsibqdll png functional requirements fancrater terms notable a user that other users can subscribe to e g musicians actors sports figures superfan a user who subscribes to a notable overview the fancrater app will consist of geolocation chat and user authentication technologies a superfan can log in see other superfans who subscribe to the same notable view their profiles chat with those superfans and log out authentication a superfan must log in to the app on initial download and use the superfan will use their email and password to log into the app the superfan will remain logged in until they have formally logged out the superfan may create a user for themselves or use an existing account geolocation superfans who subscribe to the same notable will be able to see each other on a live map within the app a superfan must broadcast their location in order for other superfans to see them on the map a superfan shall be able to turn off broadcasting superfans can click on an icon and see the nickname profile picture and notable they subscribe to they may also select them to view a profile and or chat
superfans can broadcast by postal code or by latitude and longitude settings superfans can view their settings to see profile information set their mode of location gathering and read a fancrater about page chat superfans can chat with one another when a superfan shows up on the map another superfan can chat with them a superfan must be broadcasting their location to start a new chat superfans can view their previous chats superfans can use previous chats to talk to other superfans while not broadcasting their location implementation the initial react native project was created with react native cli by running react native init this created an empty template application that could then be launched on either android or ios devices if properly built or if running in developer mode as our code base is quite large and management of it would be tricky without version control our code is currently accessible in a public repository on github at https github com rpicking fancraterchat while there are minimal steps to launch the application once initial requirements for development are satisfied some software will need to be installed for development 1 installation of node js npm 2 global installation of yarn npm install g yarn 3 download code git clone https github com rpicking fancraterchat git 4 installation of react native cli yarn global add react native cli 5 in the top level directory of fancraterchat install dependencies yarn install 6 launch the app with native code specific to ios or android if running on an ios simulator or device react native run ios if running on an android emulator or device react native run android 7 additional packages can be installed with yarn add package name and will be added to the package json file that contains all dependencies 8 debugging of the live code could be done by opening the developer menu not present when building for release and enabling live debugging which could then be viewed in a browser s console on an emulator simulator press ctrl m or cmd m to launch the developer menu and enable live debugging on an actual device shake the phone to bring up the menu and enable live debugging application programs as react native already has quite a lot of programs developed to aid in application development we had no need to create our own for our codebase we used a few that came with the installation of react native cli which is a required package for development of react native applications development for android devices could take place on any operating system but ios building and testing could only be done on a mac due to apple s restrictions metro bundler this was a bundler that came with react native cli when launching the application through react native run android or react native run ios after the building of the native code application by react native cli the metro bundler was launched it monitors files recursively in your project folder bundling them and sending them to your connected device or emulator to run android studio if you wished to run the application on an android emulator you were required to install android studio and download and install an emulator to launch the emulator you would open android studio navigate to the avd manager and launch the device you could then compile bundle and launch your code on that device by running react native run android android device running the application on an android device required you to open the device s settings page navigate to about and tap the build number 7 times if done
successfully it will announce that the device has been put in developer mode and you can now build bundle and run the application by connecting your device and running react native run android xcode running on an ios device required opening the fancraterchat ios folder in xcode and only after registering your device you would then build bundle and compile the code through the xcode interface ios simulator running on an ios simulator would require the installation of xcode on the machine you could then build bundle and compile the code by running react native run ios which would launch the ios simulator and start the application on it testing levels of testing unit integration and ui testing were all done manually on fancrater chat through each step of the development process we followed the process of testing each new feature for functionality making sure it was interacting with any resources correctly and finally making sure the ui was correct technologies and processes for fancrater chat we utilized informal technologies for testing to keep it simple and cost effective integration testing for our api resources for user data chat and authentication we were able to write our javascript code and run it locally to make sure we got the expected json objects back this made it simple and was enough testing to know we had correct working code unit testing was done locally on a separate app to make sure our functionality was correct every time a new function was developed we were able to build it on another side app locally test it then implement it and push it to the app in doing this we used mock data to simulate the functionality existing in the app user interface testing was completed after we knew our functionality and integration tests were working properly we had standards for how the app was supposed to show information and allow the users to interact and with those standards we were able to develop our ui certain styling liberties were taken for colors and icons passing rate before we could implement new functionality our code had to first pass local testing before being pushed and added to the app our app was able to maintain a healthy passing rate by not breaking when adding new functionality this was due to our robust testing processes before implementing new features during our development it also produced a smoother development life cycle for fancrater chat | os |
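for reference, the dependency list above maps onto a package json like the following sketch; the dotted and hyphenated version strings are reconstructed from the flattened list, so double check them against the repository s own package json

```json
{
  "dependencies": {
    "native-base": "2.4.1",
    "react": "16.3.0-alpha.1",
    "react-native": "0.54.0-rc.3",
    "react-native-auth0": "1.2.2",
    "react-native-gifted-chat": "0.4.3",
    "react-native-maps": "0.20.1",
    "react-native-permissions": "1.1.1",
    "react-native-vector-icons": "4.5.0",
    "react-navigation": "1.5.8",
    "sendbird": "3.0.57"
  },
  "devDependencies": {
    "babel-jest": "22.4.3",
    "babel-preset-react-native": "4.0.0",
    "jest": "22.4.3",
    "react-test-renderer": "16.3.0-alpha.1"
  }
}
```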
|
cs224d | cs224d deep learning for natural language processing exercises hey if you are here you are probably looking for a second opinion on the cs224d http cs224d stanford edu problem sets and i hope i can help you if you have no idea how to run them visit the link above and look for the problem set materials below are some of the resulting accuracies for the third assignment
one layer rnn
wvecdim 5 0 740343088764
wvecdim 15 0 766135064058
wvecdim 25 0 793929596835
wvecdim 30 0 796149299105
wvecdim 35 0 801746809178
wvecdim 45 0 801505537192
two layer rnn deep rnn2
wvecdim 30 middledim 5 0 733490964364
wvecdim 30 middledim 15 0 765435375299
wvecdim 30 middledim 25 0 773421478032
wvecdim 30 middledim 30 0 768475402321
wvecdim 30 middledim 35 0 790937824209
wvecdim 30 middledim 45 0 794098487225 | ai
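if you want to eyeball these runs, here is a small matplotlib sketch that plots both sweeps; the values are the accuracies above rounded to four decimals, and the script is an illustration rather than part of the assignment code

```python
# quick comparison plot of the accuracies listed above
import matplotlib.pyplot as plt

dims = [5, 15, 25, 30, 35, 45]
one_layer = [0.7403, 0.7661, 0.7939, 0.7961, 0.8017, 0.8015]  # wvecDim sweep
two_layer = [0.7335, 0.7654, 0.7734, 0.7685, 0.7909, 0.7941]  # middleDim sweep, wvecDim fixed at 30

plt.plot(dims, one_layer, marker="o", label="one-layer RNN (wvecDim)")
plt.plot(dims, two_layer, marker="s", label="two-layer RNN (middleDim, wvecDim=30)")
plt.xlabel("dimension")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```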
|
Google-Cloud-Engineer-Learning-Path | google cloud engineer learning path a cloud engineer plans configures sets up and deploys cloud solutions this learning path guides you through a curated collection of on demand courses labs and skill badges that provide you with real world hands on experience using google cloud technologies essential to the cloud engineering role learning path link https www cloudskillsboost google journeys 11 | google-cloud-platform google-cloud | cloud |
on-sw-eng | presentation about software engineering in the cloud era this repo hosts the markdown source of my presentation on software engineering the presentation is built using the static page generator hugo https gohugo io the hugo theme reveal hugo https themes gohugo io reveal hugo is used to turn it into an html presentation with reveal js https revealjs com build after cloning the main repo you need to initialize the git submodule shell git submodule update init recursive after that run this command shell hugo server deploy the rendered content is hosted here https on sw eng netlify app every push to the main branch will trigger a redeploy maintenance run the following command to update all submodules with their newest upstream version shell git submodule update remote | cloud |
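putting the commands above together, a fresh checkout-to-preview workflow looks roughly like this; the repository url is a placeholder

```shell
# clone and pull in the reveal-hugo theme submodule in one step
git clone --recurse-submodules https://github.com/<user>/on-sw-eng.git
cd on-sw-eng

# or, in an existing clone:
git submodule update --init --recursive

# serve the presentation locally with live reload
hugo server

# later: update all submodules to their newest upstream versions
git submodule update --remote
```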
|
ptcg-detection | how to run this i didn t structure this piece of code to run on computers other than my own it has a bunch of hardcoded directory paths and requires you to get a dataset from somewhere i ll try to document some of the things you d need in order to run this this assumes you ve watched my rustconf 2021 talk peertube https viste pt w mgbrpwntebjwuunvvtmi1l youtube https www youtube com watch v bly yf4nmqq 1 cargo build after cloning the repo cargo build should work fine there are three relevant bins cache dataset photo detect and video detect you ll probably need to change some paths to make them suit your needs but it should build successfully if you re not using linux this may not compile due to the v4l dependency this is necessary to use video detect but if you just want to do the photo detection instead of video you can remove this dependency and ignore errors when building or just remove src bin video detect rs 2 card image dataset the code assumes that you have all the card images in the directory dataset in jpg format you can get them from pkmncards com https pkmncards com or pokemontcg io https pokemontcg io or some other source there is a restriction on the image dimensions mine are all 600x825 you also need a file named cardback jpg mine is 585x819 in the root directory of the project 3 set icon templates the base detection algorithm is not able to tell apart two similar cards from two different sets so i ve added a second step where the set symbol is extracted from the photo and compared with a bunch of set symbol templates these templates are not provided and they re not easy to create so the easiest way to make this run is to edit the load templates function in src lib rs and make it return an empty vec instead of what it currently returns see the stub sketch at the end of this entry 4 generate cached hashes the base algorithm uses a perceptual hash to compare images this is somewhat expensive to calculate so the detection programs assume a cache file dataset txt exists to generate this cache run cargo run bin cache dataset 5 create an output directory the photo detect program writes files to the output directory so you should create it before running it mkdir output 6 run photo detect this program takes a bunch of 1920x1080 photos you need to use this resolution analyses them and writes a bunch of debug images to the output directory including a copy of the best three matches found in the dataset it reads the photos from the directory images canon 1080p you can change this in src bin photo detect rs and recompile it 7 run video detect this program reads frames from a camera and analyses them it assumes that the camera has a resolution of 1920x1080 it also assumes the camera is available on dev video3 v4l device new 3 it displays the detection on an iced gui and it also prints the matches to stdout | ai |
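the template stub suggested in step 3 amounts to returning an empty collection; `Template` below is a placeholder since the real element type is defined elsewhere in the repo

```rust
// src/lib.rs -- stub out set-icon template loading so the detection runs
// without the (hard to create) template images; `Template` stands in for
// whatever element type the real function returns
fn load_templates() -> Vec<Template> {
    // base perceptual-hash matching still works; only the second-step
    // set-symbol disambiguation is lost
    Vec::new()
}
```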
|
JSAT | java statistical analysis tool a href https travis ci org edwardraff jsat builds img src https travis ci org edwardraff jsat svg branch master a jsat is a library for quickly getting started with machine learning problems it is developed in my free time and made available for use under the gpl 3 part of the library is for self education as such all code is self contained jsat has no external dependencies and is pure java i also aim to make the library suitably fast for small to medium size problems as such much of the code supports parallel execution the current master branch of jsat is going through a larger refactoring as jsat moves to java 8 this may cause some examples to break if used against the head version but they should be fixable with minimal changes get jsat the current release of jsat is version 0 0 9 and supports java 6 the current master branch is now java 8 you can download jsat from maven central add the below to your pom file xml dependencies dependency groupid com edwardraff groupid artifactid jsat artifactid version 0 0 9 version dependency dependencies if you want to use the bleeding edge but don t want to bother building yourself i recommend you look at jitpack io https jitpack io edwardraff jsat it can build a pom repo for you for any specific commit version click on commits in the link and then click get it for the commit version you want if you want to read the javadoc s online you can find them hosted on my website here http www edwardraff com jsat docs jsat 0 0 8 javadoc why use jsat for research and specialized needs jsat has one of the largest collections of algorithms available in any framework see an incomplete list here https github com edwardraff jsat wiki algorithms additionally there are unfortunately not as many ml tools for java as there are for other languages compared to weka jsat is usually faster http jsatml blogspot com 2015 03 jsat vs weka on mnist html if you want to use jsat and the gpl is not something that will work for you let me know and we can discuss the issue see the wiki https github com edwardraff jsat wiki for more information as well as some examples on how to use jsat note updates to jsat may be slowed as i begin a phd program in computer science the project isn t abandoned i just have limited free time and will be balancing my phd work with a full time job if you discover more hours in the day please let me know development will be further slowed due to some health issues i ll continue to try and be prompt on any bug reports and emails but new features will be a bit slower please use the github issues first for contact citations if you use jsat and find it helpful citations are appreciated please cite the jsat paper http www jmlr org papers v18 16 131 html published at jmlr if you re feeling a little lazy the bibtex is below article jmlr v18 16 131 author raff edward journal journal of machine learning research number 23 pages 1 5 title jsat java statistical analysis tool a library for machine learning url http jmlr org papers v18 16 131 html volume 18 year 2017 | machine-learning java machine-learning-library machine-learning-algorithms svm tsne jsat | ai |
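if you go the jitpack route mentioned above, the generated coordinates typically follow jitpack s com.github.user convention; the version below is a placeholder for whichever commit or tag you pick on jitpack.io

```xml
<!-- add the JitPack repository, then depend on a specific commit or tag -->
<repositories>
  <repository>
    <id>jitpack.io</id>
    <url>https://jitpack.io</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>com.github.EdwardRaff</groupId>
    <artifactId>JSAT</artifactId>
    <version>COMMIT_OR_TAG</version>
  </dependency>
</dependencies>
```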
SMQ | smq c client library the smq c client library for microcontrollers and computers includes porting layers for many rtos environments and bare metal smq based on the publish subscribe pattern provides features similar to other pub sub protocols such as mqtt however smq extends the pub sub pattern with additional features such as one to one messaging and sender s address which are typically required in device management smq iot protocol https makoserver net gz images smq iot broker svg see the following for details smq home page https realtimelogic com products simplemq smq documentation https realtimelogic com ba doc url smq html smq c client api https realtimelogic com ba doc en c reference html group smqclient html this repository includes the standard smq c client library and introductory examples recommended for anyone new to the smq protocol smq c c examples the following examples are listed in the recommended study order 1 the two introductory 1 introductory smq examples examples publish cpp and subscribe cpp are recommended for any c or c developer new to the smq protocol 2 the light bulb example 2 light bulb example is the companion example for an online tutorial 3 the iot example led smq c 3 smq iot example is the companion example for an online tutorial 1 introductory smq examples the two introductory examples publish cpp examples publish cpp and subscribe cpp examples subscribe cpp use the c api which is slightly easier to use than the c api a recommendation is to initially read the introduction to the c and c api concept https realtimelogic com ba doc url introduction html oo c the examples require the json library the following shows how to fetch the two repositories and how to compile all examples on linux shell sudo apt install g gcc make git git clone https github com realtimelogic json git git clone https github com realtimelogic smq git cd smq make windows users compile and run the examples using the project files in vcmake the publish and subscribe examples require an smq broker running on the same computer run the broker as follows download the pre compiled mako server https makoserver net download overview for your platform and unpack the archive copy the two files mako exe and mako zip to this directory start the broker as follows linux shell sudo mako u whoami l broker windows shell mako l broker the file broker preload broker preload will now be loaded by the mako server and you will see several lines being printed including the following shell server listening on ipv6 port 80 server listening on ipv4 port 80 loading certificate makoserver sharkssl server listening on ipv6 port 443 sharkssl server listening on ipv4 port 443 creating broker registering topic example struct a with tid 2 registering topic example struct b with tid 3 registering topic example jstruct a with tid 4 registering topic example json array with tid 5 for your c program define example struct a 2 define example struct b 3 define example jstruct a 4 define example json array 5 as shown above the server is listening on port 80 if the server is not listening on port 80 open the two examples publish cpp and subscribe cpp in an editor change the port number macro closer to the top of the two files to the port number used by the mako server and recompile the examples start the publisher and subscriber examples in separate terminal windows the publisher publishes data and the subscriber consumes data see the following short video for details on running the example bundle https youtu be yqjbwq2pzvm 
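reformatted as a header, the define directives from the mako server printout above look like this; the underscored identifier spelling is reconstructed from the flattened text

```c
/* topics.h -- static topic IDs enforced by broker.preload,
   copied from the Mako Server printout above */
#define EXAMPLE_STRUCT_A   2
#define EXAMPLE_STRUCT_B   3
#define EXAMPLE_JSTRUCT_A  4
#define EXAMPLE_JSON_ARRAY 5
```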
simplified c c design the two c examples highlight several features that simplify designing c c applications using the smq protocol the smq protocol registers topics by name https realtimelogic com ba doc url smq html topicnames but translates the names to topic ids tids in this example setup the smq broker initialization in the script broker preload broker preload pre registers all topics used and forces the smq broker to use static tids instead of dynamic tids this construction simplifies the c code which would otherwise have to keep track of dynamically registered tids the smq payload can be anything from binary data to json the two examples show how one can send c structures as binary data between two c programs this construction works as long as the two c programs are compiled for the same architecture and alignment json simplifies sending structured messages between different architectures and computer languages most json libraries operate on complete messages which can be problematic in an embedded system if the json payload is large the two examples show how to send and receive complete messages but also how to send json messages in chunks and how to parse the received json messages in chunks the mako server printouts shown above include define directives these are already included in the two example programs the macro names correspond to the topic name string and the numbers are the topic id tid enforced by the broker preload broker preload lua script topic example struct a tid 2 payload is examplestructa see examples examplestruct h examples examplestruct h topic example struct b tid 3 payload is examplestructb see examplestruct h topic example jstruct a tid 4 payload is json which is sent and received as one chunk the payload is a json representation of examplestructa topic example json array tid 5 payload is json which is sent and received as multiple chunks the payload is an array of json object representations of examplestructa json library the json library https github com realtimelogic json used by the examples is fairly unique in that it provides some interesting features for encoding and decoding json in resource constrained embedded devices a recommendation is to read the json tutorial https realtimelogic com ba doc en c reference html md en c md json html prior to looking at the source code using json with real time iot communication https realtimelogic com gz images json iot svg 2 light bulb example the light bulb example is the companion example for the tutorial modern approach to embedding a web server in a device https realtimelogic com articles modern approach to embedding a web server in a device two identical examples are provided bulb c examples bulb c and bulb cpp examples bulb cpp you can compare the two examples which makes it easy to see the difference between the c and c api the light bulb connects to our public smq test broker https simplemq com you also need the companion javascript code https github com realtimelogic lsp examples tree master smq examples lightswitch and lightbulb app when testing this example start the light bulb example as follows shell bulb the following screenshot shows how to compile and run the example how to compile and run the iot light bulb example https realtimelogic com blogmedia modernapproachembeddedwebserver bulb c code png when you have initially tested the c code and the javascript code with the public test broker change the url in all code to your own private broker the broker you set up in example 1 1 introductory smq 
examples 3 smq iot example the iot example shows how one can use the smq protocol to design a complete iot solution the following video shows how to use the iot example smq iot example https img youtube com vi r8sjfdyspsm mqdefault jpg https youtu be r8sjfdyspsm the iot example led smq c is a copy of the smq example from sharkssl https github com realtimelogic sharkssl the example can use the secure sharkmq library or the standard smq library the example is using a tls connection when compiled with sharkmq and a standard tcp ip connection when compiled with smq smq api documentation https realtimelogic com ba doc en c reference html group smqclient html sharkmq api documentation https realtimelogic com ba doc en c shark structsharkmq html the standard smq stack includes a compatibility api that enables programs using the sharkmq api to be compiled with the standard smq stack how to run the iot example linux see the above linux build instructions start led smq in a terminal window windows compile and run the example using the project files in vcmake embedded systems see instructions below build instructions for embedded systems the example code led smq c requires porting to your embedded board s led s see the tutorial interfacing led demo programs to hardware https realtimelogic com ba doc en c shark md md examples html leddemo for details see the src arch src arch directory for smq porting layer details ready to run embedded examples the sharkssl esp32 ide https realtimelogic com downloads sharkssl esp32 includes ready to use led interface code for the esp32 you can also download the iot example for arduino https realtimelogic com downloads smq smq arduino zip tl dr iot solution quickstart set up your own iot solution as follows 1 download and compile the example code as is the example when run connects to the online test broker https simplemq com m2m led 2 familiarize yourself with how the example works 3 follow the setting up an environmentally friendly iot solution https makoserver net articles setting up a low cost smq iot broker tutorial for how to set up your own iot solution 4 modify the example code examples led smq c and change the domain url smq domain the url should be set to your own iot server smq license the source code in this repository is released under the eclipse public license v 2 0 https www eclipse org legal epl v20 html https www eclipse org legal epl v20 html you may compile a program licensed under the epl without modification and commercially license the result in accordance with the terms of the epl https www eclipse org legal epl 2 0 faq php this source code may also be made available under the following secondary licenses when the conditions for such availability set forth in the eclipse public license v 2 0 are satisfied gnu general public license version 2 | os |
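as a sketch of the struct-as-binary-payload idea described in the simplified c c design section above: the real field layout lives in examples ExampleStruct.h, the header name and the publish call shape here are assumptions paraphrased from the SMQ C client API, so verify the signature against the API reference

```c
#include <string.h>
#include "SMQ.h"  /* assumed header name; adjust to your source tree */

/* placeholder fields: the real ExampleStructA is defined in
   examples/ExampleStruct.h, which this sketch does not reproduce */
typedef struct {
   int    counter;
   double value;
} ExampleStructA;

/* send the struct as a raw binary payload on the example/struct-a topic;
   both peers must be built for the same architecture and alignment,
   as the README notes, since the raw bytes go over the wire */
static void publishStructA(SMQ* smq, int counter, double value)
{
   ExampleStructA a;
   a.counter = counter;
   a.value   = value;
   /* hypothetical call shape -- check SMQ_publish in the SMQ C client API docs */
   SMQ_publish(smq, &a, sizeof(a), EXAMPLE_STRUCT_A, 0);
}
```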
|
chakra-next | h1 align center code 47ng chakra next code h1 div align center npm https img shields io npm v 47ng chakra next color red https www npmjs com package 47ng chakra next mit license https img shields io github license 47ng chakra next svg color blue https github com 47ng chakra next blob next license continuous integration https github com 47ng chakra next workflows continuous 20integration badge svg branch next https github com 47ng chakra next actions coverage status https coveralls io repos github 47ng chakra next badge svg branch next https coveralls io github 47ng chakra next branch next div p align center opinionated design system for react based on a href https chakra ui com chakra ui a a href https nextjs org next js a p features default theme with semantic tokens 100 typescript transpiled to esm requires next js 12 components links links cards cards svg svg redirect redirect nossr nossr more to come installation in your next js app shell npm install 47ng chakra next theme tools to resolve theme tokens across color modes use usecolormodetoken ts import usecolormodetoken from 47ng chakra next const fill usecolormodetoken red 500 blue 500 const shadow usecolormodetoken md dark lg shadows the following semantic tokens are provided colors body follows the html body next background color text dim text dimmer text dimmest card bg shadows card shadow make card shadow darker in dark mode to stand out components links tsx import routelink outgoinglink buttonroutelink from 47ng chakra next export default integrate next js routes with chakra styles routelink to login login routelink use as for dynamic routes routelink to posts slug as posts foo login routelink make external links stand out outgoinglink href https github com showexternalicon github routelink for when a button looks better still outputs an a tag buttonroutelink to logout logout buttonroutelink navlinks use navlink when you want a link to have special styling depending on the current page by default navlinks span style text decoration underline underline span their text when active are active when the current path starts with the link path example tsx import navlink from 47ng chakra next export default navlink to blog blog navlink the link will be active for the following paths path active home false blog true blog true blog foo true custom active styles tsx import navlink from 47ng chakra next export default navlink to blog borderbottomwidth 3px borderbottomcolor transparent active color blue 500 borderbottomcolor blue 500 blog navlink exact paths sometimes you want the navlink to be active only on exact route matches tsx import navlink navlinkmatch from 47ng chakra next export default navlink to home shouldbeactive navlinkmatch exact home navlink you can also have custom logic to determine whether a navlink should be active tsx import navlink navlinkmatch from 47ng chakra next export default navlink to blog post as blog another blog post active true shouldbeactive to as router navlinkmatch exact to as router router query active true another blog post navlink redirect redirect will change the current url to the one given when mounted tsx import redirect from 47ng chakra next export default loggedin loggedin text hello text redirect to login by default the redirection will be pushed onto the navigation history stack you can replace the history stack instead with the replace prop tsx import redirect from 47ng chakra next export default redirect to home replace next js dynamic paths are also supported tsx import redirect 
from 47ng chakra next export default redirect to blog slug as blog foo bar if you want to redirect to an external link https github com vercel next js blob main errors invalid href passed md not an internal route you will have to set the external prop tsx import redirect from 47ng chakra next export default redirect to https example com external you can also have the history replaced with external urls redirect to https example com external replace you can also pass transition options tsx redirect to home shallow scroll false cards tsx import card cardprops from 47ng chakra next export default card as box card i m in a card card apply card styles to a custom component mychakracomponent cardprops svg extends chakra svg with with svg namespace pre filled role img tsx import svg from 47ng chakra next export default svg aria labelledby svg demo title svg demo desc viewbox 0 0 24 24 display block my 4 mx auto title id svg demo title a red circle title desc id svg demo desc svg lets you style svg container tags with chakra ui style props desc circle fill red cx 12 cy 12 r 10 svg note to use theme tokens for fills strokes and other svg properties you must resolve them first tsx import usetoken from chakra ui react export default svg aria labelledby svg demo title svg demo desc viewbox 0 0 24 24 display block my 4 mx auto fill usetoken colors red 500 resolve theme tokens with usetoken title id svg demo title a red circle title desc id svg demo desc svg lets you style svg container tags with chakra ui style props desc circle you can also use the css prop names directly fill var chakra colors red 500 cx 12 cy 12 r 10 svg nossr sometimes you want to render a component only on the client and have a skeleton or fallback component rendered on the server whether for ssr or static output tsx import nossr from 47ng chakra next export default nossr this is only rendered on the client nossr skeleton is rendered on ssr ssg therealthing is rendered on the client nossr fallback skeleton therealthing nossr examples header with navigation links tsx import box stack from chakra ui core import navlink from 47ng chakra next export default box as header stack as nav isinline navlink to features features navlink navlink to pricing pricing navlink navlink to docs documentation navlink stack box license mit https github com 47ng chakra next blob next license made with by fran ois best https francoisbest com | react nextjs chakra-ui design-system typescript | os |
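putting the usecolormodetoken hook from the chakra-next theme section above into a component looks something like this; a sketch that assumes the import paths and token names shown in that readme, so exact prop plumbing may differ

```tsx
import React from 'react'
import { Box } from '@chakra-ui/react'
import { useColorModeToken } from '@47ng/chakra-next'

export default function Badge() {
  // resolves to red.500 in light mode and blue.500 in dark mode
  const fill = useColorModeToken('red.500', 'blue.500')
  // the third argument selects the theme scale, here shadows instead of colors
  const shadow = useColorModeToken('md', 'dark-lg', 'shadows')
  return (
    <Box bg={fill} boxShadow={shadow} p={4} borderRadius="md">
      hello
    </Box>
  )
}
```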
Directional-Stimulus-Prompting | dsp directional stimulus prompting directional stimulus prompting is a framework that uses a tuneable language model lm to provide guidance for the black box frozen large language model llm towards desirable properties specifically we train a policy lm to generate discrete tokens as directional stimulus of each input which is a hint cue such as keywords of an article for summarization the directional stimulus is then combined with the original input and fed into the llm to guide its generation toward the desired target an example can be seen in figure 1 p align center img align center src pics example png width 600px p p align left b figure 1 b comparison of our proposed directional stimulus prompting with the standard prompting method to use the llm such as gpt 3 on the summarization task our dsp uses a tuneable policy lm to generate the stimulus highlighted in orange color which is keywords in this case to guide the llm on generating the desired summary highlighted in blue color with higher rouge scores or other measures like human preference p the policy lm can be trained through 1 supervised finetuning from annotated data sft and 2 reinforcement learning from offline and online rewards rl to explore directional stimulus that better aligns llms with human preferences this framework is flexibly applicable to various lms and tasks an illustration of the dsp framework is shown in figure 2 paper link https arxiv org abs 2302 11520 p align center img align center src pics dsp png width 600px p p align left b figure 2 b overview of our proposed framework dsp which learns a small policy lm to improve the frozen llm s performance on specific downstream tasks given the input the policy lm generates stimulus to guide the llm s generation which is then evaluated with downstream performance measures or human labelers the evaluation scores are used as rewards to optimize the policy lm with rl the parameters of llm are frozen while the policy lm is tuneable p currently we test the framework on two benchmark tasks summarization dialogue generation our code is based on rl4lms https github com allenai rl4lms users can customize the dataset metrics and llm based reward function to train transformer based policy lms to provide guidance for the llms towards the desirable properties install local installation bash git clone https github com leezekun directional stimulus prompting git cd directional stimulus prompting pip install e docker we provide also a dockerfile for development using docker containers containing all the dependencies bash docker build t dsp additional dependencies optionally corenlp libraries are required for certain metric computations eg spice which can be downloaded through cd rl4lms envs text generation caption metrics spice bash get stanford models sh setup openai access key you should setup your openai access key to call the api export openai api key xxxxxxxx step 1 supervised fine tuning sft first we perform supervised finetuning sft on the policy lm with annotated data to provide a good initial point for the further rl training the code and data are placed in the sft4lms directory we provide the script to run the sft for the two tasks bash sh run sft cnndm sh for the summarization task on the cnn daily mail dataset sh run sft multiwoz sh for the dialogue generation task on the multiwoz dataset step 2 rl training with ppo nlpo this part is based on rl4lms https github com allenai rl4lms a simple training api that can be invoked via train script 
https github com allenai rl4lms blob main scripts training train text generation py that allows to train ppo nlpo or a supervised model by using a config file yaml we provide the scripts of training the policy lm t5 on the tasks of summarization and dialogue generation you can run the scripts bash sh run ppo cnndm sh sh run ppo multiwoz sh the config files for the summarization and dialogue generation tasks can be found in the scripts training task configs summarization with hint and scripts training task configs multiwoz with hint respectively you can customize the configuration files as instructed in rl4lms https github com allenai rl4lms yaml file schema configuring building blocks config file contains details about hyper parameter settings for building blocks which are described below dataset task dataset containing samples with input prompts and reference sentences available datasets are found in the class datapoolregistry in registry https github com allenai rl4lms blob main rl4lms envs text generation registry py see how to create your own dataset here adding dataset for our experiments we customize the datasets of cnn daily mail and multiwoz which are registered as cnn daily mail with hint and multiwoz with hint yaml datapool id cnn daily mail with hint args prompt prefix extract the keywords n train 2000 n val 500 n test 500 extraction mode textrank extraction source all yaml datapool id multiwoz with hint args version 2 0 n train 80 n val 100 n test 1000 reward function reward function which computes token level scores at each time step of mdp available reward functions can be found in the class rewardfunctionregistry see how to create your own reward function here adding reward function we customize the llm based reward functions where the reward is measured on the generation of llms guided by stimulus generated by the trained policy lm yaml reward fn id summarization with hint args gpt3 model gpt 3 5 turbo interval 0 5 arguments for exponential backoff timeout 20 0 exp 2 0 patience 10 temperature 0 7 arguments for the llm s inference max tokens 128 num seqs 4 top p 1 0 stop words article q a im end selection strategy choose all average all the inferences generated by the llm prompt prefix extract the keywords prompt path prompts cnn fs txt hint prompt path prompts cnn hint fs txt gpt3 metric rouge avg metric on the generation of the llm gpt3 coef 10 use baseline false t5 coef 0 t5 metric hint hit the customized metric on the keywords generated by the policy lm t5 t5 pos coef 1 0 t5 neg coef 0 25 penalty for the policy lm t5 if generated a wrong keyword step reward coef 1 0 set as 0 if not use step reward split token we use to split multiple keywords split token id 117 token id of for t5 note that we conducted the experiments using codex gpt 3 5 turbo which has not been supported by openai since march 23rd 2023 however you can apply for either the codex model access or a research subsidy https openai com form researcher access program you can also try other models by changing the gpt3 model environment configures a gym style text generation environment https github com allenai rl4lms blob main rl4lms envs text generation env py which simulates mdp episodes rollouts are generated using train samples from dataset consisting of input and reference texts further we wrap our env with subprocvecenv from stable baselines that processes n envs episodes in parallel using multi processing to compute step wise rewards further configuration settings include max episode length max length of 
the episode max prompt length maximum length of the input text to consider terminate on eos whether to terminate the episode as soon as eos action is performed prompt truncation side truncation side for the prompt text context start token id for context token corresponds to initial token given to decoder in encoder decoder models yaml env n envs 10 args max prompt length 512 max episode length 100 terminate on eos true prompt truncation side right context start token 0 on policy alg we provide implementations of 4 on policy algorithms ppo nlpo a2c and trpo adapted from stable baselines3 https github com dlr rm stable baselines3 tailored to work with nlp tasks which can be used out of the box with either a causal policy or a seq2seq lm policy see how to create your own on policy algorithm adding custom on policy algorithms or policy adding custom policies we also provide a supervised trainer https github com allenai rl4lms blob 2863116cd5860e4a4106a76486e70bfac25df2ba rl4lms envs text generation training utils py l225 for benchmarking purposes supervised warm start models are already uploaded to huggingface hub and specified in the respective config files hyper parameters for the algorithm can be specified at alg args further all rl algorithms use adaptive kl controller to keep the lm close to original lm by setting initial kl co efficient alg kl div coeff and target kl alg kl div target kl we support two types of lm policy causal lm policy for decoder only models and seq2seq lm policy for encoder decoder models further for nlpo we also provide maskable variants of these policy implementations can be found here https github com allenai rl4lms blob main rl4lms envs text generation policy py in and it can be attached to algorithms by specifying alg policy id and alg policy args yaml alg id nlpo args n steps 512 batch size 1 verbose 1 learning rate 0 000002 n epochs 5 ent coef 0 0 vf coef 0 5 kl div coeff 0 005 target kl 0 5 policy id maskable seq2seq lm actor critic policy args model name model path the initial checkpoint of the policy lm use t5 base or the checkpoints trained with sft in the first step apply model parallel true prompt truncation side right min tokens to keep 100 top mask 0 9 mask type learned top p target update iterations 20 generation kwargs min length 8 max new tokens 64 do sample true top k 100 trainer config we provide an on policy trainer https github com allenai rl4lms blob 2863116cd5860e4a4106a76486e70bfac25df2ba rl4lms envs text generation training utils py l126 a feature complete wrapper that instantiates building blocks from their corresponding configs and provides an outer training loop consisting of train and eval iterations train evaluation n iters each iteration corresponds to performing updates with alg args n steps x env n envs of the chosen algorithm for every eval every iters lm is evaluated on validation split using metrics listed in train evaluation metrics with generation kwargs provided in train evaluation generation kwargs this overrides rollout alg policy generation kwargs for inference purposes only we customize the evaluation function which measures on the generation of the llm and the trained policy lm t5 yaml train and evaluation train evaluation eval batch size 10 n iters 20 eval every 2 save every 2 metrics id summarization with hint args gpt3 model gpt 3 5 turbo interval 0 5 timeout 20 0 exp 2 patience 10 temperature 0 7 max tokens 128 num seqs 3 top p 1 0 stop words article q a selection strategy choose all split token split token id 117 
token id of t5 for prompt prefix extract the keywords prompt path prompts cnn fs txt hint prompt path prompts cnn hint fs txt use lower baseline false use upper baseline false gpt3 metrics id meteor args id rouge args use single ref false id bleu args id bert score args language en t5 metrics id hint hit args split generation kwargs for the trained policy lm t5 min length 8 max new tokens 64 do sample true top k 0 temperature 0 7 custom building blocks wrench rl4lms provides complete customizability with respect to adding new tasks datasets reward functions evaluation metrics on policy algorithms and actor critic policies adding dataset users can create their own datasets by sub classing textgenpool https github com allenai rl4lms blob af5a1326578789856ca8550cb5496c9ccc1afdc5 rl4lms data pools text generation pool py l15 just by overriding prepare cls split str args textgenpool method to return an instance of textgenpool an example is shown below python from rl4lms data pools text generation pool import sample textgenpool class mydatapool textgenpool classmethod def prepare cls split str samples for ix item in enumerate sample sample id f split ix prompt or input text item document references item target samples append sample pool instance cls samples return pool instance adding reward function custom reward functions can be implemented easily by sub classing rewardfunction https github com allenai rl4lms blob af5a1326578789856ca8550cb5496c9ccc1afdc5 rl4lms envs text generation reward py l12 a callable which takes observation s next observation s action a done indicating whether the episode is finished and meta info containing other information about the textual input here observation https github com allenai rl4lms blob af5a1326578789856ca8550cb5496c9ccc1afdc5 rl4lms envs text generation observation py l11 is a data class object consisting of the generated text at a particular step the prompt text the context text at that step and the reference text which can be used to compute token level or sentence level rewards python from rl4lms envs text generation observation import observation from rl4lms envs text generation reward import rewardfunction class myrewardfunction rewardfunction def init self args none super init def call self prev observation observation action int current observation observation done bool meta info dict str any none float if done reward return reward return 0 bulb in addition to traditional nlg metrics for quick prototyping we provide two synthetic reward functions which train lms to generate numbers https github com allenai rl4lms blob af5a1326578789856ca8550cb5496c9ccc1afdc5 rl4lms envs text generation test reward py l8 in increasing order and generate dates https github com allenai rl4lms blob af5a1326578789856ca8550cb5496c9ccc1afdc5 rl4lms envs text generation test reward py l54 these can be used to quickly test different algorithms and policies corresponding configs can be found here numbers https github com allenai rl4lms tree main scripts training task configs synthetic generate increasing numbers dates https github com allenai rl4lms tree main scripts training task configs synthetic generate dates adding custom metrics users can create their own evaluation metric which will then be used to periodically evaluate the model on the validation split of the dataset this can be done by sub classing basemetric https github com allenai rl4lms blob af5a1326578789856ca8550cb5496c9ccc1afdc5 rl4lms envs text generation metric py l20 which takes prompt texts generated texts reference texts meta infos current
lm model split name as inputs and returns a dict with metric name as key and value consisting of tuple of sentence level scores and corpus level scores an example is as follows python from rl4lms envs text generation metric import basemetric class mymetric basemetric def init self none super init def compute self prompt texts list str generated texts list str reference texts list list str meta infos list dict str any none model pretrainedmodel none split name str none metric dict custom metrics my metric 0 4 0 7 0 9 0 7 return metric dict adding custom on policy algorithms in addition to supported on policy algorithms ppo nlpo a2c trpo users can implement their own on policy algorithms with ease by sub classing stable baselines3 s onpolicyalgorithm https github com dlr rm stable baselines3 blob a697401e032dd4fecbbd4162755ddd707df980d3 stable baselines3 common on policy algorithm py l20 since we provide wrappers https github com allenai rl4lms blob af5a1326578789856ca8550cb5496c9ccc1afdc5 rl4lms envs text generation alg wrappers py l67 for on policy algorithms that handles rollouts using lm policies environment computing rewards etc users just need to implement train method with custom loss functions python from stable baselines3 common on policy algorithm import onpolicyalgorithm class myonpolicyalgorithm onpolicyalgorithm def init args super init args def train self none train for n epochs epochs for epoch in range self n epochs do a complete pass on the rollout buffer for rollout data in self rollout buffer get self batch size compute loss adding custom policies we provide lm based actor critic policy implementations https github com allenai rl4lms blob main rl4lms envs text generation policy py that wraps causal lm and seq2seq lms these can be also extended for eg use a different critic architecture by overriding appropriate methods eg evaluate actions registry finally just register your custom components by adding them to corresponding registry https github com allenai rl4lms blob main rl4lms envs text generation registry py after which they can be used directly from configs similar to pre defined components wave crowdsourcing templates we have provided the crowdsourcing templates we used on mechanical turk along with example inputs in scripts crowdworking templates you might find these a helpful starting point either for evaluating your own model s generations or for gathering training data for a learned reward function logging and experiment results additionally we support wandb logging and warm starting of training by storing checkpoints and other training artifacts in a user specified path this is especially useful for running preemptible jobs on large scheduled clusters artifacts include 1 jsonl file containing rollout infos at specified intervals 2 jsonl file containing training infos at specified intervals 3 jsonl file containing validation metrics at specified intervals 4 jsonl file containing test metrics before and after training 5 json file with validation predictions at specified intervals 6 json file with test predictions before and after training 7 trained lm model 8 config json used to run the experiment complete usage is as follows bash wandb api key your wandb api key here python scripts training train text generation py config path path to config file experiment name experiment name base path to store results path to store results log to wandb citation bibtex article li2023guiding title guiding large language models via directional stimulus prompting author li zekun 
and peng baolin and he pengcheng and galley michel and gao jianfeng and yan xifeng journal arxiv preprint arxiv 2302 11520 year 2023 acknowledgement we thank the authors of rl4lms https github com allenai rl4lms for sharing their code you can contact zekun li zekunli cs ucsb edu if there are questions related to the code | ai |
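as a minimal sketch of the dsp inference flow described at the top of this entry: a small policy lm generates keyword stimulus which is inserted into the frozen llm s prompt; the prompt strings and model names are illustrative, since the repo s real few-shot prompts live in prompts cnn fs txt and prompts cnn hint fs txt and the policy checkpoint would be the sft or rl-trained one

```python
# policy LM (T5) generates a keyword "stimulus"; the frozen LLM is then
# steered by appending that stimulus to its prompt
import openai
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
policy = T5ForConditionalGeneration.from_pretrained("t5-base")  # or the SFT/RL checkpoint

article = "..."  # the input document
inputs = tokenizer("Extract the keywords: " + article, return_tensors="pt")
hint_ids = policy.generate(inputs.input_ids, max_new_tokens=64,
                           do_sample=True, top_k=0, temperature=0.7)
hint = tokenizer.decode(hint_ids[0], skip_special_tokens=True)

# pre-1.0 openai SDK style, matching the gpt-3.5-turbo setting in the configs above
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": f"Article: {article}\nKeywords: {hint}\n"
                          f"Summarize the article based on the keywords:"}],
    temperature=0.7,
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```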
|
Newsletter-Signup | newsletter signup this is one of the projects for the complete web development bootcamp course the web app focuses on implementing full stack web development using nodejs and expressjs as the backend along with use of the requests module for working with third party apis this project is a template for a newsletter signup page which adds users to my newsletter using mailchimp s apis | server |
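the signup flow described above boils down to one post against mailchimp s list-members endpoint; a sketch using the request module, where the list id, api key, and the us1 datacenter prefix are placeholders for your own account values

```js
// minimal sketch of the Mailchimp signup call behind the form
const request = require("request");

function addSubscriber(email, firstName, lastName, done) {
  request({
    url: "https://us1.api.mailchimp.com/3.0/lists/YOUR_LIST_ID/members",
    method: "POST",
    // Mailchimp v3 uses basic auth; the username can be any string
    auth: { user: "anystring", pass: "YOUR_MAILCHIMP_API_KEY" },
    json: {
      email_address: email,
      status: "subscribed",
      merge_fields: { FNAME: firstName, LNAME: lastName }
    }
  }, (err, response) => done(err, response && response.statusCode));
}
```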
|
ibc | ibc banner assets interchain standards image jpg synopsis this repository is the canonical location for development and documentation of the inter blockchain communication protocol ibc it shall be used to consolidate design rationale protocol semantics and encoding descriptions for ibc including both the core transport authentication ordering layer ibc tao and the application layers describing packet encoding processing semantics ibc app contributions are welcome see contributing md meta contributing md for contribution guidelines see roadmap md meta roadmap md for a public up to date version of our roadmap what is ibc markdown link check disable next line for a high level explanation of what ibc is and how it works please read this blog post https medium com the interchain foundation eli5 what is ibc def44d7b5b4c interchain standards all standards at or past the draft stage are listed here in order of their ics numbers sorted by category meta interchain standard number standard title stage maintainer 1 spec ics 001 ics standard readme md ics specification standard n a protocol team core interchain standard number standard title stage implementations maintainer 2 spec core ics 002 client semantics readme md client semantics candidate ibc go https github com cosmos ibc go protocol team 3 spec core ics 003 connection semantics readme md connection semantics candidate ibc go https github com cosmos ibc go protocol team 4 spec core ics 004 channel and packet semantics readme md channel packet semantics candidate ibc go https github com cosmos ibc go protocol team 5 spec core ics 005 port allocation readme md port allocation candidate ibc go https github com cosmos ibc go protocol team 23 spec core ics 023 vector commitments readme md vector commitments candidate ibc go https github com cosmos ibc go protocol team 24 spec core ics 024 host requirements readme md host requirements candidate ibc go https github com cosmos ibc go protocol team 25 spec core ics 025 handler interface readme md handler interface candidate ibc go https github com cosmos ibc go protocol team 26 spec core ics 026 routing module readme md routing module candidate ibc go https github com cosmos ibc go protocol team 33 spec core ics 033 multi hop readme md multi hop messaging candidate ibc go https github com cosmos ibc go protocol team client interchain standard number standard title stage implementations maintainer 6 spec client ics 006 solo machine client readme md solo machine client candidate ibc go https github com cosmos ibc go tree main modules light clients 06 solomachine protocol team 7 spec client ics 007 tendermint client readme md tendermint client candidate ibc go https github com cosmos ibc go tree main modules light clients 07 tendermint protocol team 8 spec client ics 008 wasm client readme md wasm client draft protocol team composable finance https www composable finance 9 spec client ics 009 loopback cilent readme md loopback client draft ibc go https github com cosmos ibc go tree main modules light clients 09 localhost protocol team 10 spec client ics 010 grandpa client readme md grandpa client draft octopus network https oct network relayer interchain standard number standard title stage implementations maintainer 18 spec relayer ics 018 relayer algorithms readme md relayer algorithms finalized go relayer https github com cosmos relayer rust relayer https github com informalsystems hermes ts relayer https github com confio ts relayer protocol team app interchain standard number standard title 
stage implementations maintainer 20 spec app ics 020 fungible token transfer readme md fungible token transfer candidate ibc go https github com cosmos ibc go tree main modules apps transfer protocol team 27 spec app ics 027 interchain accounts readme md interchain accounts candidate ibc go https github com cosmos ibc go tree main modules apps 27 interchain accounts protocol team 28 spec app ics 028 cross chain validation readme md cross chain validation draft protocol team 29 spec app ics 029 fee payment general relayer incentivization mechanism candidate ibc go https github com cosmos ibc go tree main modules apps 29 fee protocol team 30 spec app ics 030 middleware ibc application middleware n a n a protocol team 31 spec app ics 031 crosschain queries cross chain queries draft n a protocol team 32 https github com strangelove ventures async icq interchain queries candidate async icq https github com strangelove ventures async icq strangelove ventures https strange love 100 spec app ics 100 atomic swap interchain atomic swap candidate ibcswap https github com ibcswap ibcswap side labs https side one 721 spec app ics 721 nft transfer non fungible token transfer candidate nft transfer https github com bianjieai nft transfer iris network https www irisnet org translations the interchain standards are also translated into the following languages chinese https github com octopus network ibc spec cn | distributed-ledger blockchain interchain cosmos | blockchain |
vui-ad-hoc-alexa-recognizer | build status https travis ci org rationalanimal vui ad hoc alexa recognizer svg branch master https travis ci org rationalanimal vui ad hoc alexa recognizer npm downloads http img shields io npm dm vui ad hoc alexa recognizer svg style flat label npm 20downloads https npm stat com charts html package vui ad hoc alexa recognizer open open source software https img shields io badge open oss e2 9c 94 brightgreen svg http open oss com release https img shields io github release rationalanimal vui ad hoc alexa recognizer svg label last 20release a https www npmjs com package vui ad hoc alexa recognizer average time to resolve an issue http isitmaintained com badge resolution rationalanimal vui ad hoc alexa recognizer svg http isitmaintained com project rationalanimal vui ad hoc alexa recognizer average time to resolve an issue percentage of issues still open http isitmaintained com badge open rationalanimal vui ad hoc alexa recognizer svg http isitmaintained com project rationalanimal vui ad hoc alexa recognizer percentage of issues still open patreon https img shields io badge back on patreon red svg https www patreon com rationalanimal vui ad hoc alexa recognizer provides natural language understanding processing capability to enable easy implementation of chat bots and voice services high performance run time in only 2 lines of code require to include it and the call to process the text these can run anywhere node js is running backend browser mobile apps etc with or without internet connection has a rich set of built in intents and extensible slots equivalent to alexa s custom slots both list based and regular expression based synonyms slot flags parametrized flags transformation functions soundex matching wild card matching option lists text equivalents sets mix in post processing sentiment analysis unlimited sets of recognizers to build large segmented apps domains with state specific processing builtin and custom chainable responders sub domains trusted and non trusted etc what s in the name you may be wondering why such an odd name glad you asked here is the explanation 1 vui stands for voice user interface because this module allows building skills apps that have voice user interface 2 ad hoc because this module creates a pre configured run time for specific set s of intents requiring no further configuration at run time 3 alexa because this module started out by using alexa skill configuration files it has expanded well beyond that but can still be used as before so if you already have an alexa skill using this module should be very easy and fast and even if you don t it s still easy and fast just a little longer to configure also you can use this module simply to ease your alexa coding you can configure everything using vui ad hoc alexa recognizer saving yourself time and effort by not having to manually enter all the variations then use the included alexify utility to output alexa compatible files 4 recognizer because that s what it does recognizes and processes utterances to identify intents extract slot values and optionally provide responses and update the app state repository this module as well as related vui modules can be found here https github com rationalanimal tutorials examples documentation i have finally gotten to a point where the major features are at a good spot and i can spare some time for tutorials examples and documentation to that end i have set up a web site on github pages and included the first set of tutorials with more on the 
way here is the website https rationalanimal github io vui ad hoc alexa recognizer recognizer tutorials the first set of tutorials deals with the lower level i e recognizer functionality hello world and using hello world these two tutorials together comprise the usual hello world example one shows how to configure generate and test a recognizer json file the other tutorial shows how to write your own code to use the generated file you have options option lists that is this tutorial shows how to avoid having to manually enter and maintain a large number of related utterances great intrinsic value this one shows how to get information back from the user via built in slot types let s talk here the user is shown how to build an actual chat bot a small app that takes input from the user parses it stores some values in the state for future use and responds to the user based on the parsed information and state count on it introduces the use of what is probably the singularly most useful built in slot type number he spoke bespoke deals with how to define the simplest type of a custom slot list based custom slot known intentions deals with another built in feature shows how to use configure and turn off built in intents it s all the same to me demonstrates custom slots that use synonyms to simplify the code that has to deal with multiple values that map to a smaller subset wild card thing explains how to get values from the user that are not part of the accepted set i e using wildcard matches express yourself regularly explains how to use regular expressions based custom slots to get user input that matches a particular pattern rather than specific list of values that sounds like you explains how to use soundex matching to process words that sound similar this helps with commonly substituted words in chat bots and words that sound the same but aren t spelled the same way in voice services six of one half a dozen of the other covers text equivalents sets and their use text equivalents feature lets you define words and phrases that are equivalent to each other so that you need only to specify a simple utterance and have vui ad hoc alexa recognizer match on any variation of that utterance that may result from using text equivalents this allows solving many different issues from typos to homophones to special vocabularies etc would you like some fries with that introduces the concept of mix in aka add on processing shows how you can use mix ins to cleanly separate matching and business logic change matching through configuration and mix in application add logging and more domain tutorials finally here is the first domain tutorial hello domain this is a very simple introductory example of a domain that doesn t use the state nor any of the advanced features but does respond to every handled intent without the need for any conversation specific code articles in addition to tutorials i will from time to time publish articles related to either chatbots voice services in general or vui ad hoc alexa recognizer in particular or some of both they are located on the same website as the tutorials here is the first of these better way of building better chatbots at https rationalanimal github io vui ad hoc alexa recognizer articles betterwayofbuildingbetterchatbots note on licenses the code in this project is distributed under the mit license some data files found in the builtinslottypes directory e g colors json use values taken from wikipedia and thus they are licensed under creative commons attribution 
sharealike license such files have appropriate attribution and license information within them if you don t wish to deal with these licenses simply delete such file s from the builtinslottypes directory you will need to provide your own versions of these files note that only some of those files have a different license you don t need to delete the entire directory to remove them simply search in builtinslottypes directory for license and or attribution also afinn related json data files in the builtinmixins directory are a modification of the original files see attribution within the files distributed under apache 2 0 license installation shell npm install vui ad hoc alexa recognizer save summary this module provides the ability to match user s text input possibly from speech to text source to an intent with slot values as well as the ability to configure return values and state updates right within it there are two levels of functionality a lower level allowing a match of a string against a single recognizer returning an intent match with parsed out slot values and a higher level domain functionality that allows configuring an entire app returning not just a match but results and even updating the current state recognizer or lower level functionality npm module that provides vui voice user interface special ad hoc recognizer designed to parse raw text from the user and map it to intents this could be useful in many cases if you already have an alexa skill and would like to convert it to google assistant or some other service this module makes it really easy many skills can be converted in less than an hour also you can use this to quickly create a skill or an app even if you don t already have an alexa skill you will simply need to create the required intents utterances and optionally custom slot value files equivalent of which you d have to do anyway it uses the same two files intents and utterances that are used to configure alexa skills it also supports the beta alexa configuration but i don t recommend using it yet as this allows easy middleware implementation that can be placed between google assistant or other service and the alexa backend if you have custom slots and you want to use exact or soundex matches on those slots then you would also need file s listing these values supports almost all alexa features built in intents all major built in slot types most minor ones as well as extra features such as the ability to do wildcard or soundex matches transforming the values before sending them to be processed text equivalents matching such as misspellings equivalent words or phrases homophones and more etc additional configuration can be added through config file you can also use it without any backend service whatsoever simply use it with your javascript code same way you would use any other npm module it will provide complete utterance parsing and slot values mapping simply use simple branching code e g switch statement using the intent to complete processing keep in mind that many text parsing tasks can be trivially configured as intents utterances even if you have no intention of building a chat bot or a voice service for example if you wanted to parse spelled out numbers or even combinations of spelled out numbers and numerals you can easily setup to do it like this utterances text numberintent numberslot intents json intents intent numberintent slots name numberslot type amazon number now you can call the match function pass it any combination of spelled out and numerals and it 
will return a match on numberintent with the value of the numberslot being set to the parsed number shell node matcher js 51 thousand 2 hundred sixty 3 json name numberintent slots numberslot name numberslot value 51263 note that the number slot is coded to be able to accept both normal numbers and numbers that people spell out digit by digit or groups of digits such as zip codes or phone numbers so one two three four five will parse as 12345 etc this does mean that occasionally there may come up a way to parse the same expression in more than one way and the attempt is made to make the most likely match similarly you can parse dates etc even if that s the only thing you want to do e g you have an app where the user can type in a date simply use vui ad hoc alexa recognizer to parse it and return a date dates will match not only the exact date specification but strings such as today etc domain higher level functionality domains are a higher level of parsing than recognizers domains do use recognizer parsing but add the follow abilities define a list of recognizers to be used define application state based conditions for using a particular match e g only test against a particular recognizer if you are in a specific state allow returning of results in addition to simply matching on an intent e g if the user says how are you doing not only will it match on a greeting intent but also will return good and you allow updating of the application state right within the matching code rather than having to write the extra code to do it e g if the user says my name is bob then some portion of the state will be set to bob by the domain handling code allow nesting of the domains this is particularly useful as whole types of interactions can be encapsulated as domains and then reused it also allows breaking large apps into smaller chunks i e domains usage recognizer or lower level functionality it has two pieces of functionality run it offline to generate a recognizer json file that will be used in matching parsing the text add two lines of code to your app skill to use it to match the raw text at run time using the generated recognizer json file imagine you already have an alexa skill and you would like to port it to cortana or google assistant or even if you don t but want to create a chat bot service from scratch here are examples of files that you will have for your alexa skill or will need to create if you don t have any yet these are not complete files you can find the complete sample files in the test directory shell cat test utterances txt testintent test testintent test me testintent test please testintent test pretty please testintent test pleeeeeeease minionintent one of the minions is minionslot minionintent minionslot stateintent stateslot stateintent new england includes stateslot as one of its states blahintent here is my number blahslot use it wisely and here is another one blehslot don t squander it blahintent here is blahslot and blehslot anotherintent first is someslot and then there is someotherslot shell cat test intents json json intents intent testintent slots intent blahintent slots name blahslot type amazon number name blehslot type amazon number intent anotherintent slots name someslot type some name someotherslot type someother intent minionintent slots name minionslot type minions intent stateintent slots name stateslot type amazon us state and also here is an example of a custom slot type file shell cat test minions txt bob steve stewart beta interactionmodel js file support 
currently amazon is beta testing a new gui tool for editing skills interaction models this produces a single json file that includes all the information for the skill intents utterances custom slot types and some new bits prompts dialogs confirmations the support for this is being added to various parts of vui ad hoc alexa recognizer including the generator and alexifyer and is functional however until amazon finishes the beta testing this will not be finalized and may change note at this time i do not recommend using interaction model files while i have finished the support i have not tested it fully generate recognizer json file the first step is to generate a run time file recognizer json this file has all the information that is needed to parse user text later to create it run the generator e g shell node generator js intents test intents json utterances test utterances txt config test config json the example intents json utterances txt and config json files are included in the test directory this will produce a recognizer json in the current directory additionally there is beta support for the beta interaction model builder by amazon to use it specify interactionmodel parameter instead of intents and utterances shell node generator js interactionmodel test interactionmodel json config test config json note at this time i do not recommend using interaction model files while i have finished the support i have not tested it fully note that you can use the extra features in the interactionmodel json file just as you could with intents json and utterances txt e g options lists slot flags transcend specific slot types simply use alexifyutterances js see later to prepare interactionmodel json for import into alexa developer console for usage simply run the generator without any arguments shell node generator js and the generator command will list the needed arguments e g shell usage node users ilya alexaprojects vui ad hoc alexa recognizer vui ad hoc alexa recognizer generator js sourcebase basesourcedirectory that is the base for the other file references on the command line or in the config file this will be used for both build and run time source base unless overridden by other command line arguments buildtimesourcebase buildtimebasesourcedirectory that is the base for the other file references on the command line or in the config file at build time will override sourcebase value for build time directory if both are supplied runtimesourcebase runtimebasesourcedirectory that is the base for the other file references e g in the config file at run time will override sourcebase value for run time directory if both are supplied vuibase basevuidirectory that is the location of vui ad hoc alexa recognizer this will be used for both build and run time vui base unless overridden by other command line arguments defaults to node modules vui ad hoc alexa recognizer buildtimevuibase buildtimebasevuidirectory that is the location of vui ad hoc alexa recognizer executable files at build time will override vuibase value for build time directory if both are supplied runtimevuibase runtimebasevuidirectory that is the location of vui ad hoc alexa recognizer executable files at run time will override vuibase value for run time directory if both are supplied runtimeexebase runtimebaseexedirectory that is the location of javascript executable files at run time config configfilename specify configuration file name optional if not specified default values are used intents intentsfilename specify intents file name required there is no point in using this without specifying this file utterances utterancesfilename specify utterances file name optional this is optional only in the sense that it can be omitted but in practice it is required the only time you would invoke this function without an utterance file argument is if your skill generates only built in intents which would make it rather useless optimizations single stage optional single stage means no pre matches using wildcards depending on the recognizer this may be slower or faster suppressrecognizerdisplay does not send recognizer json to console note here that you should already have the intents json and utterances txt files as these files are used to configure the alexa skill also you can specify how to parse built in intents in the config json for example json builtinintents name amazon repeatintent enabled false will turn off parsing of the amazon repeatintent you can also specify additional utterances for built in intents either directly in the config file or in an external file json builtinintents name amazon stopintent enabled true extendedutterances enough already quit now extendedutterancesfilename test stopintentextendedutterances txt similarly you can affect built in slot types using config json builtinslots name amazon us first name extendedvalues prince abubu extendedvaluesfilename test usfirstnameextendedvalues txt this will add prince abubu and whatever names are found in test usfirstnameextendedvalues txt file to the list of first names recognized by the amazon us first name slot parse user text the second step is to use recognizer json file at run time to parse the user text and produce the output json that can be used to set the intent portion of the request json you only need 2 lines of code to be added to your app to use it js let recognizer require vui ad hoc alexa recognizer let parsedresult recognizer recognizer matchtext some text to match to intent if this is not working check to make sure you have generated your recognizer json first and that it s located in the same directory where your code is note that there are additional arguments to matchtext you can specify sorting order excluded intents and a different recognizer file
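since matchtext just returns the matched intent as json, you can drive the rest of your app with plain branching code such as a switch statement, as mentioned earlier. here is a minimal sketch, assuming the recognizer json generated from the test files above; intent and slot names appear lowercased here, as in the flattened samples in this readme, so adjust them to whatever your intents file actually uses:

```js
"use strict";
const recognizer = require("vui-ad-hoc-alexa-recognizer");

// parse the raw user text against the recognizer.json in the current directory
const result = recognizer.Recognizer.matchText("my first name is jim");

// assuming an unmatched utterance yields undefined
if (typeof result !== "undefined") {
  switch (result.name) {
  case "firstnameintent":
    console.log("hello, " + result.slots.firstnameslot.value);
    break;
  case "numberintent":
    console.log("the number was " + result.slots.numberslot.value);
    break;
  default:
    console.log("matched " + result.name);
  }
} else {
  console.log("no match");
}
```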
you can also use it from the command line to test your configuration and to see the matches for an example of how to use it assuming you cloned the code from github and ran npm test to have it configured with the test samples try shell node matcher js bob which will produce json name minionintent slots minionslot name minionslot value bob or you could specify a particular recognizer file to use e g shell node matcher js bob recognizer json or try shell node matcher js here is four hundred eighty eight million three hundred fifty two thousand five hundred twelve and also six oh three five five five one two one two which will produce json name blahintent slots blahslot name blahslot value 488352512 blehslot name blehslot value 6035551212 shell node matcher js thirty five fifty one which will produce json name fourdigitintent slots fooslot name fooslot value 3551 shell node matcher js sure which will produce json name amazon yesintent slots shell node matcher js new england includes new hampshire as one of its states which will produce json name stateintent slots stateslot name stateslot value new hampshire shell node matcher js my first name is jim which will produce json name firstnameintent slots firstnameslot name firstnameslot value jim shell node matcher js december thirty first
nineteen ninety nine which will produce json name dateintent slots dateslot name dateslot value 1999 12 31 shell node matcher js lets do it on tuesday which will produce json name dayofweekintent slots dayofweekslot name dayofweekslot value tuesday please note that matcher js is just a convenience and also serves as an example you will not be using it at run time most likely though some might find a use for it if you are porting an existing alexa skill to for example cortana you will probably deploy your code that uses the parser to some middleware layer like this alexa alexa middleware cortana skill aws lambda aws lambda skill where in the middleware aws lambda exposed via api gateway you will be able to see the raw user text from cortana passed as the message field in the request then call it same way matcher js does get the resulting json and update the intent in the request from none to the resulting json the backend lambda can then process it further
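a rough sketch of such a middleware function is below. apart from the matchtext call, everything here is an assumption to adapt to your own setup: the handler shape, the message field carrying the raw text, and the alexa style request object being forwarded to the backend skill:

```js
"use strict";
const recognizer = require("vui-ad-hoc-alexa-recognizer");

// hypothetical aws lambda handler sitting between cortana and the alexa backend
exports.handler = function(event, context, callback){
  // the raw user text arrives in the message field (per the flow described above)
  const match = recognizer.Recognizer.matchText(event.message);

  // build/augment an alexa style request, updating its intent from none to the match
  const alexaRequest = event.alexaRequest || { "request": {} }; // placeholder shape
  if (typeof match !== "undefined") {
    alexaRequest.request.intent = match;
  }
  // forwarding alexaRequest to the backend skill lambda is omitted here
  callback(null, alexaRequest);
};
```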
if you are using vui ad hoc alexa recognizer for new development you have two main options use it just for nlp to map utterances to intents which you will then use for branching code use higher level domain functionality to define much of your app skill matched custom slot values there are differences between what kind of values different services may send alexa appears to respect capitalization of the custom slot values or it sends lower case versions while cortana capitalizes the first letter under some circumstances but not always and also adds a period at the end of utterances or even other punctuation signs i ve seen entering a zip code changed from 12345 to is 12345 to keep cortana as well as other misbehaving services behaving consistently the returned matches use the capitalization of the custom slot values supplied in the config file rather than what cortana will send thus if your custom slot value in the config file is petunia then petunia will be returned even if cortana will send you petunia slot flags in some cases you would like to match on a particular slot differently from the standard algorithm for example if you are trying to get the user s first name you may want to match on anything the user says so that unusual names are matched in this case you can modify your utterances file to include special flags e g firstnameintent my first name is firstnameslot include wildcard match exclude values match these flags will be used in parsing here are the different currently available flags 1 include values match exclude values match to include exclude custom slot values in the matching pattern 2 include wildcard match exclude wildcard match to include exclude a wildcard in the matching pattern 3 soundex match to use soundex for matching will match on values that sound like desired values but are not necessarily spelled the same 4 include synonyms match exclude synonyms match to include exclude synonym values in the matching pattern this is only relevant for custom slot types that actually use synonyms 5 exclude year only dates this flag is only applied to the amazon date type slot and turns off parsing of a single number as a year this is useful when there are otherwise identical utterances that may match on a number or on a date if the year only match is allowed then there is no way to differentiate between the two 6 exclude non states this flag is only applied to the amazon us state type slot and turns off parsing of us territories and d c 7 state this is a parametrized flag see below currently it only applies to the amazon airport slot type and it restricts the matches to the specified states 8 country this is a parametrized flag see below currently it only applies to the amazon airline and amazon airport slot types and it restricts the matches to the specified countries 9 continent type these are parametrized flags see below currently they only apply to the amazon airline slot type and they restrict the matches to the specified types and continents 10 sport league these are parametrized flags see below currently they only apply to the amazon sportsteam slot type and they restrict the matches to the specified sports and leagues 11 include prior names exclude prior names currently these only apply to the amazon sportsteam and amazon corporation slot type and they include exclude the prior team or corporation names in the search default is exclude prior names if you don t specify any of these then include values match and exclude wildcard match will be used as the default also if you include by mistake both include and exclude for the same flag the default value is silently going to be used if you are concerned you can look at the generated recognizer json to see how the flags were parsed also note that soundex match will automatically imply exclude values match and exclude wildcard match flags however soundex match is only available for the custom slot types at this time it would typically not be useful at this time with only these sets of flags to specify include for both or exclude for both wildcard and value matches unless you specify soundex match if you are going to include wildcard then there is no reason to include values as well it will only slow down the parsing if you exclude both then it will be the same as if you had removed that slot from the utterance completely for this reason parsing ignores these combinations if you specify include wildcard match then only the wild card will be used if you specify both exclude values match and exclude wildcard match then only exclude wildcard match is used also note that you have to be very careful when using wildcards for example imagine this utterance instead of the above example firstnameintent firstnameslot include wildcard match exclude values match this will match on anything the user says so don t use wildcards in naked utterances ones that use nothing but slots unless you are absolutely sure that that is what you want this is why these flags exist at the utterance level rather than intent level also you should probably not specify wildcard matches on slots of many of the built in intents such as date or number this will likely not end well and it doesn t make sense for this reason parsing ignores these flags at this time on most of these slot types parsing will also ignore soundex match on non custom slot types though this may be added in the future for some built in types in the future there will be other flags added possibly specific to particular built in slot types e g i may add a flag to return only the female first names from the amazon us first name type slot or numeric values within a certain range from the amazon number type slot parameterized flags some flags can take parameters for example country continent and type flags are used to specify countries continents and types to use in the match for example if your utterances file contains these lines shell airlineintent airlineslot country canada is a canadian airline airlineintent airlineslot continent north america is a north american airline then only canadian airlines will match the
first one and only north american airlines will match the second one custom list based slot types with synonyms you can create a very simple custom slot type based on a list of simple values loaded either from config json or a separate text file but you can also load values which are objects these objects must themselves contain a value field that replaces the simple field in addition these objects can also contain a field synonyms which must be an array of strings for example here is a custom slot type defined in a config json json name kitchenstuff values spoon value pan synonyms skillet this will match on spoon pan and skillet furthermore and this is the real value of the synonyms when matching on the skillet the actual returned value will be pan otherwise you could have simply added more values instead of using synonyms
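you can see the synonym mapping in action with matcher js; the intent and slot names below (kitchenstuffintent, kitchenstuffslot) are hypothetical and assume an utterance that references a kitchenstuff type slot:

```shell
node matcher.js "hand me the skillet"
```

which would produce something like

```json
{"name": "kitchenstuffintent", "slots": {"kitchenstuffslot": {"name": "kitchenstuffslot", "value": "pan"}}}
```

note that the returned value is the canonical pan, not the synonym skillet that was actually typed.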
couple of important points you can mix strings and objects within config json if you want to specify json objects in a separate file then you must use a file with a json extension and it must contain valid json whatever you specify in a file that does not have json extension will be loaded as plain strings one per line even if it contains valid json synonyms and soundex custom slot type that has synonyms will work with soundex flag just like one without synonyms custom slot types based on regular expressions in addition to the normal custom type slots one based on a list of values you can also define a custom slot type based on a regular expression this might be useful if you are looking for some value that has a particular format for example a serial number for a product might have a specific format and you may be looking for it in user input it would be impractical to specify all the serial numbers even if you had the up to date list instead you can define a custom slot that will match the regular expression for the serial number and return it e g given config json text name customregexp customregexpstring abc123 xyz789 and otherwise standard intents and utterances files when shell node matcher js here is xyz789 if you see it will produce json name customregexpintent slots customregexpslot name customregexpslot value xyz789 you can also load the reg ex for a custom slot from a file this can be useful for sharing the same reg ex between many different recognizers to do this use customregexpfile member instead of customregexpstring text name customregexp customregexpfile customregexpfile txt options list instead of creating multiple similar utterance lines like you would do with alexa utterances you can specify variations with options lists text dateintent i want to wish to like to would like to can meet you with you dateslot is equivalent to these text dateintent i want to meet you dateslot dateintent i want to meet with you dateslot dateintent i wish to meet you dateslot dateintent i wish to meet with you dateslot dateintent i like to meet you dateslot dateintent i like to meet with you dateslot dateintent i would like to meet you dateslot dateintent i would like to meet with you dateslot dateintent i can meet you dateslot dateintent i can meet with you dateslot note that you can specify an options list that s optional by omitting one value text dateintent i want to wish to like to would like to can meet you with you dateslot will match on for instance text i want to meet tomorrow text equivalents similarly to the options list text equivalents allow variations in the text to be matched unlike the options list you don t have to manually add all of the possibilities instead simply enclose the text that should match using equivalents and this module will do it for you e g text hiintent hi what time is it is equivalent to these text hiintent hi what time is it hiintent hello what time is it hiintent hey what time is it hiintent how are you what time is it hiintent good morning what time is it hiintent good day what time is it hiintent good night what time is it hiintent hi there what time is it etc at this time the following implementation is in place it uses two data sets a very small default data set and a common misspellings data set it will match both single word substitutions and phrase substitutions this will soon be expanded to include an independent npm module containing additional values so that they can be updated independently of this module as well as the ability to add your own modules to support special domain equivalents for example you can add slang data set or medical jargon data set so an example similar to the above that substitutes both the phrases and the individual words and uses multiple data sets e g correcting for typos skipping optional words like please etc could be text hitimeintent how are you can you tell me please what is the acceptable time to come to work is equivalent to these text hitimeintent how are you can you tell me please what is the acceptable time to come to work hitimeintent how are you doing can you tell me please what is the acceptable time to come to work hitimeintent how are you can you tell me what is the acceptable time to come to work hitimeintent how are you doing can you tell me what is the acceptable time to come to work hitimeintent how are you can you tell me please what is the acceptible time to come to work hitimeintent how are you doing can you tell me please what is the acceptible time to come to work hitimeintent how are you can you tell me what is the acceptible time to come to work hitimeintent how are you doing can you tell me what is the acceptible time to come to work hitimeintent hi can you tell me please what is the acceptable time to come to work hitimeintent hello can you tell me please what is the acceptable time to come to work hitimeintent good morning can you tell me please what is the acceptable time to come to work hitimeintent good day can you tell me please what is the acceptable time to come to work hitimeintent good evening can you tell me please what is the acceptable time to come to work hitimeintent good night can you tell me please what is the acceptable time to come to work hitimeintent whats up can you tell me please what is the acceptable time to come to work hitimeintent hey can you tell me please what is the acceptable time to come to work hitimeintent hi can you tell me what is the acceptable time to come to work hitimeintent hello can you tell me what is the acceptable time to come to work hitimeintent good morning can you tell me what is the acceptable time to come to work hitimeintent good day can you tell me what is the acceptable time to come to work hitimeintent good evening can you tell me what is the acceptable time to come to work hitimeintent good night can you tell me what is the acceptable time to come to work hitimeintent whats up can you tell me what is the acceptable time to come to work hitimeintent hey can you tell me what is the acceptable time to come to work hitimeintent hi can you tell me please what is the acceptible time to come to work hitimeintent hello can you tell me please what is the acceptible time to come to work hitimeintent good
morning can you tell me please what is the acceptible time to come to work hitimeintent good day can you tell me please what is the acceptible time to come to work hitimeintent good evening can you tell me please what is the acceptible time to come to work hitimeintent good night can you tell me please what is the acceptible time to come to work hitimeintent whats up can you tell me please what is the acceptible time to come to work hitimeintent hey can you tell me please what is the acceptible time to come to work hitimeintent hi can you tell me what is the acceptible time to come to work hitimeintent hello can you tell me what is the acceptible time to come to work hitimeintent good morning can you tell me what is the acceptible time to come to work hitimeintent good day can you tell me what is the acceptible time to come to work hitimeintent good evening can you tell me what is the acceptible time to come to work hitimeintent good night can you tell me what is the acceptible time to come to work hitimeintent whats up can you tell me what is the acceptible time to come to work hitimeintent hey can you tell me what is the acceptible time to come to work note that the matching algorithm is pretty efficient and does not actually try to match on these utterances but instead uses a single regular expression removing flags cleaning up the utterance file there is also a utility available to clean up utterance files for use with alexa this may be needed if you want to use a single file as your utterances file for both alexa and porting projects since the slot flags don t exist in alexa they need to be stripped from the utterance file for that use alexifyutterances js utility shell node alexifyutterances js utterances test utterances txt intents test intents json output testutterances txt noconfig result was saved to testutterances txt you can now import testutterances txt into the alexa developer console note that not only will alexifyutterances js remove flags it will also unfold options lists into multiple utterances as well as unfold any text equivalents so that you can use them with alexa this feature would be useful even if you only want to use this module to reduce the tedium of entering multiple lines into alexa and don t even intend to create your own chat bot or convert your alexa skill there is also support for the beta amazon interaction model editor you can edit the files it generates to add features supported by this module e g you can add options lists or slot flags or even transcend native slot types then run it through the alexifyutterances js and the result will be importable back into alexa console shell node alexifyutterances js interactionmodel test interactionmodel json output alexifiedmodel json noconfig result was saved to alexifiedmodel json nominal support for some built in list slots many of the list slots e g amazon actor have very large value lists these are often not needed in a typical vui skill thus a compromise support is provided for them they are there and can be used but they only have a few values if you actually do have a need for them you have two options 1 you can provide your own expansion list of values in the config json file 2 you can use wildcard slot matching to match on any value the user can provide transform functions custom transform functions you can transform matched values before returning them you do this by specifying transform functions in the config file here are examples for the built in and custom slot types json customslottypes name some
values apple star fruit pear orange transformsrcfilename test transformsome js builtinslots name amazon us state transformsrcfilename test transformusstate js name amazon month transformsrcfilename test transformfirstwordtitlecase js js you then put into the specified transformsrcfilename file the source code for the function to do the transformation then when you type shell node matcher js january is the best month you will get note the capitalized first letter of the month json name monthintent slots monthslot name monthslot value january see the test directory for more examples there are many reasons you may want to do this transforming states into postal code or fixing issues with speech recognition etc for example a particular service may not understand some spoken phrases well one that i ve run into is the word deductible is understood to be the duck tibble this will never match well you could add this to your list of acceptable values this will only solve half a problem once you match it and send it to your alexa backend it will choke on this so you can add a transform function to map the duck tibble to deductible before sending it off to alexa backend when you write a custom transform function be aware that it has this signature javascript function sometransformfunction value intentname slotname slottype do something and return transformed value and that it returns a transformed value or undefined if the input value is undefined or null you can use the other arguments to change how your function may transform the matched value for example you may specify a particular transform function of a slot type but you may check within your function that the slot name equals a particular slot name and change the transformation
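for instance, a transform function for the deductible example above might look like this sketch; the file name transformdeductible js is hypothetical, but the signature is the one just shown:

```js
"use strict";
// hypothetical test/transformdeductible.js - maps a common misrecognition
// back to the intended word before the value is returned
module.exports = function(value, intentName, slotName, slotType){ // eslint-disable-line no-unused-vars
  if(typeof value === "undefined" || value === null){
    return undefined;
  }
  return value.replace(/the duck tibble/gi, "deductible");
};
```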
built in transform functions in addition to being able to write your own custom transform functions you can also use some built in ones the current list is text addanglebrackets surrounds the matched value with addcurlybrackets surrounds the matched value with addparentheses surrounds the matched value with addsquarebrackets surrounds the matched value with codetostate converts passed in us state postal code to the corresponding state name does not convert territories formatasusphonenumber1 formats transcend us phone number matched value as 111 111 1111 formatasusphonenumber2 formats transcend us phone number matched value as 111 111 1111 formatasusphonenumber3 formats transcend us phone number matched value as 111 111 1111 formatasusphonenumber4 formats transcend us phone number matched value as 111 111 1111 removedigits removes all digits from the matched value removedollar removes all occurrences of s from the matched value removenondigits removes all non digits from the matched value removenonalphanumericcharacters removes all non alphanumeric characters from the matched value removenonwordcharacters same as removenonalphanumericcharacters but allows underscore removes anything that s not a number letter or underscore from the matched value removeperiod removes all occurrences of from the matched value removepoundsign removes all occurrences of s from the matched value removewhitespaces removes all continuous sequences of any white space characters from the matched value replacewhitespaceswithspace replaces all continuous sequences of any white space characters in the matched value with a single space statetocode converts passed in us state name to the corresponding postal code does not convert territories tolowercase converts the matched value to lower case touppercase converts the matched value to upper case you can also see the currently available ones in the builtintransforms directory to use them specify transformbuiltinname member instead of the transformsrcfilename json name meaningless values foo bar transformbuiltinname touppercase chaining transform functions both custom and built in transform functions can be chained simply by specifying an array instead of a single value in the configuration file for example json name meaningless values foo bar transformbuiltinname touppercase addparentheses addsquarebrackets will apply all the specified transforms mix ins sometimes you may want to do some additional processing of the result before returning it it could be almost anything for example add logging to all matches compute sentiment score and add it to the result adjust update replace matched slot values and many other possible examples mix in or add on processing allows you to do that and you can do it mostly through configuration some coding may be required built in mix ins currently there are only about seven built in mix ins here is the list with a short description for each adddefaultslots can be used to inject slot s with hard coded values changeintent can be used to change the matched intent to another one charactercount counts the characters in the matched utterance and attaches this count to the result countregexp counts the occurrence of the specified reg exp and attaches this count to the result noop a simple logging mix in does not modify the result in any way simply logs it to console removeslots removes all matched slots from the result wordcount counts the words in the matched utterance and attaches this count to the result imagine that you update your config json file to add the mixins section like this text mixins bundles bundlename loggingmixin mixincode mixinbuiltinname noop arguments log true appliesto bundlename loggingmixin intentmatchregexstring what this does is defines a mix in bundle i e bundle of the code noop and argument and gives it a name loggingmixin then it specifies that this bundle applies to every intent i e appliesto field has a pairing of this bundle with the intentmatchregexstring which matches on every intent as a result the noop mix in will run after every match and log the results you can modify which intents it applies to by changing the matching reg exp the code that will actually be run is noop js located in the builtinmixins directory javascript use strict module exports function standardargs customargs eslint disable line no unused vars let intentname let utterance let priorresult if typeof standardargs undefined intentname standardargs intentname utterance standardargs utterance priorresult standardargs priorresult if typeof customargs undefined customargs log true console log noop built in mix in called if typeof standardargs undefined console log noop standardargs json stringify standardargs else console log noop standardargs undefined if typeof customargs undefined console log noop customargs json stringify customargs else console log noop customargs undefined note the signature two arguments are passed in both are objects the first one is passed to your mix in by vui ad hoc alexa recognizer automatically it contains intent name utterance that matched and the result to be returned to the user the second one contains the arguments specified in the config json log true passed to this function on your behalf by vui ad hoc alexa recognizer
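to run the loggingmixin bundle above for only some intents, narrow intentmatchregexstring; a sketch is below (the key casing is an assumption reconstructed from the flattened sample, so compare it with your own generated config):

```json
"appliesTo": [
  { "bundleName": "loggingmixin", "intentMatchRegExString": "(dateintent)|(numberintent)" }
]
```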
you might be wondering why some of these exist after all why have code that removes parts of the result produced by the matching process here is a simple for instance you are encountering issues with some intent s you don t want to delete the code nor do you want to change skill definitions you just want to temporarily disable these intents so you may remove all the slot values then change the intent name to something like removedintent which will handle any such cases and will respond to the user with i am sorry i didn t get that essentially disabling the intents or you may have decided that you want to experiment with changing the conversation flow and remap an intent to a closely related but different one custom mix ins in addition to the built in functionality you can define your own code to run for some intents imagine you have an intent on which you want to do some post processing for example you may have an intent that collects some numerical input from the user you might ask the user how many television sets do you have and you may define multiple utterances to recognize some contain just the number some might be a full sentence containing a number i have 2 television sets but the user might say something like i have a television set or i have a couple of television sets now these two last utterances do not contain an explicit number but they do implicitly specify the count you could construct several intents numberoftvsetintent onetvsetintent twotvsetsintent and then map corresponding utterances to their intents and the handler code would know about the implied counts in the 1 and 2 tv sets intents however that requires a complication of the code and potentially mixing parsing and business logic together wouldn t it be nice if we simply could somehow extract the counts 1 and 2 respectively and add them to the result as slot values so that the business logic would simply use them well that s what a custom mix in would let you do text mixins bundles bundlename tvcountmixin mixincode mixinsrcfilename injecttvcountslotvalue js arguments appliesto bundlename tvcountmixin intentmatchregexstring tvcountintent now after a successful match on tvcountintent injecttvcountslotvalue js will run and add the corresponding slot and value to the result what would this code look like something like this javascript use strict module exports function standardargs customargs eslint disable line no unused vars let intentname let utterance let priorresult if typeof standardargs undefined intentname standardargs intentname utterance standardargs utterance priorresult standardargs priorresult if typeof priorresult undefined priorresult null typeof priorresult slots undefined priorresult slots null typeof priorresult slots countslot undefined if utterance endswith a television set priorresult slots countslot name countslot value 1 else if utterance endswith a couple of television sets priorresult slots countslot name countslot value 2 note the signature just as with the built in mix ins two arguments are passed in both are objects with potentially multiple fields the first one is passed to your mix in by vui ad hoc alexa recognizer automatically it contains intent name utterance that matched and the result to be returned to the user the second one contains the arguments specified in the config json nothing in this case here this code checks to see if the result already has a countslot value if not it will attempt to determine whether it s 1 or 2 by looking at the utterance and updating the result with injected countslot
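with that bundle in place, a match on i have a couple of television sets would come back with the injected slot, roughly like this (the shape mirrors the other match results in this readme):

```json
{
  "name": "tvcountintent",
  "slots": {
    "countslot": { "name": "countslot", "value": 2 }
  }
}
```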
applying mix ins when there is no match sometimes you may want to apply a particular mix in not when there is an intent match but when there isn t one a typical common example is replacing all non matches with a default intent e g unknownintent you can easily do this by specifying unmatched true in your config json json bundlename setintentmixin unmatched true you can even combine both matched intent and unmatched specifications json bundlename loggingmixin intentmatchregexstring unmatched true the above will execute on every match attempt whether it successfully matches or not
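as a sketch, such a bundle could pair the built in changeintent mix in with unmatched true to map every non match to a default unknownintent; note that the argument name used to pass the target intent below is an assumption, so check the changeintent source in the builtinmixins directory for the real one:

```json
"mixins": {
  "bundles": [
    {
      "bundleName": "setintentmixin",
      "mixinCode": { "mixinBuiltInName": "changeintent" },
      "arguments": { "newIntentName": "unknownintent" }
    }
  ],
  "appliesTo": [
    { "bundleName": "setintentmixin", "unmatched": true }
  ]
}
```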
sentiment analysis afinn you can use afinn built in mix in for sentiment analysis currently it supports afinn96 and afinn111 data sets also there is an afinn96 misspelled words data set that includes misspelled versions of the words in afinn96 more data sets are on the way some of them will be alternative base sets afinn165 others are additional data sets that can be added to the base set such as scored misspelled words or emoji data set this mix in takes a single argument and returns a single score for example given this config snippet json bundlename afinnsentimentbundle mixincode mixinbuiltinname afinn arguments ratingdatasetfiles afinn96 json afinn96misspelled json might attach this sentiment afinn score to the result json name afinnintent slots sentiment afinn score 3 precompiled sentiment data sets when you specify the data sets for sentiment analysis you should be aware that it may have some performance implications for your convenience you can specify the data set s individually as many as you d like as part of the ratingdatasetfiles array however this would then mean that the sentiment analysis code would have to do extra work at run time namely merge the data sets sort remove duplicates etc you can eliminate these steps if you use precompiled data sets these include one or more data sets already merged sorted etc you can specify only one such file since the intent is to create a single precompiled file that does not need to be processed further so instead of the ratingdatasetfiles array please use precomputeddataset field json bundlename afinnsentimentbundle mixincode mixinbuiltinname afinn arguments precomputeddataset afinn96withmisspelledwords precompiled json currently there is only one precomputed data set afinn96withmisspelledwords precompiled json but you can make a set yourself if you need it making your own precompiled sets if you want to create a custom precompiled set you can use provided afinndatasetcombiner js utility to do so you can run it without arguments to get the usage info but it s really simple shell node afinndatasetcombiner js i input file1 input file2 o output file e g shell node afinndatasetcombiner js i builtinmixins afinn96 json builtinmixins afinn96misspelled json builtinmixins afinnemoticon 8 json o builtinmixins afinn96withmisspelledwordsandemoticons precompiled json dollar values if a service like cortana passes a dollar value e g 1000 it will be mapped to 1000 dollars as would be expected by an alexa skill note that if you want to test it with matcher js you have to either escape the character or enclose the whole string in rather than to avoid command line handling of shell node matcher js the first price is 1000 and the second price is 525000 which will produce json name priceintent slots priceoneslot name priceoneslot value 1000 pricetwoslot name pricetwoslot value 525000 note that this is identical to shell node matcher js the first price is 1000 dollars and the second price is 525000 dollars which will produce json name priceintent slots priceoneslot name priceoneslot value 1000 pricetwoslot name pricetwoslot value 525000 trailing punctuation trailing periods exclamation signs and question marks are ignored during parsing commas in numeric slots any commas within numeric input e g 20 000 are ignored during parsing optimizations note now that multi stage matching has been enabled the performance should be a lot better for many previously slow scenarios however you can still make it faster by arranging for the parsing order and excluding some intents from parsing in some interesting cases multi stage matching is actually slower sometimes by a large factor than single stage matching to accommodate such cases you can generate recognizer files without multi stage matching to do so simply add optimizations single stage to the generator command line shell node generator js intents test intents json utterances test utterances txt config test config json optimizations single stage intent parsing order you can pass to the matching call the name s of the intents that you want to try to match first currently it only supports custom intents but that s not a problem since built in intents are very fast then this call will likely execute much faster since most of the time you know what the next likely answers i e utterances are going to be you can provide them to the matching call for example the following call will try to match countryintent first javascript let result recognizer recognizer matchtext have you been to france countryintent intent exclusion in addition to the intent parsing order you can also pass a list of intents to be excluded from the matching process this is useful if you have intents that have very large sets of custom values and you are pretty sure you don t want to parse them in a particular place in your skill i e if you are in a flow that does not include some intents then you should be able to exclude them from parsing the following call will try to match countryintent first and will not even try to match firstnameintent javascript let result recognizer recognizer matchtext have you been to france countryintent firstnameintent alternate recognizer files in addition to the intent parsing order and intent exclusion lists you can pass an alternate recognizer file to use in the matching call note that the normal behavior is to assume a recognizer named recognizer json explicitly specifying the recognizer simply overrides the default javascript let result recognizer recognizer matchtext have you been to france countryintent firstnameintent alternativerecognizer this can be used both for performance optimization as well as for breaking up large skills apps into smaller chunks typically by functionality though now that domain functionality has been added you should probably use domains for modularizing your app unless there is a reason not to let s say you have a large skill that has several logical flows in it for example you can have a travel skill that lets you book a hotel and or a car check for special events reserve a table at a restaurant etc each of these may have its own logic its flow so you may define a separate set of intents and utterances and custom slot values for each then use a session variable to keep track of the current flow for each flow you can generate its own recognizer file then at run time use the recognizer for the flow that the user is in
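a sketch of that run time selection is below; the file names and the session field are hypothetical, and the four argument matchtext call mirrors the alternate recognizer example above (with empty parsing order and exclusion lists):

```js
"use strict";
const recognizer = require("vui-ad-hoc-alexa-recognizer");

// hypothetical per-flow recognizers, each generated separately at build time
const hotelRecognizer = require("./hotelrecognizer.json");
const diningRecognizer = require("./diningrecognizer.json");

function matchForFlow(userText, session){
  // pick the recognizer for the flow the user is currently in; this assumes the
  // alternate recognizer argument accepts the loaded json (see the package docs)
  const flowRecognizer = (session.currentFlow === "hotel") ? hotelRecognizer : diningRecognizer;
  return recognizer.Recognizer.matchText(userText, [], [], flowRecognizer);
}
```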
this has multiple advantages performance will increase the skill app can be developed separately by different teams each having its own skill app portion that they are working on and they will update only the recognizer for their portion options list using options lists instead of multiple similar utterances may improve performance testing with a simple one slot date example and an utterance that unfolds to about 3800 utterances reduces the time from 330 ms to 74 ms on a desktop computer considering that if you run it on aws lambda which is much slower than a typical higher end desktop computer you may be shaving seconds off of your time which for voice interactions is quite important soundex support soundex support has been added at the utterance level for custom slot types you can now specify a soundex match flag for a custom slot type and soundex match will be used this allows matching of expressions that are not exact matches but approximate matches shell cat utterances txt where utterances txt includes this line minionintent another minion is minionslot soundex match then shell node matcher js another minion is steward will return json name minionintent slots minionslot name minionslot value stewart note that stewart matched on steward domain higher level functionality domainrunner js more documentation for domains is coming meanwhile you can test your domain files using domainrunner js utility to see the usage simply run it shell node domainrunner js will return text usage node domainrunner js domain path to a domain state path to state json outputstate true false builtinaccessor basic readonly to exit type exit if you specify the path to the domain file and to the state json object then you will see a prompt once you run it shell please type user text if you do you will see the results being returned the domainrunner js will continue running and accepting user input and possibly updating the state object until you kill the process or type exit domain configuration you can create a domain rather easily for the simplest setup all you need is an existing recognizer json file and you are in business imagine you already have a recognizer file lets call it myrecognizer json that s located in your current directory you would also need a state json file for now you can simply create a file named mystate json in the current directory that contains an empty object then you can define a domain json like this json description simplest domain recognizers key mine path myrecognizer json states matchcriteria default matchspecs recognizer mine you could now test it with domain runner shell node domainrunner js you will see a prompt assuming you have a corresponding dateintent defined and you type in tomorrow you will see something like this shell please type user text tomorrow your text was tomorrow domain response match name dateintent slots dateslot name dateslot value 2017 08 27 please type user text returning hard coded result this is nice it works and it shows how easy it is to set up a domain however there is really nothing new here yet how about specifying actual results we can do it quite easily by adding just a few more values to the domain file json description simplest domain recognizers key mine path myrecognizer json states matchcriteria default matchspecs recognizer mine responder result directvalue text thank you notice that we ve added the result field this specifies what the returned results will be in this case they will consist of just one object that has a single field named text with the value thank you run the domain runner again and see the results shell please type user text tomorrow
domain higher level functionality domainrunner js more documentation for domains is coming meanwhile you can test your domain files using domainrunner js utility to see the usage simply run it shell node domainrunner js will return text usage node domainrunner js domain path to a domain state path to state json outputstate true false builtinaccessor basic readonly to exit type exit if you specify the path to the domain file and to the state json object then you will see a prompt once you run it shell please type user text if you do you will see the results being returned the domainrunner js will continue running and accepting user input and possibly updating the state object until you kill the process or type exit domain configuration you can create a domain rather easily for the simplest setup all you need is an existing recognizer json file and you are in business imagine you already have a recognizer file let s call it myrecognizer json that s located in your current directory you would also need a state json file for now you can simply create a file named mystate json in the current directory that contains an empty object then you can define a domain json like this json description simplest domain recognizers key mine path myrecognizer json states matchcriteria default matchspecs recognizer mine you could now test it with domain runner shell node domainrunner js you will see a prompt assuming you have a corresponding dateintent defined and you type in tomorrow you will see something like this shell please type user text tomorrow your text was tomorrow domain response match name dateintent slots dateslot name dateslot value 2017 08 27 please type user text returning hard coded result this is nice it works and it shows how easy it is to set up a domain however there is really nothing new here yet how about specifying actual results we can do it quite easily by adding just a few more values to the domain file json description simplest domain recognizers key mine path myrecognizer json states matchcriteria default matchspecs recognizer mine responder result directvalue text thank you notice that we ve added the result field this specifies what the returned results will be in this case they will consist of just one object that has a single field named text with the value thank you run the domain runner again and see the results shell please type user text tomorrow your text was tomorrow domain response match name dateintent slots dateslot name dateslot value 2017 08 27 result text thank you you can see that there is now a result field in the domain s response that has the value we ve specified returning one of several hard coded results at random if you look closely at the domain file you ll see that we are specifying just one value you can specify many values with one of them being chosen at random like this json description simplest domain recognizers key mine path myrecognizer json states matchcriteria default matchspecs recognizer mine responder result directvalues pickmethod random values text thanks a bunch text danke text thank you if you run domain runner again you will see one of the three messages displaying at random every time you type tomorrow shell please type user text tomorrow your text was tomorrow domain response match name dateintent slots dateslot name dateslot value 2017 08 27 result text danke returning one of several hard coded results at random without repeating this is better but still very simple what if you wanted the replies to not repeat at least until all of them were used up you can do that too by simply changing the value of the pickmethod field from random to randomdonotrepeat and adding a repeatselector field json description simplest domain recognizers key mine path myrecognizer json states matchcriteria default matchspecs recognizer mine responder result directvalues pickmethod randomdonotrepeat repeatselector squirrelledawayalreadyused values text thanks a bunch text danke text thank you now if you run domain runner again you will see that the values don t repeat at least until you use up all 3 then the cycle starts again how is this done quite simple to see it run the domain runner with the outputstate true option and you should see something like this shell please type user text tomorrow your text was tomorrow domain response match name dateintent slots dateslot name dateslot value 2017 08 27 result text thank you state object squirrelledawayalreadyused text thank you please type user text notice that the state object which was empty to begin with mystate json now has a field squirrelledawayalreadyused it s an array and it contains the values of the outputs that have already been used every time a particular output is provided this field is updated to include it so it will not be used again until all the values have been used up the squirrelledawayalreadyused field name comes from the domain configuration you can specify anything you want as the name the reason by the way for specifying the field name is to avoid collisions and overwriting some portions of the state that you didn t mean to overwrite this way you can pick a name for the field to keep track of the used values
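The randomdonotrepeat mechanism just described can be modeled in a few lines. This is only a sketch of the idea (pick among unused values, remember the pick in the state under the repeatselector key, reset once exhausted), not the module's actual source:

```js
// Sketch of a randomdonotrepeat picker; illustrative only.
function pickRandomDoNotRepeat(values, state, repeatSelector) {
  let used = state[repeatSelector] || [];
  let remaining = values.filter(v => !used.some(u => JSON.stringify(u) === JSON.stringify(v)));
  if (remaining.length === 0) { // all values used up -- the cycle starts again
    used = [];
    remaining = values.slice();
  }
  const choice = remaining[Math.floor(Math.random() * remaining.length)];
  state[repeatSelector] = used.concat([choice]); // remember it so it is not repeated
  return choice;
}

const state = {};
const values = [{ text: 'thanks a bunch' }, { text: 'danke' }, { text: 'thank you' }];
console.log(pickRandomDoNotRepeat(values, state, 'squirrelledawayalreadyused'));
console.log(state.squirrelledawayalreadyused); // the pick is now squirrelled away
```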
combining multiple results you can specify multiple results to be returned when you do you can specify how to combine them to do so specify responders plural rather than responder field for example json description simplest domain recognizers key mine path myrecognizer json states matchcriteria default matchspecs recognizer mine responders result combinerule setto directvalues pickmethod randomdonotrepeat repeatselector squirrelledawayalreadyused values text thanks a bunch text danke text thank you result combinerule mergeappend directvalues pickmethod randomdonotrepeat repeatselector squirrelledawayalreadyused2 values text second text 1 ssml speak thanks a bunch speak videos http someotherurl com text second text 2 ssml speak thanks a bunch with a card speak videos http somethirdurl com card title card title here we have two responders the first one is just as before except that it has a combinerule field with setto value this tells the code to reset the result to the output of this responder the second one has a combine rule set to mergeappend this will attempt to merge and or append the second result with the first one text please type user text tomorrow your text was tomorrow domain response match name dateintent slots dateslot name dateslot value 2017 08 27 result text danke second text 2 ssml speak thanks a bunch with a card speak videos http somethirdurl com card title card title state object squirrelledawayalreadyused text danke squirrelledawayalreadyused2 text second text 2 ssml speak thanks a bunch with a card speak videos http somethirdurl com card title card title as you can see not only have the two results been merged different fields from both have been added to the final result but also where there are the same fields present in both they have been appended see the text fields when combining different outputs the following happens text fields are concatenated with a space in between ssml fields contents are concatenated and surrounded with a single set of speak tags e g speak one speak concatenated with speak two speak will result in speak one two speak videos arrays are combined separate card elements are added to an array merge and replace results in addition to merging appending you can also use mergereplace method of combining when you do that non conflicting fields from the results will be added to the final output however when the fields are conflicting e g two text fields instead of being appended they will be replaced by the later result so if you were to change the previous domain file to this json description simplest domain recognizers key mine path myrecognizer json states matchcriteria default matchspecs recognizer mine responders result combinerule setto directvalues pickmethod randomdonotrepeat repeatselector squirrelledawayalreadyused values text thanks a bunch text danke text thank you result combinerule mergereplace directvalues pickmethod randomdonotrepeat repeatselector squirrelledawayalreadyused2 values text second text 1 ssml speak thanks a bunch speak videos http someotherurl com text second text 2 ssml speak thanks a bunch with a card speak videos http somethirdurl com card title card title and re run domain runner you would get this output text please type user text tomorrow your text was tomorrow domain response match name dateintent slots dateslot name dateslot value 2017 08 27 result text second text 2 ssml speak thanks a bunch with a card speak videos http somethirdurl com card title card title state object squirrelledawayalreadyused text thank you squirrelledawayalreadyused2 text second text 2 ssml speak thanks a bunch with a card speak videos http somethirdurl com card title card title note that the text value comes from the second responder only setto and ignore combine rules you ve already seen the setto combine rule it s used in the first responder it s designed not to merge two results but rather to set the result to the second one if its combine rule is setto ignore combine rule is the opposite the result if any produced by the responder with this combine rule is ignored completely this exists to support the responders that are there primarily to update the state rather than produce the output sometimes you may also want to use it to temporarily disable the output from a responder without actually deleting it
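As a mental model of the mergeappend rule, the sketch below combines two result objects following the conventions listed above (text concatenation, speak tag merging, video arrays combined, separate cards collected into an array). It illustrates the documented behavior and is not the real implementation:

```js
// Illustrative sketch of the documented mergeappend behavior -- not the module's source.
const stripSpeakTags = ssml => ssml.replace(/<\/?speak>/g, '').trim();

function mergeAppend(first, second) {
  const merged = { ...first };
  for (const [key, value] of Object.entries(second)) {
    if (merged[key] === undefined) {
      merged[key] = value;                         // non-conflicting fields are simply added
    } else if (key === 'text') {
      merged.text = `${merged.text} ${value}`;     // text fields concatenated with a space
    } else if (key === 'ssml') {
      merged.ssml = `<speak>${stripSpeakTags(merged.ssml)} ${stripSpeakTags(value)}</speak>`;
    } else if (key === 'videos') {
      merged.videos = merged.videos.concat(value); // video arrays are combined
    } else if (key === 'card') {
      merged.card = [].concat(merged.card, value); // separate cards collect into an array
    }
  }
  return merged;
}

console.log(mergeAppend(
  { text: 'danke', ssml: '<speak>one</speak>' },
  { text: 'second text 2', ssml: '<speak>two</speak>', videos: ['http://somethirdurl.com'] }
));
// => { text: 'danke second text 2', ssml: '<speak>one two</speak>', videos: [...] }
```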
custom responders you can create completely custom responders by providing function body source right within the domain file for example json result functionsource return text thanks when this function is created at run time it will be created with these 3 arguments match stateaccessor selectorarray and the corresponding values will be passed in if you need to go ahead and use them in your function body custom responders modules while it s all well and good to add a function body right within the domain json file it s a little problematic the source code must be escaped to be part of the json file if you make a mistake in escaping it the function will not work additionally you can t unit test it by itself so for longer more complicated functions it s better to put them into a separate module and require the module you can do that using functionmodule field json result combinerule setto functionmodule test greetingdomain customresponderfunction js where the contents of test greetingdomain customresponderfunction js are javascript use strict let responderfunction function match stateaccessor selectorarray let intent match name if intent greetingintent stateaccessor mergereplacestate selectorarray customfunctionmodulewasrun true return text hi from the custom function module module exports responderfunction now if you re run the domain runner you will get this text please type user text hi there your text was hi there domain response match name greetingintent slots result text hi from the custom function module hello ssml speak hello speak state object greetingdomain customfunctionmodulewasrun true greetingalreadyused text hello ssml speak hello speak note that the function correctly ran the result includes the text from it combined with the text from other responders state object was correctly adjusted as well setting state object directly so now you have seen how simply returning a particular value can update the state randomdonotrepeat pick method but that is part of the default built in behavior you can also directly update the state to do that simply add an updatestate field to your responder e g json result combinerule ignore directvalue text ignore this text updatestate updaterule mergereplace updateselector someuselessvalue some other useless value directvalue update result of replacemerge updaterule first notice that in the above example while we do return a result field the combinerule is ignore so this responder will not contribute to the returned result the updatestate field contains a couple of fields that should look somewhat familiar now updaterule is similar to the combinerule of the result field but applies to how the state object is to be updated by this responder in this example we are specifying that the value of directvalue field is to be merged replacing existing values where conflicting with the current state object you can also use setto here to replace without merging but what about the updateselector well here is where you specify which part of the state object is to be updated in this example if we run domain runner again with the option to show state we ll see this text state object squirrelledawayalreadyused text thanks a bunch squirrelledawayalreadyused2 text second text 2 ssml speak thanks a bunch with a card speak videos http somethirdurl com card title card title someuselessvalue some other useless value update result of replacemerge updaterule so the other updates to the state object still take place and then the direct update of the state object happens right where the selector specified it
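One plausible way to picture an updatestate directive with a mergereplace updaterule is a recursive merge applied at the location named by the updateselector. The helpers below are a hypothetical sketch; the real logic lives behind the pluggable accessors discussed next:

```js
// Sketch only: applying an updatestate directive with a mergereplace rule.
function mergeReplace(target, update) {
  for (const [key, value] of Object.entries(update)) {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      target[key] = mergeReplace(target[key] || {}, value); // recurse into nested objects
    } else {
      target[key] = value; // conflicting scalar fields are replaced, not appended
    }
  }
  return target;
}

// Hypothetical helper: the selector names which part of the state is touched.
function applyUpdateState(state, updateSelector, directValue) {
  state[updateSelector] = mergeReplace(state[updateSelector] || {}, directValue);
  return state;
}

const state = { someuselessvalue: { kept: true } };
applyUpdateState(state, 'someuselessvalue', { note: 'update result of replacemerge updaterule' });
console.log(state); // existing fields kept, conflicting ones replaced, update placed at the selector
```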
note currently only directvalue field is supported but other ways including user custom functions will be added shortly important note on selectors selectors are used in multiple places in domains this is the first use that you ve seen but the concept is the same elsewhere also in this particular example the selector is used logically i e the way you d expect that s because behind the scenes domainrunner js is using a built in accessor collection of functions that actually access the state object there are two built in accessors at this time the other one is a read only accessor so if you used that one none of the changes to the state object would take place more built in accessors will be coming in the future but you can also write your own if you do that you can use the selector in different ways for instance if your state object is saved in a nosql database the selector may be a key that s used to look up portions of the state it s up to you how you implement it think of the state as not being manipulated directly rather manipulated through accessors so you could create a react compatible accessor that will treat the state as read only but will issue updates to the state via a separate mechanism the domain code doesn t care and the accessors are designed to be pluggable built in accessors there are two built in accessors basic and readonly they work the same way but the read only accessor does not update the state note that you can also select a different built in accessor when using domainrunner js by specifying an extra argument shell builtinaccessor basic readonly on the command line non default single value state match criteria so far you ve only seen default match criteria meaning you ve only seen a domain using a single recognizer without any regard to anything else here is the relevant snippet from the domain file text states matchcriteria default however you can specify that a particular recognizer should only be used under certain conditions for example if the state object contains field startedenrollment and its value is status yes then use a different recognizer if you update your domain file to include that e g text recognizers key mine path myrecognizer json key greeting path test greetingdomain greetingrecognizer json states matchcriteria type state selector startedenrollment match true value status yes matchspecs recognizer greeting responder result directvalue text hello to you too and re run domain runner without changing the state you will get text please type user text hi there your text was hi there domain response undefined state object but if you now edit the state object to include startedenrollment status yes then you ll get text please type user text hi there your text was hi there domain response match name greetingintent slots result text hello to you too state object startedenrollment status yes so this showed how you can specify which recognizer s to use based on some criteria note that default criteria will always match non default single value negative match criteria you can also set the match to false indicating that this match criteria will succeed if the state does not match the provided value text states matchcriteria type state selector startedenrollment match false value status yes matchspecs recognizer greeting responder result directvalue text hello to you too this should be used carefully the non matching criteria should be designed to avoid frequent always matches an example of a proper match might be a global setting indicating whether the user has completed some step and if not then proceeding down that user interaction path
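Conceptually, a single value state match criterion is evaluated by reading the state at the selector and comparing against the configured value, with match false inverting the outcome. A sketch of that idea (field names follow the flattened examples above; the actual json keys and casing may differ, and the real code reads the state through an accessor rather than directly):

```js
// Sketch of evaluating a single value state match criterion.
function evaluateStateCriterion(criteria, state) {
  const actual = state[criteria.selector];            // e.g. state.startedenrollment
  const matches = JSON.stringify(actual) === JSON.stringify(criteria.value);
  return criteria.match ? matches : !matches;         // match: false inverts the test
}

const criteria = { type: 'state', selector: 'startedenrollment', match: true, value: { status: 'yes' } };
console.log(evaluateStateCriterion(criteria, {}));                                       // false
console.log(evaluateStateCriterion(criteria, { startedenrollment: { status: 'yes' } })); // true
```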
non default multi valued match criteria in addition to the single value match criteria you can also specify an array of matching values the only difference is that instead of text match true value status yes you would specify something like text match true values status yes status tbd this will match if the status is either yes or tbd non default multi valued negative match criteria just as for single values you can set match to be false to indicate that the match will succeed if the state does not match any of the provided values testing for null in the match criteria sometimes you just need to test whether a value is a null or not here is a simple and concise way of checking whether a value is null text matchcriteria type state selector some selector isnull true similarly here is how you would check if the value is not null text matchcriteria type state selector some selector isnull false testing for undefined in the match criteria testing for undefined is another common test that you may want to perform this would typically be done when you want to test whether some state value has been set the specification is similar to testing for null text matchcriteria type state selector some selector isundefined true similarly here is how you would check if the value is not undefined text matchcriteria type state selector some selector isundefined false testing for a numeric value being greater than a reference value in the match criteria testing for a numeric value being greater than a reference value is also fairly common here is how you could do it text matchcriteria type state selector some threshold greaterthan 5 note that unlike isnull or isundefined there is no way to specify false i e negative condition this can instead be done using lessthanorequal test lessthanorequal is not yet implemented testing for a numeric value being greater than or equal to a reference value in the match criteria similar to testing for a numeric value being greater than a reference value you can test for a numeric value being greater than or equal to a reference value text matchcriteria type state selector some threshold greaterthanorequal 4 similar to greaterthan there is no way to specify false i e negative condition this can instead be done using lessthan lessthan is not yet implemented testing for a numeric value being less than a reference value in the match criteria you can also test for a numeric value being less than a reference value text matchcriteria type state selector some threshold lessthan 10 testing for a numeric value being less than or equal to a reference value in the match criteria to test for a numeric value being less than or equal to a reference value text matchcriteria type state selector some threshold lessthanorequal 10 testing for a string value consisting only of alpha characters in the match criteria to test for a string value consisting entirely of alpha characters a through z case insensitive text matchcriteria type state selector some value isalpha true note that specifying isalpha false does not test for whether the entire string is composed of non alpha characters instead it s testing for whether any of the characters are not alpha text matchcriteria type state selector some value isalpha false
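The isalpha test above, and the isnumeric, isalphanumeric and related tests that follow, share one asymmetry: true requires every character to belong to the class, while false succeeds as soon as any character falls outside it. A regex based sketch of that semantics (illustrative, not the library's code):

```js
// Sketch of the isalpha-style semantics described above.
// wantAll = true  -> every character must be in the class;
// wantAll = false -> at least one character must fall outside it.
const classes = {
  isalpha: /^[a-z]+$/i,
  isnumeric: /^[0-9]+$/,
  isalphanumeric: /^[a-z0-9]+$/i,
  iswhitespace: /^\s+$/,
};

function checkCharacterClass(testName, wantAll, value) {
  const all = classes[testName].test(value);
  return wantAll ? all : !all;
}

console.log(checkCharacterClass('isalpha', true, 'hello'));   // true
console.log(checkCharacterClass('isalpha', false, 'hello1')); // true: "1" is not alpha
```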
testing for a string value consisting only of numeric characters in the match criteria to test for a string value consisting entirely of numeric characters text matchcriteria type state selector some value isnumeric true again specifying isnumeric false tests for whether any of the characters are not numeric text matchcriteria type state selector some value isnumeric false testing for a string value consisting only of alpha numeric characters in the match criteria to test for a string value consisting entirely of alpha numeric characters text matchcriteria type state selector some value isalphanumeric true again specifying isalphanumeric false tests for whether any of the characters are not alpha numeric text matchcriteria type state selector some value isalphanumeric false testing for a string value consisting only of white space characters in the match criteria to test for a string value consisting entirely of white space characters text matchcriteria type state selector some value iswhitespace true as before specifying iswhitespace false tests for whether any of the characters are not white space text matchcriteria type state selector some value iswhitespace false testing for a string value consisting only of upper case in the match criteria to test for a string value consisting entirely of upper case characters text matchcriteria type state selector some value isuppercase true again specifying isuppercase false tests for whether any of the characters are not upper case text matchcriteria type state selector some value isuppercase false testing for a string value consisting only of lower case in the match criteria to test for a string value consisting entirely of lower case characters text matchcriteria type state selector some value islowercase true same as with other isxxx specifying islowercase false tests for whether any of the characters are not lower case text matchcriteria type state selector some value islowercase false testing for a string value containing a substring in the match criteria to test for a string value containing a substring text matchcriteria type state selector some value containssubstring true substring somesubstring specifying containssubstring false tests to ensure that the specified substring is not contained text matchcriteria type state selector some value containssubstring false substring somesubstring match criteria within responders so far you have seen match criteria used only to specify whether a particular recognizer is to be used this gives a lot of flexibility however if limited to just this case this arrangement makes conditional use of responders difficult that s why you can also use the match criteria within a responder as well json matchcriteria type state selector conditionalrespondervalue isundefined false matchspecs recognizer conditionalrespondertest responders matchcriteria type state selector conditionalrespondervalue useresponder1 isundefined false result directvalue text conditional responder 1 matchcriteria type state selector conditionalrespondervalue useresponder3 isundefined false result combinerule mergeappend directvalue text conditional responder 3 matchcriteria type state selector conditionalrespondervalue useresponder2 isundefined false result combinerule mergeappend directvalue text conditional responder 2 in the example above match criteria is used both ways to restrict when to use the recognizer as well as to restrict when to apply a particular responder here if the state has a defined conditionalrespondervalue then the recognizer will be applied however each of the 3 responders also has its own match criteria if a particular responder s match
criteria is not met then that responder will not contribute to the response for example if the state object is json conditionalrespondervalue useresponder1 true useresponder2 true and we match on the utterance only 2 out of 3 responders will contribute to the result json match name conditionalrespondertestingintent slots result text conditional responder 1 conditional responder 2 using conditional responders conditional responders are a very powerful tool you can create a different output depending on the current state for example imagine if your user says something like i would like to get two tickets for today s 7 30 showing of deadpool 27 if your user is already authenticated and the credit card info is on file you can respond with your etickets have been ordered please see your email however if the user has not yet authenticated you can respond with please log in first so we can help you note that you could still do this without conditional responders but you would have to create multiple recognizers to accomplish that and it would unnecessarily complicate the code slot test match criteria if you are using match criteria with responders then you can also test slot values this does not make sense when using match criteria as a recognizer use condition that s because you don t have the intent and thus slots parsed yet when you are deciding whether to use the recognizer the recognizer is the one doing the parsing but with responders you do have parsed slot values what follows is a description of the various slot tests available with match criteria non default single value slot match criteria use this when you want to compare the slot value to a single predefined value e g json matchcriteria default matchspecs recognizer conditionalrespondertest responders matchcriteria type slot slot numberslot value 5 match true result directvalue text you said 5 in this case if the slot value parsed out of the utterance for the numberslot equals 5 then the domain will add you said 5 to the result s text else nothing will be added subdomains if domains simply added results and state awareness and manipulation they would already be a pretty big improvement over just using recognizers directly however domains can use other domains this is particularly powerful because it allows one to modularize an application breaking it up into individual reusable modules this reduces complexity allows code reuse and has other benefits additionally separate domains can be defined by other people possibly outside your group your company organization etc as long as they are written correctly they can be used by other people teams using domains within domains aka sub domains is easy and similar to using recognizers first you must add the sub domain to the list of domains text description simplest domain recognizers key mine path myrecognizer json domains key greeting path test greetingdomain greetingdomain json trusted read true write true selector greetingdomain states a few things to note here just as with including recognizers you must provide a key with a value this is an arbitrary value determined by you it s needed so that this domain can be referenced elsewhere by this key second there is a trusted field this field specifies whether this subdomain is trusted to read and or write to the parent s i e this domain s state in this example we have a fully trusted sub domain such a subdomain has an optional selector field its value if provided is used to select a portion of the state object that this subdomain will be able
to see thus in this example the selector is greetingdomain so any modifications will be done to the state object greetingdomain field as if it s the entire state if you have closely cooperating modules that need to know each other s state then the selector field would probably either be absent or be the same for these modules of course the super domain the one that s using a subdomain can always see the subdomain s portion of the state once you ve added the sub domain to the list of domains you can now use it this part is actually simpler than the set up for recognizers that s because the sub domain already describes the results and the state update thus all you have to do is specify when to use it you simply add it to the list of responders for the specific match criteria and you are all set text states matchcriteria default matchspecs domain greeting recognizer mine now when you re run the domain runner you ll get something similar to this depending on which greeting gets randomly chosen text please type user text hi there your text was hi there domain response match name greetingintent slots result text hello ssml speak hello speak state object greetingdomain greetingalreadyused text hello ssml speak hello speak you can include sub domains within sub domains the only restriction is that you can t do circular subdomains don t include b as a sub domain of a if a is already a subdomain of b trusted vs non trusted sub domains in the initial example above you ve seen a fully trusted domain what if you are using a third party domain that you aren t sure about furthermore such a sub domain may not even need to have access to your state so why provide it you can simply accommodate this by specifying it as a non trusted sub domain text description simplest domain recognizers key mine path myrecognizer json domains key greeting path test greetingdomain greetingdomain json trusted read false write false states now even if there is no selector field in the trusted field the sub domain will be separated from the main part of the state and even custom responders won t be able to see outside of that sub portion if you were to re run the domain runner you would get text please type user text hi there your text was hi there newresult text hi from the custom function module newresult text nice to meet you ssml speak nice to meet you speak domain response match name greetingintent slots result text hi from the custom function module nice to meet you ssml speak nice to meet you speak state object untrusted customfunctionmodulewasrun true greetingalreadyused text nice to meet you ssml speak nice to meet you speak note that the state of the sub domain is now relegated to the untrusted subfield and only the contents of that subfield will be visible to anything in that sub domain sometimes you may want to be able to specify the name of this sand boxing field you can do it like this text description simplest domain recognizers key mine path myrecognizer json domains key greeting path test greetingdomain greetingdomain json trusted read false write false selector greetingdomain sandboxkeys separateddomain states now if you re run the domain runner yet again you will see text your text was hi there newresult text hi from the custom function module newresult text hello ssml speak hello speak domain response match name greetingintent slots result text hi from the custom function module hello ssml speak hello speak state object separateddomain greetingdomain customfunctionmodulewasrun true greetingalreadyused text
hello ssml speak hello speak note that the entire sub domain s portion of the state has now been placed into the field specified by the sandboxkeys field and also that the selector field is still used if you specified it missing trusted specification you can also skip specifying trusted information completely when you do it s the same as specifying an untrusted domain with the sandboxkeys being set to untrusted selector value will be used if present for example json key greeting path test greetingdomain greetingdomain json selector greetingdomain is equivalent to json key greeting path test greetingdomain greetingdomain json trusted read false write false selector greetingdomain sandboxkeys untrusted hybrid trusted sub domains currently you can only specify completely trusted read and write or completely untrusted sub domains i am in the process of adding partially trusted sub domains e g can read but not write or can write but not read the parent domain s state this will function differently from the obvious expectation for example the domain that is trusted to read the value will still be able to write to it but it won t write to the parent s state rather to its own portion and vice versa the domain that is write trusted will be able to write to the parent s state but will be able to read only the values it has previously written non alexa support you don t have to generate just the alexa intents slot types this module can now generate other platform intents though the only one supported at this time is transcend which is what the other vui xxx projects are using as their native built in type support for microsoft xxx and possibly others will be added if when there is a need demand for it in order to specify an output type simply configure it in your config file json platform output transcend by default if you don t specify anything the output will be amazon for compatibility reasons transcend features supported currently there are two transcend built in slot type supported transcend us phone number and transcend us president transcend us phone number will match on seven digit number expression that s structured the way people tend to pronounce phone numbers so either listing it out as numbers or using one two digit word equivalents e g fifteen or thirty five additionally this slot type will accept parenthesis around the area code dash between exchange and user number or dots instead e g text 123 456 7890 123 456 7890 123 456 7890 one twenty three four fifty six seventy eight ninety alexa features supported currently you can parse 1 all alexa built in intents 2 utterances without slots 3 utterances with custom slots 4 utterances with all the numbers date time duration built in slot types amazon number amazon four digit number amazon date amazon time amazon duration 5 utterances with these list built in slot types amazon us state amazon us first name amazon airline all us canadian mexican airlines amazon airport usa canada mexico australia new zealand uk germany italy and austria airports amazon color amazon corporation amazon country amazon dayofweek amazon month amazon room amazon socialmediaplatform amazon sportsteam includes nfl cfl nba mlb nhl and mls teams 6 utterances with these list built in slot types with nominal support see nominal support section amazon actor amazon administrativearea amazon artist amazon athlete amazon author amazon book amazon bookseries amazon broadcastchannel amazon civicstructure amazon comic amazon dessert amazon director amazon educationalorganization 
amazon festival amazon fictionalcharacter amazon foodestablishment amazon game amazon landform amazon landmarksorhistoricalbuildings amazon localbusiness amazon localbusinesstype amazon medicalorganization amazon movie amazon movieseries amazon movietheater amazon musicalbum amazon musicgroup amazon musician amazon musicrecording amazon musicvenue amazon musicvideo amazon organization amazon person amazon professional amazon residence amazon screeningevent amazon service amazon softwareapplication amazon softwaregame amazon sportsevent amazon tvepisode amazon tvseason amazon tvseries amazon videogame more amazon built in slot types are coming shortly | ai |
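To make the numeric layouts accepted by the transcend us phone number slot described above concrete, here is a toy pattern covering only the digit forms listed in that section; the real slot type also understands spoken forms such as one twenty three, which this sketch ignores:

```js
// Toy check for the numeric phone formats listed above: 123-456-7890,
// (123) 456-7890, 123.456.7890, 1234567890. Illustration only -- the real
// slot type also parses spoken number words.
const usPhone = /^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$/;

for (const sample of ['123-456-7890', '(123) 456-7890', '123.456.7890', '1234567890']) {
  console.log(sample, usPhone.test(sample)); // all true
}
```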
FreeRTOS | to build this project import it into eclipse mars 2 select the desired configuration sam3x sam4e etc and press build the build depends on the eclipse workspace variable armgccpath being set to the directory where your arm none eabi g compiler resides for example c program files x86 gnu tools arm embedded 7 2018 q2 update bin on windows to set it go to windows preferences c c build build variables and click add | os |
|
llm_training_handbook | the large language model training handbook an open collection of methodologies to help with successful training of large language models this is technical material suitable for llm training engineers and operators that is the content here contains lots of scripts and copy n paste commands to enable you to quickly solve your problems if you are not interested in technical details but want more of a detailed overview and concepts please refer to the sister the large language model training playbook https github com huggingface large language model training playbook instead note the list of topics will expand over time at the moment filling in only a subset model parallelism parallelism maximizing throughput throughput tensor precision data types dtype training hyper parameters and model initializations hparams instabilities instabilities debugging software and hardware failures debug slurm slurm resources resources license the content of this site is distributed under attribution sharealike 4 0 international license cc by sa unless specified otherwise the code in this repo is licensed under apache license version 2 0 https www apache org licenses license 2 0 | cuda large-language-models llm nccl nlp performance python pytorch scalability troubleshooting | ai |
portfolio | readme of portfolio 1 | server |
|
SciEngCloud.github.io | cloudbook examples this contains the code samples and demos for the book cloud computing for science and engineering most of these samples are jupyter notebooks if you do not have a copy of jupyter you should download and install anaconda from continuum analytics https www continuum io downloads https www continuum io downloads another way to run jupyter is to install docker and run the jupyter scipy notebook container as follows docker run d p 8888 8888 e password yourword e use https yes jupyter scipy notebook then go to https localhost 8888 or if you started this on a remote host use the ip address of the remote host contents arvix data this folder contains the files needed to do the simple document classification experiments using the science abstract from arxiv catalog these are used in aws ml container for examples in chapter 7 6 the notebook doc analysisd7 physics ipynb illustrates how to build the models for analyzing the physics documents aws hpc cluster this is a folder containing the files need to deploy an hpc cluster on aws using cfncluster it is described in chapter 7 2 3 scale aws ml container is the files needed to build the containers used in the aws container service demo from chapter 7 6 datadir contains the simple files used in the chapter 3 using cloud storage docker demo is the source for the docker sample in section 6 4 in the containers chapter gcloud container celery gcloud kubernetes example again using the arxiv document predictor service kinesis spark aot this is the simple kinesis to spark example from 9 4 streaming data chapter movie7 gif this is the output of the movie created by the autoencoder ipynb example discussed in the autoencocer supplement notebooks this folder contains all the ipython notebooks sc tutorial contains the lectures and exercises for the sc17 tutorial singularity is a folder containing some of the files for the singularity supplement | cloud |
|
resources | resources and information for oim3690 web technologies class information syllabus syllabus 2023spring md project project md schedule schedule 2023spring md subject to changes resources resources cheatsheets resources md javascript resources cheatsheets javascript md glossary cheatsheets glossary md vscode keyboard shortcuts cheatsheets vscode keyboard shortcuts windows pdf html5 cheat sheet cheatsheets html5 cheat sheet pdf css cheat sheet cheatsheets css cheat sheet pdf | server |
|
web-component-devtools | web component devtools web component devtools is aimed at all developers working with web components the tooling provided creates a new chrome devtools panel which allows a quick look at the custom elements on the current page and enables modification of attributes and properties of said components why in the process of developing web components wether it be with a library like lit https github com lit lit or without any kind of library there comes situtations in which you might want to have a bit more control over your components than what the regular browser devtools gives you you might for example want to toggle the attributes of the element toggle the properties of the element monitor when events get dispatched from the element call functions and when you re working with web components shadow dom usually is present making it fairly difficult to find the path to the element and even if you got the path having to write document queryselector my selector string element name setattribute my attr foo every time you want to modify a value is quite cumbersome for this use case the web components devtools were created to enable the developer to easily modify the attributes properties and therefore state of their element straight from the devtools window with the click of a button features web component devtools provides advanced features to the developer straight from the browser s ui to for example listing custom elements on the page and accessible iframes inside the page filtering custom elements on the list inspecting and modifying the attributes of custom elements inspecting and modifying the properties of custom elements observing dispatched events calling functions of the custom element view the source code of web components on page interact directly with web components through the console download you can get the web component devtools from the chrome web store https chrome google com webstore detail web component devtools gdniinfdlmmmjpnhgnkmfpffipenjljo related and the mozilla add on marketplace https addons mozilla org en us firefox addon web component devtools setting up to get started with wcdt you only need to install the extension into your browser and you should be able to see a web components panel on your devtols window a brief video of setting up your development environment to get the most out of devtools https youtu be d6w5ix3 e9e supported libraries web component devtools also works with libraries built for developing web components currently the libraries with extra support by devtools are lit https github com lit lit fast https www fast design atomico https atomicojs github io polymer https polymer library polymer project org vaadin https vaadin com when developing with these libraries the feature set of the devtools is increased without the addition of the custom elements manifest extra features provided for these libraries include for example inspecting and editing of the properties of custom elements the list of extra support libraries will grow as adoption grows issues any issues you run into while using the devtools should be submitted to the github repository https github com matsuuu web component devtools issues discussion join the discussion in lit and friends slack in the channel web component devtools join here https join slack com t lit and friends shared invite zt llwznvsy lzwt13r66gognrg12pugqw architecture the current architecture of the project goes as follow html pages of the devtools lib all of the extension code excluding html 
pages and packages background all of the background pages of the devtools background page acts as a bridge between background tasks and the content scripts content content scripts https developer chrome com docs extensions mv3 content scripts crawler all of the code injected onto the inspected page to query elements and act upon events elements all of the custom elements used by the devtools types typings and enums util utility functions context menus js context menu actions and communication devtools js panel and general initialization actions lifecycle callbacks packages separate tools used for wcdt maybe later on built into their own tools nydus message passing and management between layers analyzer custom elements manifest analyzer https github com open wc custom elements manifest tree master packages analyzer integration playground playground elements https github com google playground elements integration with source view and console view local development required tools npm any up to date version should do a preferably up to date version of chrome edge firefox a zipping tool when working with firefox any os windows mac linux if you want to develop or use the devtools locally you can do so by following these steps 1 clone this repository 2 run npm install 3 run npm run build 4 go to chrome extensions 5 enable developer mode 6 choose load unpacked 7 select the generated dist directory in the project folder firefox for firefox you might need to create a zip of the dist folder to ad it to firefox as an extension feel free to use whatever zip tool you want to zip the dist folder generating a full package there is a combination script called npm run package which builds the project and packages it utilizing the zip command line tool for linux | javascript webcomponents chrome extension typescript lit web-components custom-elements hacktoberfest | front_end |
sem | a repo for sem with continuous integration set up using github actions master build status workflow https github com julkaswieta sem actions workflows main yml badge svg develop build status github workflow status branch https img shields io github workflow status julkaswieta sem a 20workflow 20for 20my 20hello 20world 20app develop license license https img shields io github license julkaswieta sem svg style flat square https github com julkaswieta sem blob master license release releases https img shields io github release julkaswieta sem all svg style flat square https github com julkaswieta sem releases codecov master codecov https codecov io gh julkaswieta sem branch master graph badge svg token voa8qllbqg https codecov io gh julkaswieta sem codecov develop codecov https codecov io gh julkaswieta sem branch develop graph badge svg token voa8qllbqg https codecov io gh julkaswieta sem | continuous-integration docker sql | server |
aptos-core | a href https aptos dev img width 100 src assets aptos banner png alt aptos banner a license https img shields io badge license apache green svg license lint test https github com aptos labs aptos core actions workflows lint test yaml badge svg https github com aptos labs aptos core actions workflows lint test yaml codecov https codecov io gh aptos labs aptos core branch main graph badge svg token x01rkxsgde https codecov io gh aptos labs aptos core discord chat https img shields io discord 945856774056083548 style flat square https discord gg aptosnetwork aptos is a layer 1 blockchain bringing a paradigm shift to web3 through better technology and user experience built with move to create a home for developers building next gen applications getting started aptos foundation https aptosfoundation org aptos developer network https aptos dev guide setup your environment https aptos dev category environment onboarding tutorials https aptos dev tutorials follow us on twitter https twitter com aptos network join us on the aptos discord https discord gg aptosnetwork contributing you can learn more about contributing to the aptos project by reading our contribution guide https github com aptos labs aptos core blob main contributing md and by viewing our code of conduct https github com aptos labs aptos core blob main code of conduct md aptos core is licensed under apache 2 0 https github com aptos labs aptos core blob main license | blockchain blockchain-network move smart-contracts aptos | blockchain |
ckeditor4-java-samples | this repository is no longer maintained ckeditor 4 for java samples important note this project was supposed to show how ckeditor 4 for java integration can be used inside a web application it has not been completed use it at your own risk please note that ckeditor is a javascript editor and as such may work with any web application regardless of the server side language that the project is using including java based apps a dedicated server side integration is not required to include ckeditor 4 in your application getting the code to use the code you need to clone it into local directory of your choice using below command git clone https github com ckeditor ckeditor java samples after cloning the repository you also need to initialize and update the ckeditor submodule git submodule update init recursive documentation the full developer documentation for the ckeditor for java integration is available online at http link to docs com license copyright c 2003 2015 cksource frederico knabben all rights reserved for licensing see license md or http ckeditor com license | front_end |
|
DGIIIT.github.io | developer s group homepage this is official website of developer s group iiit bhagalpur https dgiiit github io v 2 0 previous version v 1 0 dg iiit https dgiiit github io dg iiitbh github io iiitpreface html summary saichethan m reddy along with two other undergraduate students anuranjan kumar and pritam pal of iiit bhagalpur http iiitbh ac in started an organization called dg community iiit bhagalpur https github com dgiiit to get people together who share common struggles logo https github com dgiiit dg iiitbh github io blob master img logo svg inception 02 january 2018 reason bhagalpur is a technically very backward region and iiit bhagalpur is a newly established institute so to improve skills and for exposure we started this club community demographics currently the members of group are students of iiit bhagalpur inclusive of computer science and electronics branches saichethan reddy https www facebook com saichethanreddymiriyal author founder pritam pal https www facebook com pritampal99 co founder anuranjan https www facebook com anuranjan kumar 188 co founder communication all the communications are done through irc internet relay chat https matrix to dg community matrix org slack https dgiiit slack com twitter https twitter com dgiiit facebook page https m facebook com dg community iiit bhagalpur 177777156160101 culture we believe in equality we always welcome you we appreciate your support even if you think you did small part freedom of idea s speech are our at our core for more information visit code of conduct https dgiiit github io conduct html proposal community demographics iiit bhagalpur http iiitbh ac in goal workshops hackathons opensource impact target audience undergraduate students todos improve ui add a logo update content increase community https codelabs developers google com codelabs firebase web 0 check wiki for more information https github com dgiiit dgiiit github io wiki | dgiiit iiit-bhagalpur students club dsc | server |