antispoofing.eyeblink
This package implements an eye-blink detector using a frame-differences technique similar to the one described in the paper Counter-Measures to Photo Attacks in Face Recognition: a public database and a baseline, by Anjos and Marcel, International Joint Conference on Biometrics, 2011.

If you use this package and/or its results, please cite the following publications:

The original paper, with the frame-differences and normalization technique explained in detail:

@inproceedings{Anjos_IJCB_2011, author = {Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien}, keywords = {Attack, Counter-Measures, Counter-Spoofing, Disguise, Dishonest Acts, Face Recognition, Face Verification, Forgery, Liveness Detection, Replay, Spoofing, Trick}, month = oct, title = {Counter-Measures to Photo Attacks in Face Recognition: a public database and a baseline}, booktitle = {International Joint Conference on Biometrics 2011}, year = {2011}, url = {http://publications.idiap.ch/downloads/papers/2011/Anjos_IJCB_2011.pdf} }

Bob, as the core framework used to run the experiments:

@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, url = {http://publications.idiap.ch/downloads/papers/2012/Anjos_Bob_ACMMM12.pdf} }

If you decide to use the REPLAY-ATTACK database, you should also mention the following paper, where it is introduced:

@inproceedings{Chingovska_BIOSIG_2012, author = {Chingovska, Ivana and Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien}, keywords = {Attack, Counter-Measures, Counter-Spoofing, Face Recognition, Liveness Detection, Replay, Spoofing}, month = sep, title = {On the Effectiveness of Local Binary Patterns in Face Anti-spoofing}, booktitle = {IEEE Biometrics Special Interest Group}, year = {2012}, url = {http://publications.idiap.ch/downloads/papers/2012/Chingovska_IEEEBIOSIG2012_2012.pdf} }

If you wish to report problems or improvements concerning this code, please contact the authors of the above-mentioned papers.

Raw data

This method was originally conceived to work with the PRINT-ATTACK database, but has since evolved to work with the whole of the REPLAY-ATTACK database, which is a super-set of the PRINT-ATTACK database. You can select protocols in each of the applications described in this manual. The data used in these experiments is publicly available and should be downloaded and installed prior to trying the programs described in this package.

Annotations

Annotations for this work were generated with the free-software package called flandmark.
Please cite that work as well if you use the results of this package in your own publication.

Installation

Note: If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may not be stable, or may become unstable in a matter of moments. Go to http://pypi.python.org/pypi/antispoofing.eyeblink to download the latest stable version of this package.

There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install), or manually download, unpack and use zc.buildout to create a virtual work environment just for this package.

Using an automatic installer

Using pip is the easiest (shell commands are marked with a $ sign):

$ pip install antispoofing.eyeblink

You can also do the same with easy_install:

$ easy_install antispoofing.eyeblink

This will download and install this package plus any other required dependencies. It will also verify that the version of Bob you have installed is compatible. This scheme works well with virtual environments by virtualenv, or if you have root access to your machine. Otherwise, we recommend you use the next option.

Using zc.buildout

Download the latest version of this package from PyPI and unpack it in your working area. The installation of the toolkit itself uses buildout. You don't need to understand its inner workings to use this package. Here is a recipe to get you started:

$ python bootstrap.py
$ ./bin/buildout

These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.

Note: The python shell used in the first line of the previous command set determines the python interpreter that will be used for all scripts developed inside this package.
Because this package makes use of Bob, you must make sure that the bootstrap.py script is called with the same interpreter used to build Bob, or unexpected problems might occur.

If Bob is installed by the administrator of your system, it is safe to assume it uses the default python interpreter. In this case, the above command lines should work as expected. If you have Bob installed somewhere else, in a private directory, edit the file buildout.cfg before running ./bin/buildout. Find the section named buildout and edit the line prefixes to point to the directory where Bob is installed or built. For example:

[buildout]
...
prefixes=/Users/crazyfox/work/bob/build

User Guide

It is assumed you have followed the installation instructions for the package, and have the REPLAY-ATTACK (or PRINT-ATTACK) database downloaded and uncompressed in a directory to which you have read access. Throughout this manual, we will call this directory /root/of/database. That is the directory that contains the sub-directories train, test, devel and face-locations.

Note for Grid Users

At Idiap, we use the powerful Sun Grid Engine (SGE) to parallelize our job submissions as much as we can. At the Biometrics group, we have developed a little toolbox <http://pypi.python.org/pypi/gridtk> that can submit and manage jobs at the Idiap computing grid through SGE. If you are at Idiap, you can download and install this toolset by adding gridtk to the eggs section of your buildout.cfg file, if it is not already there. If you are not, you may still look inside for tips on automated parallelization of scripts.

The following sections will explain how to reproduce the paper results in single (non-gridified) jobs. A note will be given where relevant, explaining how to parallelize the job submission using gridtk.

Note: If you decide to run using the grid at Idiap, please note that our Lustre filesystem does not work well with SQLite. So, do not place the xbob.db.replay package inside that filesystem.
You can, and should, save your results on /idiap/temp though.

Calculate Frame Differences

The eye-blink detector calculates normalized frame differences like our face versus background motion detector in the antispoofing.motion package, except it does so for the eye region and the face remainder (the part of the face that does not contain the eye region). In the first stage of the processing, we compute the eye and face-remainder normalized frame differences for each input video. To do this, just execute:

$ ./bin/framediff.py /root/of/database /root/of/annotations results/framediff

There are more options for the framediff.py script you can use (such as the sub-protocol selection). Note that, by default, all applications are tuned to work with the whole of the replay attack database. Just type --help at the command line for instructions.

There is one parameter in particular you may need to tune on the above script, the --maximum-displacement option. This option controls the percentage of eye-center movement for which the method still considers the current detection valid, w.r.t. the previous frame. If the eye-center positions between the current and previous frame move more than the specified ratio of the eye width, the detection is considered invalid and is discarded.

Note: To parallelize this job, do the following:

$ ./bin/jman submit --array=1300 ./bin/framediff.py /root/of/database /root/of/annotations results/framediff

The magic number of 1300 entries can be found by executing:

$ ./bin/framediff.py --grid-count

which just prints the number of jobs required for grid execution.

Creating Partial Score Files

To create the final score files, you will need to execute make_scores.py, which contains a simple strategy for producing a single score per input frame in every video.
The final score is calculated from the input eye and face-remainder frame differences in the following way:

S = ratio(eye/face_rem) - running_average(ratio(eye/face_rem))

The final score is set to S, unless any of the following conditions is met:

1. S < running_std_deviation(ratio(...))
2. eye == 0
3. S < running_average(ratio(...))

In these cases, S is replaced by the output of running_average(ratio(...)).

To compute the scores S for every frame in every input video, do the following:

$ ./bin/make_scores.py --verbose results/framediff results/partial_scores

There are more options for the make_scores.py script you can use (such as the sub-protocol selection). Note that, by default, all applications are tuned to work with the whole of the replay attack database. Just type --help at the command line for instructions. We don't provide a grid-ified version of this step because the job runs quite fast, even for the whole database.

Counting Eye-Blinks

The next step of the process is to use the partial scores for each video (a signal through time) to count the number of blinks perceived in every database element. You can use the count_blinks.py script for that:

$ ./bin/count_blinks.py --verbose results/partial_scores results/blinks

The output files will have integer values as scores for each frame, with the number of blinks counted up to that point in time. These files can be used as score output files for fusion processes.

Merging Scores

If you wish to create a single 5-column format file by combining this counter-measure's scores for every video into a single file that can be fed to external analysis utilities, such as our antispoofing.evaluation <http://pypi.python.org/pypi/antispoofing.evaluation> package, you should use the script merge_scores.py. The merged scores represent the number of eye-blinks computed for each video sequence.
You will have to specify how many of the scores in every video you want to consider, and the input directory containing the score files to be merged (by default, the procedure considers only the first 220 frames, which is some sort of common denominator between real-access and attack video frame counts).

The output of the program consists of a single 5-column formatted file with the client identities and scores for every video in the input directory. A line in the output file corresponds to a video from the database. You run this program on the output of make_scores.py, so it should look like this if you followed the previous example:

$ ./bin/merge_scores.py --verbose results/partial_scores results/blinks

The above command-line example will generate 3 text files in the results directory containing the training, development and test scores, accumulated over each video in the respective subsets. You can use other options, such as the protocol or support, to limit the number of outputs in each file.

There are two main options you may need to tweak on this program: --skip-frames and --threshold-ratio. The first one, --skip-frames, determines how many frames to skip between eye-blinks, to avoid multiple detections for a single user blink (defaults to 10). The other parameter defines how many standard deviations above the running mean a signal peak must be to be considered as originating from an eye-blink. It is set by default to 3.0.

Creating Movies

You can create animated movies showing the detector operation using the make_movie.py script. This script combines all the above steps in the detection process and generates a movie file showing the original input movie being analyzed, the facial landmarks, the light normalization result and the resulting score evolution, together with instantaneous eye-blink thresholds. You can use it to debug the eye-blink detector and better tune the parameters for batch processing.
The script takes the full path to a movie file in the REPLAY-ATTACK database and an output movie filename:

$ ./bin/make_movie.py database/train/attack/hand/attack_print_client001_session01_highdef_photo_controlled.mov test.avi

You can use many of the tweaking options defined in the batch processing scripts to fine-tune the output behavior. Use --help to find out more about this program.

Problems

In case of problems, please contact any of the authors of the paper.
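The scoring rule and the peak-counting step described in this section can be sketched in Python. This is a minimal illustration under stated assumptions, not the package's actual implementation: the window length for the running statistics, the causal windowing, and the helper names (blink_scores, count_blinks) are all assumptions made for the example.

```python
import numpy as np

def blink_scores(eye, face_rem, window=20):
    """Sketch of the per-frame score S described above (window is assumed).

    ratio = eye / face_rem frame differences; S = ratio - running_average(ratio),
    with S replaced by the running average when any of the three listed
    conditions holds.
    """
    ratio = np.where(face_rem > 0, eye / np.maximum(face_rem, 1e-12), 0.0)
    scores = np.empty_like(ratio)
    for i in range(len(ratio)):
        past = ratio[max(0, i - window):i + 1]   # causal window up to frame i
        avg, std = past.mean(), past.std()
        s = ratio[i] - avg
        if s < std or eye[i] == 0 or s < avg:
            s = avg
        scores[i] = s
    return scores

def count_blinks(scores, threshold_ratio=3.0, skip_frames=10):
    """Sketch of the blink counter: a peak counts as a blink when it exceeds
    the running mean by threshold_ratio standard deviations; detections closer
    than skip_frames frames apart are merged (defaults taken from the text)."""
    blinks, last = 0, -skip_frames - 1
    for i in range(1, len(scores)):
        past = scores[:i]
        if scores[i] > past.mean() + threshold_ratio * past.std() and i - last > skip_frames:
            blinks += 1
            last = i
    return blinks
```

The defaults for threshold_ratio and skip_frames mirror the documented options of merge_scores.py; everything else here is illustrative.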
antispoofing.fusion
This package combines motion-analysis and texture-analysis based counter-measures to 2D facial spoofing attacks, as described in the paper 'Complementary Countermeasures for Detecting Scenic Face Spoofing Attacks', International Conference on Biometrics, 2013. However, it is possible to fuse scores of any combination of counter-measures using the tools provided by this package.

If you use this package and/or its results, please cite the following publications:

The original paper, with the fusion of counter-measures explained in detail:

@inproceedings{Komulainen_ICB_2013, author = {Komulainen, Jukka and Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien and Hadid, Abdenour and Pietik{\"a}inen, Matti}, month = jun, title = {Complementary Countermeasures for Detecting Scenic Face Spoofing Attacks}, booktitle = {International Conference on Biometrics 2013}, year = {2013}, }

Bob, as the core framework used to run the experiments:

@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }

If you wish to report problems or improvements concerning this code, please contact the authors of the above-mentioned papers.

Raw data

The data used in the paper consists of results obtained with two other satellite packages: antispoofing.motion and antispoofing.lbp. These packages should be downloaded, installed and run prior to using the programs described in this package.
Visit the antispoofing.motion and antispoofing.lbp pages for more information about how to use them. Of course, the package can be used with any other kind of scores saved in the same format as the two mentioned above.

Installation

Note: If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may not be stable, or may become unstable in a matter of moments. Go to http://pypi.python.org/pypi/antispoofing.fusion to download the latest stable version of this package. Then, extract the .zip file to a folder of your choice.

The antispoofing.fusion package is a satellite package of the free signal processing and machine learning library Bob. This dependency has to be downloaded manually. This version of the package depends on Bob version 2 or greater. To install packages of Bob, please read the Installation Instructions. For Bob to be able to work properly, some dependent Bob packages are required to be installed. Please make sure that you have read the Dependencies for your operating system.

The simplest solution is to download and extract the antispoofing.fusion package, then go to the console and write:

$ cd antispoofing.fusion
$ python bootstrap-buildout.py
$ bin/buildout

This will download all required dependent Bob and other packages and install them locally.

User Guide

It is assumed that you have followed the installation instructions for this package and got it installed. Furthermore, we assume that you have installed the two related packages and produced output scores for each frame using both the antispoofing.motion and the antispoofing.lbp packages, and that these scores sit in the directory ./scores/. If not, please create symbolic links to this directory.

Finding Valid Output Scores

The previously generated outputs do not contain a valid score for each video frame. The motion-based counter-measure needs 20 video frames for analyzing the motion correlation between the face and the background, i.e. the method cannot produce scores for the first 19 frames. On the other hand, the LBP-based counter-measure is able to produce a valid output score only when a face is successfully detected and the face size is above 50x50 pixels. Therefore, the frames in which both counter-measures have a valid score (i.e. not a NaN value) must be found before performing the fusion at score level. This process is performed using the script ./bin/find_valid_frames.py, giving the locations of all used output scores, e.g.:

$ ./bin/find_valid_frames.py -s scores/motion_lda scores/lbp_lda -e replay

Combining the Valid Output Scores

The script fuse_scores.py performs the fusion of any counter-measures at score level, using one of two different methods, sum of scores or logistic linear regression (LLR), with a selected score normalization scheme: minmax, zscore or no normalization, e.g.:

$ ./bin/fuse_scores.py -s scores/motion_lda scores/lbp_lda -f SUM -n ZNorm -o scores/lda_sum_z
$ ./bin/fuse_scores.py -s scores/motion_lda scores/lbp_lda -f LLR -n None -o scores/lda_llr_raw

Analyzing the Results of Fusion at Frame-level

The performance of the individual counter-measures and their fusion can be dumped into a file ./results/frame_based/results.txt using the script frame_by_frame_analysis.py:

$ ./bin/frame_by_frame_analysis.py -s scores/motion_lda scores/lbp_lda -f scores/lda_sum_z scores/lda_llr_raw -e replay

The results.txt file shows the performance of each method at frame level.

Running the Time Analysis

The time analysis is the end of the processing chain: it fuses the instantaneous scores of each method to give a better estimation of attacks and real accesses. To use it:

$ ./bin/time_analysis.py -s scores/motion_lda scores/lbp_lda -f scores/lda_sum_z scores/lda_llr_raw -e replay

The time evolution for each method can be found in the directory ./results/evolution/.
The folder also contains a PDF file in which you can find all methods in the same figure.

Mutual Error Analysis

The script venn.py performs mutual error analysis on the given counter-measures and outputs the results into a file ./results/Venn&scatter/Venn.txt:

$ ./bin/venn.py -s scores/motion_lda scores/lbp_lda -e replay

Problems

In case of problems, please contact any of the authors of the paper.
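As a recap of the score-level fusion step in this section, the SUM rule combined with z-score normalization could be sketched as follows. This is an illustrative sketch, not the fuse_scores.py implementation; the function name and the choice of estimating the normalization statistics on training scores are assumptions.

```python
import numpy as np

def fuse_sum_znorm(train_scores, test_scores):
    """SUM fusion with z-score normalization (sketch).

    train_scores, test_scores: lists of 1-D arrays, one entry per
    counter-measure. Normalization statistics are estimated per
    counter-measure on the training scores; the normalized test
    scores are then summed frame by frame.
    """
    fused = np.zeros_like(test_scores[0], dtype=float)
    for tr, te in zip(train_scores, test_scores):
        mu, sigma = tr.mean(), tr.std()
        fused += (te - mu) / max(sigma, 1e-12)  # guard against zero variance
    return fused
```

An LLR fusion would instead train a logistic regression on the stacked (normalized) training scores and output its decision value, but the normalization step stays the same.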
antispoofing.fusion_faceverif
This package contains methods for the fusion of verification algorithms with anti-spoofing methods at decision level and score level, as well as for the evaluation of the fused systems under spoofing attacks. In particular, the scripts in this package enable the fusion of face verification and anti-spoofing algorithms on the Replay-Attack face spoofing database.

The fusion scripts require score files for both the verification and the anti-spoofing algorithm. Hence, at least two score files are needed for each video in Replay-Attack: one for the verification and one for the anti-spoofing scores. Each score file contains the scores for all the frames in the corresponding video of Replay-Attack. If there is no score for a particular frame, the score value needs to be NaN. The format of the score files is .hdf5. The score files should be organized in the same directory structure as the videos in Replay-Attack. Some of the scripts can receive multiple score files as input, enabling the fusion of more than one verification algorithm with more than one anti-spoofing algorithm.

To summarize, the methods in this package enable the user to:

- parse score files in 4-column or 5-column format (format specified in Bob) and extract the necessary information to organize the score files for each video into the Replay-Attack directory structure
- evaluate the threshold of a classification system on the development set
- apply the threshold on an evaluation or any other set
- do AND, SUM, LR and PLR fusion of verification and anti-spoofing system(s)
- plot performance curves
- plot score distributions
- plot scatter plots and decision boundaries

If you use this package and/or its results, please cite the following publications:

Bob, as the core framework used to run the experiments:

@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }

The paper on anti-spoofing in joint operation with a verification system:

@inproceedings{Chingovska_CVPRWORKSHOPONBIOMETRICS_2013, author = {Chingovska, Ivana and Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien}, keywords = {biometric recognition, Counter-Measures, Fusion, Spoofing, trustworthy, vulnerability}, projects = {Idiap, TABULA RASA, BEAT}, month = jun, title = {Anti-spoofing in action: joint operation with a verification system}, booktitle = {Proceedings of CVPR 2013}, year = {2013}, location = {Portland, Oregon}, }

Installation

Note: If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may not be stable, or may become unstable in a matter of moments. Go to http://pypi.python.org/pypi/antispoofing.fusion_faceverif to download the latest stable version of this package. Then, extract the .zip file to a folder of your choice.

The antispoofing.fusion_faceverif package is a satellite package of the free signal processing and machine learning library Bob. This dependency has to be downloaded manually. This version of the package depends on Bob version 2 or greater. To install packages of Bob, please read the Installation Instructions. For Bob to be able to work properly, some dependent Bob packages are required to be installed.
Please make sure that you have read the Dependencies for your operating system. The simplest solution is to download and extract the antispoofing.fusion_faceverif package, then go to the console and write:

$ cd antispoofing.fusion_faceverif
$ python bootstrap-buildout.py
$ bin/buildout

This will download all required dependent Bob and other packages and install them locally.

Requirements and dependencies

As mentioned before, this satellite package requires verification and anti-spoofing score files. To generate the face verification scores for the results in the paper Anti-spoofing in action: joint operation with a verification system, we used FaceRecLib. To generate the anti-spoofing scores, we used the anti-spoofing algorithms from the following satellite packages: antispoofing.lbptop for the LBP and LBP-TOP counter-measures, and antispoofing.motion for the motion-based counter-measure. Both the face verification and anti-spoofing scores were generated on a per-frame basis. Of course, you can experiment with different verification and anti-spoofing algorithms, as long as your score files are organized following the directory structure of Replay-Attack. This satellite package relies on the following satellite packages: antispoofing.utils and antispoofing.fusion.

User guide

This section explains the step-by-step procedure to generate the results presented in the paper Anti-spoofing in action: joint operation with a verification system. The code is tied to the Replay-Attack database at the moment.

Step 1: Generate score files from the verification and anti-spoofing algorithms

The first step is to train a face verification algorithm and create models for each user in it. To generate the face verification scores, you need to create a protocol for matching real-access (LICIT protocol) and spoof (SPOOF protocol) samples to the user models that the algorithm has learned.
The LICIT protocol is created by exhaustively matching each real-access sample to the user model belonging to the sample's user and to all the other models. The SPOOF protocol is created by matching the spoof samples to the user model belonging to the sample's user. In our case, the algorithms work on a frame-by-frame basis. Due to computational limitations, we computed the scores only for every 10th frame of each video. The matching files for the LICIT and SPOOF protocols were then fed into FaceRecLib. To generate the anti-spoofing scores, simply pick your favourite anti-spoofing algorithm and feed Replay-Attack to it. This user guide does not give details on the exact commands to generate the scores; to learn how, please refer directly to FaceRecLib and the anti-spoofing algorithm of your choice.

Step 2: Convert the score files to the requested directory structure

As explained before, the score files need to be organized following the directory structure of Replay-Attack. While the anti-spoofing algorithms we use already give the scores in this format, FaceRecLib outputs score files in 4-column format, in particular, separate score files for the real-access (LICIT protocol) and attack (SPOOF protocol) videos. So, the first step is to convert them into the required format. For example, to convert the LICIT scores in the development set, run the following command:

$ ./bin/four_column_to_dir_structure.py score_file out_dir -t licit -s devel replay

The arguments score_file and out_dir refer to the input 4-column score file and the directory for the converted scores, respectively. To see all the options for the script four_column_to_dir_structure.py, just type --help at the command line. If you want to do the conversion for a particular protocol of Replay-Attack (e.g. the print protocol), state that protocol at the end of the command:

$ ./bin/four_column_to_dir_structure.py score_file out_dir -t licit -s devel replay print

Do not forget to do this step for all the dataset subsets (train, development and test set) and the two protocols (LICIT and SPOOF), using the appropriate input files and script options. Depending on the protocol, the scores will be saved into subdirectories called licit and spoof within out_dir. The score files in 4-column format generated by the recognition algorithm of FaceRecLib used in our work are supplied in this satellite package for your convenience; they can be found in the directory named supplemental_data. If it happens that your face verification or anti-spoofing algorithm outputs the scores in a different format, feel free to implement your own converter to get the scores into the Replay-Attack directory structure.

Step 3: Decision-level fusion

AND decision fusion is supported via the script and_decision_fusion.py. AND decision fusion depends on the decision thresholds of the verification and anti-spoofing algorithms separately. Therefore, we first need to compute them:

$ ./bin/antispoof_threshold.py as_score_dir replay
$ ./bin/faceverif_threshold.py fv_score_dir replay

The arguments as_score_dir and fv_score_dir refer to the directories with the score files for the anti-spoofing and face verification thresholds, respectively. The thresholds calculated with these methods are then fed as input to the and_decision_fusion.py script:

$ ./bin/and_decision_fusion.py -s fv_score_dir -a as_score_dir --ft fv_thr --at as_thr -v replay

The script directly prints the error rates. To see all the options for the script and_decision_fusion.py, just type --help at the command line.

Step 4: Score-level fusion

Three strategies for score-level fusion are available: SUM, LR and PLR.
The score fusion can be performed using the script fusion_fvas.py:

$ ./bin/fusion_fvas.py -s fv_score_dir -a as_score_dir -o outdir

The script writes the fused scores for each file in the specified output directory in 4-column format. Having them, you can easily run any script for computing the performance or plotting. Note that you need to run this script separately for the LICIT and the SPOOF protocols, for both the development and test sets at least. This will result in a total of 4 score files. To see all the options for the script fusion_fvas.py, just type --help at the command line. A very important parameter is --sp, which will save the normalization parameters and the machine of the fusion for further use.

Step 5: Compute performance

To compute the performance using the 4-column format score files containing the fused scores, you can use the scripts eval_threshold.py, to calculate the threshold on the development set, and apply_threshold.py, to compute the performance using the obtained threshold. Do not forget that you have 4 score files (one for the development and one for the test set, for each of the LICIT and SPOOF protocols), and, depending on your needs, you can use any of them for evaluation or application of the threshold:

$ ./bin/eval_threshold.py -s develset_score_file -c eer
$ ./bin/apply_threshold.py -s testset_score_file -t thr

Usually, the development set score file of the LICIT protocol is used to evaluate the threshold. That threshold can be used on any of the 4 score files. To see all the options for the scripts, just type --help at the command line.

Step 6: Plot performance curves

Using the script plot_on_demand.py, you can choose to plot many different plots, like score distributions or DET curves, on the LICIT protocol, the SPOOF protocol, or both. Just see the documentation of the script to learn what input you need to specify for the desired curve.
As mandatory input, you need to give the score files for the LICIT and SPOOF protocols for both the development and test sets:

$ ./bin/plot_on_demand.py devel_licit_scores eval_licit_scores devel_spoof_scores eval_spoof_scores -b eer -i 2

This will plot the DET curve of the LICIT protocol overlaid with the DET curve of the SPOOF protocol. To see all the options for the script plot_on_demand.py, just type --help at the command line.

Step 7: Scatter plots

Scatter plots plot the verification and anti-spoofing scores in the 2D space, together with a decision boundary that depends on the algorithm used for their fusion. To plot a scatter plot for LLR-fused scores, type:

$ ./bin/scatter_plot.py -s fv_score_dir -a as_score_dir -m machine_file -n norm_file -d thr -f LLR

The development threshold (specified with the -d parameter) is a mandatory argument of this script. In the case of AND fusion (option -f AND), two thresholds need to be specified with this argument. The normalization parameter (-n norm_file) needs to be specified for the score fusion algorithms (i.e. options -f SUM, -f LLR or -f LLR_P), where norm_file is a file containing the normalization parameters. Usually, this is the file saved when the option --sp is set when running the script fusion_fvas.py in Step 4. Similarly, the score fusion algorithms require the parameter -m machine_file, where machine_file contains the machine of the fusion algorithm. It is also saved when the option --sp is set when running the script fusion_fvas.py in Step 4. To see all the options for the script scatter_plot.py, just type --help at the command line.

Additional information

The package contains an additional script, dir_to_four_column.py, that might be useful in some cases. It converts scores from the Replay-Attack directory structure to 4-column file structure.

Problems

In case of problems, please contact [email protected]
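To make the decision-level and threshold-evaluation steps above concrete, here is a sketch of the two basic ingredients: an EER threshold estimated on development scores (as eval_threshold.py does with -c eer) and AND decision fusion (as and_decision_fusion.py does). The function names and the brute-force threshold scan are assumptions made for illustration, not the package's actual code.

```python
import numpy as np

def eer_threshold(negatives, positives):
    """Pick the threshold where the false-acceptance rate (negatives scoring
    at or above the threshold) and the false-rejection rate (positives scoring
    below it) are closest -- the EER criterion (sketch)."""
    best_thr, best_gap = 0.0, float("inf")
    for thr in np.sort(np.concatenate([negatives, positives])):
        far = np.mean(negatives >= thr)
        frr = np.mean(positives < thr)
        if abs(far - frr) < best_gap:
            best_gap, best_thr = abs(far - frr), float(thr)
    return best_thr

def and_decision_fusion(fv_scores, as_scores, fv_thr, as_thr):
    """AND fusion: accept a sample only if it passes BOTH the face
    verification threshold and the anti-spoofing threshold (sketch)."""
    return [fv >= fv_thr and an >= as_thr for fv, an in zip(fv_scores, as_scores)]
```

In the package, the two thresholds fed to AND fusion come from antispoof_threshold.py and faceverif_threshold.py; the sketch simply shows how the accept/reject decision combines them.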
antispoofing.fvcompetition_icb2015
This package provides the source code to run the experiments published in the paper The 1st Competition on Counter Measures to Finger Vein Spoofing Attacks. It relies on some satellite packages from Bob to compute the evaluation.

Note: Currently, this package only works in Unix-like environments and under MacOS. Due to limitations of the Bob library, MS Windows operating systems are not supported. We are working on a port of Bob for MS Windows, but it might take a while. In the meanwhile, you could use our VirtualBox images, which can be downloaded here.

Installation

The installation of this package relies on the BuildOut system. By default, use the command line sequence:

$ ./python bootstrap.py
$ ./bin/buildout

There are a few exceptions, which are not automatically downloaded:

Bob

To install Bob, please visit http://www.idiap.ch/software/bob and follow the installation instructions. Please verify that you have at least version 2.0 of Bob installed. If you have installed Bob in a non-standard directory, please open the buildout.cfg file from the base directory and set the prefixes directory accordingly.

Note: The experiments that we report in the paper were generated with Bob version 2.0. If you use different versions of either of these packages, the results might differ slightly. For example, we are aware that, due to some initialization differences, the results using Bob 1.2.0 and 1.2.1 are not identical, but similar.

Image Databases

The experiments are run on an external image database. We do not provide the images from the database itself. Hence, please contact the database owners to obtain a copy of the images. The database used in the competition can be downloaded here:

Spoofing-Attack Finger Vein Database [fvspoofingattack]: http://www.idiap.ch/dataset/fvspoofingattack

Note: After downloading the databases, you will need to tell our software where it can find them, by changing the run_icb2015_competition.sh file.
In particular, please update the PATHDATABASE variable to indicate the directory of the dataset: PATHDATABASE="YOUR_PATH/FVspoofingAttack". Please leave all other configuration parameters unchanged, as changing them might influence the competition analysis and, hence, the reproducibility of the results.

Getting help

In case anything goes wrong, please feel free to open a new ticket in our GitLab page, or send an email to [email protected].

Reproducing the results of the Paper

After successfully setting up the databases, you are now able to run the anti-spoofing finger vein evaluation as explained in the Paper.

Running the experiments

The competition results are generated using the run_icb2015_competition.sh file. Additionally, the individual scripts used can be found in the bin/ directory. See ./bin/icb2015_baseline_countermeasure.py --help and ./bin/icb2015_evaluation_results.py --help for a complete list of options.

Run the competition results on the Spoofing-Attack Finger Vein database:

$ ./run_icb2015_competition.sh

Note: All output directories of the scripts will be automatically generated if they do not exist yet. The submissions folder contains the score files submitted to the competition.

Cite our paper

If you use the results in any of your contributions, please cite the following paper:

@inproceedings{Tome_ICB2015_AntiSpoofFVCompetition, author = {Tome, Pedro and Raghavendra, R. and Busch, Christoph and Tirunagari, Santosh and Poh, Norman and Shekar, B. H. and Gragnaniello, Diego and Sansone, Carlo and Verdoliva, Luisa and Marcel, S{\'{e}}bastien}, keywords = {Biometrics, Finger vein, Spoofing Attacks, Competition}, month = dec, title = {The 1st Competition on Counter Measures to Finger Vein Spoofing Attacks}, booktitle = {International Conference on Biometrics (ICB)}, year = {2015}, location = {Thailand}, url = {http://publications.idiap.ch/index.php/publications/show/} }
antispoofing.lbp
This package implements the LBP counter-measure to spoofing attacks to face recognition systems as described at the paperOn the Effectiveness of Local Binary Patterns in Face Anti-spoofing, by Chingovska, Anjos and Marcel, presented at the IEEE BioSIG 2012 meeting.If you use this package and/or its results, please cite the following publications:Theoriginal paperwith the counter-measure explained in details:@INPROCEEDINGS{Chingovska_BIOSIG_2012, author = {Chingovska, Ivana and Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien}, keywords = {Attack, Counter-Measures, Counter-Spoofing, Face Recognition, Liveness Detection, Replay, Spoofing}, month = sep, title = {On the Effectiveness of Local Binary Patterns in Face Anti-spoofing}, journal = {IEEE BIOSIG 2012}, year = {2012}, }Bobas the core framework used to run the experiments:@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.Raw dataThe data used in the paper is publicly available and should be downloaded and installedpriorto try using the programs described in this package. Visitthe REPLAY-ATTACK database portalfor more information.This satellite package can also work with theCASIA_FASD database.InstallationNoteIf you are reading this page through our GitHub portal and not through PyPI, notethe development tip of the package may not be stableor become unstable in a matter of moments.Go tohttp://pypi.python.org/pypi/antispoofing.lbpto download the latest stable version of this package. 
Then, extract the .zip file to a folder of your choice.

The antispoofing.lbp package is a satellite package of the free signal processing and machine learning library Bob. This dependency has to be downloaded manually. This version of the package depends on Bob version 2 or greater. To install packages of Bob, please read the Installation Instructions. For Bob to be able to work properly, some dependent Bob packages are required to be installed. Please make sure that you have read the Dependencies for your operating system.

The simplest solution is to download and extract the antispoofing.lbp package, then go to the console and write:

$ cd antispoofing.lbp
$ python bootstrap-buildout.py
$ bin/buildout

This will download all required dependent Bob and other packages and install them locally.

User Guide

This section explains how to use the package in order to: a) calculate the LBP features on the REPLAY-ATTACK or CASIA_FASD database; b) perform classification using Chi-2, Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). At the bottom of the page, you can find instructions on how to reproduce the exact paper results.

It is assumed you have followed the installation instructions for the package, and got the required database downloaded and uncompressed in a directory. After running the buildout command, you should have all required utilities sitting inside the bin directory. We expect that the video files of the database are installed in a sub-directory called database at the root of the package. You can use a link to the location of the database files, if you don't want to have the database installed at the root of this package:

$ ln -s /path/where/you/installed/the/database database

If you don't want to create a link, use the --input-dir flag (available in all the scripts) to specify the root directory containing the database files.
That would be the directory that contains the sub-directories train, test, devel and face-locations.

Calculate the LBP features

The first stage of the process is calculating the feature vectors, which are essentially normalized LBP histograms. There are two types of feature vectors:

- per-video averaged feature vectors (the normalized LBP histograms for each frame, averaged over all the frames of the video; the result is a single feature vector for the whole video), or
- a single feature vector for each frame of the video (saved as a multiple-row array in a single file).

The program to be used for the first case is ./bin/calclbp.py, and for the second case ./bin/calcframelbp.py. They both use the utility script spoof/calclbp.py. Depending on the command line arguments, they can compute different types of LBP histograms over the normalized face bounding box. Furthermore, the normalized face bounding box can be divided into blocks or not.

The following command will calculate the per-video averaged feature vectors of all the videos in the REPLAY-ATTACK database and will put the resulting .hdf5 files with the extracted feature vectors in the default output directory ./lbp_features:

$ ./bin/calclbp.py --ff 50 replay

In the above command, the face size filter is set to 50 pixels (as in the paper), and the program will discard all the frames with detected faces smaller than 50 pixels as invalid.

To calculate the feature vectors for each frame separately (and save them into a single file for the full video), you have to run:

$ ./bin/calcframelbp.py --ff 50 replay

To see all the options for the scripts calclbp.py and calcframelbp.py, just type --help at the command line. Change the default options in order to obtain various features, as described in the paper.

If you want to see all the options for a specific database (e.g. protocols, lighting conditions etc.), type the following command (for Replay-Attack):

$ ./bin/calclbp.py replay --help

Classification using Chi-2 distance

The classification using Chi-2 distance consists of two steps. The first one is creating the histogram model (the average LBP histogram of all the real-access videos in the training set). The second step is comparing the features of development and test videos to the model histogram and writing the results.

The script to use for creating the histogram model is ./bin/mkhistmodel.py. It expects that the LBP features of the videos are stored in a folder ./lbp_features. The model histogram will be written in the default output folder ./res. You can change these defaults by setting the input arguments. To execute this script for Replay-Attack, just run:

$ ./bin/mkhistmodel.py replay

The script for performing Chi-2 histogram comparison is ./bin/cmphistmodels.py, and it assumes that the model histogram has already been created. It makes use of the utility script spoof/chi2.py. The default input directory is ./lbp_features, while the default input directory for the histogram model, as well as the default output directory, is ./res. To execute this script for Replay-Attack, just run:

$ ./bin/cmphistmodels.py -s replay

Do not forget the -s option if you want the scores for each video saved in a file.

To see all the options for the scripts mkhistmodel.py and cmphistmodels.py, just type --help at the command line.

Classification with linear discriminant analysis (LDA)

The classification with LDA is performed using the script ./bin/ldatrain_lbp.py. The default input and output directories are ./lbp_features and ./res.
To execute the script with prior PCA dimensionality reduction, as is done in the paper (for Replay-Attack), call:

$ ./bin/ldatrain_lbp.py -r -s replay

Do not forget the -s option if you want the scores for each video saved in a file.

To see all the options for this script, just type --help at the command line.

Classification with support vector machine (SVM)

The classification with SVM is performed using the script ./bin/svmtrain_lbp.py. The default input and output directories are ./lbp_features and ./res. To execute the script with prior normalization of the data to the range [-1, 1], as in the paper (for Replay-Attack), call:

$ ./bin/svmtrain_lbp.py -n --eval -s replay

Do not forget the -s option if you want the scores for each video saved in a file.

To see all the options for this script, just type --help at the command line.

Classification with support vector machine (SVM) on a different database or database subset

In the training process, the SVM machine, as well as the normalization and PCA parameters, are saved in an .hdf5 file. They can be used later for classification of data from a different database or database subset. This can be done using the script ./bin/svmeval_lbp.py. The default input and output directories are ./lbp_features and ./res. To execute the script, call:

$ ./bin/svmeval_lbp.py replay

Do not forget the -s option if you want the scores for each video saved in a file. Also, do not forget to specify the right .hdf5 file where the SVM machine and the parameters are saved, using the -i parameter (the default one is ./res/svm_machine.hdf5).

To see all the options for this script, just type --help at the command line.

Reproduce paper results

The exact commands to reproduce the results from the paper are given here.
First, feature extraction should be done as follows:

$ ./bin/calcframelbp.py -d features/regular replay
$ ./bin/calcframelbp.py -d features/transitional replay
$ ./bin/calcframelbp.py -d features/direction_coded replay
$ ./bin/calcframelbp.py -d features/modified replay
$ ./bin/calcframelbp.py -d features/per-block -b 3 replay

The results in Table II are obtained with the following commands:

$ ./bin/mkhistmodel.py -v features/regular -d models/regular replay
$ ./bin/cmphistmodels.py -v features/regular -m models/regular -d scores/regular -s replay

By changing the -v parameter, you can change the type of features, resulting in the scores for the different columns of the table.

The results in Table III are obtained by the same commands, using the corresponding value of the -v parameter for the per-block computed features.

The results in Table IV for LDA and SVM classification are obtained by the following two commands, respectively:

$ ./bin/ldatrain_lbp.py -v features/regular -d scores/regular -n replay
$ ./bin/svmtrain_lbp.py -v features/regular -d scores/regular -n -r replay

The results for the CASIA-FASD database can be obtained in the same way, by specifying the casia parameter at the end of the commands. Note that the results for CASIA-FASD are reported on a per-block basis, and using 5-fold cross-validation. This means that the results need to be generated 5 times, training with a different fold each time, which can be specified as an argument as well.

Important note: the results in the last column of Table V are not straightforwardly reproducible at the moment (in particular, the concatenation of histograms is not directly supported using the scripts in this satellite package). Furthermore, at the present state, the scripts do not support the NUAA database. Work to solve this inconvenience is in progress :)

Problems

In case of problems, please contact any of the authors of the paper.
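The Chi-2 comparison used by cmphistmodels.py measures how far a probe histogram lies from the average real-access histogram model. The NumPy sketch below shows the distance itself; it is illustrative only and is not the package's spoof/chi2.py (the epsilon guard against empty bins and the example histograms are our own additions):

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

model = np.array([0.25, 0.25, 0.25, 0.25])  # average real-access histogram
probe = np.array([0.10, 0.40, 0.25, 0.25])  # histogram of a test video

print(chi2_distance(model, model))  # identical histograms -> 0.0
print(chi2_distance(model, probe))  # larger distance suggests an attack
```

A probe whose distance to the model exceeds a threshold tuned on the development set would be labelled an attack.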
antispoofing.lbptop
This package implements an LBP-TOP based countermeasure to spoofing attacks to face recognition systems as described at the paper LBP-TOP based countermeasure against facial spoofing attacks, International Workshop on Computer Vision With Local Binary Pattern Variants, 2012.If you use this package and/or its results, please cite the following publications:The original paper with the counter-measure explained in details:@inproceedings{Pereira_LBP_2012, author = {Pereira, Tiago de Freitas and Anjos, Andr{\'{e}} and De Martino, Jos{\'{e}} Mario and Marcel, S{\'{e}}bastien}, keywords = {Attack, Countermeasures, Counter-Spoofing, Face Recognition, Liveness Detection, Replay, Spoofing}, month = nov, year = {2012}, title = {LBP-TOP based countermeasure against facial spoofing attacks}, journal = {International Workshop on Computer Vision With Local Binary Pattern Variants - ACCV}, }Bob as the core framework used to run the experiments:@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }Raw DataThe dataset used in the paper is REPLAY-ATTACK database and it is publicly available. It should be downloaded and installedpriorto using the programs described in this package. 
Visitthe REPLAY-ATTACK database pagefor more information.InstallationNoteIf you are reading this page through our GitHub portal and not through PyPI, notethe development tip of the package may not be stableor become unstable in a matter of moments.Go tohttp://pypi.python.org/pypi/antispoofing.lbptopto download the latest stable version of this package.There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers likepip(oreasy_install) or manually download, unpack and usezc.buildoutto create a virtual work environment just for this package.Using an automatic installerUsingpipis the easiest (shell commands are marked with a$signal):$ pip install antispoofing.lbptopYou can also do the same witheasy_install:$ easy_install antispoofing.lbptopThis will download and install this package plus any other required dependencies. It will also verify if the version of Bob you have installed is compatible.This scheme works well with virtual environments byvirtualenvor if you have root access to your machine. Otherwise, we recommend you use the next option.Usingzc.buildoutDownload the latest version of this package fromPyPIand unpack it in your working area. The installation of the toolkit itself usesbuildout. You don’t need to understand its inner workings to use this package. Here is a recipe to get you started:$ python bootstrap.py $ ./bin/buildoutThese 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.NoteThe python shell used in the first line of the previous command set determines the python interpreter that will be used for all scripts developed inside this package. 
Because this package makes use of Bob, you must make sure that the bootstrap.py script is called with the same interpreter used to build Bob, or unexpected problems might occur.

If Bob is installed by the administrator of your system, it is safe to assume it uses the default python interpreter. In this case, the above command lines should work as expected. If you have Bob installed somewhere else, in a private directory, edit the file buildout.cfg before running ./bin/buildout. Find the section named external and edit the line egg-directories to point to the lib directory of the Bob installation you want to use. For example:

[external]
recipe = xbob.buildout:external
egg-directories=/Users/crazyfox/work/bob/build/lib

User Guide

It is assumed you have followed the installation instructions for the package and got this package installed and the REPLAY-ATTACK database downloaded and uncompressed in a directory. You should have all required utilities sitting inside a binary directory depending on your installation strategy (utilities will be inside bin if you used the buildout option). We expect that the video files downloaded for the REPLAY-ATTACK database are installed in a sub-directory called database at the root of the package. You can use a link to the location of the database files, if you don't want to have the database installed at the root of this package:

$ ln -s /path/where/you/installed/the/replay-attack-database database

If you don't want to create a link, use the --input-dir flag to specify the root directory containing the database files.
That would be the directory that contains the sub-directories train, test, devel and face-locations.

Calculate the multiresolution and single resolution LBP-TOP features

The first stage of the process is calculating the feature vectors, which are essentially LBP-TOP histograms (XY, XT and YT planes) for each frame of the video. The program to be used is script/lbptop_calculate_parameters.py. The resulting histograms will be put in .hdf5 files in the default output directory ./lbp_features:

$ ./bin/lbptop_calculate_parameters.py replay

To generate LBP-TOP features following the multiresolution strategy in the time domain, it is necessary to set different values for Rt. For example, to generate a multiresolution description in the time domain for Rt=[1-4], the command is as follows:

$ ./bin/lbptop_calculate_parameters.py -rT 1 2 3 4 replay

To generate a single resolution description in the time domain, it is necessary to set only one value for Rt. For example, for Rt=1 the command is as follows:

$ ./bin/lbptop_calculate_parameters.py -rT 1 replay

To see all the options for the script lbptop_calculate_parameters.py, just type --help at the command line.

Classification using Chi-2 Distance

The classification using Chi-2 distance consists of two steps. The first one is creating the histogram model (the average LBP-TOP histogram for each plane, and their combinations, of all the real-access videos in the training set). The second step is comparing the features of development and test videos to the model histogram and writing the results.

The script to use for creating the histogram model is script/lbptop_mkhistmodel.py. It expects that the LBP-TOP features of the videos are stored in a folder ./lbp_features. The model histogram will be written in the default output folder ./res. You can change these defaults by setting the input arguments.
To execute this script, just run:

$ ./bin/lbptop_mkhistmodel.py

The script for performing Chi-2 histogram comparison is script/lbptop_cmphistmodels.py, and it assumes that the model histogram has already been created. It makes use of the utility scripts spoof/chi2.py and ml/perf.py for writing the results in a file. The default input directory is ./lbp_features, while the default input directory for the histogram model, as well as the default output directory, is ./res. To execute this script, just run:

$ ./bin/lbptop_cmphistmodels.py

The performance results will be calculated for each of the LBP-TOP planes and the combinations XT+YT and XY+XT+YT.

To see all the options for the scripts lbptop_mkhistmodel.py and lbptop_cmphistmodels.py, just type --help at the command line.

Classification with Linear Discriminant Analysis (LDA)

The classification with LDA is performed using the script script/lbptop_ldatrain.py. It makes use of the scripts ml/lda.py, ml/pca.py (if PCA reduction is performed on the data) and ml/norm.py (if the data need to be normalized). The default input and output directories are ./lbp_features and ./res. To execute the script with the default parameters, call:

$ ./bin/lbptop_ldatrain.py

The performance results will be calculated for each of the LBP-TOP planes and the combinations XT+YT and XY+XT+YT.

To see all the options for this script, just type --help at the command line.

Classification with Support Vector Machine (SVM)

The classification with SVM is performed using the script script/lbptop_svmtrain.py. It makes use of the scripts ml/pca.py (if PCA reduction is performed on the data) and ml/norm.py (if the data need to be normalized). The default input and output directories are ./lbp_features and ./res.
To execute the script with the default parameters, call:$ ./bin/lbptop_svmtrain.pyThe performance results will be calculated for each LBP-TOP planes and the combinations XT+YT and XY+XT+YT.To see all the options for this script, just type–helpat the command line.Generating paper resultsThe next code blocks are codes to generate the results from lines 4, 5, 6, 7, 8 of Table 1.Line 4#Extracting the LBP-TOP features $ ./bin/lbptop_calculate_parameters.py --directory lbptop_features/ --input-dir database/ -rX 1 -rY 1 -rT 1 2 3 4 5 6 -cXY -cXT -cYT --lbptypeXY riu2 --lbptypeXT riu2 --lbptypeYT riu2 replay #Running the SVM machine $ ./bin/lbptop_svmtrain.py -n --input-dir lbptop_features/ --output-dir res/ replay #Extracting the scores for each plane $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XY-plane.txt --normalization-file svm_normalization_XY-plane.txt --machine-type SVM --plane XY --output-dir res/scores/scores_XY replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-Plane.txt --normalization-file svm_normalization_XT-Plane.txt --machine-type SVM --plane XT --output-dir res/scores/scores_XT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_YT-Plane.txt --normalization-file svm_normalization_YT-Plane.txt --machine-type SVM --plane YT --output-dir res/scores/scores_YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-YT-Plane.txt --normalization-file svm_normalization_XT-YT-Plane.txt --machine-type SVM --plane XT-YT --output-dir res/scores/scores_XT-YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XY-XT-YT-plane.txt --normalization-file svm_normalization_XY-XT-YT-plane.txt --machine-type SVM --plane XY-XT-YT --output-dir res/scores/scores_XY-XT-YT replay #Result Analysis $./bin/lbptop_result_analysis.py --scores-dir res/scores/ --output-dir 
res/results/ replayLine 5:#Extracting the LBP-TOP features $ ./bin/lbptop_calculate_parameters.py --directory lbptop_features/ --input-dir database/ -rX 1 -rY 1 -rT 1 2 3 4 5 6 -cXY -cXT -cYT -nXT 4 -nYT 4 replay #Running the SVM machine $ ./bin/lbptop_svmtrain.py -n --input-dir lbptop_features/ --output-dir res/ replay #Extracting the scores for each plane $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XY-plane.txt --normalization-file svm_normalization_XY-plane.txt --machine-type SVM --plane XY --output-dir res/scores/scores_XY replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-Plane.txt --normalization-file svm_normalization_XT-Plane.txt --machine-type SVM --plane XT --output-dir res/scores/scores_XT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_YT-Plane.txt --normalization-file svm_normalization_YT-Plane.txt --machine-type SVM --plane YT --output-dir res/scores/scores_YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-YT-Plane.txt --normalization-file svm_normalization_XT-YT-Plane.txt --machine-type SVM --plane XT-YT --output-dir res/scores/scores_XT-YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XY-XT-YT-plane.txt --normalization-file svm_normalization_XY-XT-YT-plane.txt --machine-type SVM --plane XY-XT-YT --output-dir res/scores/scores_XY-XT-YT replay #Result Analysis $./bin/lbptop_result_analysis.py --scores-dir res/scores/ --output-dir res/results/ replayLine 6:#Extracting the LBP-TOP features $ ./bin/lbptop_calculate_parameters.py --directory lbptop_features/ --input-dir database/ -rX 1 -rY 1 -rT 1 2 3 4 -cXY -cXT -cYT replay #Running the SVM machine $ ./bin/lbptop_svmtrain.py -n --input-dir lbptop_features/ --output-dir res/ replay #Extracting the scores for each plane $ ./bin/lbptop_make_scores.py --features-dir 
lbptop_features --machine-file svm_machine_XY-plane.txt --normalization-file svm_normalization_XY-plane.txt --machine-type SVM --plane XY --output-dir res/scores/scores_XY replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-Plane.txt --normalization-file svm_normalization_XT-Plane.txt --machine-type SVM --plane XT --output-dir res/scores/scores_XT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_YT-Plane.txt --normalization-file svm_normalization_YT-Plane.txt --machine-type SVM --plane YT --output-dir res/scores/scores_YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-YT-Plane.txt --normalization-file svm_normalization_XT-YT-Plane.txt --machine-type SVM --plane XT-YT --output-dir res/scores/scores_XT-YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XY-XT-YT-plane.txt --normalization-file svm_normalization_XY-XT-YT-plane.txt --machine-type SVM --plane XY-XT-YT --output-dir res/scores/scores_XY-XT-YT replay #Result Analysis $./bin/lbptop_result_analysis.py --scores-dir res/scores/ --output-dir res/results/ replayLine 7:#Extracting the LBP-TOP features $ ./bin/lbptop_calculate_parameters.py --directory lbptop_features/ --input-dir database/ -rX 1 -rY 1 -rT 1 2 -cXY -cXT -cYT --lbptypeXY regular --lbptypeXT regular --lbptypeYT regular replay #Running the SVM machine $ ./bin/lbptop_svmtrain.py -n --input-dir lbptop_features/ --output-dir res/ replay #Extracting the scores for each plane $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XY-plane.txt --normalization-file svm_normalization_XY-plane.txt --machine-type SVM --plane XY --output-dir res/scores/scores_XY replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-Plane.txt --normalization-file svm_normalization_XT-Plane.txt --machine-type SVM 
--plane XT --output-dir res/scores/scores_XT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_YT-Plane.txt --normalization-file svm_normalization_YT-Plane.txt --machine-type SVM --plane YT --output-dir res/scores/scores_YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-YT-Plane.txt --normalization-file svm_normalization_XT-YT-Plane.txt --machine-type SVM --plane XT-YT --output-dir res/scores/scores_XT-YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XY-XT-YT-plane.txt --normalization-file svm_normalization_XY-XT-YT-plane.txt --machine-type SVM --plane XY-XT-YT --output-dir res/scores/scores_XY-XT-YT replay #Result Analysis $./bin/lbptop_result_analysis.py --scores-dir res/scores/ --output-dir res/results/ replayLine 8:#Extracting the LBP-TOP features $ ./bin/lbptop_calculate_parameters.py --directory lbptop_features/ --input-dir database/ -rX 1 -rY 1 -rT 1 2 -cXY -cXT -cYT -nXT 16 -nYT 16 replay #Running the SVM machine $ ./bin/lbptop_svmtrain.py -n --input-dir lbptop_features/ --output-dir res/ replay #Extracting the scores for each plane $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XY-plane.txt --normalization-file svm_normalization_XY-plane.txt --machine-type SVM --plane XY --output-dir res/scores/scores_XY replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-Plane.txt --normalization-file svm_normalization_XT-Plane.txt --machine-type SVM --plane XT --output-dir res/scores/scores_XT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_YT-Plane.txt --normalization-file svm_normalization_YT-Plane.txt --machine-type SVM --plane YT --output-dir res/scores/scores_YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XT-YT-Plane.txt --normalization-file 
svm_normalization_XT-YT-Plane.txt --machine-type SVM --plane XT-YT --output-dir res/scores/scores_XT-YT replay $ ./bin/lbptop_make_scores.py --features-dir lbptop_features --machine-file svm_machine_XY-XT-YT-plane.txt --normalization-file svm_normalization_XY-XT-YT-plane.txt --machine-type SVM --plane XY-XT-YT --output-dir res/scores/scores_XY-XT-YT replay #Result Analysis $./bin/lbptop_result_analysis.py --scores-dir res/scores/ --output-dir res/results/ replayAfter that, it’s recommended to go out for a long coffee. This procedure can take a week.ProblemsIn case of problems, please contact any of the authors of the paper.
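To make the feature behind this package more concrete: LBP-TOP computes ordinary LBP codes not only on the spatial XY plane of a video but also on the XT and YT planes, which slice the video volume along time, and concatenates the resulting histograms. The sketch below is a deliberately simplified illustration (single radius, 8 neighbours, only the central slice of each plane) and is not the extractor implemented by lbptop_calculate_parameters.py:

```python
import numpy as np

def lbp8(img):
    """Basic LBP codes (8 neighbours, radius 1) for a 2D array."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the neighbour at (dy, dx) for every centre pixel.
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << np.uint8(bit)
    return codes

def lbptop_histogram(video):
    """Concatenated, normalized LBP histograms of the central XY, XT
    and YT slices of a (time, height, width) video volume."""
    t, h, w = video.shape
    planes = (video[t // 2],        # XY: one frame
              video[:, h // 2, :],  # XT: one image row over time
              video[:, :, w // 2])  # YT: one image column over time
    hists = [np.bincount(lbp8(p).ravel(), minlength=256) for p in planes]
    return np.concatenate([hs / hs.sum() for hs in hists])

rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(20, 32, 32))  # toy grey-scale video
feat = lbptop_histogram(video)
print(feat.shape)  # three 256-bin histograms -> (768,)
```

The real extractor additionally pools codes over every frame, supports multiple time radii Rt for the multiresolution variant, and offers regular/riu2 code mappings, as the command-line options above show.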
antispoofing.motion
This package implements a motion-based counter-measure to spoofing attacks to face recognition systems as described at the paperCounter-Measures to Photo Attacks in Face Recognition: a public database and a baseline, by Anjos and Marcel, International Joint Conference on Biometrics, 2011.If you use this package and/or its results, please cite the following publications:The original paper with the counter-measure explained in details:@inproceedings{Anjos_IJCB_2011, author = {Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien}, keywords = {Attack, Counter-Measures, Counter-Spoofing, Disguise, Dishonest Acts, Face Recognition, Face Verification, Forgery, Liveness Detection, Replay, Spoofing, Trick}, month = oct, title = {Counter-Measures to Photo Attacks in Face Recognition: a public database and a baseline}, booktitle = {International Joint Conference on Biometrics 2011}, year = {2011}, url = {http://publications.idiap.ch/downloads/papers/2011/Anjos_IJCB_2011.pdf} }Bob as the core framework used to run the experiments:@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos and L. El Shafey and R. Wallace and M. G\"unther and C. McCool and S. 
Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, url = {http://publications.idiap.ch/downloads/papers/2012/Anjos_Bob_ACMMM12.pdf}, }If you decide to use the REPLAY-ATTACK database, you should also mention the following paper, where it is introduced:@inproceedings{Chingovska_BIOSIG_2012, author = {Chingovska, Ivana and Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien}, keywords = {Attack, Counter-Measures, Counter-Spoofing, Face Recognition, Liveness Detection, Replay, Spoofing}, month = sep, title = {On the Effectiveness of Local Binary Patterns in Face Anti-spoofing}, booktitle = {IEEE Biometrics Special Interest Group}, year = {2012}, url = {http://publications.idiap.ch/downloads/papers/2012/Chingovska_IEEEBIOSIG2012_2012.pdf}, }If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.Raw dataThis method was originally conceived to work with thethe PRINT-ATTACK database, but has since evolved to work with the whole of thethe REPLAY-ATTACK database, which is a super-set of the PRINT-ATTACK database. You are allowed to select protocols in each of the applications described in this manual. To generate the results for the paper, just selectprintas protocol option where necessary. Detailed comments about specific results or tables are given where required.The data used in the paper is publicly available and should be downloaded and installedpriorto try using the programs described in this package. 
The root directory of the database installation is used by the first program in the Antispoofing-Motion toolchain.

Installation

Note: If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may not be stable or may become unstable in a matter of moments.

Go to http://pypi.python.org/pypi/antispoofing.motion to download the latest stable version of this package. Then, extract the .zip file to a folder of your choice.

The antispoofing.motion package is a satellite package of the free signal processing and machine learning library Bob. This dependency has to be downloaded manually. This version of the package depends on Bob version 2 or greater. To install packages of Bob, please read the Installation Instructions. For Bob to be able to work properly, some dependent Bob packages are required to be installed. Please make sure that you have read the Dependencies for your operating system.

The simplest solution is to download and extract the antispoofing.motion package, then go to the console and write:

$ cd antispoofing.motion
$ python bootstrap-buildout.py
$ bin/buildout

This will download all required dependent Bob and other packages and install them locally.

User Guide

It is assumed you have followed the installation instructions for the package, got this package installed, and downloaded and uncompressed the REPLAY-ATTACK (or PRINT-ATTACK) database in a directory to which you have read access. Throughout this manual, we will call this directory /root/of/database. That would be the directory that contains the sub-directories train, test, devel and face-locations.

Note for Grid Users

At Idiap, we use the powerful Sun Grid Engine (SGE) to parallelize our job submissions as much as we can. At the Biometrics group, we have developed a little toolbox, gridtk (http://pypi.python.org/pypi/gridtk), that can submit and manage jobs at the Idiap computing grid through SGE.
If you are at Idiap, you can download and install this toolset by adding gridtk to the eggs section of your buildout.cfg file, if it is not already there. If you are not, you may still look inside for tips on automated parallelization of scripts.

The following sections explain how to reproduce the paper results in single (non-gridified) jobs. A note is given where relevant explaining how to parallelize the job submission using gridtk.

Calculate Frame Differences

The first stage of the process is to calculate the normalized frame differences using the video sequences. The program that does this is bin/motion_framediff.py. It can calculate normalized frame differences in distinct parts of the scene (given that you provide face locations for each of the frames in all video sequences to be analyzed).

To execute the frame difference process on all videos in the REPLAY-ATTACK database, just execute:

$ ./bin/motion_framediff.py /root/of/database results/framediff replay

There are more options for the motion_framediff.py script you can use (such as the sub-protocol selection for the Replay Attack database). Note that, by default, all applications are tuned to work with the whole of the database. Just type --help after the keyword replay at the command line for instructions.

Note: To parallelize this job, do the following:

$ ./bin/jman submit --array=1200 ./bin/motion_framediff.py /root/of/database results/framediff replay

The magic number of 1200 entries can be found by executing:

$ ./bin/motion_framediff.py --grid-count replay

which just prints the number of jobs required for the grid execution.

Calculate the 5 Quantities

The second step, after calculating the frame differences, is to compute the set of 5 quantities that are required for the detection process.
To reproduce the results in the paper, we accumulate the results in windows of 20 frames, without overlap:

$ ./bin/motion_diffcluster.py results/framediff results/quantities replay

There are more options for the motion_diffcluster.py script you can use (such as the sub-protocol selection). Just type --help at the command line for instructions.

Note: This job is very fast and normally does not require parallelization. You can still do it with:

$ ./bin/jman submit --array=1200 ./bin/motion_diffcluster.py results/framediff results/quantities replay

Training with Linear Discriminant Analysis (LDA)

Training a linear machine to perform LDA should go like this:

$ ./bin/motion_ldatrain.py --verbose results/quantities results/lda replay

This will create a new linear machine and train it using the training data. An evaluation based on the EER of the development set is performed at the end of the training:

Performance evaluation:
 -> EER @ devel set threshold: 8.11125e-02
 -> Devel set results:
    * FAR : 16.204% (175/1080)
    * FRR : 16.174% (558/3450)
    * HTER: 16.189%
 -> Test set results:
    * FAR: 16.389% (236/1440)
    * FRR: 18.641% (856/4592)
    * HTER: 17.515%

The resulting linear machine will be saved in the output directory called results/lda.

Training an MLP

Training MLPs to perform discrimination should go like this:

$ ./bin/motion_rproptrain.py --verbose --epoch=10000 --batch-size=500 --no-improvements=1000000 --maximum-iterations=10000000 results/quantities results/mlp replay

This will create a new MLP and train it using the data produced by the "clustering" step. The training can take anywhere from 20 to 30 minutes (or even more), depending on your machine speed. You should see some debugging output with partial results as the training goes along:

...
iteration: RMSE:real/RMSE:attack (EER:%) ( train | devel )
    0: 9.1601e-01/1.0962e+00 (60.34%) | 9.1466e-01/1.0972e+00 (58.71%)
    0: Saving best network so far with average devel.
RMSE = 1.0059e+00
    0: New valley stop threshold set to 1.2574e+00
10000: 5.6706e-01/4.2730e-01 (8.29%) | 7.6343e-01/4.3836e-01 (11.90%)
10000: Saving best network so far with average devel. RMSE = 6.0089e-01
10000: New valley stop threshold set to 7.5112e-01
20000: 5.6752e-01/4.2222e-01 (8.21%) | 7.6444e-01/4.3515e-01 (12.07%)
20000: Saving best network so far with average devel. RMSE = 5.9979e-01
20000: New valley stop threshold set to 7.4974e-01

The resulting MLP will be saved in the output directory called results/mlp. That directory will also contain performance analysis plots. The results derived after this step are equivalent to the results shown in Table 2 and Figure 3 of the paper.

To get results for specific supports, as shown in the first two lines of Table 2, just select the support using the --support=hand or --support=fixed flag to motion_rproptrain.py. Place these flags after the keyword replay at the command line. At this point, it is advisable to use different output directories via the --output-dir flag as well. If you need to modify or regenerate Figure 3 of the paper, just look at antispoofing/motion/ml/perf.py, which contains all plotting and analysis routines.

Note: If you think that the training is taking too long, you can interrupt it by pressing CTRL-C.
This will cause the script to quit gracefully and still evaluate the performance of the best MLP network found up to that point.

Note: To execute this script in the grid environment, just set the output directory to depend on the SGE_TASK_ID environment variable:

$ ./bin/jman submit --array=10 ./bin/motion_rproptrain.py --verbose --epoch=10000 --batch-size=500 --no-improvements=1000000 --maximum-iterations=10000000 results/quantities 'results/mlp.%(SGE_TASK_ID)s' replay

Dumping Machine (MLP or LDA) Scores

You should now dump the scores for every input file in the results/quantities directory using the motion_make_scores.py script. For example, to dump scores produced by an MLP:

$ ./bin/motion_make_scores.py --verbose results/quantities results/mlp/mlp.hdf5 results/mlp-scores replay

This should give you the detailed output of the machine for every input file in the training, development and test sets. You can use these score files in your own score analysis routines, for example.

Note: The score file format is an HDF5 file with a single array, which contains the scores for every frame in the input video. Values marked as NaN should be ignored by your procedure. The reason varies: it may mean no valid face was detected on such a frame, or that the motion-detection procedure decided (on user configuration) to skip the analysis of that frame.

Running the Time Analysis

The time analysis is the end of the processing chain; it fuses the scores of instantaneous outputs to give a better estimation of attacks and real-accesses over a set of frames. You can use it with the scores output by MLPs or by linear machines (LDA training). To use it, write something like:

$ ./bin/motion_time_analysis.py --verbose results/mlp-scores results/mlp-time replay

The 3 curves on Figure 4 of the paper relate to the different support types. Just repeat the procedure for every system trained with data for a particular support (equivalent to the entries in Table 2).
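The per-frame score files described in the note above hold one score per video frame, with NaN marking frames that could not be analyzed, so any fusion or averaging must drop those entries first. A minimal sketch with numpy (a literal array stands in for the HDF5 contents here; how you read the actual file, e.g. with h5py or Bob's I/O layer, is an assumption about your setup):

```python
import numpy as np

# Stand-in for the single array stored in one score file produced by
# motion_make_scores.py; NaN entries are frames with no valid face
# detection (or frames skipped by configuration).
scores = np.array([0.12, np.nan, 0.30, np.nan, -0.05, 0.44])

# Keep only the frames that actually carry a score.
valid = scores[~np.isnan(scores)]

print(len(valid))                     # 4 frames survive
print(round(float(valid.mean()), 4))  # average score over valid frames
```

Averaging without this filtering would propagate NaN into every derived quantity, which is why the analysis scripts discard such frames.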
To set the support, use --help after the keyword replay on the command-line above to find out how to specify the support to this program. The output of this script is dumped as PDF (plot) and text (.rst file) in the specified directory.

Merging Scores

If you wish to create a single 5-column format file, combining this counter-measure's scores for every video into a single file that can be fed to external analysis utilities such as our antispoofing.evaluation package (http://pypi.python.org/pypi/antispoofing.evaluation), you should use the script motion_merge_scores.py. You will have to specify how many of the scores in every video you want to average, and the input directory containing the score files to be merged.

The output of the program consists of three 5-column formatted files with the client identities and scores for every video in the input directory. A line in the output file corresponds to a video from the database.

You run this program on the output of motion_make_scores.py. So, it should look like this if you followed the previous example:

$ ./bin/motion_merge_scores.py results/mlp-scores results/mlp-merged replay

The above command line will generate 3 files containing the training, development and test scores, accumulated over each video in the respective subsets, for the input scores in the given input directory.

Problems

In case of problems, please contact any of the authors of the paper.
antispoofing.optflow
This package contains our published Optical Flow algorithm for face recognition anti-spoofing. This document explains how to install it and use it to reproduce our paper results.

If you use this package and/or its results, please cite the following publications:

The original paper, with the counter-measure explained in detail (under review):

@article{Anjos_IETBMT_2013, author = {Anjos, Andr{\'{e}} and Murali Mohan Chakka and Marcel, S{\'{e}}bastien}, keywords = {Attack, Counter-Measures, Counter-Spoofing, Disguise, Dishonest Acts, Face Recognition, Face Verification, Forgery, Liveness Detection, Replay, Spoofing, Trick, Optical Flow}, month = apr, title = {Motion-Based Counter-Measures to Photo Attacks in Face Recognition}, journal = {Institution of Engineering and Technology - Biometrics}, year = {2013}, }

Bob, as the core framework used to run the experiments:

@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, url = {http://publications.idiap.ch/downloads/papers/2012/Anjos_Bob_ACMMM12.pdf}, }

If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.

Raw data

The data used in the paper is publicly available and should be downloaded and installed prior to trying to use the programs described in this package.
Visit the PHOTO-ATTACK database portal for more information.

Installation

Note: If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may not be stable or may become unstable in a matter of moments.

Go to http://pypi.python.org/pypi/antispoofing.optflow to download the latest stable version of this package.

There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install), or manually download, unpack and use zc.buildout to create a virtual work environment just for this package.

Using an automatic installer

Using pip is the easiest (shell commands are marked with a $ sign):

$ pip install antispoofing.optflow

You can also do the same with easy_install:

$ easy_install antispoofing.optflow

This will download and install this package plus any other required dependencies. It will also verify if the version of Bob you have installed is compatible.

This scheme works well with virtual environments by virtualenv or if you have root access to your machine. Otherwise, we recommend you use the next option.

Using zc.buildout

Download the latest version of this package from PyPI and unpack it in your working area. The installation of the toolkit itself uses buildout. You don't need to understand its inner workings to use this package. Here is a recipe to get you started:

$ python bootstrap.py
$ ./bin/buildout

These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.

Note: The python shell used in the first line of the previous command set determines the python interpreter that will be used for all scripts developed inside this package.
Because this package makes use of Bob, you must make sure that the bootstrap.py script is called with the same interpreter used to build Bob, or unexpected problems might occur.

If Bob is installed by the administrator of your system, it is safe to assume it uses the default python interpreter. In this case, the above command lines should work as expected. If you have Bob installed somewhere else, in a private directory, edit the file buildout.cfg before running ./bin/buildout. Find the section named buildout and edit or add the line prefixes to point to the directory where Bob is installed or built. For example:

[buildout]
...
prefixes=/Users/crazyfox/work/bob/build

User Guide

It is assumed you have followed the installation instructions for the package, got this package installed, and downloaded and uncompressed the PHOTO-ATTACK database in a directory. You should have all required utilities sitting inside a binary directory, depending on your installation strategy (the utilities will be inside bin if you used the buildout option). We expect the video files downloaded for the PHOTO-ATTACK database to be installed in a sub-directory called database at the root of the package. You can use a link to the location of the database files if you don't want to have the database installed at the root of this package:

$ ln -s /path/where/you/installed/the/photo-attack-database database

If you don't want to create a link, use the --input-dir flag to specify the root directory containing the database files. That would be the directory that contains the sub-directories train, test, devel and face-locations.

Paper Layout: How to Reproduce our Results

The paper studies 4 algorithms (the first 3 are published elsewhere and are not a contribution of this paper):

Reference System 1 (RS1) - Kollreider's Optical Flow anti-spoofing:

@article{Kollreider_2009, author={K. Kollreider AND H. Fronthaler AND J.
Bigun}, title={Non-intrusive liveness detection by face images}, volume={27}, number={3}, journal={Image and Vision Computing}, publisher={Elsevier B.V.}, year={2009}, pages={233--244}, }

Reference System 2 (RS2) - Bao's Optical Flow anti-spoofing:

@inproceedings{Bao_2009, author={Wei Bao AND H. Li AND Nan Li AND Wei Jiang}, title={A liveness detection method for face recognition based on optical flow field}, booktitle={2009 International Conference on Image Analysis and Signal Processing}, publisher={IEEE}, year={2009}, pages={233--236}, }

Reference System 3 (RS3) - Our own frame-difference based anti-spoofing:

@inproceedings{Anjos_IJCB_2011, author = {Anjos, Andr{\'{e}} and Marcel, S{\'{e}}bastien}, keywords = {Attack, Counter-Measures, Counter-Spoofing, Disguise, Dishonest Acts, Face Recognition, Face Verification, Forgery, Liveness Detection, Replay, Spoofing, Trick}, month = oct, title = {Counter-Measures to Photo Attacks in Face Recognition: a public database and a baseline}, booktitle = {International Joint Conference on Biometrics 2011}, year = {2011}, url = {http://publications.idiap.ch/downloads/papers/2011/Anjos_IJCB_2011.pdf} }

The final algorithm, based on Optical Flow Correlation (OFC), represents our contribution in this paper.

To reproduce the results for RS3, you can follow the instructions in its own satellite package for Bob. The scripts for that package should be auto-generated and made available under your bin directory as well (this package depends on that one).

In this manual, we address how to extract results for RS1, RS2 and OFC, which operate on top of a previously estimated Optical Flow (OF) field. OF is, therefore, the first topic in this manual.

Extract the Optical Flow Features

We ship this package with a preset to use Ce Liu's OF framework. This is of course not required, but it is the framework we have tested our method with, and therefore the one we recommend you start with.
This framework estimates the dense OF field between any two successive frames. It is quite slow; be warned, it may take quite some time to get through all the videos. To run the extraction sequentially, for all videos, use the following command:

$ ./bin/optflow_estimate.py --verbose /root/of/database results/flows replay --protocol=photo

Note: The command line above is going to take a lot of time to complete. You may need to parallelize the job. If you are at Idiap, you can use the gridtk package, which should be downloaded and installed in your current environment:

$ ./bin/jman submit --array=800 --queue=q1d ./bin/optflow_estimate.py --verbose /root/of/database results/flows replay --protocol=photo

The magic number of 800 entries can be found by executing:

$ ./bin/optflow_estimate.py --grid-count replay --protocol=photo

which just prints the number of jobs required for the grid execution. Each job consumes between 1 and 2 gigabytes of RAM. Therefore, you must choose the right queue and may need to set memory requirements for the machines you will be running on.

Note: You may want to replace this phase with an algorithm of your own. Notice that the output format is 1 HDF5 file per input video in the database, organized in the same way as in the original database. Each output file should contain a single 4D 64-bit floating-point array with the following shape: (frames, u+v, height, width). The second dimension corresponds to the U (horizontal) and V (vertical) velocities as output by your algorithm, for every pixel in the image. If you have not used a dense OF estimator, please extrapolate the values yourself before calling the next scripts. If you respect this requirement, then you can test the results of this framework with any OF estimation technique of your choice.

Once you are in possession of the flow fields, you can start calculating the scores required by each of the methods reviewed in the paper.
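If you do substitute your own estimator, the expected array layout can be sanity-checked as follows (a sketch with numpy and toy dimensions; the h5py call shown in the comment is an assumption, since the package itself writes through Bob's I/O layer):

```python
import numpy as np

# Toy dimensions; a real video would have hundreds of frames at the
# database's native resolution.
frames, height, width = 3, 4, 6

u = np.zeros((frames, height, width))  # horizontal velocities
v = np.ones((frames, height, width))   # vertical velocities

# Pack U and V along the second axis, as the scripts expect:
# a single 4D 64-bit float array shaped (frames, u+v, height, width).
flow = np.stack([u, v], axis=1).astype(np.float64)

assert flow.shape == (frames, 2, height, width)
assert flow.dtype == np.float64

# Writing one such array per input video could then look like, e.g.:
#   with h5py.File('video.hdf5', 'w') as f:
#       f['array'] = flow
print(flow.shape)
```

Keeping U and V interleaved per frame (rather than in two separate datasets) is what lets the downstream scripts slice a whole flow field with a single index.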
It can help, in terms of processing speed, to have the features located on a local hard-drive; the HDF5 files tend to be huge.

Important: Depending on the version of FFmpeg installed on your platform when you estimate the OF, you may get slightly different results at this step. These are due to imprecisions in the video decoding.

You can also use the Matlab version of Ce Liu's code directly to produce the flow fields. In this case, you may also find small differences in the estimated velocities. The differences are due to the movie decoding and gray-scale conversion, which are different from Bob's.

In any of these conditions, our tests show these do not affect the overall performance of our method. They may slightly change the final results you obtain.

Reference System 1: Scores from Kollreider's

To calculate scores using Kollreider's method, use the script optflow_kollreider.py in the following way:

$ ./bin/optflow_kollreider.py --verbose /root/of/database results/flows results/kollreider replay --protocol=photo

You can modify the \(\tau\) parameter required by the method with the program option --tau=<float-value>. By default, this parameter is set to 1.0. Refer to the original paper by Kollreider to understand the meaning of this parameter and how to tune it. If you tune the parameter and execute the error analysis as explained below, you will get to the results shown in Table 1 of our paper.

Note: The above program can be somewhat slow, even if it is much faster than the flow field estimation itself.
If you want to speed things up, just run it on the grid:

$ ./bin/jman submit --array=800 ./bin/optflow_kollreider.py --verbose /root/of/database results/flows results/kollreider replay --protocol=photo

The program optflow_kollreider.py can also print the number of jobs it can be broken into with the --grid-count option:

$ ./bin/optflow_kollreider.py --grid-count replay --protocol=photo

Important: We estimate the position of the face center and the ears based on the bounding-box provided by the face locations. This way, we could compare all algorithms using the same input set. We have not tested whether specialized key-point localizers would give better results than ours for this method.

Besides generating output for the tests in the paper, you can also generate an annotated video, showing how our extrapolation of the face bounding boxes works for finding the regions of interest on which to apply Kollreider's method. To do this, use the script optflow_kollreider_annotate.py. It works in a similar way to the above script and will process the whole database if not told otherwise. This can be somewhat long as well, but you can grid-fy it if you wish, or use filtering options for the database to limit the number of videos analysed. For example:

$ bin/optflow_kollreider_annotate.py -v /idiap/group/replay/database/protocols/replayattack-database tmp replay --protocol=photo --client=101 --light=adverse

Reference System 2: Scores from Bao's

To calculate scores for Bao's method, use the script optflow_bao.py in the following way:

$ ./bin/optflow_bao.py --verbose /root/of/database results/flows results/bao replay --protocol=photo

You can modify the border parameter required by the method with the program option --border=<integer-value>. By default, this parameter is set to 5 (pixels). The original paper by Bao and others does not suggest such a parameter or mention how the face bounding-boxes are set. We assume a default border, in pixels, around our detected face.
In the paper, we scan this value from 0 (zero) up to a number of pixels to test the method. If you tune the parameter and execute the error analysis as explained below, you will get to the results shown in Table 2 of our paper.

Note: The above program can be somewhat slow, even if it is much faster than the flow field estimation itself. If you want to speed things up, just run it on the grid:

$ ./bin/jman submit --array=800 ./bin/optflow_bao.py --verbose /root/of/database results/flows results/bao replay --protocol=photo

The program optflow_bao.py can also print the number of jobs it can be broken into with the --grid-count option:

$ ./bin/optflow_bao.py --grid-count replay --protocol=photo

Reference System 3: Frame-differences

As mentioned before, you should follow the instructions in its own satellite package for Bob. The scripts for that package should be auto-generated and made available under your bin directory as well (this package depends on that one).

Optical Flow Correlation (OFC)

To reproduce the results in our paper, you will first need to generate the scores for the \(\chi^2\) comparison for every frame in the sequence. Frames with no faces detected generate a score valued numpy.NaN, similar to other counter-measures implemented by our group. To generate each score per frame, you can use the application optflow_histocomp.py:

$ ./bin/optflow_histocomp.py --verbose /root/of/database results/flows results/histocomp replay --protocol=photo

Note: The above program can be somewhat slow, even if it is much faster than the flow field estimation itself.
If you want to speed things up, just run it on the grid:

$ ./bin/jman submit --array=800 ./bin/optflow_histocomp.py --verbose /root/of/database results/flows results/histocomp replay --protocol=photo

The program optflow_histocomp.py can also print the number of jobs it can be broken into with the --grid-count option:

$ ./bin/optflow_histocomp.py --grid-count replay --protocol=photo

You can generate the results in Figures 5 and 6 of our paper by setting 2 parameters on the above script:

--number-of-bins: changes the parameter \(Q\), explained in the paper, related to the quantization of the angle space (see the results in Figure 5).

--offset: changes the offset for the quantization. Its effect is studied in Figure 6, for --number-of-bins=2, as explained in the paper.

By modifying the above parameters and executing an error analysis as explained below, with --window-size=220, you will get to the results plotted.

Error Analysis

Once the scores you want to analyze have been produced by one of the methods above, you can calculate the error on the database using the application score_analysis.py. This program receives one directory (containing the scores output by a given method) and produces a console analysis of that method, which is used by the paper:

$ ./bin/score_analysis.py results/histocomp replay --protocol=photo

That command will calculate a development set threshold at the Equal Error Rate (EER) and will apply it to the test set, reporting errors on both sets. A typical output would look like this:

Input data: /idiap/temp/aanjos/spoofing/scores/optflow_histocomp
Thres.
at EER of development set: 6.9459e-02
[EER @devel] FAR: 37.04% (15601 / 42120) | FRR: 37.04% (8312 / 22440) | HTER: 37.04%
[HTER @test] FAR: 37.11% (20843 / 56160) | FRR: 35.75% (10696 / 29920) | HTER: 36.43%

The error analysis program considers, by default, every frame analyzed as an individual (independent) observation, and calculates the error rates based on the overall set of frames found in the whole development and test sets. The numbers printed inside the parentheses indicate how many frames were evaluated in each set (denominator) and how many of those contributed to the percentage displayed (numerator). The Half-Total Error Rate (HTER) is evaluated for both the development and test sets. The HTER for the development set is equal to the EER on the same set, naturally.

The score_analysis.py script has 2 parameters that can be used to fine-tune the program behaviour:

--window-size=<integer>: defines a window size over which the scores are averaged, within the same score sequence. So, for example, if one of the files produced by optflow_histocomp.py contains a sequence of scores that reads like [1.0, 2.0, 1.5, 3.5, 0.5], and the window-size parameter is set to 2, then the scores evaluated by this procedure are [1.5, 1.75, 2.5, 2.0], which represent the averages of [1.0, 2.0], [2.0, 1.5], [1.5, 3.5] and [3.5, 0.5].

--overlap=<integer>: controls the amount of overlap between the windows. If not set, the default overlap is window-size - 1. You can modify this behaviour by setting this parameter to a different value. Taking the example above, if you set the window-size to 2 and the overlap to zero, then the score set produced by this analysis would be [1.5, 2.5].
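The windowed averaging described for --window-size and --overlap can be sketched as follows (a re-implementation of the behaviour described above, for illustration only, not the package's actual code):

```python
import numpy as np

def window_average(scores, window_size, overlap=None):
    """Average scores over sliding windows; trailing frames that cannot
    fill a whole window are dropped."""
    if overlap is None:
        overlap = window_size - 1  # default: windows shift by one frame
    step = window_size - overlap
    return [float(np.mean(scores[i:i + window_size]))
            for i in range(0, len(scores) - window_size + 1, step)]

scores = [1.0, 2.0, 1.5, 3.5, 0.5]
print(window_average(scores, 2))             # [1.5, 1.75, 2.5, 2.0]
print(window_average(scores, 2, overlap=0))  # [1.5, 2.5]
```

Both calls reproduce the worked examples above.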
Notice that the frame value 0.5 (the last of the sequence) is ignored.

You can observe the effect of setting the window-size on the score analysis by looking at the number of averaged frames analyzed:

$ ./bin/score_analysis.py --window-size=220 --overlap=80 results/histocomp replay --protocol=photo

And the output:

Input data: /idiap/temp/aanjos/spoofing/scores/optflow_histocomp
Window size: 220 (overlap = 80)
Thres. at EER of development set: 1.4863e-01
[EER @devel] FAR: 2.78% (5 / 180) | FRR: 2.50% (3 / 120) | HTER: 2.64%
[HTER @test] FAR: 2.92% (7 / 240) | FRR: 1.88% (3 / 160) | HTER: 2.40%

You can generate the results in Figure 7 and Table III of the paper just by manipulating this program.

Our paper also shows a break-down analysis (by attack device type and support) in Figure 8 (the last figure). To generate such a figure, one must produce the break-down analysis per device (Figure 8.a) and per attack support (Figure 8.b). To do this, pass the --breakdown option to the score_analysis.py script:

$ ./bin/score_analysis.py --window-size=220 --overlap=80 --breakdown results/histocomp replay --protocol=photo

Our paper also discusses the impact of skipping the OF calculation on certain frames (see the Discussion section) in the interest of saving computational resources. You can generate the table presented in the paper by playing with the --skip parameter of score_analysis.py. By default, we don't skip any frames; if you set this parameter to 1, then we skip every other frame; if you set it to 2, then we only consider 1 in every 3 frames, and so on.

Problems

In case of problems, please contact any of the authors of the paper.
antispoofing.utils
Utility package for Anti-Spoofing Countermeasures for Bob

This package contains some utility functions to be used in anti-spoofing countermeasure code. The goal of this package is to centralize some functions held in common. This package contains:

- an LDA machine
- a PCA machine
- a function to normalize parameters
- performance measure functions
- an eye localization function

If you use this package and/or its results, please cite the following publication:

@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }

Installation

To install this package, alone or together with other Packages of Bob, please read the Installation Instructions. For Bob to be able to work properly, some dependent packages are required to be installed. Please make sure that you have read the Dependencies for your operating system.

Documentation

For further documentation on this package, please read the Stable Version or the Latest Version of the documentation. For a list of tutorials on this or the other packages of Bob, or information on submitting issues, asking questions and starting discussions, please visit its website.
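As an illustration of the kind of helper this package centralizes, here is a sketch of zero-mean, unit-variance parameter normalization of the sort typically applied to features before LDA/PCA or MLP training (the function names here are ours, not the package's actual API):

```python
import numpy as np

def norm_params(train):
    """Per-feature mean and standard deviation, computed on training data."""
    mu = train.mean(axis=0)
    sigma = train.std(axis=0)
    sigma[sigma == 0.0] = 1.0  # guard against constant features
    return mu, sigma

def normalize(data, mu, sigma):
    """Apply training-set statistics to any data split (train/devel/test)."""
    return (data - mu) / sigma

train = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
mu, sigma = norm_params(train)
normed = normalize(train, mu, sigma)
print(normed.mean(axis=0))  # ~[0, 0]
print(normed.std(axis=0))   # ~[1, 1]
```

The important design point is that the statistics come from the training set only and are then re-applied, unchanged, to the development and test sets.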
antispoofing.verification.gmm
This Bob satellite package allows you to run a baseline Parts-Based GMM face verification system on the Replay Attack Database. It explains how to set up this package and generate the Universal Background Model (UBM), the client models and, finally, the scores.

If you use this package and/or its results, please cite the following publications:

The Replay-Attack Database and the baseline GMM results for it:

@inproceedings{Chingovska_BIOSIG_2012, author = {I. Chingovska AND A. Anjos AND S. Marcel}, keywords = {Attack, Counter-Measures, Counter-Spoofing, Face Recognition, Liveness Detection, Replay, Spoofing}, month = sep, title = {On the Effectiveness of Local Binary Patterns in Face Anti-spoofing}, booktitle = {IEEE BioSIG 2012}, year = {2012}, }

Bob, as the core framework used for these results:

@inproceedings{Anjos_ACMMM_2012, author = {A. Anjos AND L. El Shafey AND R. Wallace AND M. G\"unther AND C. McCool AND S. Marcel}, title = {Bob: a free signal processing and machine learning toolbox for researchers}, year = {2012}, month = oct, booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan}, publisher = {ACM Press}, }

If you wish to report problems or improvements concerning this code, please contact the authors of the above mentioned papers.

Installation

Note: If you are reading this page through our GitHub portal and not through PyPI, note that the development tip of the package may not be stable or may become unstable in a matter of moments.

Go to http://pypi.python.org/pypi/antispoofing.verification.gmm to download the latest stable version of this package.

There are 2 options you can follow to get this package installed and operational on your computer: you can use automatic installers like pip (or easy_install), or manually download, unpack and use zc.buildout to create a virtual work environment just for this package.

Using an automatic installer

Using pip is the easiest (shell commands are marked with a $ sign):

$ pip install antispoofing.verification.gmm

You can also do the same
with easy_install:

$ easy_install antispoofing.verification.gmm

This will download and install this package plus any other required dependencies. It will also verify if the version of Bob you have installed is compatible. This scheme works well with virtual environments by virtualenv or if you have root access to your machine. Otherwise, we recommend you use the next option.

Using zc.buildout

Download the latest version of this package from PyPI and unpack it in your working area. The installation of the toolkit itself uses buildout. You don't need to understand its inner workings to use this package. Here is a recipe to get you started:

$ python bootstrap.py
$ ./bin/buildout

These 2 commands should download and install all non-installed dependencies and get you a fully operational test and development environment.

Note: The python shell used in the first line of the previous command set determines the python interpreter that will be used for all scripts developed inside this package. Because this package makes use of Bob, you must make sure that the bootstrap.py script is called with the same interpreter used to build Bob, or unexpected problems might occur. If Bob is installed by the administrator of your system, it is safe to assume it uses the default python interpreter. In this case, the above command lines should work as expected. If you have Bob installed somewhere else in a private directory, edit the file buildout.cfg before running ./bin/buildout. Find the section named external and edit the line egg-directories to point to the lib directory of the Bob installation you want to use. For example:

[external]
recipe = xbob.buildout:external
egg-directories=/Users/crazyfox/work/bob/build/lib

User Guide

Configuration Tweaking (optional)

The current scripts have been tuned to reproduce the results presented in some of our publications (as indicated above), as well as in FP7 Project TABULA RASA reports. They still accept an alternate (python) configuration file that can be passed as input.
If nothing is passed, a default configuration file located at antispoofing/verification/gmm/config/gmm_replay.py is used. Copy that file to the current directory and edit it to modify the overall configuration for the mixture-model system or for the (DCT-based) feature extraction. Use the option --config=myconfig.py to set your private configuration if you decide to do so. Remember to pass the option consistently throughout all script calls, or unexpected results may happen.

Running the Experiments

Follow the sequence described here to reproduce the paper results. Run feature_extract.py to extract the DCT block features. This step is the only one that requires the original database videos as input. It will generate, per video frame, all input features required by the scripts that follow this one:

$ ./bin/feature_extract.py /root/of/replay/attack/database results/dct

This will run through the 1300 videos in the database and extract the features at the frame intervals defined in the configuration. On a relatively fast machine, it will take about 10-20 seconds per input video, with the frame-skip parameter set to 10 (the default). If you want to be thorough, you will need to parallelize this script so that the overall database can be processed in a reasonable amount of time.

You can parallelize the execution of the above script (and of some of the scripts below as well) if you are at Idiap. Just do the following instead:

$ ./bin/jman submit --array=1300 ./bin/feature_extract.py /root/of/replay/attack/database results/dct --grid

Notice the --array=1300 and --grid options at the end of the command. The above instruction tells SGE to run 1300 instances of the script with the same input parameters. The only difference is the SGE_TASK_ID environment variable, which is changed at every iteration (thanks to the --array=1300 option).
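The SGE_TASK_ID-based task splitting just described can be sketched as follows (pick_my_share is an illustrative helper, not part of the package's actual API):

```python
import os

def pick_my_share(all_videos):
    """Select what this process instance should work on.

    For an SGE --array=N submission, SGE sets SGE_TASK_ID to a value
    in 1..N, so each instance processes exactly one item. Without the
    variable (a plain local run), the full list is processed.
    """
    task_id = os.environ.get("SGE_TASK_ID")
    if task_id is None or task_id == "undefined":
        return all_videos                      # no grid: process everything
    return [all_videos[int(task_id) - 1]]      # SGE task ids are 1-based
```

Every one of the 1300 grid jobs runs the identical command line; only the environment variable differs, so each instance ends up processing a distinct video.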
The --grid option makes the script first analyze the value of SGE_TASK_ID and reset its internal processing so that that particular instance of feature_extract.py only processes one of the 1300 videos that require processing. You can check the status of the jobs in the grid with jman refresh (refer to the GridTk manual <http://packages.python.org/gridtk> for details).

Note: If you are not at Idiap, you can still take a look at our GridTk package for a logging grid job manager for SGE.

UBM Training

Run train_ubm.py to create the GMM Universal Background Model from selected features (in the enrollment/training subset):

$ ./bin/train_ubm.py results/dct results/ubm.hdf5

Note: if you use ~1k files, it will take a few hours to complete and there is currently no way to parallelize this. This step requires that all features for the training/enrollment set are calculated. The job can take many gigabytes of physical memory on your machine, so we advise you to run it on a machine with at least 8 gigabytes of free memory. Unfortunately, you cannot easily parallelize this job. Nevertheless, you can submit it to the grid with the following command and avoid having it run on your machine (nice if you have a busy day of work):

$ ./bin/jman submit --queue=q_1week --memory=8G ./bin/train_ubm.py results/dct results/ubm.hdf5

Even if you choose a long enough queue, it is still prudent to set the memory requirements for the node you will be assigned to, to guarantee a minimum amount of memory.

UBM Statistics Generation

Run generate_statistics.py to create the background statistics for all data files so we can calculate scores later. This step requires that the UBM is trained and all features are available:

$ ./bin/generate_statistics.py results/dct results/ubm.hdf5 results/stats

This will take a lot of time to go through all the videos in the replay database.
You can optionally submit the command to the grid, if you are at Idiap, with the following:

$ ./bin/jman submit --array=840 ./bin/generate_statistics.py results/dct results/ubm.hdf5 results/stats --grid

This command will spread the GMM UBM statistics calculation over 840 processes that will each run for about 5-10 minutes. So, the whole job will take a few hours to complete, taking into consideration the current settings for SGE at Idiap.

Client Model Training

Note: You can do this in parallel with the step above, as it only depends on the input features pre-calculated at step 3.

Generate the models for all clients:

$ ./bin/enrol.py results/dct results/ubm.hdf5 results/models

If you think the above job is too slow, you can throw it at the grid as well:

$ ./bin/jman submit --array=35 ./bin/enrol.py results/dct results/ubm.hdf5 results/models --grid

Scoring

In this step you will score the videos (every N frames, up to a certain frame number) against the generated client models. We do this exhaustively for both the test and development data. Command line execution goes like this:

$ ./bin/score.py results/stats results/ubm.hdf5 results/models results/scores

Linear scoring is fast, but you can also submit a client-based break-down of this problem like this:

$ ./bin/jman submit --array=35 ./bin/score.py results/stats results/ubm.hdf5 results/models results/scores --grid

Full Score Files

After the scores are calculated, you need to put them together to set up development and test text files in a 4- or 5-column format. To do that, use the application build_score_files.py.
The next command will generate the baseline verification results by thoroughly matching every client video against every model available in the individual sets, averaging over (the first) 220 frames:

$ ./bin/build_score_files.py results/scores results/perf --thorough --frames=220

You can specify to use the attack protocols like this (avoiding the --thorough option):

$ ./bin/build_score_files.py results/scores results/perf --protocol=grandtest --frames=220

Warning: It is possible you will see warnings emitted by the above programs in certain cases. This is normal. The warnings correspond to cases in which the program is trying to collect data at a certain frame number for which no face was detected in the originating video.

Reproduce Paper Results

To reproduce our paper results (~82% of attacks passing the verification system), you must generate two score files as defined above and then call a few programs that compute the threshold on the development set and apply it to the licit and spoofing test sets:

$ ./bin/eval_threshold.py --scores=results/perf/devel-baseline-thourough-220.4c
Threshold: 0.686207566
FAR : 0.000% (0/840)
FRR : 0.000% (0/60)
HTER: 0.000%

$ ./bin/apply_threshold.py --scores=results/perf/test-grandtest-220.4c --threshold=0.686207566
FAR : 82.500% (330/400)
FRR : 0.000% (0/80)
HTER: 41.250%
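The error measures reported by these tools follow the usual definitions: FAR is the fraction of impostor (or attack) scores at or above the threshold, FRR is the fraction of genuine scores below it, and HTER is their average. A minimal sketch of the computation (illustrative, not the package's code):

```python
def far_frr_hter(genuine_scores, impostor_scores, threshold):
    """Compute FAR, FRR and HTER, all as fractions in [0, 1].

    An impostor/attack sample is falsely accepted when its score
    reaches the threshold; a genuine sample is falsely rejected
    when its score falls below it. HTER averages the two rates.
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr, (far + frr) / 2.0
```

With the test-set counts shown above (330 of 400 attacks accepted, 0 of 80 genuine rejected), this yields FAR = 82.5%, FRR = 0% and HTER = 41.25%, consistent with the apply_threshold.py output.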
antissrf
No description available on PyPI.
antistandby
Failed to fetch description. HTTP Status Code: 404
antistasi_logbook
Antistasi Logbook. Version: 0.4.5
antistasi_sqf_tools
Antistasi SQF Tools (WiP)
anti-sybil
BrightID Anti-Sybil

This package provides a framework to evaluate the quality of different anti-sybil algorithms by simulating different attacks on BrightID's social graph, comparing the performance of different algorithms in detecting sybils under different attacks.

Algorithms

- SybilRank is a well-known sybil detection algorithm based on the assumption that sybils have limited social connections to real users. It relies on the observation that an early-terminated random walk starting from a non-sybil node in a social network has a higher degree-normalized (divided by the degree) probability of landing at a non-sybil node than at a sybil node.
- GroupSybilRank is an enhanced version of the SybilRank algorithm. In this algorithm, a graph is created in which the BrightID groups are nodes and edges are weighted based on the affinity between groups. The original SybilRank algorithm is then applied to this graph of groups, and users get scores from the best group they belong to. This algorithm has achieved the best results so far in identifying sybils in modeled attacks and is being used as the official BrightID anti-sybil algorithm.
- WeightedSybilRank is an enhanced version of the SybilRank algorithm that uses the number of common neighbors of the two connected nodes as the weight (trustworthiness factor) of the edge.

Attacks

Lone Attacks

One attacker attempting to propagate score to the sybils to verify them by connecting to other nodes and creating groups.
We assumed that an attacker will have one account with a normal or above-average number of direct connections to honest users, which they can use for interconnections to sybil accounts.

- One attacker attempts to connect to some of the seed nodes and create some sybil nodes. (implementation, graph)
- One attacker attempts to connect to some of the top-ranked honests and create some sybil nodes. (implementation, graph)
- One attacker attempts to connect to some of the honests and create some sybil nodes. (implementation, graph)
- One attacker attempts to connect to one of the top-ranked honests and create multiple (duplicate) groups of the sybils. (implementation, graph)
- A seed node attempts to create some sybil nodes. (implementation, graph)
- An honest node attempts to create some sybil nodes. (implementation, graph)

Collaborative Attacks

Multiple attackers attempting to propagate score to the sybils to verify them by connecting to other nodes and creating groups. Attackers are able to connect to each other and each other's sybil accounts in any way. We assumed that each attacker will have one account with a normal or above-average number of direct connections to honest users, which they can use for interconnections to sybil accounts.
All these attacks can be performed by one or more groups of attackers who collaborate.

- One or more groups of attackers attempt to connect to some of the seeds and create some sybil nodes. (implementation, graph for single group, graph for multiple groups)
- One or more groups of attackers attempt to connect to some of the top-ranked honests and create some sybil nodes. (implementation, graph for single group, graph for multiple groups)
- One or more groups of attackers attempt to connect to some of the honests and create some sybil nodes. (implementation, graph for single group, graph for multiple groups)
- A group of attackers attempts to connect to some of the top-ranked honests and create multiple (duplicate) groups of the sybils. (implementation, graph for single group)
- One or more groups of seed nodes attempt to create some sybil nodes. (implementation, graph for single group, graph for multiple groups)
- One or more groups of honest nodes attempt to create some sybil nodes. (implementation, graph for single group, graph for multiple groups)

Manual attack

This is a way to manually add new nodes/edges/groups to the BrightID graph and see how different algorithms rank those nodes. You can use the MANUAL_ATTACK_OPTIONS variable in the config.py file to define the manual attack.
This example adds 3 sybil nodes, connects them to xGUyVQLYV80pajm8QP-9cfHC7xri49V58k02kqTAiUI as the attacker, and adds them to a new group (a graph for this manual attack is linked in the original README):

MANUAL_ATTACK_OPTIONS = {
    'top': True,
    'connections': [
        ['xGUyVQLYV80pajm8QP-9cfHC7xri49V58k02kqTAiUI', 'sybil1'],
        ['xGUyVQLYV80pajm8QP-9cfHC7xri49V58k02kqTAiUI', 'sybil2'],
        ['xGUyVQLYV80pajm8QP-9cfHC7xri49V58k02kqTAiUI', 'sybil3'],
        ['sybil1', 'sybil2'],
        ['sybil1', 'sybil3'],
    ],
    'groups': {
        'new_group_1': ['sybil1', 'sybil2', 'sybil3']
    }
}

Install

$ git clone https://github.com/BrightID/BrightID-AntiSybil.git
$ cd BrightID-AntiSybil
$ pip3 install .

Running Tests

You can configure the algorithms and attacks you want to test by editing the config.py file at anti_sybil/tests/attacks/config.py, and then run the tests with:

$ python3 anti_sybil/tests/attacks/run.py

The result will contain:

- An interactive graph (example) per algorithm/attack that visualizes the graph and the score each sybil/attacker/honest node achieved
- A CSV file (example) that has a column per algorithm/attack and provides the following information for each one (here, GroupSybilRank under a one-group group-target attack):

No. Successful Honests: 416
Successful Honests Percent: 78.1954887218045
Sybils scored >= %: 0.080091533180778
Avg Honest - Avg Sybil: 17.4819290581162
Max Seed: 100
Avg Seed: 59.6248484848485
Min Seed: 31.88
Max Honest: 100
Avg Honest: 29.3145290581162
Min Honest: 0
Max Attacker: 13.96
Avg Attacker: 13.96
Min Attacker: 13.96
Max Sybil: 13.96
Avg Sybil: 11.8326
Min Sybil: 5.87
Border: 14

- A chart to compare the effectiveness of different anti-sybil algorithms in detecting sybils in different attacks

Old Version

The old version of the BrightID Anti-Sybil algorithms, tests and documents can be found here.
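The degree-normalized trust propagation that SybilRank (described above) performs can be sketched as a short, early-terminated power iteration; the following toy implementation and graph are illustrative and are not the package's code:

```python
import math

def sybil_rank(neighbors, seeds):
    """Toy SybilRank: early-terminated trust propagation from seed nodes.

    neighbors: dict node -> list of adjacent nodes (undirected graph)
    seeds: set of trusted non-sybil nodes where all trust starts
    Returns the degree-normalized trust of every node; nodes weakly
    connected to the honest region (sybils) end up with low values.
    """
    # SybilRank terminates the power iteration early, after O(log n) steps
    n_iter = max(1, math.ceil(math.log2(len(neighbors))))
    trust = {v: (1.0 if v in seeds else 0.0) for v in neighbors}
    for _ in range(n_iter):
        # every node re-distributes its trust evenly over its edges
        trust = {v: sum(trust[u] / len(neighbors[u]) for u in neighbors[v])
                 for v in neighbors}
    # final normalization by degree, as in the original algorithm
    return {v: trust[v] / len(neighbors[v]) for v in neighbors}

# honest clique a-b-c-d with a single attack edge d-s1 into a sybil pair
graph = {
    'a': ['b', 'c', 'd'], 'b': ['a', 'c', 'd'], 'c': ['a', 'b', 'd'],
    'd': ['a', 'b', 'c', 's1'], 's1': ['d', 's2'], 's2': ['s1'],
}
ranks = sybil_rank(graph, seeds={'a', 'b'})
# both sybils score below every honest node
```

Because the walk is cut off early, trust seeded in the honest region has little time to leak across the single attack edge, which is exactly the limited-connectivity assumption the algorithm exploits.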
antitesting
There are standard ways to temporarily disable individual tests, for example using the pytest.mark.skip decorator. This plugin adds a new way to do this by extending the standard features of Pytest. Now you can put the names of the disabled tests in a separate file, and then correct and supplement them without touching the source code of the tests.

Install the plugin:

pip install antitesting

Create one or more files containing the names of the tests that you want to disable. In our example, this will be a file disabled_tests.txt containing text like this:

test_1
test_2 : 12.12.2012
test_3 : 12.12.2025
test_4 : 13.12.2025  # fix after test_3

Finally, add these lines to the file conftest.py:

import antitesting
antitesting("disabled_tests.txt")

The disabled_tests.txt file that we created contains the names of the tests that we want to disable. This is equivalent to putting a skip decorator on each of them, but it does not require getting into the source code of the tests and saves you time. You can also see dates in the file in the format DD.MM.YYYY. If there is a date in this format on the line with the test name, the test will be ignored only until that date, and after that it will become available. If necessary, you can accompany the lines with comments introduced by hash signs ("#").
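To make the file format concrete, here is a sketch of how such a file could be parsed (illustrative only; the plugin's actual implementation may differ, and we assume a test becomes available again once its date has passed):

```python
from datetime import date, datetime

def parse_disabled_tests(text, today=None):
    """Return the set of test names currently disabled by the file.

    Each line holds a test name, optionally followed by ' : DD.MM.YYYY'
    (disabled only until that date) and an optional '# comment'.
    """
    today = today or date.today()
    disabled = set()
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        name, _, expiry = (part.strip() for part in line.partition(':'))
        if expiry and datetime.strptime(expiry, '%d.%m.%Y').date() <= today:
            continue   # the date has passed: the test runs again
        disabled.add(name)
    return disabled
```

For the example file above, running in 2024 would leave test_2 enabled (its date, 12.12.2012, has passed) while test_1, test_3 and test_4 stay disabled.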
anti-useragent
anti-useragent

info: fake PC or app browser useragents, anti-useragent, and other awesome tools

Features

- more browsers, up to date
- more randomized rules
- more fun awesome tools

English | 中文

Installation

pip install anti-useragent

Usage

from anti_useragent import UserAgent

ua = UserAgent()
ua.opera      # Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11
ua.chrome     # Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36
ua['chrome']  # Mozilla/5.0 (Windows NT 5.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.2.3576.5 Safari/537.36
ua.firefox    # Mozilla/5.0 (Windows NT 5.1; WOW64; rv:47.0) Gecko/20100101 Firefox/45.0
ua['firefox'] # Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:49.0) Gecko/20100101 Firefox/31.0
ua.android    # Mozilla/5.0 (Linux; Android 7.5.2; M571C Build/LMY47D) AppleWebKit/666.7 (KHTML, like Gecko) Chrome/72.7.7953.78 Mobile Safari/666.7
ua.iphone     # Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/349.56 (KHTML, like Gecko) Mobile/J9UMJN baiduboxapp/0_17.7.6.6_enohpi_8957_628/2.01_4C2%258enohPi/1099a/P0SJ2RX4DXJT3RW906040KVOSH2E76RJUNHVIJUPCJQCZMEM2GL/1
ua.wechat     # Mozilla/5.0 (Linux; Android 10.9.8; MI 5 Build/NRD90M) AppleWebKit/536.93 (KHTML, like Gecko) Chrome/81.7.8549.56 Mobile Safari/536.93

# and the best one, random via real-world browser usage statistics
ua.random     # Mozilla/5.0 (Macintosh; Intel Mac OS X) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.3.8610.5 Safari/537.36

Supported platforms

browser/platform | windows | mac | linux | iphone | android
chrome           | ✔       | ✔   | ✔     | ✔      | ✔
firefox          | ✔       | ✔   | ✔     | ❌     | ❌
opera            | ✔       | ✔   | ✔     | ❌     | ❌
wechat           | ❌      | ❌  | ❌    | ✔      | ✔
baidu            | ❌      | ❌  | ❌    | ✔      | ✔
uc               | ❌      | ❌  | ❌    | ❌     | ✔

If you want to specify the platform, just:

from anti_useragent import UserAgent

ua = UserAgent(platform='mac')  # windows, linux, android, iphone

If you want to specify the browser's max or min version, just:

from anti_useragent import UserAgent

ua = UserAgent(max_version=90)
ua = UserAgent(min_version=50)
# 1.0.9 newly supported
ua = UserAgent(versions=(90, 100))

If you want to enable the logger, just:

from anti_useragent import UserAgent

ua = UserAgent(logger=True)

# by default this installs loguru
try:
    from loguru import logger
except ImportError:
    install("loguru")
    from loguru import logger

Make sure that you are using the latest version:

pip install -U anti-useragent

Check the version via the python console:

import anti_useragent
print(anti_useragent.VERSION)

Awesome tools usage:

# requests:
from anti_useragent.utils.cipers import set_requests_cipers, set_tls_protocol

# ja3 tls verify
@set_requests_cipers
def get_html():
    requests.get(...)

# ja3 tls version
session = set_tls_protocol(version="TLSv1_2")

# aiohttp:
from anti_useragent.utils.cipers import sslgen

async with ClientSession(connector=aiohttp.TCPConnector(ssl=False)) as session:
    # ja3 tls verify
    await session.get(..., ssl=sslgen())
    # ja3 tls version
    await session.get(..., ssl=sslgen(_ssl="TLSv1_2"))

# scrapy:
# settings.py ja3 tls verify
DOWNLOADER_CLIENTCONTEXTFACTORY = 'anti_useragent.utils.scrapy_contextfactory.Ja3ScrapyClientContextFactory'
antivaxxtweetanalyzer
No description available on PyPI.
antivirals
Coronavirus Antivirals

This project hopes to discover an antiviral that can treat the novel coronavirus disease. It will use newer approaches in machine learning to model and optimize for the properties needed in candidate therapeutics. In therapeutic drug development the properties we minimize are toxicity and side-effects (also called "off-target effects"). Properties we maximize are bioactivity on target (also called "on-target effects"), absorption, synthesizability, and large-scale manufacturability. We aim to search the molecular space (both drug-like molecules and already approved drugs) for drug candidates maximizing these properties.

How to Contribute

We need your help if you have one of the following skills:

- Software development in Python
- Machine learning
- Web development (for the UI)
- Cheminformatics
- DevOps, especially with Kubernetes and Helm

Fork the project and create a new branch:

git checkout -b feature/my_new_feature

Push your changes and use GitHub to submit a pull request. Any contributions are helpful. Look at the open issues for inspiration on what to work on.

License

Apache 2. See LICENSE file for details.

Installation

Using the system through our highly optimized Docker container is recommended:

docker run -v my_host_data_directory:/app/data -it inqtel/antivirals up sqlite:////app/data/antivirals.db

Alternatively, we publish a Python package:

pip install antivirals

Architecture

The system is structured in a quasi Model-View-Controller (MVC) architecture.

- __init__.py: Agents that execute operations and connect everything together. The "controller" layer in MVC.
- schema.py: Contains all the code for adding to and querying the molecular database. The "model" layer in MVC.
- __main__.py: A command-line user interface. The "view" layer in MVC.
- data.py: Maps from public datasets (e.g. ZINC Clean Leads, Tox21) to the molecular database schema.
- chem.py: The actual cheminformatics machine learning algorithms.
- parser.py: A Cython-optimized SMILES language parser used by the cheminformatics algorithms.

We are investigating how to deploy it at scale on Kubernetes. Help is needed!

Usage

The Coronavirus Antivirals project comes bundled with a command line tool. You must have a SQLAlchemy-compatible database running; otherwise everything gets stored in volatile memory. Any database string can be used in theory, but it has only been tested with SQLite. To completely set up the project and run an agent that runs indefinitely doing antivirals experiments, simply run:

antivirals up sqlite:///path/to/your/db

This command sets up the whole Coronavirus Antivirals system de novo (downloading data, training models, etc). Any models generated will be dumped in the current working directory under "data". When you use Docker or Kubernetes you want to mount the /app/data folder in the container to a volume. There is some more advanced usage; inline documentation about other actions is available:

antivirals -h

Acknowledgment

A project of B.Next.
anti-war-handbook-package
A handbook for anti-war arguments at home and at work

17 answers to the most common arguments justifying the war

Text: Ruslan Lenin, A. P. Fogt, Sasha B., I. S., sliva. Published: 27 February 2022

In these days our main task is to create an atmosphere of complete rejection of Russian military aggression in Ukraine. But in conversations with colleagues, friends, acquaintances and relatives we are often overwhelmed by emotions that keep us from calmly building an argument. The questions and claims drive us to despair: can they really mean this seriously? At best the conversation simply ends, although it could have prevented a rift in the relationship and helped build a consensus against the war.

We have collected the most frequent statements of those who are not ready to unambiguously condemn Russian military aggression, and answered them. We surveyed acquaintances, built and tested arguments on each other, and rewrote them again and again until the last moment. So we urge you to share both new questions and your own arguments.

1. "Doesn't the Russian army strike only military targets?"

Unfortunately, no. Despite constant assurances to the contrary from government officials and the Ministry of Defense, Russian shells have been hitting residential neighborhoods throughout these days, and have already hit hospitals and kindergartens. On 25 February a ballistic missile exploded near a hospital in the town of Vuhledar, Donetsk region. The same day, in the town of Okhtyrka, Sumy region, a ballistic missile hit a kindergarten. It is no surprise that nurses at a hospital in Dnipro have been forced to shelter newborns from missile strikes in basements. A day earlier, in Chuhuiv, Kharkiv region, Russian shells destroyed a residential block. On 26 February residential buildings in Chernihiv were shelled. This is happening not only in border areas or the south-east, but also in the capital. We know this both from the many eyewitnesses and photographers documenting what is happening and from independent human rights and investigative organizations. We also know it from our friends and relatives who are in Ukraine right now.
Many of them are scared; they tell us they spend part of the day in bomb shelters and hear gunfire. We believe these people, and we urge you too to talk with loved ones on the other side of the border, if you have them. This is not just part of ordinary military operations but a violation of the laws of war, that is, what are called war crimes.

2. "How can you believe anything? There is an information war going on"

Finding reliable information is indeed hard. Here one can argue from the contrary and note what inspires the least trust. The spokesman of the Russian Ministry of Defense announced, in a briefing after the first day of the war, that the Russian army had suffered no losses at all. Similar reports from the Ministry of Defense have appeared on the following days as well. This is hard to believe, since no military operation goes without losses. Trust is not improved by the fact that Roskomnadzor has effectively introduced censorship in Russian media, banning the mention of any information about the "special operation" that does not come from the Ministry of Defense. Given that the Ministry of Defense releases information very sparingly, this should be seen as an attempt to create a bubble of incomprehension of what is happening in Ukraine. By contrast, independent Russian media (under threat of being blocked) run continuous live coverage with maximally verified information; such digests exist, for example, at Meduza and Mediazona. The Ukrainian Ministry of Defense also posts regular situation updates, including ones unpleasant for its own side, which inspires far more trust than the reports of the Russian side.

We believe that in such a situation it makes sense to trust the above, while never forgetting to verify any information received against several sources.

3. "And wasn't Donbas scared for 8 years? Did the rest of Ukrainians think about them all these years?"

First of all, we need to stop talking about all residents of Ukraine as a single agent that either did something or did not. All these years, people in Ukraine have held different positions on the war in Donbas.
Some voted for more uncompromising candidates and parties, others for diplomacy. The program of the current president, for whom the majority voted in the 2019 elections, was aimed at a peaceful settlement, prisoner exchanges and the withdrawal of weapons. Some spent years taking in refugees from Donbas in other regions of Ukraine and helping them. According to UN reports, more than a million people from the Donetsk and Luhansk regions of Ukraine were forced to relocate to other regions of Ukraine in 2014-16. In 2016 the Ministry for Reintegration of the Temporarily Occupied Territories of Ukraine was created, with the goal of coordinating housing, assistance and employment for those affected by the hostilities in Donbas. There were also those who created projects to help the people who stayed in Donbas: Donbass SOS, Vostok SOS, Країна вільних людей, Proliska.

At the same time, the process of peacefully resolving the contradictions faces one main obstacle: the front line. When shells are flying over Donbas, it is not surprising that many locals want anyone at all to come and stop what is happening with troops. When Donbas is enshrined in the constitution of Ukraine as part of its territory, while in reality it is controlled by the troops of the "DNR/LNR" and Russia, it is logical to expect that the internal dialogue in Ukrainian society will include those who believe the government has a mandate to take Donbas back by military means.

Was Donbas scared for 8 years? Now we, as citizens of Russia, must ask ourselves about the role of our own country in these events. For many years the Russian Federation has been helping the "DNR/LNR" with both resources and troops, which means it is already a participant in the conflict and takes a specific side. So we must ask ourselves: how could our country help stop the fire in Donbas as soon as possible? Russia is a party to the conflict in Donbas, but that in no way makes it a legitimate participant in Ukraine's internal political process.
How could it resolve this particular conflict, rather than change the political regime of the whole of Ukraine by military invasion? And if power in Russia were democratic and we could, through real representatives, influence the diplomatic process, the actions of the troops and the spending of resources, what would we do? By contrast, as long as Russia is ruled by a president who is currently interested in realizing his geopolitical ambitions (which he points to directly with statements about "risks for Russia" and about Ukraine's sovereignty being a historical mistake of Lenin's), he will prioritize those goals over everything else, including peace in Donbas. The real question is: what happens if we stop solving Putin's problems instead of Donbas's?

4. "And who will protect Donbas from shelling?"

Right now the Russian army is not helping the civilians of Donbas; it is attacking Ukraine on three fronts and aiming for its complete capture. As even State Duma deputies from the Communist Party now declare: "Voting for the recognition of the DNR/LNR, I voted for peace, not for war. For Russia to become a shield so that Donbas would not be bombed, not for Kyiv to be bombed." At the same time, Putin and other government officials offer the most varied justifications for the intervention: now the restoration of the "historical unity" of peoples, now the restoration of historical borders, now the claim that Ukraine and NATO "created risks" for the very "existence" of Russia, now the protection of all of Ukraine from neo-Nazis. Help for the civilians of Donbas barely figures among them.

Moreover, an alliance with the "DNR/LNR" is far from the most obvious form of such help. If resolving the situation, rather than plans of conquest and influence over Ukraine, had been put first, the Russian government could have spent its efforts on cooperation with civil society inside and outside Donbas. In this connection we also recall the investigation by Baza, which names as one of the main beneficiaries of the war in Donbas the fugitive Ukrainian oligarch Serhiy Kurchenko, who cooperates with high-ranking officials and security officers from the Russian special services.
In the course of the conflict in Donbas, most of the coal and metallurgical enterprises on the territory of the "DNR/LNR" came under his control. While Kurchenko enriches himself, workers at those enterprises went unpaid for months in 2021, on wages already stuck at 10-20 thousand rubles. It is worth asking what the Russian army in Donbas and its support of the armed forces of the "DNR/LNR" are really for.

5. "And where were you (Russian citizens condemning the war) all these 8 years?"

Some of us were teenagers, some had no position or no interest in politics, and some resisted the war with Ukraine. What matters most is not that, but how we act now. In Russia, up to 20,000 people, from liberals to anarchists, came out to anti-war protests after the annexation of Crimea in 2014. The count starts with Crimea, not Donbas; this is important to remember so as not to confuse attack with self-defense. Anti-war slogans were heard at many other rallies: for example, at the memorial marches for Boris Nemtsov, which before the pandemic drew on average 60,000 people. Renouncing claims to Crimea and Donbas and sharply improving relations with Ukraine is a position that never fully left the protest and opposition agenda, even though the main focus shifted to Russia's large-scale internal problems, such as election fraud, the persecution of political prisoners, environmental disasters and the pension reform.

Nevertheless, in fighting against authoritarianism in the Russian Federation, we were also fighting for the possibility of changing the government and abandoning an aggressive foreign policy: the possibility of ending the skirmishes with Ukrainian troops and the sponsorship of units fighting the Ukrainian army. It is this, along with our full trust in Ukrainian civil society to solve its own internal problems, that could have brought peace to Donbas.

6. "Putin wants to end this war that has lasted 8 years"

Putin and other representatives of the Russian government constantly stated that Russia was not a participant in the hostilities in Donbas.
That means that, from his own point of view, he has declared a war, not ended one. In reality the Russian army was already fighting the Ukrainian army on Ukrainian territory in 2014, for example near Ilovaisk. The war has been going on for 8 years. Ending the war is an excellent goal. However, it does not explain why one should occupy another country and seek to take control of its political institutions. And Putin has repeatedly framed the question exactly that way, for example on 25 February, when he tried to convince the Ukrainian military to stage a coup: "Take power into your own hands; it seems it will be easier for us to come to terms with you than with this gang of drug addicts and neo-Nazis that has holed up in Kyiv and taken the entire Ukrainian people hostage." Finally, instead of the idea of ending the war, over the last days we have been getting only new justifications for it from the Russian government (see point 4).

7. "Aren't we saving Ukraine and Russia from neo-Nazis?"

Who monitors fascist activity in society, and more scrupulously than anyone? Antifascists. It is antifascist associations that systematically resist fascist and neo-Nazi organizations. Ukrainian antifascists regularly state that Putin's propaganda grossly exaggerates the influence of the far right on Ukrainian society and the state. Antifascists would not downplay the degree of "Nazification". If the Russian government and state media rate it higher than the antifascists do, who are in the country and in the context, that is a clear sign that it is the former who are lying. Antifascists have also declared that they will fight against the Russian invasion on the side of the Ukrainian army; evidently they do not consider it fascist. There do exist neo-Nazi combat groups, acting against Ukrainian forces and on the side of the "DNR/LNR" and the Russian army, whose representatives have met with Putin in recent years. Examples of such formations are the "Rusich" and "Ratibor" units.
Историк и политолог Вячеслав Лихачев, исследуя (и признавая) роль неонацистов собеихсторон конфликтаписал: «члены ультраправых групп сыграли гораздо большую роль с российской стороны конфликта чем с украинской».Известная своими националистическими взглядамипартия «Правый Сектор» не получила ни одного места в действующей Верховной Раде.В предвыборнойпрограммеЗеленского, за которого проголосовали 73% избирателей при явке 61,37%, почти 13,5 млн человек, не было ни единого националистического лозунга. Зато был такой: «Надо объединяться всем, кто независимо от пола, языка, веры, национальности просто ЛЮБИТ УКРАИНУ!»Наконец, агрессию России осуждают Талибан, европейские государства, Израиль — очень пестрая палитра политических сил. В России на антивоенном митинге была задержана ленинградскаяблокадница. Люди и нации, действительно пострадавшие от нацизма, не на стороне Путина.8. «Украинцы сами просят Путина вмешаться, чтобы он всех спас»В первую очередь, нужно спросить, от кого мог бы спасать гражданок и граждан Украины Путин. Выше мы упоминали о том, что угрозы со стороны неонацистских сил цинично раздуваются российскими властями, которые не говорят о том, что стоит, в первую очередь, доверить антифашизм самому украинскому обществу.Даже если сколько-то писем от озабоченных украинцев лежит на столе у Путина, можно предположить, что таких просителей не больше, чем украинцев,доверяющихроссийским СМИ — около 3%. В то же времядовериеукраинцев к собственной армии – 70%. И прямо сейчас на улицах украинских городов —очередина запись в добровольческие части.Украинок и украинцев представляют действующие президент и парламент. Действующий президент получил значительную долю голосов даже в Донецкой и Луганской области. В первом туре выборов в 2019 году Зеленский получил больше 20% голосов в этих областях, слегка уступив Порошенко. Во втором туре он обошел Порошенко и победил. Сейчас эти органы украинской власти призывают к миру и переговорам. 
Источники же призывов о спасении извне неизвестны. Заметим, сам Кремль не приводит никаких доводов в пользу нелегитимности действующей власти: ни свидетельств об уничтожении политической конкуренции, ни данных о фальсификации выборов. Вместо этого Путин простосамсчитает, что они нелегитимны, так как являются «шайкой наркоманов и неонацистов».Кому мы должны доверять суждения о легитимности украинской власти — Путину или гражданкам и гражданам Украины?И даже если мы представим, что существует меньшинство, нуждающееся в срочной ивнешнейпомощи, то логичнее их вывезти, а не развязать войну на всей территории страны.9. «Путин просто защищает Россию от НАТО»Это подмена понятий. Почему тогда Путин нападает на Украину, а не на сами страны НАТО? Украина не входит в их число. Идея, что ради «защиты от НАТО» можно нападать на Украину, опирается на представление, будто Украина — продолжение России, лишь разменная территория в противостоянии империй. Эта позиция бесчеловечна по отношению к жителям Украины.Лидер, который бы хотел двигать планету к миру и демилитаризации, в первую очередь отказался бы от утверждений, будто суверенитет соседних стран — историческая ошибка. Но Путин — например, в телеобращении 21.02.22 — делал ровно противоположное. Как империалистической риторикой, так и нападением на Украину Путин только укрепляет военные лобби в других странах. Это лишь способствует повышению военных расходов других стран и отдаляет весь мир от всесторонней демилитаризации.Наконец, даже некоторые российские генералы, какпишетГригорий Юдин, говорили о том, что сейчас перед Россией не стоит риск военной угрозы со стороны НАТО.10. «Путин защищает Россию от ядерной угрозы со стороны Украины»Будапештский меморандумот 1994 года обязывает подписавшиеся страны (включая Россию) «уважать независимость, суверенитет и существующие границы Украины в обмен на ядерное разоружение страны». На сегодняшний день в Украине нет ядерного вооружения. 
Владимир Зеленский лишь заявил овозможностиего появления в феврале этого года. Почему же? Россияуженарушила обещание уважать «существующие границы» Украины, причем еще в 2014, присоединив Крым. (Даже если 2014 год в России не все рассматривают, как такое нарушение, Украина оценивает это именно так 8 лет). И в 2022, признав независимость самопровозглашенных республик. И ведя военные действия на территории Украины — начиная с 2014 года и сейчас. Это значит, что очевидно, невозможно оправдать российское полномасштабное вторжение, происходящее сейчас, нарушениемУкраинойБудапештского меморандума. И лишь сейчас — и лишь в качестве возможности — она заявила, что у нее может появиться ядерное оружие. Винить в этом заявлении мы, граждане России, должны в первую очередь агрессивные действия нашей власти. Напротив, именно нападение на целую страну, как это уже было в истории мировых войн, толкает мировое сообщество к тому, чтобы вступить в конфликт на одной из сторон, и тем самымувеличиваетриск применения ядерного оружия.11. «Надо было ввести войска еще во время Майдана, тогда бы не было настоящей войны»Войска как раз и были введены. Российские военныеподнимались по тревогена учения, и оказывались в Крыму и «ЛДНР». Российские военные в «ЛДНР»снималисимволику российской армии, как и военные в Крыму. В Псковской области в Россиинаходитсякладбище солдат. Все они погибли в 2014 году, с большой вероятностью — на юго-востоке Украины. Уже тогда следовало понять: Украина — это не абстрактная территория, на которой только российская армия может навести порядок, а страна, которая при всех, нормальных для любого общества противоречий, будет защищаться и видеть в российской армии не порядок, а инструмент подчинения Российской власти.Но главное — война не происходит сама собой. Решение о текущем нападении принял конкретный человек — который мог этого не делать, если бы захотел.12. 
«Может Путин и не прав, но нельзя желать своей армии поражения»Как наследники страны, победившей фашизм, мы видим патриотизм в защите достоинства нашей родины, а не в формальном следовании правилам офицерской чести. Чтобы сохранить наше человеческое достоинство, мы должныне допустить военных преступлений и убийств как солдат, так и мирного населения страны, которая нам не угрожает.Не тратить силы на солидарность с армией, наступающей на защищающих свою страну людей, а немедленно работать над созданием антивоенного движения для отвода войск из Украины — вот самый прямой путь избежать поражения. Участие в таких решениях определяет гражданскую свободу и смысл жизни человека.13. «Украина запрещает русским говорить по-русски»В первую очередь стоит сказать, что невозможно оправдать захват страны несогласием с ее внутренней языковой политикой.Согласно последним опросам, русский язык использует заметная часть населения: например, в2020году в Киеве русский язык использовали в интернете чаще украинского. Две трети украинцев говорят о необходимости продолжать текущую языковую политику (см.законо государственном языке), 20% — не соглашаются с ними. Мы — не полицейские и не менторы для украинского гражданского общества.Мы должны априори доверять самим жительницам и жителям Украины трансформацию своей судьбы демократическим путем, включая дальнейшую языковую политику. Доверять им такой ее выбор, который сможет устраивать разные регионы и группы.Затомы точно можем помешать спокойному демократическому выбору, если сделаем так, чтобы русский языкавтоматическиассоциировался с языком агрессоров и оккупантов.14. «А как же сожженный Дом Профсоюзов в Одессе — они же так со всеми русскими будут поступать?»Миссия Управления ООН по правам человека — ее представители непосредственно наблюдали за событиями —сообщает, что детальное расследование инцидента продолжается. 
Известно, что обе стороны были вооружены и проявляли жестокость, первыми бросать коктейли Молотова начали противники Майдана — те, ктопосчитал, что события на Майдане приведут к ущемлению интересов жительниц и жителей Юго-Востока Украины.Противостоящая им группа дала ожесточенный отпор, вынудила укрыться в Доме Профсоюзов. Коктейли Молотова и стрельба продолжали применяться обеими сторонами, здание загорелось, в ходе пожара забаррикадировавшиеся люди погибли.Здесь мы можем лишь повторить нашу общую позицию.Мы, жители России, не полицейские и не менторы для украинского гражданского общества.Мы должны априори доверять самим жительницам и жителям Украины трансформацию своей судьбы, включая разрешение внутренних конфликтов, которые возможны в любой стране и фактически могут носить самый разный характер. Не до конца расследованный частный случай противостояния вооруженных групп, ни одна из которых не являлась избранными представителями всего народа Украины, не может быть серьезным аргументом для обсуждения тотальной угрозы.Первая помощь в таком трансформационном процессе — это не мешать. Мы как граждане России должны задать себе вопрос: мы хотим создавать новые поводы для ненависти как к себе (чем очевидно сейчас занимается Путин, напав на Украину) или нет? Хотим ли мы провоцировать своими действиями недоверие и вражду между жителями разных частей Украины или нет? То, чтомы можем сделать прямо сейчас для внутреннего мира — это показать всеми силами, что граждане России не желают использования внутренних противоречий для захвата власти над какой-либо частью страны.Напротив, российская власть просто цинично использует упоминания о трагедии в Доме Профсоюзов, чтобы привлечь на свою сторону людей, которые знают и помнят о ней, в то время как в числе настоящих и главных причин войны раз за разом упоминает «риски для существования России», которые создала Украина.15. 
«Меня это не касается — и своих проблем достаточно»Минобороны РФ по-прежнему не рассказывает о потерях, но Минобороны Украины заявляет о тысячах убитых военнослужащих из РФ. Минобороны Украины делится информацией об убитых, пленных и раненых (с фото и паспортными данными) для того, чтобы о них могли узнать родственники, не понимающие, где их сыновья, мужья и братья. Очень многие на этой войне — не профессиональные солдаты, а срочники. Еще несколько дней назад они могли не подозревать, что окажутся на войне. Российское командование несколько дней подряд пытается взять Киев почти любой ценой, а Путин хочет вести переговоры с сильной позиции. Если это не остановить сейчас, заплатить за такую решимость придется переброской новых солдат. Наших знакомых и родственников, непосредственно оказавшихся на войне, будет становиться все больше и больше.Россия — часть мировой экономики. Чем глубже наш режим затягивает Россию в войну, тем серьезнее будут становится санкции, ведь иностранные государства готовы использовать только такие (т.е. невоенные) методы для того, чтобы вынудить Путина прекратить войну. Обвал рубля резко поднимет цены на все товары в стране, и так растущие последние месяцы. Перестройка экономики на ведение войны будет означать конец всем надеждам обычных людей на построение своей собственной мирной жизни — в науке, индустрии, сельском хозяйстве, искусстве.И именно как обычные люди мы будем десятилетиями встречаться с простыми украинцами — в России, в Украине, в интернете, по всему миру. И чувствовать их недоверие и вражду к нам просто потому, что мы из России. Выражая нашу позицию сегодня, мы и поддерживаем украинцев, и спасаем наши отношения с ними на годы вперед.Украинки и украинцы будут знать не просто, что войну развязал Путин, а не обычные россияне, но и то, что когда это случилось, нам было не все равно.Только тогда у нас будет шанс уважать себя.16. 
"Can our opinion really change anything?"

Ordinary citizens are exactly the instrument used to justify the war. Putin says he counts on a "consolidated patriotic position," and Peskov says the government must "explain its position better" when someone disagrees. Silence creates an appearance of support, with which the government legitimizes the war. Only active protest can change that. On the morning of the second day of the war, Volodymyr Zelensky addressed the Russian citizens who had come out to protest on the evening of February 24: "We see you. This means you have heard us. This means you are beginning to believe us. Fight for us, fight against the war." So by coming out to protest we show our support for Ukrainians and thereby strengthen them.

(Not) staying silent also affects the army. Putin and the country's leadership cannot give a single clear and concise reason for starting the war. When your adversary resists desperately, when you are advancing on foreign territory, and when on top of that you do not know what you are fighting for, it is much harder to keep fighting for long. Add to that condemnation of the war back in their home towns, and soldiers and officers may begin to doubt more and more, and their zeal may begin to fade.

17. "Taking to the streets is pointless. Everyone will be dispersed and locked up. It didn't work for the Belarusians."

Despite the undeniable threat posed by the security forces, regular, mass, peaceful street demonstrations are a necessary lever of pressure on the system. A relatively safe form of protest that also promotes visibility is the #тихийпикет ("quiet picket") campaign: participants go about their everyday business in the city with visible anti-war patches on their bags or clothes. This draws the attention of people around them to the war, and they may join the protest.

Street protests, however, are not the only possible tactic. Symbolically feeding oneself into the machine of repression is not always the best political action. As activists remind us, even the largest protest actions did not stop the war in Iraq.
There are other tactics of resisting the war as well: open strikes (the right to strike is protected by Article 37 of the Constitution), or simply taking sick leave, which is even easier now, during the pandemic (at the moment it is enough to report ARVI symptoms to go on sick leave for 7 days). Even spreading critical questions can create tension and raise the cost of military aggression. It is worth joining the nearly 1,000,000 people who have signed the petition — this helps spread information.

In Belarus, peaceful protest met a brutal crackdown and was crushed. But we should not forget that this happened with Putin's active support. In Russia, if protests were massive across the whole country, the resources for backing up the security forces would run out quickly. Lukashenko had Putin; Putin has no Putin of his own. Part of the OMON and Rosgvardiya forces have been redeployed to the war. The regime does not have endless resources to suppress a genuinely mass anti-war movement.
A handbook for anti-war arguments at home and at work

17 answers to the most common arguments justifying the war

Text: Ruslan Lenin, A. P. Fogt, Sasha B., I. S., sliva. Published: 27 February 2022

These days our main task is to create an atmosphere of total rejection of Russian military aggression in Ukraine. But in conversations with colleagues, friends, acquaintances and relatives we are often overwhelmed by emotions that keep us from calmly building an argument. We despair at the questions and claims we hear: can they really mean this seriously? At best the conversation simply ends, when it could instead have both prevented a rift in the relationship and helped build a consensus against the war.

We have collected the most common statements of those who are not ready to unambiguously condemn Russian military aggression, and answered them. We surveyed acquaintances, built and tested arguments on one another, and rewrote them again and again until the last moment. So we urge you to share both new questions and your own arguments.

1. "Doesn't the Russian army strike only military targets?"

Unfortunately, no. Despite constant assurances to the contrary from government officials and the Ministry of Defense, Russian shells have been hitting residential neighborhoods all these days, and have already hit hospitals and kindergartens. On February 25 a ballistic missile exploded near a hospital in the town of Vuhledar in Donetsk oblast. The same day, in the town of Okhtyrka in Sumy oblast, a ballistic missile hit a kindergarten. Unsurprisingly, nurses at a maternity hospital in Dnipro have to shelter newborns from missile strikes in basements. A day earlier, in the town of Chuhuiv in Kharkiv oblast, Russian shells destroyed a residential block. On February 26, residential buildings were shelled in Chernihiv. This is happening not only in border areas or in the southeast, but also in the capital. We know this both from the many eyewitnesses and photographers documenting what is happening, and from independent human rights and investigative organizations. We also know it from our friends and relatives who are in Ukraine right now.
Many of them are frightened; they tell us they spend part of the day in bomb shelters and hear gunfire. We believe these people, and we urge you, too, to talk to your loved ones on the other side of the border, if you have any.

This is not just an ordinary part of warfare but a violation of the laws of war — in other words, what are called war crimes.

2. "How can you believe anything? There's an information war going on"

Finding reliable information is genuinely difficult. Here one can argue from the contrary and note what inspires the least trust. A spokesman for the Russian Ministry of Defense announced in a briefing after the first day of the war that the Russian army had suffered no losses at all. Similar reports from the Ministry of Defense have continued in the days since. This is hard to believe: no military operation goes without losses. Trust is not helped by the fact that Roskomnadzor has effectively introduced censorship in the Russian media, banning the mention of any information about the "special operation" that does not come from the Ministry of Defense. Given how sparingly the Ministry of Defense releases information, this should be seen as an attempt to create a bubble of incomprehension about what is happening in Ukraine. By contrast, independent Russian media (under threat of being blocked) run continuous live coverage with carefully verified information — Meduza and Mediazona, for example, keep such feeds. The Ukrainian Ministry of Defense also posts regular situation updates, including ones unflattering to its own side, which inspires far more trust than the reports of the Russian side.

We believe that in such a situation it makes sense to trust the sources listed above, while never forgetting to verify any information against several sources.

3. "Wasn't Donbas afraid for 8 years? Did the rest of Ukraine think about them all those years?"

First of all, we need to stop talking about all residents of Ukraine as a single agent that either did something or did not. All these years, people in Ukraine have taken different positions on the war in Donbas.
Some voted for more uncompromising candidates and parties, others for diplomacy. The program of the current president, for whom the majority voted in the 2019 elections, was aimed at a peaceful settlement, prisoner exchanges and the withdrawal of weapons.

Some spent years taking in refugees from Donbas in other regions of Ukraine and helping them. According to UN reports, more than a million people from the Donetsk and Luhansk oblasts of Ukraine were forced to relocate to other regions of Ukraine in 2014–16. In 2016 the Ministry for Reintegration of the Temporarily Occupied Territories of Ukraine was created to coordinate housing, assistance and employment for those affected by the hostilities in Donbas. There were also those who created projects to help the people who stayed in Donbas: Donbass SOS, Vostok SOS, Країна вільних людей, Proliska.

At the same time, the process of peacefully resolving these contradictions faces one main obstacle: the front line. When shells are flying over Donbas, it is no surprise that many locals want anyone at all to come and stop what is happening with troops. When the constitution of Ukraine enshrines Donbas as part of its territory, while in reality it is controlled by the troops of the "LDNR" and Russia, it is logical to expect that the internal dialogue in Ukrainian society will include those who believe the government has a mandate to take Donbas back by military means.

Was Donbas afraid for 8 years? Right now, as citizens of Russia, we must ask ourselves about the role of our own country in these events. For many years the Russian Federation has been aiding the "LDNR" with both resources and troops, which means it is already a party to the conflict and has taken a specific side. So we must ask ourselves: how could our country help stop the fire in Donbas as soon as possible? Russia is a party to the conflict in Donbas, but that in no way makes it a full participant in Ukraine's internal political process.
How could it help resolve this particular conflict, rather than change the political regime of all of Ukraine by military invasion?

And if power in Russia were democratic, and we could influence the diplomatic process, the actions of the troops and the spending of resources through genuine representatives, what actions would we take? As it is, as long as Russia is ruled by a president who right now is interested in realizing his geopolitical ambitions (as he directly indicates with his statements about "risks to Russia" and about Ukraine's sovereignty being Lenin's historical mistake), he will prioritize those goals over everything else, including peace in Donbas. The real question is: what happens if we stop solving Putin's problems instead of Donbas's?

4. "But who will protect Donbas from the shelling?"

Right now the Russian army is not helping the civilians of Donbas; it is attacking Ukraine on three fronts with the goal of seizing it entirely. As even State Duma deputies from the KPRF now say: "In voting to recognize the DNR/LNR I voted for peace, not for war. For Russia to become a shield so that Donbas is not bombed — not for Kyiv to be bombed." Meanwhile, Putin and other officials put forward the most varied justifications for the intervention: now the restoration of the "historical unity" of the peoples, now the restoration of historical borders, now the claim that Ukraine and NATO have "created risks" to the very "existence" of Russia, now the defense of all of Ukraine from neo-Nazis. Help for the civilians of Donbas barely figures in any of them.

Moreover, an alliance with the "LDNR" is far from the most obvious way to provide such help. If resolving the situation, rather than plans of conquest and influence over Ukraine, had been the priority, the Russian government could have put its effort into cooperating with civil society inside and outside Donbas. In this connection we also recall the investigation by Baza, which names the fugitive Ukrainian oligarch Serhiy Kurchenko — who cooperates with high-ranking officials and security officers from the Russian special services — as one of the main beneficiaries of the war in Donbas.
During the conflict in Donbas, most of the coal and metallurgical enterprises on the territory of the "LDNR" came under his control. While Kurchenko enriches himself, in 2021 the workers of these enterprises went months without being paid wages that hover at 10–20 thousand rubles as it is. It is worth asking what the Russian army in Donbas, and its support for the armed forces of the "LDNR", are really for.

5. "And where were you (Russian citizens condemning the war) all these 8 years?"

Some of us were teenagers, some had no position or took no interest in politics, and some resisted the war with Ukraine. But what matters most is not that — it is how we act now.

In Russia, up to 20,000 people, from liberals to anarchists, came out to anti-war protests after the annexation of Crimea in 2014. The countdown began with Crimea, not Donbas — this is important to remember so as not to confuse attack with self-defense. Anti-war slogans were heard at many other actions: for example, at the memorial marches for Boris Nemtsov, which before the pandemic drew around 60,000 people on average. Renouncing claims to Crimea and Donbas and sharply improving relations with Ukraine is a position that never fully left the protest and opposition agenda, even though the main focus shifted to Russia's massive internal problems, such as election fraud, the persecution of political prisoners, environmental disasters and the pension reform.

Nevertheless, in fighting against authoritarianism in Russia, we were also fighting for the ability to change the government and abandon an aggressive foreign policy — the ability to end the skirmishes with Ukrainian troops and the sponsorship of the detachments fighting the Ukrainian army. That, together with our full trust in Ukrainian civil society to solve its own internal problems, is what could have brought peace to Donbas.

6. "Putin wants to end this war that has lasted 8 years"

Putin and other representatives of the Russian government have constantly claimed that Russia was not a participant in the hostilities in Donbas.
That means that, from his own point of view, he has declared a war, not ended one. In reality the Russian army was already fighting the Ukrainian army on Ukrainian territory in 2014 — near Ilovaisk, for example. The war has been going on for 8 years.

Ending the war is an excellent goal. But it does not explain why one would occupy another country and seek to take control of its political institutions. And Putin has already framed the question exactly that way more than once — for example, on February 25, when he tried to persuade the Ukrainian military to stage a coup: "Take power into your own hands; it looks like it will be easier for you and me to come to terms than with this gang of drug addicts and neo-Nazis that has holed up in Kyiv and taken the entire Ukrainian people hostage."

Finally, instead of the idea of ending the war, in recent days we have been getting only new justifications for it from the Russian government — see point 4.

7. "Aren't we saving Ukraine and Russia from neo-Nazis?"

Who monitors fascist activity in society, and who does it most scrupulously? Antifascists. It is antifascist associations that systematically resist fascist and neo-Nazi organizations. Ukrainian antifascists regularly state that Putin's propaganda grossly exaggerates the influence of the far right on Ukrainian society and the state. Antifascists would not understate the degree of "Nazification." If the Russian government and state media rate it higher than the antifascists do — people who are in the country and in the context — that is a clear sign that it is the former who are lying.

Antifascists have also declared that they will fight against the Russian invasion on the side of the Ukrainian army — evidently they do not consider it fascist.

There are neo-Nazi combat groups that operate against Ukrainian forces and on the side of the "LDNR" and the Russian army, and their representatives have met with Putin in recent years. Examples of such formations are the "Rusich" and "Ratibor" detachments.
The historian and political scientist Vyacheslav Likhachev, studying (and acknowledging) the role of neo-Nazis on both sides of the conflict, wrote: "members of far-right groups played a much bigger role on the Russian side of the conflict than on the Ukrainian one."

Right Sector, a party known for its nationalist views, did not win a single seat in the current Verkhovna Rada. The election program of Zelensky — who received the votes of 73% of voters at a turnout of 61.37%, almost 13.5 million people — contained not a single nationalist slogan. It did contain this one: "Everyone must unite — everyone who, regardless of gender, language, faith or nationality, simply LOVES UKRAINE!"

Finally, Russia's aggression has been condemned by the Taliban, by European states, by Israel — a very motley palette of political forces. In Russia, a survivor of the siege of Leningrad was detained at an anti-war rally. The people and nations that actually suffered from Nazism are not on Putin's side.

8. "Ukrainians themselves are asking Putin to intervene and save everyone"

First of all, we should ask whom Putin could be saving the citizens of Ukraine from. We noted above that the threat from neo-Nazi forces is cynically inflated by the Russian authorities, who do not mention that antifascism should, first and foremost, be entrusted to Ukrainian society itself.

Even if some number of letters from concerned Ukrainians is lying on Putin's desk, we can assume there are no more such petitioners than there are Ukrainians who trust Russian media — about 3%. Meanwhile, Ukrainians' trust in their own army stands at 70%. And right now there are queues on the streets of Ukrainian cities to enlist in volunteer units.

Ukrainians are represented by their sitting president and parliament. The sitting president received a substantial share of the vote even in the Donetsk and Luhansk oblasts: in the first round of the 2019 elections Zelensky won more than 20% of the vote there, slightly behind Poroshenko, and in the second round he overtook Poroshenko and won. Right now these organs of Ukrainian government are calling for peace and negotiations.
The sources of the appeals for salvation from outside, by contrast, are unknown. Note that the Kremlin itself offers no evidence that the current government is illegitimate: no evidence of the destruction of political competition, no data on election fraud. Instead, Putin simply personally deems it illegitimate, since it is a "gang of drug addicts and neo-Nazis." Whom should we trust to judge the legitimacy of the Ukrainian government — Putin, or the citizens of Ukraine?

And even if we imagine that there is a minority in need of urgent outside help, it would be more logical to evacuate them than to unleash a war across the entire country.

9. "Putin is simply defending Russia from NATO"

This is a substitution of concepts. Why, then, is Putin attacking Ukraine and not the NATO countries themselves? Ukraine is not one of them. The idea that one may attack Ukraine for the sake of "defense from NATO" rests on the notion that Ukraine is an extension of Russia, merely a bargaining territory in a standoff between empires. This position is inhuman toward the people of Ukraine.

A leader who wanted to move the planet toward peace and demilitarization would first of all renounce claims that the sovereignty of neighboring countries is a historical mistake. But Putin — for example, in his televised address of 21.02.22 — did exactly the opposite. With both his imperialist rhetoric and his attack on Ukraine, Putin only strengthens the military lobbies in other countries. This merely drives up other countries' military spending and moves the whole world further away from comprehensive demilitarization.

Finally, as Grigory Yudin writes, even some Russian generals have said that Russia currently faces no risk of a military threat from NATO.

10. "Putin is defending Russia from a nuclear threat from Ukraine"

The 1994 Budapest Memorandum obliges the signatory countries (including Russia) to "respect the independence, sovereignty and existing borders of Ukraine in exchange for the country's nuclear disarmament." Today there are no nuclear weapons in Ukraine.
Volodymyr Zelensky merely spoke of the possibility of their appearance in February of this year. Why? Russia had already broken its promise to respect the "existing borders" of Ukraine — back in 2014, by annexing Crimea (even if not everyone in Russia regards 2014 as such a violation, Ukraine has assessed it that way for 8 years); again in 2022, by recognizing the independence of the self-proclaimed republics; and by conducting military operations on the territory of Ukraine, from 2014 until now. So it is obviously impossible to justify the current full-scale Russian invasion by Ukraine's violation of the Budapest Memorandum. Only now — and only as a possibility — has Ukraine said that it might acquire nuclear weapons. For that statement we, the citizens of Russia, must blame first and foremost the aggressive actions of our own government. On the contrary, it is precisely an attack on an entire country that, as the history of the world wars has already shown, pushes the international community to enter the conflict on one of the sides, and thereby increases the risk that nuclear weapons will be used.

11. "Troops should have been sent in back during Maidan — then there would be no real war now"

Troops were in fact sent in. Russian servicemen were raised on alert for "exercises" and found themselves in Crimea and the "LDNR". Russian servicemen in the "LDNR" removed Russian army insignia, as did those in Crimea. In Pskov oblast in Russia there is a cemetery of soldiers, all of whom died in 2014 — in all likelihood in southeastern Ukraine. Already then it should have been understood: Ukraine is not an abstract territory where only the Russian army can impose order, but a country which, for all the contradictions normal to any society, will defend itself and see in the Russian army not order but an instrument of subjugation to the Russian government.

But the main thing is that war does not happen by itself. The decision to launch the current attack was made by a specific person — who could have chosen not to, had he wanted.

12.
"Maybe Putin is wrong, but you can't wish defeat on your own army"

As heirs of the country that defeated fascism, we see patriotism in defending the dignity of our homeland, not in formal adherence to a code of officer's honor. To preserve our human dignity, we must prevent war crimes and the killing of both soldiers and civilians of a country that does not threaten us. Not wasting effort on solidarity with an army advancing on people defending their own country, but working immediately to build an anti-war movement to withdraw the troops from Ukraine — that is the most direct way to avoid defeat. Taking part in such decisions is what defines a person's civic freedom and the meaning of their life.

13. "Ukraine forbids Russians to speak Russian"

First of all, the seizure of a country cannot be justified by disagreement with its domestic language policy.

According to recent surveys, a sizable share of the population uses Russian: in 2020, for example, Russian was used online in Kyiv more often than Ukrainian. Two thirds of Ukrainians say the current language policy should continue (see the law on the state language); 20% disagree. We are not police officers or mentors for Ukrainian civil society. We must trust the residents of Ukraine, a priori, to transform their own destiny democratically, including their future language policy — to trust them to make a choice that can suit different regions and groups. What we certainly can do is obstruct that calm democratic choice, if we cause the Russian language to become automatically associated with the language of aggressors and occupiers.

14. "But what about the burned Trade Unions House in Odesa — won't they treat all Russians that way?"

The mission of the UN human rights office — whose representatives observed the events directly — reports that a detailed investigation of the incident is ongoing.
It is known that both sides were armed and acted violently; the first to throw Molotov cocktails were opponents of Maidan, those who believed that the events on Maidan would lead to the interests of the inhabitants of southeastern Ukraine being infringed. The opposing group fought back fiercely and forced them to take shelter in the Trade Unions House. Both sides continued to use Molotov cocktails and gunfire, the building caught fire, and the people barricaded inside died in the blaze. Here we can only repeat our general position. We, the people of Russia, are neither the police nor the mentors of Ukrainian civil society. We must trust the inhabitants of Ukraine, a priori, to shape their own future, including the resolution of internal conflicts, which are possible in any country and can in practice take very different forms. A single, not fully investigated case of confrontation between armed groups, neither of which was an elected representative of the whole Ukrainian people, cannot be a serious argument for claiming a total threat. The first way to help such a process of transformation is not to interfere. As citizens of Russia we must ask ourselves: do we want to create new grounds for hatred toward ourselves (which is obviously what Putin is doing now by attacking Ukraine), or not? Do we want our actions to provoke distrust and enmity between the inhabitants of different parts of Ukraine, or not? What we can do right now for peace inside Ukraine is to show with all our strength that the citizens of Russia do not want internal contradictions to be exploited to seize power over any part of the country. The Russian government, by contrast, simply and cynically invokes the tragedy in the Trade Unions House to win over people who know and remember it, while again and again naming, among the real and principal causes of the war, the "risks to Russia's existence" that Ukraine supposedly created.

15.
"This doesn't concern me; I have enough problems of my own"

The Russian Ministry of Defense still says nothing about losses, while Ukraine's Ministry of Defense speaks of thousands of Russian servicemen killed. Ukraine's Ministry of Defense publishes information about the dead, captured and wounded (with photos and passport data) so that relatives who have no idea where their sons, husbands and brothers are can find out. Very many of those fighting in this war are not professional soldiers but conscripts. A few days ago they may not even have suspected they would end up at war. For several days running the Russian command has been trying to take Kyiv at almost any cost, and Putin wants to negotiate from a position of strength. If this is not stopped now, the price of such resolve will be paid by sending in new soldiers. More and more of our acquaintances and relatives will find themselves at war.

Russia is part of the world economy. The deeper our regime drags Russia into the war, the harsher the sanctions will become, since foreign states are prepared to use only such (that is, non-military) methods to force Putin to end the war. The collapse of the ruble will sharply raise the prices of all goods in the country, prices that have already been rising for months. Rebuilding the economy for war will mean the end of ordinary people's hopes of building their own peaceful lives, in science, industry, agriculture, the arts. And it is precisely as ordinary people that we will be meeting ordinary Ukrainians for decades to come: in Russia, in Ukraine, on the internet, all over the world. And feeling their distrust and hostility toward us simply because we are from Russia. By voicing our position today, we both support Ukrainians and save our relations with them for years ahead. Ukrainians will know not only that it was Putin, not ordinary Russians, who unleashed the war, but also that when it happened, we were not indifferent. Only then will we have a chance to respect ourselves.

16.
"Can our opinion really change anything?"

It is ordinary citizens who are the instrument used to justify the war. Putin says he counts on a "consolidated patriotic position", and Peskov says the government must "explain its position better" when someone disagrees. Silence creates the appearance of support, with which the government legitimizes the war. Only active protest can change that. On the morning of the second day of the war Volodymyr Zelensky addressed the Russian citizens who had come out to protest on the evening of February 24: "We see you. This means you have heard us. This means you are beginning to believe us. Fight for us. Fight against the war." Thus, by coming out to protest we show our support for Ukrainians, and in doing so we strengthen them.

(Not) staying silent also affects the army. Putin and the country's leadership cannot state a single, clear and concise reason why the war was started. When the enemy resists you desperately, when you are advancing on foreign territory, and on top of that you do not know what you are fighting for, it is much harder to keep fighting for long. Add to this the condemnation of the war in their home towns, and soldiers and officers may begin to doubt more and more, and their zeal may begin to weaken.

17. "Taking to the streets is useless. Everyone will be dispersed and jailed. It didn't work for the Belarusians."

Despite the undeniable threat posed by the security forces, regular and massive peaceful street demonstrations are a necessary lever of pressure on the system. A comparatively safe form of protest that also raises awareness is the #тихийпикет ("quiet picket") action: participants go about their everyday business in the city with visible anti-war patches on their bags or clothes. This draws the attention of passers-by to the war, and they may join the protest. Street protest, however, is not the only possible tactic. Symbolically feeding oneself into the machine of repression is not always the best political action. As activists point out, even the largest protest actions did not stop the war in Iraq.
There are other tactics of resistance to the war as well: open strikes (the right to strike is protected by Article 37 of the Constitution), or simply taking sick leave, which is even easier now during the pandemic (at the moment it is enough to report symptoms of a respiratory infection to take a 7-day sick leave). Even spreading critical questions can create tension and raise the cost of military aggression. It is worth joining the nearly 1,000,000 people who have signed the petition; this helps spread information.

In Belarus, peaceful protest met a brutal response from the security forces and was crushed. But remember that this happened with Putin's active support. In Russia, with mass protests across the whole country, the resources propping up the security forces will run out quickly. Lukashenko had Putin; Putin has no Putin of his own. Part of the OMON and Rosgvardia forces have been redeployed to the war. The regime does not have infinite resources to suppress a truly mass anti-war movement.
antiweb
UNKNOWN
antiword
Basic Antiword via libreoffice
This package is nothing more than a convenience wrapper around libreoffice --convert-to txt, which dumps to stdout, cleaning up the generated temporary file as it goes. I probably should have written it in bash, but there we go.
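The behavior the description outlines can be sketched as a small wrapper. All names here (doc_to_text, converted_name) are hypothetical illustrations, not the package's API, and the sketch assumes libreoffice is available on PATH:

```python
import os
import subprocess
import tempfile
from pathlib import Path


def converted_name(src, outdir):
    # libreoffice --convert-to txt writes <stem>.txt into the output directory
    return os.path.join(outdir, Path(src).stem + ".txt")


def doc_to_text(src):
    """Convert a document to plain text via LibreOffice and return it,
    cleaning up the generated temporary file as it goes."""
    with tempfile.TemporaryDirectory() as outdir:
        subprocess.run(
            ["libreoffice", "--headless", "--convert-to", "txt",
             "--outdir", outdir, str(src)],
            check=True, capture_output=True,
        )
        with open(converted_name(src, outdir)) as fh:
            return fh.read()
```

Using a temporary directory means the intermediate .txt file is deleted automatically, matching the "cleaning up as it goes" behavior described above.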
antk
Purpose
Automated Neural Graph Toolkit is an extension library for Google's Tensorflow. It is designed to facilitate rapid prototyping of Neural Network models which may consist of multiple models chained together. Multiple input streams and/or multiple output predictions are well supported.
Documentation for ANTk
You will find complete documentation for ANTk at the ANTk readthedocs page.
Platform
ANTk is compatible with linux 64 bit operating systems.
Python Distribution
ANTk is written in python 2. Most functionality should be forwards compatible.
Install
A virtual environment is recommended for installation. Make sure that tensorflow is installed in your virtual environment and graphviz is installed on your system.
Install tensorflow
Install graphviz
From the terminal:
(venv)$ pip install antk
antlerinator
ANTLeRinatoris a Python utility package to help keeping components of ANTLR v4 in sync.RequirementsPython>= 3.6JavaSE >= 7 JRE or JDK (the latter is optional)InstallANTLeRinatorhas both run-time and build-time components, therefore it can be used both as an install requirement and as a setup requirement.To useANTLeRinatorat run-time, it can be added tosetup.cfgas an install requirement (if usingsetuptoolswith declarative config):[options]install_requires=antlerinatorantlr4-python3-runtime==4.9.2# optionalNote thatANTLeRinatorhas no direct dependency on theANTLRv4runtime.To useANTLeRinatorat build-time, it can be added topyproject.tomlas a build system/setup requirement (if usingPEP517builds):[build-system]requires=["antlerinator","setuptools",]build-backend="setuptools.build_meta"To installANTLeRinatormanually, e.g., into a virtual environment, usepip:pip install antlerinatorThe above approaches install the latest release ofANTLeRinatorfromPyPI. Alternatively, for the development version, clone the project and perform a local install:pip install .UsageDownloading the ANTLRv4 tool jar file at run-timeIf theANTLRv4runtime is installed,ANTLeRinatorcan be used to download the corresponding version of the tool jar file:importantlerinatorassertantlerinator.__antlr_version__isnotNone# alternatively: import antlr4path=antlerinator.download(lazy=True)If theANTLRv4runtime is not installed or a different version of the tool jar is needed, the required version must/can be specified:importantlerinatorpath=antlerinator.download(version='4.9.2',lazy=True)By default, these approaches download files to a~/.antlerinatordirectory, and only if necessary (i.e., the jar file has not been downloaded yet).Downloading the ANTLRv4 tool jar manuallyShould there be need for downloading the ANTLR v4 tool jar manually, a helper script is available:antlerinator-download --helpAdding ANTLRv4 support to the command line interfaceIf an application has anArgumentParser-based command line 
interface,ANTLeRinatorcan be used to add a CLI argument to specify whichANTLRv4tool jar to use. The default processing of the argument, also provided byANTLeRinator, is to download the tool jar version corresponding to theANTLRv4runtime if necessary:importantlerinatorimportargparseimportsubprocessassertantlerinator.__antlr_version__isnotNoneparser=argparse.ArgumentParser()antlerinator.add_antlr_argument(parser)args=parser.parse_args()antlerinator.process_antlr_argument(args)subprocess.call(['java','-jar',args.antlr])Building lexers/parsers at build-time with ANTLRv4ANTLeRinatoralso extendsSetuptoolsto allow building lexers/parsers at build-time from.g4grammars. It adds two newSetuptoolscommands,build_antlrandclean_antlr, to perform the building and the cleanup of lexers/parsers, and also ensures that these new commands are invoked by the standardbuild(install),develop, andcleancommands as well as by theSetuptools-internaleditable_wheelcommand as appropriate. The building of lexers/parsers is performed using theANTLRv4tool and is controlled by the[build_antlr]section insetup.cfg:[build_antlr]commands=antlerinator:4.9.2 path/to/Dummy.g4 -Dlanguage=Python2 -o pkg/parser/py2 -Xexact-output-dirantlerinator:4.9.2 path/to/Dummy.g4 -Dlanguage=Python3 -o pkg/parser/py3 -Xexact-output-diroutput=pkg/parser/py?/Dummy*.py#java =Thecommandsoption ofbuild_antlrlists the invocations of theANTLRv4tool. The first element of each invocation is a so-called provider specification that defines where to get theANTLRv4tool jar from. Currently, two providers are supported:antlerinator:N.MusesANTLeRinatorto download the requested version of the tool jar (if necessary), whilefile:/path/to/antlr.jaruses the explicitly given tool jar. 
The rest of the elements of each invocation are passed to the tool jar as command line arguments.Thejavaoption can be given to explicitly specify which Java VM to use to run theANTLRv4tool (javais used by default).Theoutputoption shall list the file names or glob patterns of the output of theANTLRv4tool invocations. Theclean_antlrcommand removes these files on cleanup.Copyright and LicensingLicensed under the BSD 3-ClauseLicense.
antlia
No description available on PyPI.
antlib
Introduction
Python wrapper for the ANT library to communicate serially with ANT devices. For more information about ANT, see http://thisisant.com/
License
Copyright 2019 Garmin Canada, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Notes
Antlib now supports both 32-bit and 64-bit Windows. The 64-bit package has one restriction: it does not support communication with USB Interface Board (UIF) boards.
antlr3-python-runtime
This is the runtime package for ANTLR3, which is required to use parsers generated by ANTLR3.
antlr4-grun
Pure-Python replacement of the antlr test rig, org.antlr.v4.gui.TestRig (aka grun). There are a few places where this executable differs, in the interest of better or more Pythonic design. For example:
I use click's conventions for CLI argument parsing, which have a double dash for long options, rather than Java's convention, which has a single dash.
I use JSON strings to escape source lexemes. This is more elegant and is easily parsed in whatever next phase of processing exists.
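The JSON-escaping convention mentioned above can be illustrated with the standard library alone; the lexeme value here is a made-up example:

```python
import json

# a lexeme containing characters that would be awkward to print raw
lexeme = 'say\t"hi"\n'

# json.dumps produces a JSON string literal, e.g. "say\t\"hi\"\n"
escaped = json.dumps(lexeme)

# any JSON-aware next phase of processing recovers the exact original text
assert json.loads(escaped) == lexeme
```

This round-trip property is what makes JSON escaping convenient for piping lexemes into downstream tools.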
antlr4-mysql
How to use
pip install antlr4-mysql
How to build
What You Need
JDK 1.8 or later (to generate python code)
Python3
Generate python code:
java -jar antlr4-4.9.2-complete.jar -Dlanguage=Python3 *.g4 -visitor -o am/autogen
Required dependency
antlr4-python3-runtime
For test
mysql_base_test.py
antlr4-python
Failed to fetch description. HTTP Status Code: 404
antlr4-python2-runtime
No description available on PyPI.
antlr4-python2-runtime-thieman
No description available on PyPI.
antlr4-python3-runtime
No description available on PyPI.
antlr4-python-alt
Failed to fetch description. HTTP Status Code: 404
antlr4-tools
No description available on PyPI.
antlr4-vba
antlr4-vba
antlr4-vba is a package of the Lexer and Parser created by antlr with the command antlr4 -Dlanguage=Python3 -listener -visitor vba.g4. The project also includes the empty listener and visitor classes, which are largely useless on their own, but good as reference. Similarly, there are grammar, lexer, and parser files which implement the conditional compilation process of VBA.
Installation
The project can be installed with python -m pip install antlr4-vba
Typing stubs
The project also includes typed stubs for mypy. These stubs are autogenerated by stubgen, so they may be incomplete. Please let me know if they do not meet your needs and I can manage them manually.
antlr4-vba-parser
antlr4-vba-parserNavigate antlr VBA Parse Trees in python.This python package provides an interface to the the antlr4 tooling and allows parsing and lexing of VBA grammar.>>>fromantlr4_vba_parser.vba_parserimportAntlr4VbaParser>>>parsed=Antlr4VbaParser("""... SUB square(x)... DIM y: REM Some comment... y = x * x ' same as x**2... END SUB... """)# also accepts a filepath>>>frompprintimportpprint>>>pprint(parsed)('(startRule (module (endOfLine\\n) (moduleBody (moduleBodyElement (subStmt ''SUB (ambiguousIdentifier square) (argList ( (arg (ambiguousIdentifier x)) '')) (endOfStatement (endOfLine\\n )) (block (blockStmt (variableStmt DIM ''(variableListStmt (variableSubStmt (ambiguousIdentifier y))))) ''(endOfStatement : (endOfLine (remComment REM Some comment)) (endOfLine ''\\n )) (blockStmt (letStmt (implicitCallStmt_InStmt ''(iCS_S_VariableOrProcedureCall (ambiguousIdentifier y))) = (valueStmt ''(valueStmt (implicitCallStmt_InStmt (iCS_S_VariableOrProcedureCall ''(ambiguousIdentifier x)))) * (valueStmt (implicitCallStmt_InStmt ''(iCS_S_VariableOrProcedureCall (ambiguousIdentifier x))))))) (endOfStatement '"(endOfLine (comment ' same as x**2)) (endOfLine\\n))) END SUB)) "'(endOfLine\\n))) <EOF>)')Installationantlr4_vba_parseritself is a pure python package, but depends on ajavaruntime in order to run. The ANTLR4 jar needed to perform the parsing/lexing is included in the package distribution and is bundled from third-party sources at the time of packaging withsetup.py build.To install, simply try:pipinstallantlr4_vba_parserDevelopmentTo set up a development environment, first create either a new virtual or conda environment before activating it and then run the following:gitclonehttps://github.com/Liam-Deacon/antlr4-vba-parsercdantlr4-vba-parser pipinstall-rrequirements-dev.txtrequirements-test.txt-rrequirements.txt pythonsetup.pybuild_antlr4# needed to generate python bindingspipinstall-e.This will install the package in development mode. 
Note that if you have forked the repo, change the URL as appropriate.
Documentation
Documentation can be found within the docs/ directory. This project uses sphinx to autogenerate API documentation by scraping python docstrings. To generate the HTML documentation, simply do the following:
cd docs
make html
Contribution Guidelines
Contributions are extremely welcome and highly encouraged. To help with consistency, please consider the following areas before submitting a PR for review:
Use autopep8 -a -a -i -r . on any modified files to ensure basic pep8 conformance, allowing the code to be read in the style expected for most python projects.
New or changed functionality should be tested; running pytest should pass.
Try to document any new or changed functionality. Note: this project uses numpydoc for its docstring documentation style.
License
Released under the BSD license.
TODO
This package is mostly a proof of concept and as such there are a number of areas to add to, fix and improve:
Create listener(s) capable of capturing contextual information and creating a JSON-friendly dictionary output.
Produce a simple script that turns the above into a command line tool.
Contribute to oletools.vba to hopefully extend capabilities using this package.
Acknowledgements
Andrew Lockhart for the initial idea of combining ANTLR4 and python to handle VBA grammar
antlr4-verilog
ANTLR4-Verilog-PythonGenerated files from ANTLR4 for Verilog parsing in PythonTutorialInstall this Python packagepython3 -m pip install antlr4_verilogUse your own listener to walk through the AST:fromantlr4_verilogimportInputStream,CommonTokenStream,ParseTreeWalkerfromantlr4_verilog.verilogimportVerilogLexer,VerilogParser,VerilogParserListenerdesign='''module ha(a, b, sum, c);input a, b;output sum, c;assign sum = a ^ b;assign c = a & b;endmodule'''classModuleIdentifierListener(VerilogParserListener):defexitModule_declaration(self,ctx):self.identifier=ctx.module_identifier().getText()lexer=VerilogLexer(InputStream(design))stream=CommonTokenStream(lexer)parser=VerilogParser(stream)tree=parser.source_text()listener=ModuleIdentifierListener()walker=ParseTreeWalker()walker.walk(listener,tree)print(listener.identifier)# 'ha'Take a look at other listener methods forVerilogandSystemVerilogYou may find more examples in thetest fileHow to generate those filesSystem requirements (Ubuntu):sudo apt-get install -y default-jre sudo apt-get install -y default-jdk sudo apt-get install -y curlGet ANTLR4:curlhttps://www.antlr.org/download/antlr-4.10.1-complete.jar-oextra/antlr-4.10.1-complete.jarGet ANTLR grammars:git clone https://github.com/antlr/grammars-v4.git extra/grammars-v4Call ANTLR for Verilog grammar:java -Xmx500M -cp "extra/antlr-4.10.1-complete.jar:${CLASSPATH}" org.antlr.v4.Tool -Dlanguage=Python3 -visitor `pwd`/extra/grammars-v4/verilog/verilog/VerilogLexer.g4 `pwd`/extra/grammars-v4/verilog/verilog/VerilogParser.g4 `pwd`/extra/grammars-v4/verilog/verilog/VerilogPreParser.g4 -o src/antlr4_verilog/verilogCall ANTLR for SystemVerilog grammar:java -Xmx500M -cp "extra/antlr-4.10.1-complete.jar:${CLASSPATH}" org.antlr.v4.Tool -Dlanguage=Python3 -visitor `pwd`/extra/grammars-v4/verilog/systemverilog/SystemVerilogLexer.g4 `pwd`/extra/grammars-v4/verilog/systemverilog/SystemVerilogParser.g4 `pwd`/extra/grammars-v4/verilog/systemverilog/SystemVerilogPreParser.g4 -o 
src/antlr4_verilog/systemverilogHow to test the grammarGenerate Java files:java -Xmx500M -cp "extra/antlr-4.10.1-complete.jar:${CLASSPATH}" org.antlr.v4.Tool -Dlanguage=Java -visitor `pwd`/extra/grammars-v4/verilog/verilog/VerilogLexer.g4 `pwd`/extra/grammars-v4/verilog/verilog/VerilogParser.g4 `pwd`/extra/grammars-v4/verilog/verilog/VerilogPreParser.g4 -o test/testrig/verilogjava -Xmx500M -cp "extra/antlr-4.10.1-complete.jar:${CLASSPATH}" org.antlr.v4.Tool -Dlanguage=Java -visitor `pwd`/extra/grammars-v4/verilog/systemverilog/SystemVerilogLexer.g4 `pwd`/extra/grammars-v4/verilog/systemverilog/SystemVerilogParser.g4 `pwd`/extra/grammars-v4/verilog/systemverilog/SystemVerilogPreParser.g4 -o test/testrig/systemverilogCompile these recently generated files:javac -cp "extra/antlr-4.10.1-complete.jar:${CLASSPATH}" test/testrig/verilog/*.javajavac -cp "extra/antlr-4.10.1-complete.jar:${CLASSPATH}" test/testrig/systemverilog/*.javaCreate JAR files:jar cf test/testrig/verilog.jar -C test/testrig/verilog .jar cf test/testrig/systemverilog.jar -C test/testrig/systemverilog .Finally, fortest.vandtest.svfiles:java -Xmx500M -cp "extra/antlr-4.9-complete.jar:test/testrig/verilog.jar:${CLASSPATH}" org.antlr.v4.gui.TestRig Verilog source_text test/testrig/test.v -treejava -Xmx500M -cp "extra/antlr-4.9-complete.jar:test/testrig/systemverilog.jar:${CLASSPATH}" org.antlr.v4.gui.TestRig SystemVerilog source_text test/testrig/test.sv -treeYou can use-guito test it interactivelyAcknowledgementWe would like to appreciate the work from the ANTLR team and the Verilog/SystemVerilog grammar written byMustafa Said Ağca.
antlr-ast
antlr-astThis package allows you to use ANTLR grammars and use the parser output to generate an abstract syntax tree (AST).Installpipinstallantlr-astNote:this package is not python2 compatible.Running Tests# may need:# pip install pytestpy.testUsageUsingantlr-astinvolves four steps:Using ANTLR to define a grammar and to generate the necessary Python files to parse this grammarUsingparseto get the ANTLR runtime output based on the generated grammar filesUsingprocess_treeon the output of the previous stepABaseAstVisitor(customisable by providing a subclass) transforms the ANTLR output to a serializable tree ofBaseNodes, dynamically created based on the rules in the ANTLR grammarABaseNodeTransformersubclass can be used to transform each kind of nodeThe simplify option can be used to shorten paths in the tree by skipping nodes that only have a single descendantUsing the resulting treeThe next sections go into more detail about these steps.To visualize the process of creating and transforrming these parse trees, you can usethis ast-viewer.Using ANTLRNote: For this part of this tutorial you need to know how to parse codeSee the ANTLRgetting started guideif you have never installed ANTLR.TheANTLR Mega Tutorialhas useful Python examples.This page explains how to write ANTLR parser rules.The rule definition below is an example with descriptive names for important ANTLR parser grammar elements:rule_name: rule_element? rule_element_label='literal' #RuleAlternativeLabel | TOKEN+ #RuleAlternativeLabel ;Rule element and alternative labels are optional.+,*,?,|and()have the same meaning as in RegEx.Below, we'll use a simple grammar to explain howantlr-astworks. 
This grammar can be found in/tests/Expr.g4.grammar Expr; // parser expr: left=expr op=('+'|'-') right=expr #BinaryExpr | NOT expr #NotExpr | INT #Integer | '(' expr ')' #SubExpr ; // lexer INT : [0-9]+ ; // match integers NOT : 'not' ; WS : [ \t]+ -> skip ; // toss out whitespaceANTLR can use the grammar above to generate a parser in a number of languages. To generate a Python parser, you can use the following command.antlr4-Dlanguage=Python3-visitor/tests/Expr.g4This will generate a number of files in the/tests/directory, including a Lexer (ExprLexer.py), a parser (ExprParser.py), and a visitor (ExprVisitor.py).You can use and import these directly in Python. For example, from the root of this repo:fromtestsimportExprVisitorTo easily use the generated files, they are put in theantlr_pypackage. The__init__.pyfile exports the generated files under an alias that doesn't include the name of the grammar.Base nodesABaseNodesubclass has fields for all rule elements and labels for all rule element labels in its corresponding grammar rule. Both fields and labels are available as properties onBaseNodeinstances. Labels take precedence over fields if the names would collide.The name of aBaseNodeis the name of the corresponding ANTLR grammar rule, but starting with an uppercase character. If rule alternative labels are specified for an ANTLR rule, these are used instead of the rule name.Transforming nodesTypically, there is no 1-to-1 mapping between ANTLR rules and the concepts of a language: the rule hierarchy is more nested. Transformations can be used to make the initial tree of BaseNodes based on ANTLR rules more similar to an AST.TransformerTheBaseNodeTransformerwill walk over the tree from the root node to the leaf nodes. When visiting a node, it is possible to transform it. 
The tree is updated with transformed node before continuing the walk over the tree.To define a node transform, add a static method to theBaseNodeTransformersubclass passed toprocess_tree.The name of the method you should define follows this pattern:visit_<BaseNode>, where<BaseNode>should be replaced by the name of theBaseNodesubclass to transform.The method should return the transformed node.This is a simple example:classTransformer(BaseNodeTransformer):@staticmethoddefvisit_My_antlr_rule(node):returnnode.name_of_partCustom nodesA custom node can represent a part of the parsed language, a type of node present in an AST.To make it easy to return a custom node, you can defineAliasNodesubclasses. Normally, fields ofAliasNodes are like symlinks to navigate the tree ofBaseNodes.Instances of custom nodes are created from aBaseNode. Fields and labels of the sourceBaseNodeare also available on theAliasNode. If anAliasNodefield name collides with these, it takes precedence when accessing that property.This is what a custom node looks like:classNotExpr(AliasNode):_fields_spec=["expr","op=NOT"]This code defines a custom node,NotExprwith anexprand anopfield.Field specsThe_fields_specclass property is a list that defines the fields the custom node should have.This is how a field spec in this list is used when creating an custom node from aBaseNode(the source node):If a field spec does not exist on the source node, it is set toNoneIf multiple field specs define the same field, the first one that isn'tNoneis usedIf a field spec is just a name, it is copied from the source nodeIf a field spec is an assignment, the left side is the name of the field on theAliasNodeand the right side is the path that should be taken starting in the source node to get the node that should be the value for the field on the custom node. 
Parts of this path are separated using.Connecting to the transformerTo use this custom node, add a method to the transformer:classTransformer(BaseNodeTransformer):# ...# here the BaseNode name is the same as the custom node name# but that isn't required@staticmethoddefvisit_NotExpr(node):returnNotExpr.from_spec(node)Instead of defining methods on the transformer class to use custom nodes, it's possible to do this automatically:Transformer.bind_alias_nodes(alias_nodes)To make this work, theAliasNodeclasses in the list should have a_rulesclass property with a list of theBaseNodenames it should transform.This is the result:classNotExpr(AliasNode):_fields_spec=["expr","op=NOT"]_rules=["NotExpr"]classTransformer(BaseNodeTransformer):passalias_nodes=[NotExpr]Transformer.bind_alias_nodes(alias_nodes)An item in_rulescan also be a tuple. In that case, the first item in the tuple is aBaseNodename and the second item is the name of a class method of the custom node.It's not useful in the example above, but it is equivalent to this:classNotExpr(AliasNode):_fields_spec=["expr","op=NOT"]_rules=[("NotExpr","from_not")]@classmethoddeffrom_not(cls,node):returncls.from_spec(node)classTransformer(BaseNodeTransformer):passalias_nodes=[NotExpr]Transformer.bind_alias_nodes(alias_nodes)Using the final treeIt's easy to use a tree that has a mix ofAliasNodes and dynamicBaseNodes: the whole tree is just a nested Python object.When searching nodes in a tree, the priority of nodes can be taken into account. By default,BaseNodes have priority 3 andAliasNodes have priority 2.When writing code to work with trees, it can be affected by changes in the grammar, the transforms and the custom nodes. The grammar is the most likely to change.To make grammar updates have no impact on your code, don't rely onBaseNodes. 
You can still check whether theAliasNodeparent node of aBaseNodehas the correct fields set and search for nestedAliasNodes in a subtree.If you do rely onBaseNodes, code could break by the addition ofAliasNodes that replace some of these if a field name collides with a field name on a usedBaseNode.
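As an illustration of the _fields_spec resolution rules described earlier (assignment syntax, dotted paths, first non-None spec wins, missing specs yield None), here is a hypothetical pure-Python resolver. Plain dicts stand in for a BaseNode's fields and labels; this helper is not part of antlr-ast's API:

```python
def resolve_field_specs(source, fields_spec):
    """Resolve a _fields_spec-style list against a dict-shaped source node."""
    result = {}
    for spec in fields_spec:
        if "=" in spec:
            # "name=path": field `name` filled from `path` in the source node
            name, path = (s.strip() for s in spec.split("=", 1))
        else:
            # bare name: copied from the source node under the same name
            name, path = spec, spec
        # walk the dotted path; a missing step yields None
        value = source
        for part in path.split("."):
            if not isinstance(value, dict) or part not in value:
                value = None
                break
            value = value[part]
        # if multiple specs define the same field, keep the first non-None
        if result.get(name) is None:
            result[name] = value
    return result
```

For example, resolving ["expr", "op=NOT"] against a node for the NotExpr rule of the Expr grammar would copy the expr child and alias the NOT token to op.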
antlr-denter
Python-like indentation tokens for ANTLR4A mostly-readymade solution to INDENT/DEDENT tokens in ANTLR v4. Just plug in theDenterHelperand you'll be good to go! Seethis blog postfor some of the motivations behind this project.antlr-helper is released underthe MIT license, which basically means you can do whatever you want with it. That said, I'd really appreciate hearing from you if you find this project useful! Maybe star the project?Usage (Java)maven<dependency> <groupId>com.yuvalshavit</groupId> <artifactId>antlr-denter</artifactId> <version>1.1</version> </dependency>Adding INDENT / DEDENT tokens to your lexerDefine INDENT and DEDENT tokens in your grammarIn your@lexer::memberssection, instantiate aDenterHelperwhosepullTokenmethod delegates to your lexer'ssuper.nextToken()Override your lexer'ssuper.nextTokenmethod to useDenterHelper::nextTokeninstead.Modify yourNLtokento also grab any whitespace that follows (in other words, have it end in' '*,'\t'*or similar).DenterHelperis an abstract class, and it also takes three arguments for its constructor: the token types for newline, INDENT and DEDENT. It's probably easiest to instantiate it as an anonymous class. The whole thing should look something like this:tokens { INDENT, DEDENT } @lexer::header { import com.yuvalshavit.antlr4.DenterHelper; } @lexer::members { private final DenterHelper denter = new DenterHelper(NL, MyCoolParser.INDENT, MyCoolParser.DEDENT) { @Override public Token pullToken() { return MyCoolLexer.super.nextToken(); } }; @Override public Token nextToken() { return denter.nextToken(); } } NL: ('\r'? 
'\n' ' '*);There is also a builder available, which is especially useful for Java 8:tokens { INDENT, DEDENT } @lexer::header { import com.yuvalshavit.antlr4.DenterHelper; } @lexer::members { private final DenterHelper denter = DenterHelper.builder() .nl(NL) .indent(MyCoolParser.INDENT) .dedent(MyCoolParser.DEDENT) .pullToken(MyCoolLexer.super::nextToken); @Override public Token nextToken() { return denter.nextToken(); } } NL: ('\r'? '\n' ' '*);Usage (Python3)tokens { INDENT, DEDENT } @lexer::header{ from DenterHelper import DenterHelper from MyCoolParser import MyCoolParser } @lexer::members { class MyCoolDenter(DenterHelper): def __init__(self, lexer, nl_token, indent_token, dedent_token, ignore_eof): super().__init__(nl_token, indent_token, dedent_token, ignore_eof) self.lexer: MyCoolLexer = lexer def pull_token(self): return super(MyCoolLexer, self.lexer).nextToken() denter = None def nextToken(self): if not self.denter: self.denter = self.MyCoolDenter(self, self.NL, MyCoolParser.INDENT, MyCoolParser.DEDENT, ***Should Ignore EOF***) return self.denter.next_token() }Using the tokens in your parserBasically, just use them. One bit worth noting is that when the denter injects DEDENT tokens, it'll prefix any string of them with a singleNL. A singleNLis also inserted before the EOF token if there are no DEDENTs to insert. For instance, given this input:hello world universe dolly... the tokens would be (roughly):"hello" INDENT "world" INDENT "universe" NL DEDENT DEDENT "dolly" NL <eof>This is done so that simple expressions can be terminated by theNLtoken without worrying about surrounding context (an impending dedent or EOF). In this case,universeanddollyrepresent simple expressions, and you can imagine that the grammar would contain something likestatement: expr NL | helloBlock;. 
Easy peasy!

"Half-DEDENTs"

What happens when you dedent to an indentation level that was never established?

someStatement() if foo(): if bar(): fooAndBar() bogusLine()

Notice that bogusLine() doesn't match any indentation level: it's more indented than if foo() but less than its first statement, if bar(). This is a buggy program in Python. If you try to run such a program, you'll get:

IndentationError: unindent does not match any outer indentation level

The DenterHelper processor handles this by inserting two tokens: a DEDENT followed immediately by an INDENT (the total sequence here would actually be two DEDENTs followed by an INDENT, since bogusLine() is twice-dedented from fooAndBar()). The rationale is that the line has dedented to its parent, and then indented. It's consistent with the indentation tokens for something like:

someStatement() bogusLine()

If your indentation scheme is anything like Python's, chances are you want this to be a compilation error. The good news is that it will be, as long as your parser doesn't allow "spontaneous" indents. That is, if the example just before this paragraph fails, then so will the half-dedent example above. In both cases, the parser rules will bork on an unexpected INDENT token.

Repo layout

Java/core: The real thing. This is what you're interested in. Maven artifact antlr-denter.
Java/examples: Contains a real-life example of a language that uses DenterHelper, so you can see a full solution, including the pom, how to set up the parser (which is nothing extra relative to usual antlr stuff) and how to define a language that uses these INDENT/DEDENT tokens. The language itself is pretty basic, but it should get the point across. Maven artifact antlr-denter-example-examples.
Python3: The python3 implementation

The maven run is as simple as mvn install (or your favorite goal).

Comments? Suggestions? Bugs?

Don't be shy about opening an issue!
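The token-injection rules described above (a single NL prefixing any DEDENT run, DEDENT followed by INDENT for a half-dedent, and a trailing NL before EOF) can be sketched independently of ANTLR. The toy function below is not part of antlr-denter; it simply replays the same stack discipline over a list of per-line indentation widths:

```python
def line_tokens(widths):
    """Map per-line indentation widths to the INDENT/DEDENT/NL stream
    antlr-denter describes. widths[0] is the first line's indentation."""
    stack = [0]                   # established indentation levels
    tokens = []
    for w in widths[1:]:          # one decision per newline
        if w > stack[-1]:
            stack.append(w)
            tokens.append("INDENT")
        else:
            tokens.append("NL")   # a single NL prefixes any DEDENT run
            while stack[-1] > w:
                stack.pop()
                tokens.append("DEDENT")
            if stack[-1] != w:    # "half-dedent": level was never established
                stack.append(w)
                tokens.append("INDENT")
    tokens.append("NL")           # NL before EOF
    tokens += ["DEDENT"] * (len(stack) - 1)
    tokens.append("EOF")
    return tokens
```

With widths [0, 2, 4, 0] (the hello/world/universe/dolly example) this yields INDENT INDENT NL DEDENT DEDENT NL EOF, matching the stream shown above; a half-dedent width produces the DEDENT+INDENT pair.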
antlr-plsql
antlr-plsql

Development

ANTLR requires Java, so we suggest you use Docker when building grammars. The Makefile contains directives to clean, build, test and deploy the ANTLR grammar. It does not run Docker itself, so run make inside Docker.

Build the grammar

# Build the docker container
docker build -t antlr_plsql .
# Run the container to build the python grammar
# Write parser files to local file system through volume mounting
docker run -it -v ${PWD}:/usr/src/app antlr_plsql make build

Set up the Python module

Now that the Python parsing files are available, you can install them with pip:

pip install -r requirements.txt
pip install -e .

And parse SQL code in Python:

from antlr_plsql import ast
ast.parse("SELECT a from b")

Using the AST viewer

If you're actively developing on the ANTLR grammar or the tree shaping, it's a good idea to set up the AST viewer locally so you can immediately see the impact of your changes in a visual way.

Clone the ast-viewer repo and build the Docker image according to the instructions.

Spin up a docker container that volume mounts the Python package, symlink-installs the package and runs the server on port 3000:

docker run -it \
  -u root \
  -v ~/workspace/antlr-plsql:/app/app/antlr-plsql \
  -p 3000:3000 \
  ast-viewer \
  /bin/bash -c "echo 'Install development requirements in development:' \
    && pip install --no-deps -e app/antlr-plsql \
    && python3 run.py"

When simultaneously developing other packages, volume mount and install those too:

docker run -it \
  -u root \
  -v ~/workspace/antlr-ast:/app/app/antlr-ast \
  -v ~/workspace/antlr-plsql:/app/app/antlr-plsql \
  -v ~/workspace/antlr-tsql:/app/app/antlr-tsql \
  -p 3000:3000 \
  ast-viewer \
  /bin/bash -c "echo 'Install development requirements in development:' \
    && pip install --no-deps -e app/antlr-ast \
    && pip install --no-deps -e app/antlr-plsql \
    && pip install --no-deps -e app/antlr-tsql \
    && python3 run.py"

If you update the tree shaping logic in this repo, the app will auto-update. If you change the grammar, you will have to first rebuild the grammar (with the antlr_plsql docker image) and restart the ast-viewer container.

Run tests

# Similar to building the grammar, but running tests
# and not saving the generated files
docker build -t antlr_plsql .
docker run -t antlr_plsql make build test

Or run the tests locally: first build the grammar, then run:

pytest

Travis deployment

- Builds the Docker image.
- Runs the Docker image to build the grammar and run the unit tests.
- Deploys the resulting python files to PyPI when a new release is made, so they can be installed easily.
antlr_python_runtime
This is the runtime package for ANTLR3, which is required to use parsers generated by ANTLR3.
antlr-tsql
antlr-tsql

Development

ANTLR requires Java, so we suggest you use Docker when building grammars. The Makefile contains directives to clean, build, test and deploy the ANTLR grammar. It does not run Docker itself, so run make inside Docker.

Build the grammar

# Build the docker container
docker build -t antlr_tsql .
# Run the container to build the python grammar
# Write parser files to local file system through volume mounting
docker run -it -v ${PWD}:/usr/src/app antlr_tsql make build

Set up the Python module

Now that the Python parsing files are available, you can install them with pip:

pip install -r requirements.txt
pip install -e .

And parse SQL code in Python:

from antlr_tsql import ast
ast.parse("SELECT a from b")

Using the AST viewer

If you're actively developing on the ANTLR grammar or the tree shaping, it's a good idea to set up the AST viewer locally so you can immediately see the impact of your changes in a visual way.

Clone the ast-viewer repo and build the Docker image according to the instructions.

Spin up a docker container that volume mounts the Python package, symlink-installs the package and runs the server on port 3000:

docker run -it \
  -u root \
  -v ~/workspace/antlr-tsql:/app/app/antlr-tsql \
  -p 3000:3000 \
  ast-viewer \
  /bin/bash -c "echo 'Install development requirements in development:' \
    && pip install --no-deps -e app/antlr-tsql \
    && python3 run.py"

When simultaneously developing other packages, volume mount and install those too:

docker run -it \
  -u root \
  -v ~/workspace/antlr-ast:/app/app/antlr-ast \
  -v ~/workspace/antlr-plsql:/app/app/antlr-plsql \
  -v ~/workspace/antlr-tsql:/app/app/antlr-tsql \
  -p 3000:3000 \
  ast-viewer \
  /bin/bash -c "echo 'Install development requirements in development:' \
    && pip install --no-deps -e app/antlr-ast \
    && pip install --no-deps -e app/antlr-plsql \
    && pip install --no-deps -e app/antlr-tsql \
    && python3 run.py"

If you update the tree shaping logic in this repo, the app will auto-update. If you change the grammar, you will have to first rebuild the grammar (with the antlr_tsql docker image) and restart the ast-viewer container.

Run tests

# Similar to building the grammar, but running tests
# and not saving the generated files
docker build -t antlr_tsql .
docker run -t antlr_tsql make build test

Or run the tests locally: first build the grammar, then run:

pytest

Travis deployment

- Builds the Docker image.
- Runs the Docker image to build the grammar and run the unit tests.
- Deploys the resulting python files to PyPI when a new release is made, so they can be installed easily.
antm
ANTMANTM: An Aligned Neural Topic Model for Exploring Evolving TopicsDynamic topic models are effective methods that primarily focus on studying the evolution of topics present in a collection of documents. These models are widely used for understanding trends, exploring public opinion in social networks, or tracking research progress and discoveries in scientific archives. Since topics are defined as clusters of semantically similar documents, it is necessary to observe the changes in the content or themes of these clusters in order to understand how topics evolve as new knowledge is discovered over time. Here, we introduce a dynamic neural topic model called ANTM, which uses document embeddings (data2vec) to compute clusters of semantically similar documents at different periods, and aligns document clusters to represent their evolution. This alignment procedure preserves the temporal similarity of document clusters over time and captures the semantic change of words characterized by their context within different periods. Experiments on four different datasets show that ANTM outperforms probabilistic dynamic topic models (e.g. DTM, DETM) and significantly improves topic coherence and diversity over other existing dynamic neural topic models (e.g. 
BERTopic).InstallationInstallation can be done using:pipinstallantmQuick StartAs implemented in the notebook, we can quickly start extracting evolving topics from DBLP dataset containing computer science articles.To Fit and Save a ModelfromantmimportANTMimportpandasaspd# load datadf=pd.read_parquet("./data/dblpFullSchema_2000_2020_extract_big_data_2K.parquet")df=df[["abstract","year"]].rename(columns={"abstract":"content","year":"time"})df=df.dropna().sort_values("time").reset_index(drop=True).reset_index()# choosing the windows size and overlapping length for time frameswindow_size=6overlap=2#initialize modelmodel=ANTM(df,overlap,window_size,umap_n_neighbors=10,partioned_clusttering_size=5,mode="data2vec",num_words=10,path="./saved_data")#learn the model and save ittopics_per_period=model.fit(save=True)#output is a list of timeframes including all the topics associated with that periodTo Load a ModelfromantmimportANTMimportpandasaspd# load datadf=pd.read_parquet("./data/dblpFullSchema_2000_2020_extract_big_data_2K.parquet")df=df[["abstract","year"]].rename(columns={"abstract":"content","year":"time"})df=df.dropna().sort_values("time").reset_index(drop=True).reset_index()# choosing the windows size and overlapping length for time frameswindow_size=6overlap=2#initialize modelmodel=ANTM(df,overlap,window_size,mode="data2vec",num_words=10,path="./saved_data")topics_per_period=model.load()Plug-and-Play Functions#find all the evolving topicsmodel.save_evolution_topics_plots(display=False)#plots a random evolving topic with 2-dimensional document representationsmodel.random_evolution_topic()#plots partioned clusters for each time framemodel.plot_clusters_over_time()#plots all the evolving topicsmodel.plot_evolving_topics()Topic Quality Metrics#returns pairwise jaccard diversity for each periodmodel.get_periodwise_pairwise_jaccard_diversity()#returns proportion unique words diversity for each periodmodel.get_periodwise_puw_diversity()#returns topic coherence for each 
periodmodel.get_periodwise_topic_coherence(model="c_v")DatasetsArxiv articlesDBLP articlesElon Musk's TweetsNew York Times NewsExperimentsYou can use the notebooks provided in "./experiments" in order to run ANTM on other sequential datasets.CitationTo citeANTM, please use the following bibtex reference:@misc{rahimi2023antm, title={ANTM: An Aligned Neural Topic Model for Exploring Evolving Topics}, author={Hamed Rahimi and Hubert Naacke and Camelia Constantin and Bernd Amann}, year={2023}, eprint={2302.01501}, archivePrefix={arXiv}, primaryClass={cs.IR} }
ant-mess-client
No description available on PyPI.
ant-mess-server
No description available on PyPI.
antmiral
antmiralcowptain but awsgi
ant-nest
Overview

AntNest is a simple, clear and fast web crawler framework built on Python 3.6+, powered by asyncio. It has only 600+ lines of core code now (thanks to powerful libraries like aiohttp and lxml).

Features

- Useful http client out of the box
- Easy pipelines, in async or not
- Easy item extractor: define data details (by xpath, jpath or regex) and extract from html, json or strings.
- Easy async work flow, built-in async task pool

Install

pip install ant_nest

Usage

Create one demo project:

>>> ant_nest -c examples

Then we get a project:

drwxr-xr-x 5 bruce staff 160 Jun 30 18:24 ants -rw-r--r-- 1 bruce staff 208 Jun 26 22:59 settings.py

Presume we want to get hot repos from github; let's create "examples/ants/example2.py":

from yarl import URL from ant_nest.ant import Ant from ant_nest.pipelines import ItemFieldReplacePipeline from ant_nest.items import Extractor class GithubAnt(Ant): """Crawl trending repositories from github""" item_pipelines = [ ItemFieldReplacePipeline( ("meta_content", "star", "fork"), excess_chars=("\r", "\n", "\t", " ") ) ] def __init__(self): super().__init__() self.item_extractor = Extractor(dict) self.item_extractor.add_extractor( "title", lambda x: x.html_element.xpath("//h1/strong/a/text()")[0] ) self.item_extractor.add_extractor( "author", lambda x: x.html_element.xpath("//h1/span/a/text()")[0] ) self.item_extractor.add_extractor( "meta_content", lambda x: "".join( x.html_element.xpath( '//div[@class="repository-content "]/div[2]//text()' ) ), ) self.item_extractor.add_extractor( "star", lambda x: x.html_element.xpath( '//a[@class="social-count js-social-count"]/text()' )[0], ) self.item_extractor.add_extractor( "fork", lambda x: x.html_element.xpath('//a[@class="social-count"]/text()')[0], ) self.item_extractor.add_extractor("origin_url", lambda x: str(x.url)) async def crawl_repo(self, url): """Crawl information from one repo""" response = await self.request(url) # extract item from response item = self.item_extractor.extract(response) item["origin_url"] = 
response.url await self.collect(item) # let item go through pipelines(be cleaned) self.logger.info("*" * 70 + "I got one hot repo!\n" + str(item)) async def run(self): """App entrance, our play ground""" response = await self.request("https://github.com/explore") for url in response.html_element.xpath( "/html/body/div[4]/main/div[2]/div/div[2]/div[1]/article/div/div[1]/h1/a[2]/" "@href" ): # crawl many repos with our coroutines pool self.schedule_task(self.crawl_repo(response.url.join(URL(url)))) self.logger.info("Waiting...")Then we can list all ants we defined (in “examples”)>>> $ant_nest -l ants.example2.GithubAntRun it! (without debug log):>>> ant_nest -a ants.example2.GithubAnt INFO:GithubAnt:Opening INFO:GithubAnt:Waiting... INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'app-ideas', 'author': 'florinpop17', 'meta_content': 'A Collection of application ideas which can be used to improve your coding skills.', 'star': '11.7k', 'fork': '500', 'origin_url': URL('https://github.com/florinpop17/app-ideas')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'Carbon', 'author': 'briannesbitt', 'meta_content': 'A simple PHP API extension for DateTime.https://carbon.nesbot.com/', 'star': '14k', 'fork': '249', 'origin_url': URL('https://github.com/briannesbitt/Carbon')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'org-roam', 'author': 'jethrokuan', 'meta_content': 'Rudimentary Roam replica with Org-modehttps://org-roam.readthedocs.io/en/la…', 'star': '261', 'fork': '27', 'origin_url': URL('https://github.com/jethrokuan/org-roam')} INFO:GithubAnt:**********************************************************************I got one hot repo! 
{'title': 'joplin', 'author': 'laurent22', 'meta_content': 'Joplin - an open source note taking and to-do application with synchronization capabilities for Windows, macOS, Linux, Android and iOS. Forum: https://discourse.joplinapp.org/https://joplinapp.org', 'star': '13k', 'fork': '335', 'origin_url': URL('https://github.com/laurent22/joplin')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'snoop', 'author': 'snooppr', 'meta_content': 'Snoop — инструмент разведки на основе открытых данных', 'star': '281', 'fork': '9', 'origin_url': URL('https://github.com/snooppr/snoop')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': '1on1-questions', 'author': 'VGraupera', 'meta_content': 'Mega list of 1 on 1 meeting questions compiled from a variety to sources', 'star': '4k', 'fork': '93', 'origin_url': URL('https://github.com/VGraupera/1on1-questions')} INFO:GithubAnt:Get 8 Request in total with 8/60s rate INFO:GithubAnt:Get 7 Response in total with 7/60s rate INFO:GithubAnt:Get 6 dict in total with 6/60s rate INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'python-small-examples', 'author': 'jackzhenguo', 'meta_content': 'Python有趣的小例子一网打尽。Python基础、Python坑点、Python字符串和正则、Python绘图、Python日期和文件、Web开发、数据科学、机器学习、深度2.4k', 'fork': '102', 'origin_url': URL('https://github.com/jackzhenguo/python-small-examples')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'system-design-primer', 'author': 'donnemartin', 'meta_content': 'Learn how to design large-scale systems. Prep for the system design interview. 
Includes Anki flashcards.', 'star': '83.2k', 'fork': '4.4k', 'origin_url': URL('https://github.com/donnemartin/system-design-primer')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'awesome-scalability', 'author': 'binhnguyennus', 'meta_content': 'The Patterns of Scalable, Reliable, and Performant Large-Scale Systemshttp://awesome-scalability.com/', 'star': '24.5k', 'fork': '1.4k', 'origin_url': URL('https://github.com/binhnguyennus/awesome-scalability')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'gdb-frontend', 'author': 'rohanrhu', 'meta_content': '☕ GDBFrontend is an easy, flexible and extensionable gui debugger.https://oguzhaneroglu.com/projects/gd…', 'star': '716', 'fork': '14', 'origin_url': URL('https://github.com/rohanrhu/gdb-frontend')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'Complete-Python-3-Bootcamp', 'author': 'Pierian-Data', 'meta_content': 'Course Files for Complete Python 3 Bootcamp Course on Udemy', 'star': '8.1k', 'fork': '1.8k', 'origin_url': URL('https://github.com/Pierian-Data/Complete-Python-3-Bootcamp')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'leon', 'author': 'leon-ai', 'meta_content': '\U0001f9e0 Leon is your open-source personal assistant.https://getleon.ai', 'star': '6.3k', 'fork': '147', 'origin_url': URL('https://github.com/leon-ai/leon')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'esbuild', 'author': 'evanw', 'meta_content': 'An extremely fast JavaScript bundler and minifier', 'star': '2.3k', 'fork': '38', 'origin_url': URL('https://github.com/evanw/esbuild')} INFO:GithubAnt:**********************************************************************I got one hot repo! 
{'title': 'wearable-microphone-jamming', 'author': 'y-x-c', 'meta_content': 'Repository for our paper Wearable Microphone Jamminghttp://sandlab.cs.uchicago.edu/jammer/', 'star': '138', 'fork': '10', 'origin_url': URL('https://github.com/y-x-c/wearable-microphone-jamming')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'efcore', 'author': 'dotnet', 'meta_content': 'EF Core is a modern object-database mapper for .NET. It supports LINQ queries, change tracking, updates, and schema migrations.https://docs.microsoft.com/ef/core/', 'star': '8.7k', 'fork': '965', 'origin_url': URL('https://github.com/dotnet/efcore')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'playwright', 'author': 'microsoft', 'meta_content': 'Node library to automate Chromium, Firefox and WebKit with a single APIhttps://www.npmjs.com/package/playwright', 'star': '9k', 'fork': '92', 'origin_url': URL('https://github.com/microsoft/playwright')} INFO:GithubAnt:Get 18 Request in total with 10/60s rate INFO:GithubAnt:Get 17 Response in total with 10/60s rate INFO:GithubAnt:Get 16 dict in total with 10/60s rate INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'degoogle', 'author': 'tycrek', 'meta_content': 'A huge list of alternatives to Google products. Privacy tips, tricks, and links.https://degoogle.jmoore.dev', 'star': '2k', 'fork': '50', 'origin_url': URL('https://github.com/tycrek/degoogle')} INFO:GithubAnt:**********************************************************************I got one hot repo! 
{'title': 'sherlock', 'author': 'sherlock-project', 'meta_content': '🔎 Hunt down social media accounts by username across social networkshttp://sherlock-project.github.io', 'star': '10.4k', 'fork': '207', 'origin_url': URL('https://github.com/sherlock-project/sherlock')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'the-art-of-command-line', 'author': 'jlevy', 'meta_content': 'Master the command line, in one page', 'star': '68.9k', 'fork': '2.2k', 'origin_url': URL('https://github.com/jlevy/the-art-of-command-line')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'freespeech', 'author': 'Merkie', 'meta_content': 'A free program designed to help the non-verbal.', 'star': '168', 'fork': '20', 'origin_url': URL('https://github.com/Merkie/freespeech')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'awesome-pentest', 'author': 'enaqx', 'meta_content': 'A collection of awesome penetration testing resources, tools and other shiny things', 'star': '11.4k', 'fork': '1k', 'origin_url': URL('https://github.com/enaqx/awesome-pentest')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'trax', 'author': 'google', 'meta_content': 'Trax — your path to advanced deep learning', 'star': '2.7k', 'fork': '90', 'origin_url': URL('https://github.com/google/trax')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': 'introtodeeplearning', 'author': 'aamini', 'meta_content': 'Lab Materials for MIT 6.S191: Introduction to Deep Learning', 'star': '1.6k', 'fork': '116', 'origin_url': URL('https://github.com/aamini/introtodeeplearning')} INFO:GithubAnt:**********************************************************************I got one hot repo! 
{'title': 'CleanArchitecture', 'author': 'ardalis', 'meta_content': 'A starting point for Clean Architecture with ASP.NET Core', 'star': '3.8k', 'fork': '300', 'origin_url': URL('https://github.com/ardalis/CleanArchitecture')} INFO:GithubAnt:**********************************************************************I got one hot repo! {'title': '3y', 'author': 'ZhongFuCheng3y', 'meta_content': '📓从Java基础、JavaWeb基础到常用的框架再到面试题都有完整的教程,几乎涵盖了Java后端必备的知识点', 'star': '5.1k', 'fork': '285', 'origin_url': URL('https://github.com/ZhongFuCheng3y/3y')} INFO:GithubAnt:Closed INFO:GithubAnt:Get 26 Request in total INFO:GithubAnt:Get 26 Response in total INFO:GithubAnt:Get 25 dict in total INFO:GithubAnt:Run GithubAnt in 180.234251 seconds

About Item

We use dict to store one item in the examples; actually it supports many forms: dict, normal class, attrs class, data class and ORM class. It depends on your needs and choice.

Examples

You can find some examples in "./examples"

Defect

Complex exception handling

One coroutine's exception will break the await chain, especially in a loop, unless we handle it by hand. eg:

for cor in self.as_completed((self.crawl(url) for url in self.urls)): try: await cor except Exception: # may raise many exceptions in an await chain pass

but we can use "self.as_completed_with_async" now, eg:

async for result in self.as_completed_with_async( (self.crawl(url) for url in self.urls), raise_exception=False): # exceptions in "self.crawl(url)" will be passed and logged automatically self.handle(result)

High memory usage

It's a "feature" that asyncio eats a lot of memory, especially with highly concurrent IO. We can simply set a concurrency limit ("connection_limit" or "concurrent_limit"), but it's complex to find the balance between performance and the limit.

Coding style

Follow "Flake8", format by "Black", type-check by "MyPy"; see the Makefile for more detail.

Todo

[*] Log system [*] Nest item extractor [ ] Docs
ant-net-monitor
Graduation Project

This is Ant's graduation project: web-based Linux server status monitoring.

Why this project?

- I wanted to build a web project
- Monitoring tools that run on servers are too bloated
- I wanted to settle down and build something

Weighing all of this, I proposed the topic myself. This is my last Ripple, JOJO!

Download

Builds are published on PyPI and can be installed directly:

$ pip install ant-net-monitor

If you want to try the newest features, you can specify a pre-release version.

Deployment

Following the Flask Docs, you can deploy with Gunicorn:

$ gunicorn -b 127.0.0.1:5000 "ant_net_monitor:create_app(ENABLE_SNMP)"

Set ENABLE_SNMP to True or False to control how data is collected. Have fun!

ps: I recommend running it inside a virtual environment first, because I haven't decided where to put the database yet.
anto
UNKNOWN
antodo
antodoanother todo CLI app with some useful featuresRequirementsPython 3.7InstallationpipinstallantodoUsageadd todoantodoadddosomethingadd urgent todoantodoadd-udosomethingshow current todosantodoshowremove todo by index based on todos orderantodoremove12set todo as urgent by index based on todos orderantodourgent3
antoine-dragonfly
# Dragonfly[![Build Status](https://travis-ci.org/ladybug-tools/dragonfly.svg?branch=master)](https://travis-ci.org/ladybug-tools/dragonfly)Dragonfly is a utility API to work with UWG.
antolib
UNKNOWN
anton
anton

anton is a Python library for auto-instantiating YAML definitions into user-defined dataclasses. Avoid boilerplate and get runtime type checking before the objects are created.

Usage

Given a yaml definition in a file index.yaml as follows:

# index.yaml
integer: 23
string: "Hello world"
point:
  x: 0
  y: 0
line_segment:
  first_point:
    x: 10
    y: 10
  second_point:
    x: 10
    y: 10

yaml_conf lets you avoid writing the boilerplate code for loading the yaml file and parsing the python dictionary to instantiate the dataclass object, as follows:

>>> from dataclasses import dataclass
>>> from anton import yaml_conf
>>>
>>> @dataclass
... class Point:
...     x: int
...     y: int
...
>>> @dataclass
... class LineSegment:
...     first_point: Point
...     second_point: Point
...
>>> @yaml_conf(conf_path="index.yaml")
... class ExampleClass:
...     integer: int
...     string: str
...     point: Point
...     line_segment: LineSegment
...
>>> example_obj = ExampleClass()
>>> example_obj
ExampleClass(integer=23, string='Hello world', point=Point(x=0, y=0), line_segment=LineSegment(first_point=Point(x=10, y=10), second_point=Point(x=10, y=10)))

Roadmap

Currently the project only supports Python 3.8. Runtime type checking is supported for the following types:

- int
- float
- str
- bool
- typing.List
- typing.Dict
- typing.Union
- Any user-defined dataclass

The ultimate aim is to support all Python versions 3.8+ and all possible type combinations.

Contributing

Pull requests are welcome! Please make sure to update tests as appropriate. For major changes, please open an issue first to discuss what you would like to change. Please do go through the Contributing Guide if some help is required.

Note: anton is currently in active development. Please open an issue if you find anything that isn't working as expected.
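The pattern anton describes — walking a parsed mapping and recursively instantiating nested dataclasses with runtime type checks — can be sketched with the standard library alone. This is not anton's implementation; from_dict is a hypothetical helper, and a plain dict stands in for the parsed YAML:

```python
from dataclasses import dataclass, fields, is_dataclass

def from_dict(cls, raw):
    """Recursively build a dataclass instance from a mapping (hypothetical
    helper, not anton's API), type-checking primitive fields on the way."""
    kwargs = {}
    for f in fields(cls):
        value = raw[f.name]
        if is_dataclass(f.type):
            value = from_dict(f.type, value)  # nested dataclass field
        elif f.type in (int, float, str, bool) and not isinstance(value, f.type):
            raise TypeError(
                f"{cls.__name__}.{f.name}: expected {f.type.__name__}, "
                f"got {type(value).__name__}"
            )
        kwargs[f.name] = value
    return cls(**kwargs)

@dataclass
class Point:
    x: int
    y: int

@dataclass
class LineSegment:
    first_point: Point
    second_point: Point

segment = from_dict(
    LineSegment,
    {"first_point": {"x": 10, "y": 10}, "second_point": {"x": 10, "y": 10}},
)
```

Feeding a mistyped value (e.g. a string where an int is declared) raises TypeError before the object is constructed, which is the "runtime type checking before the objects are created" behaviour the README advertises.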
anton-axenov-andan-lab-2
anton axenov andan lab 2 package
anton-axenov-andan-lab-2-1
anton axenov andan lab 2 package
anton-cli
Collection Framework

anton_cli is a command-line program that takes a string and returns the number of unique characters in the string.

Install

pip install anton-cli

How to Use

from anton_cli import split_letters, read_from_file
print(split_letters('hello')) # 3

You can also get the number of unique characters of a string read from a file:

print(read_from_file('words.txt')) # SOME RESULT

Launch

You can use this program from the terminal. If you want to pass a string, use --string [YOUR STRING]:

python -m anton_cli.cli --string [YOUR STRING]

or if you want to pass a file, use --file [YOUR FILE PATH]:

python -m anton_cli.cli --file [YOUR PATH TO FILE]

See the source at Link

© 2022 Anton Skazko
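Judging from the example output (split_letters('hello') → 3, i.e. h, e, o), "unique characters" here means characters that occur exactly once. That behaviour can be sketched in a few lines; unique_count below is a stand-in name, not anton-cli's actual function:

```python
from collections import Counter

def unique_count(text: str) -> int:
    """Count the characters that occur exactly once in the string."""
    return sum(1 for occurrences in Counter(text).values() if occurrences == 1)
```

For 'hello', the doubled 'l' is excluded, leaving three characters, matching the README's example.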
antonio
Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content.
antonio2202-sales
Tickets_bdi
antonio-example-app-kc
Python example app
antoniohellintestingexample
Example PackageThis is a simple example package.
antonpdf
This is the homepage of our project.
anton-test
No description available on PyPI.
antools
Antools - Anton's Tools

Overview

Private library which is also free for public use. Its purpose is to be a useful code keeper.

The Anton's Tools package contains the following modules:

- logging: Customizable logger (tested on Windows)
- multiprocessing: MultiProcess class for logging and handling workflow when using multiprocessing; MultiProcessHandler as an easy multiprocessing framework
- threading: ThreadProcess class for logging and handling workflow when using threading; ThreadHandler as an easy threading framework
- validation: VarValidator for validating data types and much more
- helpers: ApproachComparator for testing the efficiency of running a function in main/multiprocess/threading

Getting Started

Dependencies

The antools package utilizes the following libraries:

* attrs==21.4.0
* numpy==1.22.1
* pandas==1.4.0
* python-dateutil==2.8.2
* pytz==2021.3
* six==1.16.0
* sqlitedict==1.7.0

Package Installation

Installable using pip. Execute:

pip install antools

Authors

Antonín Drozda - [email protected]
Github - https://github.com/antonin-drozda/antools
Pypi - https://pypi.org/project/antools/

Change Log

2022.2.0 (06/02/2022) - Improvements on Logger. Added MultiprocessHandler and ThreadHandler. Planned for the next release: TerminalApp (framework for easy apps in terminal)

2022.1.0 (31/01/2022) - Complete rework of previous platform. Start package includes Logger, MultiProcess, ThreadProcess and ApproachComparator with examples. Planned for the next release: NONE
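The idea behind ApproachComparator — running the same workload under different execution models and comparing wall-clock time — can be sketched with concurrent.futures. This is an illustration of the concept, not antools' API; compare_approaches is a made-up name and the threaded-vs-serial pairing is an assumed subset of what the real class covers:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def compare_approaches(func, args_list, workers=4):
    """Time func over args_list serially and via a thread pool;
    return both wall-clock durations for comparison."""
    start = time.perf_counter()
    serial_results = [func(a) for a in args_list]
    serial_time = time.perf_counter() - start

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        threaded_results = list(pool.map(func, args_list))
    threaded_time = time.perf_counter() - start

    # both strategies must compute the same answer
    assert serial_results == threaded_results
    return {"serial": serial_time, "threaded": threaded_time}
```

For CPU-bound functions the thread pool will typically not be faster (the GIL), which is exactly the kind of finding such a comparator is meant to surface before you commit to a concurrency model.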
antoonio
UNKNOWN
antopy
UNKNOWN
antp
antpModule & app for processing Jinja template files with json data and environment variables.Installationpython3-mpipinstallantpUsagepython3-mantp-t|--template<templatefile>[-o|--output<outputfile>][-d|--data<json_datafile1,json_datafile2,...,json_datafileN>]Templatefileis the jinja-template. Parameter can be omitted or parameter value set to dash to read template from stdin.Outputfileis the filename where the output will be written. Parameter can be omitted or parameter value can be set to dash to write the output to stdout.Json_datafileis the json-file with the data to be accessed in the templates. The data-parameter can have multiple datafiles, use comma to separate filenames.Data from datafiles will be in thedatavariable. Variables from environment will be in theenvvariable.Examplesdata1.json{"colors":{"black":"#000000","white":"#FFFFFF","red":"#FF0000","green":"#00FF00","blue":"#0000FF"},"food":["vegetables","meat","fish"]}data2.json{"capitals":{"Finland":"Helsinki","Sweden":"Stockholm","Denmark":"Copenhagen"}}template.jinjaThe capital of Finland is{{data["capitals"]["Finland"]}}.Typical foods:{{data["food"]|join(", ")}}.Black is{{data["colors"]["black"]}}.Environment variable FOO is{{env["FOO"]}}.Run command:FOO=barpython3-mantp-ttemplate.jinja-ddata1.json,data2.json-oout.txtout.txtThe capital of Finland is Helsinki. Typical foods: vegetables, meat, fish. Black is #000000. Environment variable FOO is bar.Docker containerContainerized version of antp is also available fromanttin/antp.Container usageSample template:Our message to you is: {{ env["Message"] }}Sample run:dockerrun--rm-it-v/local/path/template:/antp/template-v/local/path/output_directory:/antp/output-eMESSAGE="Hello world"antp:latestSample output:Our message to you is: Hello worldYou can useANTP_TEMPLATE,ANTP_OUTPUTandANTP_DATAenvironment variables to override the default paths. 
Datafiles are not used by default, so this env variable is needed if you want to use data files.Use casesThis container can be useful as a k8s init-container. Input files can be ConfigMaps. Values can be fed through ConfigMaps or env variables. Output can be set to an emptyDir volume that is shared between the containers.
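The data flow antp describes — merge the JSON data files into data, expose the environment as env, then render the template — can be sketched without dependencies. Here Python's built-in string.Template stands in for Jinja, and render is a hypothetical helper, not antp's API:

```python
import json
import os
from string import Template  # stand-in for Jinja in this sketch

def render(template_text, data_files, env=None):
    """Merge JSON data files, add environment variables, render the template.

    antp exposes `data` and `env` inside Jinja; this sketch flattens them
    into `$data_<key>` and `$env_<key>` placeholders instead.
    """
    data = {}
    for path in data_files:
        with open(path) as handle:
            data.update(json.load(handle))  # later files win on key clashes
    context = {f"data_{key}": value for key, value in data.items()}
    context.update(
        {f"env_{key}": value for key, value in (env or os.environ).items()}
    )
    return Template(template_text).safe_substitute(context)
```

The key point mirrored from antp is the merge order: all data files are folded into one mapping before rendering, so a key defined in a later file overrides the same key from an earlier one.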
antplus
🐧 Welcome to antplusHuman friendly interface to ANT+.Requirements:python >= 3.9serialioInstallationFrom within your favorite python environment:$pipinstallantplusTo run the examples you'll need:$pipinstallantplus[examples]To develop, run tests, build package, lint, etc you'll need:$pipinstallantplus[dev]To run docs you'll need:$pipinstallantplus[docs]
antpm
The Ant path matcher, Python version.
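Ant-style path patterns use ? for a single character, * for any run of characters within one path segment, and ** for any number of segments. A simplified translation to a regex sketches the idea — this is not antpm's actual code, and a full matcher additionally lets ** match zero segments:

```python
import re

def ant_to_regex(pattern: str):
    """Translate an Ant-style pattern into an anchored compiled regex."""
    parts = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            parts.append(".*")        # '**' may cross path separators
            i += 2
        elif pattern[i] == "*":
            parts.append("[^/]*")     # '*' stays within one segment
            i += 1
        elif pattern[i] == "?":
            parts.append("[^/]")      # '?' is one non-separator character
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(parts) + "$")

def ant_match(pattern: str, path: str) -> bool:
    return ant_to_regex(pattern).match(path) is not None
```

So "com/**/*.java" matches "com/a/b/C.java", while "*.txt" does not match "docs/readme.txt" because a bare * cannot cross the "/" separator.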
antpy
AntPyMy Python extensionsInstallpip install antpyMethodscollections.group_bycollections.merge_into_dictionary
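The two listed helpers are easy to picture from their names; the signatures below are guesses for illustration, not antpy's documented API:

```python
from collections import defaultdict

def group_by(items, key):
    """Group items into a dict of lists keyed by key(item)."""
    grouped = defaultdict(list)
    for item in items:
        grouped[key(item)].append(item)
    return dict(grouped)

def merge_into_dictionary(dicts):
    """Shallow-merge a sequence of dicts; later keys win on clashes."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged
```

Example: grouping integers by parity yields one list per remainder, and merging two dicts keeps the later value for a repeated key.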
antpycv
# antpycv

antpycv is a Computer Vision package that makes it easy to write Python code that connects to an Arduino to display images and carry out work related to AI (Artificial Intelligence) features. antpycv is built using the OpenCV and Mediapipe libraries.

## You can install the antpycv package into your project's virtual environment (venv) with the following command:

pip install antpycv

As a recommendation, you should also upgrade via the following command:

pip install --upgrade pip setuptools wheel
antpy-ios-device
py-ios-device

A Python based Apple Instruments protocol; you can get CPU, memory and other metrics from real iOS devices.

Link: https://testerhome.com/topics/27159 (Chinese documentation)

Java link: https://github.com/YueChen-C/java-ios-device

pip: pip install py-ios-device

Python version: 3.7+

Instruments:

- Get system memory and CPU data
- Get process memory and CPU data
- Get FPS data
- Get network data
- Set the device network status, e.g. 2G, 3G, 100% loss
- Make the device behave as though under a high thermal state
- Monitor app start, exit and background events
- Launch and kill apps
- Run xctest, e.g. WebDriverAgent
- Dump core profile stack snapshots
- Analyze the core profile data stream
- Get Metal GPU counters

Other:

- Profiles & device management, e.g. install and uninstall the Fiddler certificate
- Get syslog
- Get crash logs
- Get the captured packet traffic and forward it to Wireshark
- App install and uninstall
- Get device battery

Usage:

pip:

> pip install py-ios-device
> pyidevice --help
> pyidevice instruments --help

Get device list

$ pyidevice devices

Get device info

$ pyidevice --udid=xxxxxx deviceinfo

Get system performance data

$ pyidevice instruments monitor
Memory >> {'App Memory': '699.69 MiB', 'Cached Files': '1.48 GiB', 'Compressed': '155.17 MiB', 'Memory Used': '1.42 GiB', 'Wired Memory': '427.91 MiB', 'Swap Used': '46.25 MiB'}
Network >> {'Data Received': '4.07 GiB', 'Data Received/sec': '4.07 GiB', 'Data Sent': '2.54 GiB', 'Data Sent/sec': '2.54 GiB', 'Packets in': 2885929, 'Packets in/sec': 6031576, 'Packets Out': 2885929, 'Packets Out/sec': 2885929}
Disk >> {'Data Read': '117.91 GiB', 'Data Read/sec': 0, 'Data Written': '64.28 GiB', 'Data Written/sec': 0, 'Reads in': 9734132, 'Reads in/sec': 9734132, 'Writes Out': 6810640, 'Writes Out/sec': 6810640}

$ pyidevice instruments monitor --filter=memory
Memory >> {'App Memory': '699.69 MiB', 'Cached Files': '1.48 GiB', 'Compressed': '155.17 MiB', 'Memory Used': '1.42 GiB', 'Wired Memory': '427.91 MiB', 'Swap Used': '46.25 MiB'}

Get process performance data

$ pyidevice instruments sysmontap --help
$ pyidevice instruments sysmontap -b com.tencent.xin --proc_filter memVirtualSize,cpuUsage --processes --sort cpuUsage  # Only show the memVirtualSize and cpuUsage fields of the process list, sorted by the cpuUsage field
[('WeChat', {'cpuUsage': 0.03663705586691998, 'memVirtualSize': 2179284992, 'name': 'WeChat', 'pid': 99269})]
[('WeChat', {'cpuUsage': 0.036558268613227536, 'memVirtualSize': 2179284992, 'name': 'WeChat', 'pid': 99269})]

Get FPS data

$ pyidevice instruments fps
{'currentTime': '2021-05-11 14:14:40.259059', 'fps': 52}
{'currentTime': '2021-05-11 14:14:40.259059', 'fps': 56}

Get network data

$ pyidevice instruments networking  # Get all network data
"connection-update{\"RxPackets\": 2, \"RxBytes\": 148, \"TxPackets\": 2, \"TxBytes\": 263, \"RxDups\": 0, \"RxOOO\": 0, \"TxRetx\": 0, \"MinRTT\": 0.05046875, \"AvgRTT\": 0.05046875, \"ConnectionSerial\": 5}"
"connection-update{\"RxPackets\": 4, \"RxBytes\": 150, \"TxPackets\": 3, \"TxBytes\": 1431, \"RxDups\": 0, \"RxOOO\": 0, \"TxRetx\": 0, \"MinRTT\": 0.0539375, \"AvgRTT\": 0.0541875, \"ConnectionSerial\": 4}"

$ pyidevice instruments network_process -p com.tencent.xin  # Get application network data
{403: {'net.packets.delta': 119, 'time': 1620720061.0643349, 'net.tx.bytes': 366715, 'net.bytes.delta': 63721, 'net.rx.packets.delta': 47, 'net.tx.packets': 633, 'net.rx.bytes': 34532, 'net.bytes': 401247, 'net.tx.bytes.delta': 56978, 'net.rx.bytes.delta': 6743, 'net.rx.packets': 169, 'pid': 403, 'net.tx.packets.delta': 72, 'net.packets': 802}}
{403: {'net.packets.delta': 13, 'time': 1620720076.2191892, 'net.tx.bytes': 1303204, 'net.bytes.delta': 5060, 'net.rx.packets.delta': 5, 'net.tx.packets': 2083, 'net.rx.bytes': 46736, 'net.bytes': 1349940, 'net.tx.bytes.delta': 4682, 'net.rx.bytes.delta': 378, 'net.rx.packets': 379, 'pid': 403, 'net.tx.packets.delta': 8, 'net.packets': 2462}}

Set device status (iOS version > 12)

$ pyidevice instruments condition get  # Get device configuration information
$ pyidevice instruments condition set -c SlowNetworkCondition -p SlowNetwork2GUrban  # Set the device network status, e.g. 2G, 3G, 100% loss
$ pyidevice instruments condition set -c ThermalCondition -p ThermalCritical  # Make the device behave as though under a high thermal state

Listen to app notifications

$ pyidevice instruments notifications
[{'execName': 'MobileNotes', 'state_description': 'Foreground Running', 'elevated_state_description': 'Foreground Running', 'displayID': 'com.apple.mobilenotes', 'mach_absolute_time': 27205542653928, 'appName': 'Notes', 'elevated_state': 8, 'timestamp': 1620714619.1264, 'state': 8, 'pid': 99367}]
[{'execName': 'MobileNotes', 'state_description': 'Background Running', 'elevated_state_description': 'Background Running', 'displayID': 'com.apple.mobilenotes', 'mach_absolute_time': 27205678872050, 'appName': 'Notes', 'elevated_state': 4, 'timestamp': 1620714624.802145, 'state': 4, 'pid': 99367}]
[{'execName': 'MobileNotes', 'state_description': 'Background Task Suspended', 'elevated_state_description': 'Background Task Suspended', 'displayID': 'com.apple.mobilenotes', 'mach_absolute_time': 27205683486410, 'appName': 'Notes', 'elevated_state': 2, 'timestamp': 1620714624.99441, 'state': 2, 'pid': 99367}]

Dump core profile stack snapshot

$ instruments stackshot --out stackshot.log

Analyze the core profile data stream

$ instruments core_profile --pid=1107
SealTalk(1107)  PERF_THD_CSwitch (0x25010014)  DBG_PERF  PERF_DATA  DBG_FUNC_NONE
SealTalk(1107)  MACH_DISPATCH (0x1400080)  DBG_MACH  DBG_MACH_SCHED  DBG_FUNC_NONE
SealTalk(1107)  DecrSet (0x1090004)  DBG_MACH  DBG_MACH_EXCP_DECI  DBG_FUNC_NONE

Get Metal GPU Counters

$ instruments gpu_counters
15.132907  ALULimiter  93.77
15.132907  TextureSampleLimiter  39.62
15.132907  TextureWriteLimiter  13.87
15.132907  BufferReadLimiter  0.01
15.132907  BufferWriteLimiter  0
15.132907  Threadgroup/ImageblockLoadLimiter  17.16
15.132907  Threadgroup/ImageblockStoreLimiter  10.9
15.132907  FragmentInputInterpolationLimiter  15.74
15.132907  GPULastLevelCacheLimiter  6.24
15.132907  VertexOccupancy  0
15.132907  FragmentOccupancy  91.44
15.132907  ComputeOccupancy  0
15.132907  GPUReadBandwidth  2.65
15.132907  GPUWriteBandwidth  1.25

Other

Profiles & Device Management

$ pyidevice profiles list
{"OrderedIdentifiers": ["aaaff7e2b7df39eeb77bfbc0cd7a70ea99f3fd97a"], "ProfileManifest": {"aaaff7e2b7df39eeb77bfbc0cd7a70ea99f3fd97a": {"Description": "DO_NOT_TRUST_FiddlerRoot", "IsActive": true}}, "ProfileMetadata": {"aaaff7e2b7df39eeb77bfbc0cd7a70ea99f3fd97a": {"PayloadDisplayName": "DO_NOT_TRUST_FiddlerRoot", "PayloadRemovalDisallowed": false, "PayloadUUID": "C8CE7BC1-F840-4616-B606-337F8CB6AE19", "PayloadVersion": 1}}, "Status": "Acknowledged"}

$ pyidevice profiles install --path Downloads/charles-certificate.pem  ## install the Charles certificate
$ pyidevice profiles remove --name fe7371d9ce36c541ac8dee5f51f3b490b2aa98dcd95699ee44717fd5233fe7a0a  ## uninstall the Charles certificate

Get syslog

$ pyidevice syslog  # --path  # --filter

Get crash syslog

$ pyidevice crash list
['.', '..', 'com.apple.appstored', 'JetsamEvent-2021-05-12-112126.ips']
$ pyidevice crash export --name JetsamEvent-2021-05-12-112126.ips
$ pyidevice crash delete --name JetsamEvent-2021-05-12-112126.ips
$ pyidevice crash shell

Apps

$ pyidevice apps list
$ pyidevice apps install --ipa_path
$ pyidevice apps uninstall --bundle_id
$ pyidevice apps launch --bundle_id
$ pyidevice apps kill --bundle_id
$ pyidevice apps shell

Packet traffic

$ pyidevice pcapd ./test/test.pacp
$ pyidevice pcapd - | "/Applications/Wireshark.app/Contents/MacOS/Wireshark" -k -i -  # mac: forward to Wireshark
$ pyidevice pcapd - | "D:\Program Files\Wireshark\Wireshark.exe" -k -i -  # win: forward to Wireshark

Device battery

$ pyidevice battery
# [Battery] time=1622777708, current=-71, voltage=4330, power=-307.43, temperature=3279

Enable developer mode

$ pyidevice enable_developer_mode

QQ: 37042417
api: document
demo: document
antra
BingAI (ChatGPT)

Bing AI API for ChatGPT

Installation

pip install bingai --upgrade

Important feature - no manual intervention required

100% programmatic: an Edge browser opens up and automatically fetches tokens, no manual intervention whatsoever.

Usage

from bingai import BingSession
session = BingSession(email, password)
session.run()

Use Microsoft credentials for the account which has access to the Bing Chat feature.
antropy
No description available on PyPI.
ants
Ants
===

![Build Status](https://travis-ci.org/mymusise/Ants.svg?branch=master)

Ants is a crawler framework for perfectionists based on Django, which helps people build crawlers faster and manage them more easily.

Requirements
===

- Python 2.7+
- Django 1.8+
- Gevent 1.1.1+

Install
===

pip install ants

To get the latest, download this fucking source code and build:

git clone [email protected]:mymusise/Ants.git
cd Ants
python setup.py build
sudo python setup.py install

Getting started
====

## 1. Initialize project

OK, I suggest you work in a virtualenv:

virtualenv my_env
source my_env/bin/activate

Then we install the requirements and start a Django project:

pip install django gevent ants
ants-tools startproject my_ants
cd my_ants

Now you can use the ``tree`` command to get the file tree of your project:

├── manage.py
├── my_ants
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── parsers
│   ├── admin.py
│   ├── ants
│   │   └── __init__.py
│   ├── apps.py
│   ├── __init__.py
│   ├── migrations
│   │   └── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
└── spiders
    ├── admin.py
    ├── ants
    │   └── __init__.py
    ├── apps.py
    ├── __init__.py
    ├── migrations
    │   └── __init__.py
    ├── models.py
    ├── tests.py
    └── views.py

## 2. Write your first spider ant!

Ants will run the ants belonging to crawlers when you run the command `python manage.py runclawer [spider_name]`. Let's add an ant, ``spider/ants/first_blood.py``:

import requests

class FirstBloodClawerWhatNameYouWant(object):
    NAME = 'first_blood'  # must be unique
    url_head = """https://movie.douban.com/j/search_subjects?type=movie&tag=%E7%83%AD%E9%97%A8&sort=recommend&page_limit=10&page_start={}"""
    max_page = 4

    def get_url(self, url):
        res = requests.get(url)
        print(res.text)

    def start(self):
        need_urls = [self.url_head.format(i) for i in range(self.max_page)]
        list(map(self.get_url, need_urls))

This is a simple spider that can get hot movies from `Douban`. Pay attention: the `NAME` property is required; it is the unique identification for different crawlers. Now we can run our first blood to get hot movies!

python manage.py runclawer first_blood

# About repeating URLs

Ants provides a way to avoid running the same URL again if Ants crashed last time. You just need to define two models, source and aim. For example, in file clawers/models.py:

class BaseTask(models.Model):
    source_url = models.CharField(max_length=255)

class MovieHtml(models.Model):
    task_id = models.IntegerField()
    html = models.TextField()

We define `BaseTask` to save the base task URLs we need to request, and save the HTML source in `MovieHtml`. If we get a page by requesting a URL from BaseTask, we save that `BaseTask().id` to `MovieHtml().task_id`. We can redefine our 'first_blood' ant like this:

from ants.clawer import Clawer
import requests

class FirstBloodClawerWhatNameYouWant(Clawer):
    NAME = 'first_blood'  # must be unique
    thread = 2  # the number of coroutines, default 1
    task_model = BaseTask
    goal_model = MovieHtml

    def run(self, task):
        res = requests.get(task.source_url)
        MovieHtml.objects.create(task_id=task.id, html=res.text)

Before 'first_blood'.run() is called, Ants will fetch all BaseTask objects and pass each of them to the run() function.
But if a BaseTask object's ID is already present in MovieHtml.task_id, that BaseTask object will not be passed to the run() function again.

# About async

Ants uses `gevent` as an event pool so each ant runs asynchronously, and you can use the `thread` property to decide how many coroutines you want. More information in [the fucking source code](https://github.com/mymusise/Ants/blob/master/ants/utils.py#L74)

# Enjoy!
antsar-avion
No description available on PyPI.
ants-client
ANTS is a framework to manage and apply macOS and Linux host configurations using Ansible Pull.

The ANTS Framework is developed by the Client Services Team of the University of Basel IT Services and released under the GNU General Public License Version 3. Ansible is a trademark of Red Hat, Inc.

Introduction

The ANTS Framework consists of the following components:

- A wrapper for Ansible-Pull
- An Ansible Dynamic Inventory Script (MS Active Directory Connector)
- A modular collection of roles ready to be used
- Strong logging integration

Requirements

This project assumes that you are familiar with Ansible, Git and the shell.

Getting started

Installing ants using pip

Make sure git is installed on your machine.
Install the latest ants client using pip:

pip install ants_client

Pip will install the ANTS client with a default configuration and put the executable in your path.

Installing ants using the macOS .pkg installer

Download the latest .pkg installer from the releases page.
Execute the installer. This will take care of all dependencies.
A launch daemon will be installed, running ants every 15 minutes. It will trigger after the next restart.

Run ants

Open your terminal.
Start an ANTS run by typing ants.
Wait for ANTS to finish, then open another shell. You will see a new message of the day.

What happened?

Running ANTS with the default configuration will use ansible-pull to clone the ANTS playbook via https from a GitHub repository and execute an ansible run. By default, this will generate /etc/motd to add a message of the day to your macOS or Linux host. Logs of all the runs are stored at /var/log/ants. Also by default, ants will add GitHub to your known_hosts file. This is important for later, when you want to enable git clone using ssh.

Where to go from here?

Look at the options

Run ants -h to see all command line options.

Write your own configuration

Run ants --show-config to see the active configuration.
Run ants --initialize to write your own configuration.

Your local configuration file will be saved at /etc/ants/ants.cfg.
You can also edit it using your favorite text editor.

Do not modify the default configuration file as it might be overwritten when updating ANTS.

On macOS, you can also configure ANTS with a preference list (plist) or configuration profile. Please note that configurations set in this manner will override any other configuration, including ants.cfg. Go here for an example configuration profile.

Run other roles

Fork or duplicate our example playbook and change the client configuration to point to your repository. Update main.yml to assign different roles to your hosts. You can use the default Ansible syntax. You can also use wildcards. Have a look at the Ansible documentation.

Add ssh authentication to your repository

Ansible-pull can clone a git repository using ssh. You can enable this by creating your own private playbook, adding ssh authentication and a read-only ssh key to the repository. Configure ANTS to use that key. By default, ANTS will look for a private key at /etc/ants/id_ants. You can generate a key with:

ssh-keygen -t rsa -b 4096 -N '' -C "ants client" -f /etc/ants/id_ants

By default, ANTS is configured to run with strict host key checking disabled and will add the host key for your repo to your known_hosts file. You should change this in production. To do so, add ssh_stricthostkeychecking = True to your ants.cfg.

Add a dynamic inventory source

Ansible supports dynamic inventory scripts (a JSON representation of hosts to group mappings). You can use scripts to tell ansible-pull which tasks to run on which host. You need an inventory source and a script that can read and return it in the correct format. By default, ANTS will run a dummy script inventory_default that will just return your hostname belonging to a group named ants-common. You can edit main.yml straight away and assign roles using host names.
But ANTS shows its real power when ansible-pull is combined with a dynamic inventory using group mappings. For this we provide the inventory_ad script, which will connect to your Active Directory and return all groups your host is a member of. Just add your configuration to /etc/ants/ants.cfg. Note that read-only rights for the Active Directory user are sufficient. Your host DOESN'T have to be bound to Active Directory in order for this to work. You can use a placeholder object. By using a dynamic inventory source, you can assign roles to a host using AD and let ANTS handle the configuration.

Group layout in Active Directory

The groups in Active Directory must have the same names as the mappings and the variables you want to assign using Ansible. We recommend keeping the groups in a dedicated Organizational Unit to prevent naming collisions. Nested groups with access restrictions are an easy way to offer rights delegation to other units in your organization.

What else do I need?

Nothing. You just set up a configuration management that communicates safely over ssh using your AD and GitHub. No additional infrastructure and no AD binding required.

Add your own inventory file

You can add your own inventory file. This can be a dynamic inventory source or a static file. By default, ANTS will look for the inventory file in its python package. This is useful because it enables you to use inventory scripts like inventory_ad without having to specify the full path. However, if you would like to place your inventory file somewhere else, you're free to do so. All you have to do is use an absolute path in ants.cfg. The following entry in ants.cfg will look for your inventory file in the ANTS python package.
This is useful for everything that comes with the ANTS installation:

[main]
inventory_script = inventory_ad

This entry on the other hand will look for your inventory file in /etc/ants:

[main]
inventory_script = /etc/ants/myinventory

Callback plugins and reporting

ANTS can be configured to execute ansible callback plugins. We will cover the most common use case here: log ANTS information to logstash. ANTS ships with a modified version of the default ansible logstash plugin. If you want to use plugins that are installed at a custom location, you can specify your path in the ants.cfg config file under ansible_callback_plugins.

In order for ANTS to execute the callback plugin, just add the following entry to the config file:

ansible_callback_whitelist = ants_logstash

and add a new section called [callback_plugins]. This section should contain the LOGSTASH_SERVER and the LOGSTASH_PORT. ANTS will set the environment variables according to these values. Environment variables will only be added if the ansible_callback_whitelist is not empty.

You can add other callback plugins to ansible_callback_whitelist if you desire. The same is true for [callback_plugins]; just add environment variables to that sub-section. Please note that the casing of the environment variables is essential for the callback plugins to work. The casing can be found using ansible-doc -t callback $name_of_plugin.

Testing and development

You made changes to the ANTS code or you want to test a feature that hasn't been released yet? This is what you should do:

If what you're looking for is already available on PyPI as a pre-release, you can simply install it by telling pip to include pre-releases in its search:

pip install ants_client --pre

If you made local changes to your code and want to test them, you can set up a virtual environment, activate it and install your code locally using pip install -e <path_to_ants>. Make sure all inventory files are found.
You can run a local dev version of ANTS using the following commands:

git clone https://github.com/ANTS-Framework/ants.git ants_dev
cd ants_dev
python3 -m venv venv
source venv/bin/activate
python -m pip install -e .
sudo ants --ansible_pull_exe $(which ansible-pull) -i $(which inventory_ad) -vvv

Communication

Please use the GitHub issue tracker to file issues.
Please use a GitHub Pull-Request to suggest changes.

Comparison of plain Ansible and Ansible Tower to ANTS

What does ANTS do that Ansible can not?

- ANTS gives you a set of ready-to-use roles for typical macOS and Linux host configurations.
- ANTS lets you utilize Active Directory to map computers to roles, with all its delegation and nesting features.
- ANTS utilizes Ansible Pull and therefore does not require an active network connection to a central server. Roles will be applied locally even if the host is offline.

What does Ansible or Ansible Tower do that ANTS does not?

- Tower has a nice dashboard.
- Tower has real-time job output and push-button job runs.
- Tower can do job scheduling.
- Tower supports run-time job promoting.
- Tower supports workflows.
- Ansible can use encrypted secrets using Vault.
- Ansible and Tower do offer enterprise support.
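A dynamic inventory script is just a program that prints JSON mapping group names to hosts. The sketch below shows the shape of the output that the bundled inventory_default script is described to produce (the hostname in a single ants-common group); it is illustrative, not the actual script.

```python
import json
import socket

def dummy_inventory(hostname):
    """Build the JSON an Ansible dynamic inventory script emits for
    --list: group names mapped to host lists, plus an empty _meta
    section so Ansible does not call the script once per host."""
    return json.dumps({
        "ants-common": {"hosts": [hostname]},
        "_meta": {"hostvars": {}},
    })

if __name__ == "__main__":
    print(dummy_inventory(socket.gethostname()))
```

Any script emitting this structure can be pointed to by the inventory_script setting in ants.cfg.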
ant-secret-bin
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
ant-sftpgo-client
SFTPGo REST API
antshell
No description available on PyPI.
antsibull
antsibull -- Ansible Build Scripts

Tooling for building various things related to Ansible. Scripts that are here:

antsibull-build - Builds Ansible-2.10+ from component collections (docs)

Related projects are antsibull-changelog and antsibull-docs, which are in their own repositories (antsibull-changelog repository, antsibull-docs repository). Currently antsibull-changelog is a dependency of antsibull. Therefore, the scripts contained in it will be available as well when installing antsibull.

You can find a list of changes in the Antsibull changelog.

antsibull is covered by the Ansible Code of Conduct.

Licensing

This repository abides by the REUSE specification. See the copyright headers in each file for the exact license and copyright. Summarily:

- The default license is the GNU Public License v3+ (GPL-3.0-or-later).
- src/antsibull/_vendor/shutil.py includes code derived from CPython, licensed under the Python 2.0 License (Python-2.0.1).

Versioning and compatibility

From version 0.1.0 on, antsibull sticks to semantic versioning and aims at providing no backwards compatibility breaking changes to the command line API (antsibull) during a major release cycle. We might make exceptions from this in case of security fixes for vulnerabilities that are severe enough.

We explicitly exclude code compatibility. antsibull is not supposed to be used as a library. The only exception are potential dependencies with other antsibull projects (currently, none). If you want to use a certain part of antsibull as a library, please create an issue so we can discuss whether we add a stable interface for parts of the Python code. We do not promise that this will actually happen though.

Development

Install and run nox to run all tests. That's it for simple contributions! nox will create virtual environments in .nox inside the checked out project and install the requirements needed to run the tests there.

antsibull depends on the sister antsibull-core and antsibull-changelog projects.
By default, nox will install development versions of these projects from GitHub. If you're hacking on antsibull-core or antsibull-changelog alongside antsibull, nox will automatically install the projects from ../antsibull-core and ../antsibull-changelog when running tests if those paths exist. You can change this behavior through the OTHER_ANTSIBULL_MODE env var:

- OTHER_ANTSIBULL_MODE=auto - the default behavior described above
- OTHER_ANTSIBULL_MODE=local - install the projects from ../antsibull-core and ../antsibull-changelog. Fail if those paths don't exist.
- OTHER_ANTSIBULL_MODE=git - install the projects from the GitHub main branch
- OTHER_ANTSIBULL_MODE=pypi - install the latest version from PyPI

To run specific tests:

- nox -e test to only run unit tests;
- nox -e lint to run all linters;
- nox -e formatters to run isort and black;
- nox -e codeqa to run flake8, pylint, reuse lint, and antsibull-changelog lint;
- nox -e typing to run mypy and pyre;
- nox -e coverage_release to build a test ansible release. This is expensive, so it's not run by default;
- nox -e check_package_files to run the generate-package-files integration tests. This is somewhat expensive and thus not run by default;
- nox -e coverage to display combined coverage results after running nox -e test coverage_release check_package_files.

Run nox -l to list all test sessions.

To create a more complete local development env:

git clone https://github.com/ansible-community/antsibull-changelog.git
git clone https://github.com/ansible-community/antsibull-core.git
git clone https://github.com/ansible-community/antsibull.git
cd antsibull
python3 -m venv venv
. ./venv/bin/activate
pip install -e '.[dev]' -e ../antsibull-changelog -e ../antsibull-core
[...]
nox

Creating a new release:

Run nox -e bump -- <version> <release_summary_message>. This:

- Bumps the package version in pyproject.toml.
- Creates changelogs/fragments/<version>.yml with a release_summary section.
- Runs antsibull-changelog release and adds the changed files to git.
- Commits with message Release <version>. and runs git tag -a -m 'antsibull <version>' <version>.
- Runs hatch build --clean to build an sdist and wheel in dist/ and clean up any old artifacts in that directory.

Run git push to the appropriate remotes.

Once CI passes on GitHub, run nox -e publish. This:

- Runs hatch publish to publish the sdist and wheel generated during step 1 to PyPI;
- Bumps the version to <version>.post0;
- Adds the changed files to git and runs git commit -m 'Post-release version bump.'.

Run git push --follow-tags to the appropriate remotes and create a GitHub release.
antsibull-changelog
antsibull-changelog -- Ansible Changelog Tool

A changelog generator used by ansible-core and Ansible collections.

- Using the antsibull-changelog CLI tool for collections.
- Using the antsibull-changelog CLI tool for other projects.
- Documentation on the changelogs/config.yaml configuration file for antsibull-changelog.
- Documentation on the changelog.yaml format.

antsibull-changelog is covered by the Ansible Code of Conduct.

Installation

It can be installed with pip:

pip install antsibull-changelog

For python projects, antsibull-changelog release can retrieve the current version from pyproject.toml. You can install the project with

pip install antsibull-changelog[toml]

to pull in the necessary toml parser for this feature. The toml extra is always available, but it is a noop on Python >= 3.11, as tomllib is part of the standard library.

For more information, see the documentation.

Development

Install and run nox to run all tests. That's it for simple contributions! nox will create virtual environments in .nox inside the checked out project and install the requirements needed to run the tests there.

To run specific tests:

- nox -e test to only run unit tests;
- nox -e integration to only run integration tests; this runs antsibull-changelog lint against antsibull-changelog and community.general (after cloning its repository) and records coverage data;
- nox -e coverage to display combined coverage results after running nox -e test integration;
- nox -e lint to run all linters and formatters at once;
- nox -e formatters to run isort and black;
- nox -e codeqa to run flake8, pylint, reuse lint, and antsibull-changelog lint;
- nox -e typing to run mypy and pyre.

Creating a new release:

Run nox -e bump -- <version> <release_summary_message>. This:

- Bumps the package version in pyproject.toml.
- Creates changelogs/fragments/<version>.yml with a release_summary section.
- Runs antsibull-changelog release and adds the changed files to git.
- Commits with message Release <version>. and runs git tag -a -m 'antsibull-changelog <version>' <version>.
- Runs hatch build --clean.

Run git push to the appropriate remotes.

Once CI passes on GitHub, run nox -e publish. This:

- Runs hatch publish;
- Bumps the version to <version>.post0;
- Adds the changed file to git and runs git commit -m 'Post-release version bump.'.

Run git push --follow-tags to the appropriate remotes and create a GitHub release.

License

Unless otherwise noted in the code, it is licensed under the terms of the GNU General Public License v3 or, at your option, later. See LICENSES/GPL-3.0-or-later.txt for a copy of the license. The repository follows the REUSE Specification for declaring copyright and licensing information. The only exception are changelog fragments in changelog/fragments/.
antsibull-core
antsibull-core -- Library for Ansible Build Scripts

Library needed for tooling for building various things related to Ansible.

You can find a list of changes in the antsibull-core changelog.

Unless otherwise noted in the code, it is licensed under the terms of the GNU General Public License v3 or, at your option, later.

antsibull-core is covered by the Ansible Code of Conduct.

Versioning and compatibility

From version 1.0.0 on, antsibull-core sticks to semantic versioning and aims at providing no backwards compatibility breaking changes during a major release cycle. We might make exceptions from this in case of security fixes for vulnerabilities that are severe enough.

The current major version is 2.x.y. Development for 2.x.y occurs on the main branch. 1.x.y is End of Life and was developed on the stable-1 branch. It is no longer updated. 2.x.y mainly differs from 1.x.y by dropping support for Python 3.6, 3.7, and 3.8. It deprecates several compatibility functions for older Python versions that are no longer needed; see the changelog for details.

Development

Install and run nox to run all tests. That's it for simple contributions! nox will create virtual environments in .nox inside the checked out project and install the requirements needed to run the tests there.

To run specific tests:

- nox -e test to only run unit tests;
- nox -e coverage to display combined coverage results after running nox -e test;
- nox -e lint to run all linters and formatters at once;
- nox -e formatters to run isort and black;
- nox -e codeqa to run flake8, pylint, reuse lint, and antsibull-changelog lint;
- nox -e typing to run mypy and pyre.

Creating a new release:

Run nox -e bump -- <version> <release_summary_message>. This:

- Bumps the package version in pyproject.toml.
- Creates changelogs/fragments/<version>.yml with a release_summary section.
- Runs antsibull-changelog release and adds the changed files to git.
- Commits with message Release <version>. and runs git tag -a -m 'antsibull-core <version>' <version>.
- Runs hatch build.

Run git push to the appropriate remotes.

Once CI passes on GitHub, run nox -e publish. This:

- Runs hatch publish;
- Bumps the version to <version>.post0;
- Adds the changed file to git and runs git commit -m 'Post-release version bump.'.

Run git push --follow-tags to the appropriate remotes and create a GitHub release.

License

Unless otherwise noted in the code, it is licensed under the terms of the GNU General Public License v3 or, at your option, later. See LICENSES/GPL-3.0-or-later.txt for a copy of the license.

Parts of the code are vendored from other sources and are licensed under other licenses:

- src/antsibull_core/vendored/collections.py and src/antsibull_core/vendored/json_utils.py are licensed under the terms of the BSD 2-Clause license. See LICENSES/BSD-2-Clause.txt for a copy of the license.
- tests/functional/aiohttp_utils.py and tests/functional/certificate_utils.py are licensed under the terms of the MIT license. See LICENSES/MIT.txt for a copy of the license.
- src/antsibull_core/vendored/_argparse_booleanoptionalaction.py is licensed under the terms of the Python Software Foundation license version 2. See LICENSES/PSF-2.0.txt for a copy of the license.

The repository follows the REUSE Specification for declaring copyright and licensing information. The only exception are changelog fragments in changelog/fragments/.
antsibull-docs
antsibull-docs -- Ansible Documentation Build Scripts

Tooling for building Ansible documentation. Script that is here:

antsibull-docs - Extracts documentation from ansible plugins

This also includes a Sphinx extension sphinx_antsibull_ext which provides a minimal CSS file to render the output of antsibull-docs correctly.

You can find a list of changes in the antsibull-docs changelog.

Unless otherwise noted in the code, it is licensed under the terms of the GNU General Public License v3 or, at your option, later.

antsibull-docs is covered by the Ansible Code of Conduct.

Versioning and compatibility

From version 1.0.0 on, antsibull-docs sticks to semantic versioning and aims at providing no backwards compatibility breaking changes to the command line API (antsibull-docs) during a major release cycle. We might make exceptions from this in case of security fixes for vulnerabilities that are severe enough.

The current major version is 2.x.y. Development for 2.x.y occurs on the main branch. 2.x.y mainly differs from 1.x.y by dropping support for Python 3.6, 3.7, and 3.8, and by dropping support for older Ansible/ansible-base/ansible-core versions. See the changelog for details. 1.x.y is still developed on the stable-1 branch, but only security fixes, major bugfixes, and other absolutely necessary changes will be backported.

We explicitly exclude code compatibility. antsibull-docs is not supposed to be used as a library. The only exception are potential dependencies with other antsibull projects (currently there are none). If you want to use a certain part of antsibull-docs as a library, please create an issue so we can discuss whether we add a stable interface for parts of the Python code.
We do not promise that this will actually happen though.If you are interested in library support for interpreting Ansible markup, please take a look atthe antsibull-docs-parser project.Using the Sphinx extensionInclude it in your Sphinx configurationconf.py::# Add it to 'extensions': extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'notfound.extension', 'sphinx_antsibull_ext']Updating the CSS file for the Sphinx extensionThe CSS filesphinx_antsibull_ext/antsibull-minimal.cssis built fromsphinx_antsibull_ext/css/antsibull-minimal.scssusingSASSandpostcssusingautoprefixerandcssnano.Use the scriptbuild.shinsphinx_antsibull_ext/css/to build the.cssfile from the.scssfile:cd sphinx_antsibull_ext/css/ ./build-css.shFor this to work, you need to make sure thatsasscandpostcssare on your path and that the autoprefixer and nanocss modules are installed:# Debian: apt-get install sassc # PostCSS, autoprefixer and cssnano require nodejs/npm: npm install -g autoprefixer cssnano postcss postcss-cliDevelopmentInstall and runnoxto run all tests. That's it for simple contributions!noxwill create virtual environments in.noxinside the checked out project and install the requirements needed to run the tests there.antsibull-docs depends on the sister antsibull-core and antsibull-docs-parser projects. By default,noxwill install a development version of these projects from Github. If you're hacking on antsibull-core and/or antsibull-docs-parser alongside antsibull-docs, nox will automatically install the projects from../antsibull-coreand../antsibull-docs-parserwhen running tests if those paths exist. You can change this behavior through theOTHER_ANTSIBULL_MODEenv var:OTHER_ANTSIBULL_MODE=auto— the default behavior described aboveOTHER_ANTSIBULL_MODE=local— install the projects from../antsibull-coreand../antsibull-docs-parser. 
Fail if those paths don't exist.OTHER_ANTSIBULL_MODE=git— install the projects from the Github main branchOTHER_ANTSIBULL_MODE=pypi— install the latest versions from PyPITo run specific tests:nox -e testto only run unit tests;nox -e lintto run all linters and formatter;nox -e codeqato runflake8,pylint,reuse lint, andantsibull-changelog lint;nox -e formattersto runisortandblack;nox -e typingto runmypyandpyre.To create a more complete local development env:git clone https://github.com/ansible-community/antsibull-core.gitgit clone https://github.com/ansible-community/antsibull-docs-parser.gitgit clone https://github.com/ansible-community/antsibull-docs.gitcd antsibull-docspython3 -m venv venv. ./venv/bin/activatepip install -e '.[dev]' -e ../antsibull-core -e ../antsibull-docs-parser[...]noxCreating a new release:Runnox -e bump -- <version> <release_summary_message>. This:Bumps the package version insrc/antsibull_docs/__init__.py.Createschangelogs/fragments/<version>.ymlwith arelease_summarysection.Runsantsibull-changelog release --version <version>and adds the changed files to git.Commits with messageRelease <version>.and runsgit tag -a -m 'antsibull-docs <version>' <version>.Runshatch build --clean.Rungit pushto the appropriate remotes.Once CI passes on GitHub, runnox -e publish. This:Runshatch publish;Bumps the version to<version>.post0;Adds the changed file to git and rungit commit -m 'Post-release version bump.';Rungit push --follow-tagsto the appropriate remotes and create a GitHub release.
antsibull-docs-parser
antsibull-docs-parser - Python library for processing Ansible documentation markup

This is a Python library for processing Ansible documentation markup. It is named after antsibull-docs, where this code originates from. It was moved out to make it easier to reuse the markup code in other projects without having to depend on all of antsibull-docs's dependencies.

Development

Install and run `nox` to run all tests. `nox` will create virtual environments in `.nox` inside the checked out project and install the requirements needed to run the tests there.

To run specific tests:

- `nox -e test` to only run unit tests;
- `nox -e lint` to run all linters and formatters at once;
- `nox -e formatters` to run `isort` and `black`;
- `nox -e codeqa` to run `flake8`, `pylint`, `reuse lint`, and `antsibull-changelog lint`;
- `nox -e typing` to run `mypy` and `pyre`;
- `nox -e create_vectors` to update the `test-vectors.yml` file. Please note that this file should be synchronized with the corresponding file in the antsibull-docs-ts project.

Releasing a new version

Run `nox -e bump -- <version> <release_summary_message>`. This:

- Bumps the package version in `pyproject.toml`.
- Creates `changelogs/fragments/<version>.yml` with a `release_summary` section.
- Runs `antsibull-changelog release` and adds the changed files to git.
- Commits with message `Release <version>.` and runs `git tag -a -m 'antsibull-docs-parser <version>' <version>`.
- Runs `hatch build --clean`.

Run `git push` to the appropriate remotes.

Once CI passes on GitHub, run `nox -e publish`. This:

- Runs `hatch publish`;
- Bumps the version to `<version>.post0`;
- Adds the changed file to git and runs `git commit -m 'Post-release version bump.'`.

Run `git push --follow-tags` to the appropriate remotes and create a GitHub release.
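The `changelogs/fragments/<version>.yml` file created in the bump step is a small YAML mapping with a `release_summary` section. A minimal sketch of generating one by hand — `make_release_fragment` is a hypothetical helper for illustration, not part of the antsibull tooling:

```python
from pathlib import Path
import tempfile

def make_release_fragment(root: Path, version: str, summary: str) -> Path:
    """Write changelogs/fragments/<version>.yml with a release_summary section."""
    fragment_dir = root / "changelogs" / "fragments"
    fragment_dir.mkdir(parents=True, exist_ok=True)
    fragment = fragment_dir / f"{version}.yml"
    # antsibull-changelog fragments map a section name to its text
    fragment.write_text(f"release_summary: |\n  {summary}\n", encoding="utf-8")
    return fragment

# Example: create the fragment for a hypothetical 1.2.0 release
with tempfile.TemporaryDirectory() as tmp:
    path = make_release_fragment(Path(tmp), "1.2.0", "Bugfix and feature release.")
    print(path.read_text(encoding="utf-8"))
```

The `nox -e bump` session automates this step (along with the version bump, tag, and build) so fragments stay consistent across releases.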
antsichaut
Antsichaut

Antsichaut automates the filling of a `changelog.yaml` used by antsibull-changelog.

You define a GitHub repository and a GitHub release. Then the script searches all pull requests since the release and adds them to the `changelog.yaml`.

The PRs get categorized into the changelog sections based on these default labels:

```
group_config = [
    {"title": "major_changes", "labels": ["major", "breaking"]},
    {"title": "minor_changes", "labels": ["minor", "enhancement"]},
    {"title": "breaking_changes", "labels": ["major", "breaking"]},
    {"title": "deprecated_features", "labels": ["deprecated"]},
    {"title": "removed_features", "labels": ["removed"]},
    {"title": "security_fixes", "labels": ["security"]},
    {"title": "bugfixes", "labels": ["bug", "bugfix"]},
    {"title": "skip_changelog", "labels": ["skip_changelog"]},
]
```

This means, for example, that PRs with the label `major` get categorized into the `major_changes` section of the changelog. PRs that have a `skip_changelog` label do not get added to the changelog at all. PRs that do not have one of the above labels get categorized into the `trivial` section.

Installation

`pip install antsichaut`

Manual Usage

You need a minimal `changelog.yml` created by antsibull-changelog:

`antsibull-changelog release --version 1.17.0`

Then define the version and the GitHub repository you want to fetch the PRs from, either via arguments or via environment variables:

```
> cd /path/to/your/ansible/collection
> antsichaut \
  --github_token 123456789012345678901234567890abcdefabcd \
  --since_version 1.17.0 \
  --to_version 1.18.0 \
  --major_changes_labels=foo \
  --major_changes_labels=bar \
  --minor_changes_labels=baz \
  --repository=T-Systems-MMS/ansible-collection-icinga-director
```

```
> cd /path/to/your/ansible/collection
> export SINCE_VERSION=1.17.0  # (or `latest`)
> export TO_VERSION=1.18.0     # optional. if unset, defaults to current date
> export REPOSITORY=T-Systems-MMS/ansible-collection-icinga-director
> export MAJOR_CHANGES_LABELS=["foo","bar"]
> export MINOR_CHANGES_LABELS=["baz"]
> antsichaut
```

This will fill the `changelog.yaml` with pull requests. Then run `antsibull-changelog generate` to create the final changelog.

Usage with GitHub Actions

Inputs:

- `since_version` (required): the version to fetch PRs since
- `to_version`: the version to fetch PRs to

Usage:

```yaml
- name: "Get Previous tag"
  id: previoustag
  uses: "WyriHaximus/github-action-get-previous-tag@master"
  env:
    GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: "Run antsichaut"
  uses: ansible-community/antsichaut@main
  with:
    GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
    since_version: "${{ steps.previoustag.outputs.tag }}"
```

Examples

Check these examples out:

- telekom_mms.icinga_director
- prometheus.prometheus

Acknowledgements and Kudos

This script was initially forked from https://github.com/saadmk11/changelog-ci/ and modified by @rndmh3ro. Thank you, @saadmk11!

From May 2021 through May 2023, this project was maintained by @rndmh3ro and then graciously transferred to the ansible community organization. Thank you @rndmh3ro!

License

The code in this project is released under the MIT License.
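The label-to-section mapping described above can be sketched as a small standalone categorizer. This is illustrative only — `categorize_pr` is a hypothetical helper, not antsichaut's actual code — but it mirrors the documented behavior: `skip_changelog` excludes a PR entirely, matched labels select a section, and everything else falls through to `trivial`:

```python
# default mapping from the antsichaut README
GROUP_CONFIG = [
    {"title": "major_changes", "labels": ["major", "breaking"]},
    {"title": "minor_changes", "labels": ["minor", "enhancement"]},
    {"title": "breaking_changes", "labels": ["major", "breaking"]},
    {"title": "deprecated_features", "labels": ["deprecated"]},
    {"title": "removed_features", "labels": ["removed"]},
    {"title": "security_fixes", "labels": ["security"]},
    {"title": "bugfixes", "labels": ["bug", "bugfix"]},
    {"title": "skip_changelog", "labels": ["skip_changelog"]},
]

def categorize_pr(pr_labels):
    """Return the changelog section for a PR's labels, or None to skip it."""
    labels = set(pr_labels)
    if "skip_changelog" in labels:
        return None  # excluded from the changelog entirely
    for group in GROUP_CONFIG:
        if group["title"] != "skip_changelog" and labels & set(group["labels"]):
            return group["title"]  # first matching section wins in this sketch
    return "trivial"

print(categorize_pr(["major"]))        # major_changes
print(categorize_pr(["enhancement"]))  # minor_changes
print(categorize_pr(["docs"]))         # trivial
```

Note this sketch resolves the `major`/`breaking` overlap by first match; antsichaut's own resolution of labels matching multiple sections may differ.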
ants-pi
Welcome to ANTS 👋!

ANTS is a toolbox for analyzing time-series data. You can recruit your own workers and order some analyzing jobs! Here, we call the workers 'ants'! From here on, you don't have to work alone. Work with your ants!

Installation

You can start ANTS by typing a simple command in your terminal.

`pip install ants-pi`

Documentation

This documentation will help your ants!
antspy
No description available on PyPI.
antspymm
ANTsPyMMmappingprocessing utilities for timeseries/multichannel images - mostly neuroimagingthe outputs of these processes can be used for data inspection/cleaning/triage as well for interrogating hypotheses.this package also keeps track of the latest preferred algorithm variations for production environments.install thedevversion by calling (within the source directory):python setup.py installor install the latest release viapip install antspymmwhat this will doANTsPyMM will process several types of brain MRI into tabular form as well as normalized (standard template) space. The processing includes:T1wHier uses hierarchical processing from ANTsPyT1w organized around these measurementsCIT168 template 10.1101/211201Desikan Killiany Tourville (DKT) 10.3389/fnins.2012.00171basal forebrain (Avants et al HBM 2022 abstract)other regions (egMTL) 10.1101/2023.01.17.23284693also produces jacobian datarsfMRI: resting state functional MRIuses 10.1016/j.conb.2012.12.009 to estimate network specific correlationsf/ALFF 10.1016/j.jneumeth.2008.04.012NM2DMT: neuromelanin mid-brain imagesCIT168 template 10.1101/211201DTI: DWI diffusion weighted images organized via:CIT168 template 10.1101/211201JHU atlases 10.1016/j.neuroimage.2008.07.009 10.1016/j.neuroimage.2007.07.053DKT for cortical to cortical tractography estimates based on DiPyT2Flair: flair for white matter hyperintensityhttps://pubmed.ncbi.nlm.nih.gov/30908194/https://pubmed.ncbi.nlm.nih.gov/30125711/https://pubmed.ncbi.nlm.nih.gov/35088930/T1w: voxel-based cortical thickness (DiReCT) 10.1016/j.neuroimage.2008.12.016Results of these processes are plentiful; processing for a single subject will all modalities will take around 2 hours on an average laptop.documentation of functionshere.data dictionaryhere.notes on QChereachieved through four steps (recommended approach):organize data in NRG formatperform blind QCcompute outlierness per modality and select optimally matched modalities ( steps 3.1 and 3.2 )run the main 
antspymm functionfirst time setupimportantspyt1wimportantspymmantspyt1w.get_data(force_download=True)antspymm.get_data(force_download=True)NOTE:get_datahas aforce_downloadoption to make sure the latest package data is installed.NOTE: some functions inantspynetwill download deep network model weights on the fly. if one is containerizing, then it would be worth running a test case through in the container to make sure all the relevant weights are pre-downloaded.example processingsee the latest help but this snippet gives an idea of how one might use the package:importosos.environ["TF_NUM_INTEROP_THREADS"]="8"os.environ["TF_NUM_INTRAOP_THREADS"]="8"os.environ["ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS"]="8"importantspymmimportantspyt1wimportantspynetimportants...i/ocodehere...tabPro,normPro=antspymm.mm(t1,hier,nm_image_list=mynm,rsf_image=rsf,dw_image=dwi,bvals=bval_fname,bvecs=bvec_fname,flair_image=flair,do_tractography=False,do_kk=False,do_normalization=True,verbose=True)antspymm.write_mm('/tmp/test_output',t1wide,tabPro,normPro)blind quality controlthis package also provides tools to identify thebestmulti-modality image set at a given visit.the code below provides guidance on how to automatically qc, filter and match multiple modality images at each time point. 
these tools are based on standard unsupervised approaches and are not perfect so we recommend using the associated plotting/visualization techniques to check the quality characterizations for each modality.## run the qc on all images - requires a relatively large sample per modality to be effective## then aggregateqcdf=pd.DataFrame()forfninfns:qcdf=pd.concat([qcdf,antspymm.blind_image_assessment(fn)],axis=0)qcdfa=antspymm.average_blind_qc_by_modality(qcdf,verbose=True)## reduce the time series qcqcdfaol=antspymm.outlierness_by_modality(qcdfa)# estimate outlier scoresprint(qcdfaol.shape)print(qcdfaol.keys)matched_mm_data=antspymm.match_modalities(qcdfaol)or just get modality-specific outlierness "by hand" then matchmm:importantspymmimportpandasaspdmymods=antspymm.get_valid_modalities()alldf=pd.DataFrame()forninrange(len(mymods)):m=mymods[n]jj=antspymm.collect_blind_qc_by_modality("qc/*"+m+"*csv")jjj=antspymm.average_blind_qc_by_modality(jj,verbose=False).dropna(axis=1)## reduce the time series qcjjj=antspymm.outlierness_by_modality(jjj,verbose=False)alldf=pd.concat([alldf,jjj],axis=0)jjj.to_csv("mm_outlierness_"+m+".csv")print(m+" done")# write the joined data outalldf.to_csv("mm_outlierness.csv",index=False)# find the best mm collectionmatched_mm_data=antspymm.match_modalities(alldf,verbose=True)matched_mm_data.to_csv("matched_mm_data.csv",index=False)matched_mm_data['negative_outlier_factor']=1.0-matched_mm_data['ol_loop'].astype("float")matched_mm_data2=antspymm.highest_quality_repeat(matched_mm_data,'subjectID','date',qualityvar='negative_outlier_factor')matched_mm_data2.to_csv("matched_mm_data2.csv",index=False)an example on open neuro (BIDS) datafrom :ANT PDimagesBIDS/ └── ANTPD └── sub-RC4125 └── ses-1 ├── anat │   ├── sub-RC4125_ses-1_T1w.json │   └── sub-RC4125_ses-1_T1w.nii.gz ├── dwi │   ├── sub-RC4125_ses-1_dwi.bval │   ├── sub-RC4125_ses-1_dwi.bvec │   ├── sub-RC4125_ses-1_dwi.json │   └── sub-RC4125_ses-1_dwi.nii.gz └── func ├── 
sub-RC4125_ses-1_task-ANT_run-1_bold.json ├── sub-RC4125_ses-1_task-ANT_run-1_bold.nii.gz └── sub-RC4125_ses-1_task-ANT_run-1_events.tsvimportantspymmimportpandasaspdimportglobasglobfns=glob.glob("imagesBIDS/ANTPD/sub-RC4125/ses-*/*/*gz")fns.sort()randid='000'# BIDS does not have unique image ids - so we assign onestudycsv=antspymm.generate_mm_dataframe('ANTPD','sub-RC4125','ses-1',randid,'T1w','/Users/stnava/data/openneuro/imagesBIDS/','/Users/stnava/data/openneuro/processed/',t1_filename=fns[0],dti_filenames=[fns[1]],rsf_filenames=[fns[2]])studycsv2=studycsv.dropna(axis=1)mmrun=antspymm.mm_csv(studycsv2,mysep='_')# aggregate the data after you've run on many subjects# studycsv_all would be the vstacked studycsv2 data frameszz=antspymm.aggregate_antspymm_results_sdf(studycsv_all,subject_col='subjectID',date_col='date',image_col='imageID',base_path=bd,splitsep='_',idsep='-',wild_card_modality_id=True,verbose=True)NRG exampleNRG format detailshereimagesNRG/ └── ANTPD └── sub-RC4125 └── ses-1 ├── DTI │   └── 000 │   ├── ANTPD_sub-RC4125_ses-1_DTI_000.bval │   ├── ANTPD_sub-RC4125_ses-1_DTI_000.bvec │   ├── ANTPD_sub-RC4125_ses-1_DTI_000.json │   └── ANTPD_sub-RC4125_ses-1_DTI_000.nii.gz ├── T1w │   └── 000 │   └── ANTPD_sub-RC4125_ses-1_T1w_000.nii.gz └── rsfMRI └── 000 └── ANTPD_sub-RC4125_ses-1_rsfMRI_000.nii.gzimportantspymmimportpandasaspdimportglobasglobt1fn=glob.glob("imagesNRG/ANTPD/sub-RC4125/ses-*/*/*/*T1w*gz")[0]# flair also takes a single imagedtfn=glob.glob("imagesNRG/ANTPD/sub-RC4125/ses-*/*/*/*DTI*gz")rsfn=glob.glob("imagesNRG/ANTPD/sub-RC4125/ses-*/*/*/*rsfMRI*gz")studycsv=antspymm.generate_mm_dataframe('ANTPD','sub-RC4125','ses-1','000','T1w','/Users/stnava/data/openneuro/imagesNRG/','/Users/stnava/data/openneuro/processed/',t1fn,rsf_filenames=rsfn,dti_filenames=dtfn)studycsv2=studycsv.dropna(axis=1)mmrun=antspymm.mm_csv(studycsv2,mysep='_')Population studiesLarge population studies may need more care to ensure everything is reproducibly organized and 
processed. In this case, we recommend:1. blind qcfirst run the blind qc function that would look liketests/blind_qc.py. this gives a quick view of the relevant data to be processed. it provides both figures and summary data for each 3D and 4D (potential) input image.2. collect outlierness measurementsthe outlierness function gives one an idea of how each image relates to the others in terms of similarity. it may or may not succeed in detecting true outliers but does a reasonable job of providing some rank ordering of quality when there is repeated data. seetests/outlierness.py.3. match the modalities for each subject and timepointthis occurs at the end oftests/outlierness.py. the output of the function will select the best quality time point multiple modality collection and will define the antspymm cohort in a reproducible manner.4. run the antspymm processingfor each subject/timepoint, one would run:# ... imports above ...studyfn="matched_mm_data2.csv"df=pd.read_csv(studyfn)index=20# 20th subject/timepointcsvfns=df['filename']csvrow=df[df['filename']==csvfns[index]]csvrow['projectID']='MyStudy'############################################################################################template=ants.image_read("~/.antspymm/PPMI_template0.nii.gz")bxt=ants.image_read("~/.antspymm/PPMI_template0_brainmask.nii.gz")template=template*bxttemplate=ants.crop_image(template,ants.iMath(bxt,"MD",12))studycsv2=antspymm.study_dataframe_from_matched_dataframe(csvrow,rootdir+"nrgdata/data/",rootdir+"processed/",verbose=True)mmrun=antspymm.mm_csv(studycsv2,dti_motion_correct='SyN',dti_denoise=True,normalization_template=template,normalization_template_output='ppmi',normalization_template_transform_type='antsRegistrationSyNQuickRepro[s]',normalization_template_spacing=[1,1,1])5. 
aggregate resultsif you have a large population study then the last step would look like this:importantspymmimportglobasglobimportreimportpandasaspdimportosdf=pd.read_csv("matched_mm_data2.csv")pdir='./processed/'df['projectID']='MYSTUDY'merged=antspymm.merge_wides_to_study_dataframe(df,pdir,verbose=False,report_missing=False,progress=100)print(merged.shape)merged.to_csv("mystudy_results_antspymm.csv")useful tools for converting dicom to niftidcm2niixdicom2niftiimportdicom2niftidicom2nifti.convert_directory(dicom_directory,output_folder,compression=True,reorient=True)simpleitkimportSimpleITKassitkimportsysimportosimportglobasglobimportantsdd='dicom'oo='dicom2nifti'folders=glob.glob('dicom/*')k=0forfinfolders:print(f)reader=sitk.ImageSeriesReader()ff=glob.glob(f+"/*")dicom_names=reader.GetGDCMSeriesFileNames(ff[0])iflen(ff)>0:fnout='dicom2nifti/image_'+str(k).zfill(4)+'.nii.gz'ifnotexists(fnout):failed=Falsereader.SetFileNames(dicom_names)try:image=reader.Execute()except:failed=Truepassifnotfailed:size=image.GetSpacing()print(image.GetMetaDataKeys())print(size)sitk.WriteImage(image,fnout)img=ants.image_read(fnout)img=ants.iMath(img,'TruncateIntensity',0.02,0.98)ants.plot(img,nslices=21,ncol=7,axis=2,crop=True)else:print(f+": "+'empty')k=k+1build docspdoc -o ./docs antspymm --htmlssl errorif you get an odd certificate error when callingforce_download, try:importsslssl._create_default_https_context=ssl._create_unverified_contextto publish a releaserm -r -f build/ antspymm.egg-info/ dist/ python3 setup.py sdist bdist_wheel twine upload --repository antspymm dist/*
antspynet
ANTsPyNetA collection of deep learning architectures and applications ported to the python language and tools for basic medical image processing. Based onkerasandtensorflowwith cross-compatibility with our R analogANTsRNet.Documentation pagehttps://antsx.github.io/ANTsPyNet/.For MacOS and Linux, may install with:pipinstallantspynetArchitecturesImage voxelwise segmentation/regressionU-Net (2-D, 3-D)U-Net + ResNet (2-D, 3-D)Dense U-Net (2-D, 3-D)Image classification/regressionAlexNet (2-D, 3-D)VGG (2-D, 3-D)ResNet (2-D, 3-D)ResNeXt (2-D, 3-D)WideResNet (2-D, 3-D)DenseNet (2-D, 3-D)Object detectionImage super-resolutionSuper-resolution convolutional neural network (SRCNN) (2-D, 3-D)Expanded super-resolution (ESRCNN) (2-D, 3-D)Denoising auto encoder super-resolution (DSRCNN) (2-D, 3-D)Deep denoise super-resolution (DDSRCNN) (2-D, 3-D)ResNet super-resolution (SRResNet) (2-D, 3-D)Deep back-projection network (DBPN) (2-D, 3-D)Super resolution GANRegistration and transformsSpatial transformer network (STN) (2-D, 3-D)Generative adverserial networksGenerative adverserial network (GAN)Deep Convolutional GANWasserstein GANImproved Wasserstein GANCycle GANSuper resolution GANClusteringDeep embedded clustering (DEC)Deep convolutional embedded clustering (DCEC)ApplicationsMRI super-resolutionMulti-modal brain extractionT1T1"no brainer"FLAIRT2FABOLDT1/T2 infantSix-tissue Atropos brain segmentationCortical thicknessBrain ageHippMapp3r hippocampal segmentationSysu white matter hyperintensity segmentationHyperMapp3r white matter hyperintensity segmentationHypothalamus segmentationClaustrum segmentationDeep FlashDesikan-Killiany-Tourville cortical labelingCerebellum segmentation, parcellation, and thicknessMRI modality classificationLung extractionProtonCTFunctional lung segmentationNeural style transferImage quality assessmentTID2013KonIQ-10kMixture density networks (MDN)RelatedTraining scriptsInstallationANTsPyNet Installation:Option 1:$ git clone https://github.com/ANTsX/ANTsPyNet 
$ cd ANTsPyNet $ python setup.py installPublicationsNicholas J. Tustison, Michael A Yassa, Batool Rizvi, Andrew J. Holbrook, Mithra Sathishkumar, James C. Gee, James R. Stone, and Brian B. Avants. ANTsX neuroimaging-derived structural phenotypes of UK Biobank.(medrxiv)Nicholas J. Tustison, Talissa A. Altes, Kun Qing, Mu He, G. Wilson Miller, Brian B. Avants, Yun M. Shim, James C. Gee, John P. Mugler III, and Jaime F. Mata. Image- versus histogram-based considerations in semantic segmentation of pulmonary hyperpolarized gas images.Magnetic Resonance in Medicine, 86(5):2822-2836, Nov 2021.(pubmed)Andrew T. Grainger, Arun Krishnaraj, Michael H. Quinones, Nicholas J. Tustison, Samantha Epstein, Daniela Fuller, Aakash Jha, Kevin L. Allman, Weibin Shi. Deep Learning-based Quantification of Abdominal Subcutaneous and Visceral Fat Volume on CT Images,Academic Radiology, 28(11):1481-1487, Nov 2021.(pubmed)Nicholas J. Tustison, Philip A. Cook, Andrew J. Holbrook, Hans J. Johnson, John Muschelli, Gabriel A. Devenyi, Jeffrey T. Duda, Sandhitsu R. Das, Nicholas C. Cullen, Daniel L. Gillen, Michael A. Yassa, James R. Stone, James C. Gee, and Brian B. Avants for the Alzheimer’s Disease Neuroimaging Initiative. The ANTsX ecosystem for quantitative biological and medical imaging.Scientific Reports. 11(1):9068, Apr 2021.(pubmed)Nicholas J. Tustison, Brian B. Avants, and James C. Gee. Learning image-based spatial transformations via convolutional neural networks: a review,Magnetic Resonance Imaging, 64:142-153, Dec 2019.(pubmed)Nicholas J. Tustison, Brian B. Avants, Zixuan Lin, Xue Feng, Nicholas Cullen, Jaime F. Mata, Lucia Flors, James C. Gee, Talissa A. Altes, John P. Mugler III, and Kun Qing. Convolutional Neural Networks with Template-Based Data Augmentation for Functional Lung Image Quantification,Academic Radiology, 26(3):412-423, Mar 2019.(pubmed)Andrew T. Grainger, Nicholas J. Tustison, Kun Qing, Rene Roy, Stuart S. Berr, and Weibin Shi. 
Deep learning-based quantification of abdominal fat on magnetic resonance images.PLoS One, 13(9):e0204071, Sep 2018.(pubmed)Cullen N.C., Avants B.B. (2018) Convolutional Neural Networks for Rapid and Simultaneous Brain Extraction and Tissue Segmentation. In: Spalletta G., Piras F., Gili T. (eds) Brain Morphometry. Neuromethods, vol 136. Humana Press, New York, NYdoiAcknowledgmentsWe gratefully acknowledge the support of the NVIDIA Corporation with the donation of two Titan Xp GPUs used for this research.We gratefully acknowledge the grant support of theOffice of Naval ResearchandCohen Veterans Bioscience.
antspyt1w
ANTsPyT1w

Reference processing for T1-weighted neuroimages (human).

The outputs of these processes can be used for data inspection/cleaning/triage as well as for interrogating neuroscientific hypotheses.

This package also keeps track of the latest preferred algorithm variations for production environments.

Install by calling (within the source directory):

`python setup.py install`

or install via `pip install antspyt1w`.

What this will do:

- provide example data
- brain extraction
- denoising
- n4 bias correction
- brain parcellation into tissues, hemispheres, lobes and regions
- hippocampus specific segmentation
- t1 hypointensity segmentation and classification (exploratory)
- deformable registration with robust and repeatable parameters
- registration-based labeling of major white matter tracts
- helpers that organize and annotate segmentation variables into data frames
- hypothalamus segmentation

FIXME/TODO

The two most time-consuming processes are hippocampus-specific segmentation (because it uses augmentation) and registration. Both take 10-20 minutes depending on your available computational resources and the data. Both could be made computationally cheaper at the cost of accuracy/reliability.

First time setup:

```python
import antspyt1w
antspyt1w.get_data()
```

NOTE: `get_data` has a `force_download` option to make sure the latest package data is installed.

Example processing:

```python
import os
os.environ["TF_NUM_INTEROP_THREADS"] = "8"
os.environ["TF_NUM_INTRAOP_THREADS"] = "8"
os.environ["ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS"] = "8"

import antspyt1w
import antspynet
import ants

##### get example data + reference templates
# NOTE: PPMI-3803-20120814-MRI_T1-I340756 is a good example of our naming style
# Study-SubjectID-Date-Modality-UniqueID
# where Modality could also be measurement or something else
fn = antspyt1w.get_data('PPMI-3803-20120814-MRI_T1-I340756', target_extension='.nii.gz')
img = ants.image_read(fn)

# generalized default processing
myresults = antspyt1w.hierarchical(img, output_prefix='/tmp/XXX')

##### organize summary data into data frames - user should pivot these to columns
# and attach to unique IDs when accumulating for large-scale studies
# see below for how to easily pivot into wide format
# https://stackoverflow.com/questions/28337117/how-to-pivot-a-dataframe-in-pandas
```

An example "full study" (at small scale) is illustrated in `~/.antspyt1w/run_dlbs.py`, which demonstrates/comments on:

- how to aggregate dataframes
- how to pivot to wide format
- how to join with a demographic/metadata file
- visualizing basic outcomes

SSL error

If you get an odd certificate error when calling `force_download`, try:

```python
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
```

To publish a release:

```
python3 -m build
python -m twine upload -u username -p password dist/*
```
antspyx
ANTsPy (Advanced Normalization Tools in Python) wraps the core C++-based ANTs tools for registration, segmentation, and other basic image processing.
antstar
Python lib to find a path in a 2D environment to an objective, with limited information about the surroundings.

Example: where `#` is a wall, `S` the start point of our ant, `X` the objective, `,` the memory of the road travelled since being blocked by a wall, and `A` is the ant.
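A minimal sketch of the grid notation described above, using a plain breadth-first search to reach the objective. This is illustrative only — it is not antstar's actual algorithm or API; the grid layout and the `find_path` helper are assumptions based on the description:

```python
from collections import deque

WALL, START, GOAL = "#", "S", "X"

def find_path(grid):
    """Breadth-first search from S to X on a list-of-strings grid.

    Returns the list of (row, col) cells of a shortest path, or None if
    the objective is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == START)
    queue = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == GOAL:
            # walk back through predecessors to recover the path
            path, cell = [], (r, c)
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != WALL and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

grid = [
    "S..#.",
    ".#.#.",
    ".#...",
    "...#X",
]
path = find_path(grid)
print(len(path) - 1)  # shortest number of steps from S to X: 7
```

antstar itself differs in that the ant only sees its immediate surroundings and keeps a memory (`,`) of the road taken since last hitting a wall, rather than knowing the whole grid up front.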
antsy
UNKNOWN
antsys
antsys

A general purpose ant colony optimization system.

Overview

Ant Colony Optimization (ACO) is a technique, inspired by the foraging behavior of ants, for finding good solutions to discrete optimization problems. Its central metaphor resides in the indirect communication mechanism, through chemical signals (pheromones), used by many species of social ants in their search for food sources.

The same inspiration was built into the antsys package, which takes advantage of python's flexibility to be easily applied to different optimization problems.

Installation

Installation via pip:

`pip3 install antsys`

Usage Example: Travelling Salesman Problem

The Travelling Salesman Problem (TSP) is the challenge of finding the shortest yet most efficient route for a person to take given a list of specific destinations. It is a well-known optimization problem commonly solved by ACO algorithms.

Import necessary packages and modules:

```python
from antsys import AntWorld
from antsys import AntSystem
import numpy as np
import random
```

Generate a travelling salesman problem instance:

```python
# generate cities
print('cities:')
print('| id | x | y |')
cities = []
for city in range(10):
    x = random.uniform(-100, 100)
    y = random.uniform(-100, 100)
    cities.append((city, x, y))
    print('|%4i|%9.4f|%9.4f|' % cities[city])
```

The function `salesman_rules` will append the euclidean distance between cities to the edges:

```python
def salesman_rules(start, end):
    return [((start[1]-end[1])**2 + (start[2]-end[2])**2)**0.5]
```

The function `salesman_cost` will be used to calculate the cost of any possible solution (path):

```python
def salesman_cost(path):
    cost = 0
    for edge in path:
        cost += edge.info
    return cost
```

`salesman_heuristic` is a simple heuristic that will help the ants make better choices. Edges with small distances have a slightly higher probability of selection:

```python
def salesman_heuristic(path, candidate):
    return candidate.info
```

This function shows the details of a possible solution (`sys_resp`):

```python
def print_solution(sys_resp):
    print('total cost = %g' % sys_resp[0])
    print('path:')
    print('| id | x | y |--distance-->| id | x | y |')
    for edge in sys_resp[2]:
        print('|%4i|%9.4f|%9.4f|--%8.4f-->|%4i|%9.4f|%9.4f|' %
              (edge.start[0], edge.start[1], edge.start[2], edge.info,
               edge.end[0], edge.end[1], edge.end[2]))
```

The world (`new_world`) is created from the nodes (`cities`) as a complete graph. At this point, `salesman_rules`, `salesman_cost` and `salesman_heuristic` are passed as, respectively, `r_func`, `c_func` and `h_func`. These functions are bound to the world, and the first one has an important role in its structure:

```python
new_world = AntWorld(cities, salesman_rules, salesman_cost, salesman_heuristic)
```

Configure `ant_opt` as an `AntSystem`:

```python
ant_opt = AntSystem(world=new_world, n_ants=50)
```

Execute the optimization loop:

```python
ant_opt.optimize(50, 20)
```

Show details about the best solution found:

```python
print_solution(ant_opt.g_best)
```

Examples can be found here as jupyter notebooks.
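The rule and cost helpers in the example are plain functions and can be sanity-checked without the ACO machinery. A minimal sketch — the `SimpleNamespace` edge stand-in is an assumption, mimicking only the `.info` attribute that antsys edges expose to `salesman_cost`:

```python
from types import SimpleNamespace

def salesman_rules(start, end):
    # city tuples are (id, x, y); return the euclidean distance as edge info
    return [((start[1]-end[1])**2 + (start[2]-end[2])**2)**0.5]

def salesman_cost(path):
    # a path's cost is the sum of its edge distances
    cost = 0
    for edge in path:
        cost += edge.info
    return cost

a, b, c = (0, 0.0, 0.0), (1, 3.0, 4.0), (2, 3.0, 0.0)
# stand-in edges carrying the distances computed by salesman_rules
path = [SimpleNamespace(info=salesman_rules(a, b)[0]),
        SimpleNamespace(info=salesman_rules(b, c)[0])]
print(salesman_rules(a, b)[0])  # 5.0 (a 3-4-5 triangle)
print(salesman_cost(path))      # 9.0
```

Because the rule function returns a list, antsys can attach several pieces of information to each edge; the TSP example only needs the distance.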
anttzkc
No description available on PyPI.
antu
# AntU

Universal data IO and neural network modules in NLP tasks.

- data IO is a universal module in Natural Language Processing systems and is not based on any framework (like TensorFlow, PyTorch, MXNet, Dynet, ...).
- the neural network module contains the neural network structures commonly used in NLP tasks. We want to design commonly used structures for each neural network framework. We will continue to develop this module.

# Requirements

- Python>=3.6
- bidict==0.17.5
- numpy==1.15.4
- numpydoc==0.8.0
- overrides==1.9
- pytest==4.0.2

If you need the dynet neural network:

- dynet>=2.0

# Installing via pip

```bash
pip install antu
```

# Resources

- [Documentation](https://antu.readthedocs.io/en/latest/index.html)
- [Source Code](https://github.com/AntNLP/antu)
antuantu555
UNKNOWN
antvis
No description available on PyPI.