package
|
package-description
|
---|---|
annotationjf
|
No description available on PyPI.
|
annotation-refinery
|
The Annotation Refinery python package consists of functions that process
publicly available annotated sets of genes, such as Gene Ontology and Disease
Ontology terms.

Configuration files

The Annotation Refinery requires at least two .ini configuration files in
the main directory to run:

A main_config.ini file with the main configuration settings, and
At least one <species>.ini file, which will contain the locations of
the desired annotation files for that species, among other things. Users can
add configuration files in the main directory for as many species as
they want the refinery to process.

Optionally, there can also be a secrets.ini file, which stores values like
usernames and passwords for access to restricted URLs.

The Main Configuration File

The main configuration file includes settings like the location(s) of the
species file(s), where the output of the refinery (the processed genesets)
should be loaded to, where annotation files should be downloaded to,
and optionally, the location of the secrets file.

[main]
SECRETS_FILE: secrets.ini
PROCESS_TO: Tribe
# All other download folders in this files should be folders within
# this root folder
[download_folder]
BASE_DOWNLOAD_FOLDER: download_files
[Tribe parameters]
TRIBE_URL: https://tribe.greenelab.com
[species files]
SPECIES_FILES: human.ini

The Species File(s)

Each species file should contain the URLs of the desired annotation files to be
downloaded.# File for human settings
[species_info]
SCIENTIFIC_NAME: Homo sapiens
TAXONOMY_ID: 9606
SPECIES_DOWNLOAD_FOLDER: download_files/Human
# ***********************************************
# Below, add as sections the types of annotations
# that should be downloaded and processed
# ***********************************************
[GO]
DOWNLOAD: TRUE
GO_OBO_URL: ftp://ftp.geneontology.org/go/ontology/obo_format_1_2/gene_ontology.1_2.obo
ASSOC_FILE_URL: ftp://ftp.geneontology.org/go/gene-associations/gene_association.goa_human.gz
EVIDENCE_CODES: EXP, IDA, IPI, IMP, IGI, IEP
TAG_MAPPING_FILE: tag_mapping_files/brenda-gobp-all_mapping.dir.v2.txt
GO_ID_COLUMN: 2
GO_NAME_COLUMN: 3
TAG_COLUMN: 1
TAG_FILE_HEADER: TRUE
[KEGG]
DOWNLOAD: TRUE
KEGG_ROOT_URL: http://rest.kegg.jp
DB_INFO_URL: /info/kegg
SETS_TO_DOWNLOAD: /link/hsa/pathway, /link/hsa/module, /link/hsa/disease
SET_INFO_DIR: /get/
# This is the type of gene identifier used by KEGG for this species
XRDB: Entrez
[DO]
DOWNLOAD: TRUE
DO_OBO_URL: http://sourceforge.net/p/diseaseontology/code/HEAD/tree/trunk/HumanDO.obo?format=raw
MIM2GENE_URL: http://omim.org/static/omim/data/mim2gene.txt
GENEMAP_URL: http://data.omim.org/downloads/<SecretKey>/genemap.txt
# This is the type of gene identifier used by DO
XRDB: Entrez
TAG_MAPPING_FILE: tag_mapping_files/tissue-disease_curated-associations.txt
DO_ID_COLUMN: 2
DO_NAME_COLUMN: 3
TAG_COLUMN: 1
TAG_FILE_HEADER: TRUE

The Secrets File

The secrets file contains things like usernames and passwords for databases,
secret keys for APIs where annotation files will be downloaded from, etc.

[OMIM API secrets]
SECRET_KEY: ExampleSecretKey
[Tribe secrets]
TRIBE_ID: asdf1234
TRIBE_SECRET: qwerty1234
USERNAME: example_username
PASSWORD: password

Instructions for getting an OMIM API secret key can be found here: http://omim.org/downloads
Instructions for getting the Tribe secrets can be found here: http://tribe-greenelab.readthedocs.io/en/latest/api.html#creating-new-resources-through-tribe-s-api
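As an illustration of how these .ini files fit together, here is a short configparser sketch; it is not the refinery's internal code, just one way to inspect the files described above:

# Illustrative only: read the main config and the species file(s) it points to.
import configparser

main_config = configparser.ConfigParser()
main_config.read("main_config.ini")

for species_file in main_config["species files"]["SPECIES_FILES"].split(","):
    species_config = configparser.ConfigParser()
    species_config.read(species_file.strip())
    print(species_config["species_info"]["SCIENTIFIC_NAME"])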
|
annotations
|
UNKNOWN
|
annotations-api
|
No description available on PyPI.
|
annotation-tool
|
Installation

All stable versions can be installed from PyPI by using pip or your favorite package manager:

pip install annotation-tool

You can get pre-published versions from TestPyPI or this repository.

Test PyPI:

pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ annotation-tool

From Source:

pip install git+https://github.com/TUD-Patrec/annotation-tool@master

After installation the annotation tool can be run as simply as:

annotation-tool

Development

Requirements:
Python 3.8 or higher
poetry 1.2 or higher
make
docker (if you want to build the binaries)

For installing the development environment run:

make setup
|
annotationtools
|
annotation-tools

Tools for working with annotations used for training machine learning models. This package currently
supports reading and writing CVAT Images 1.1 files.
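For context, a generic sketch of reading a CVAT Images 1.1 XML file with the standard library; this is not the annotation-tools API, only an illustration of the kind of file the package handles (the file path and attribute names follow the usual CVAT export layout and are assumptions here):

import xml.etree.ElementTree as ET

tree = ET.parse("annotations.xml")  # example path to a CVAT Images 1.1 export
for image in tree.getroot().iter("image"):
    name = image.get("name")
    for box in image.iter("box"):
        print(name, box.get("label"), box.get("xtl"), box.get("ytl"), box.get("xbr"), box.get("ybr"))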
|
annotation-validation
|
Annotation Validation

Annotation Validation ensures type checking for function annotations.

Free software: MIT license
Documentation: https://annotation-validation.readthedocs.io.

Features
Validates input to match data types of annotations of function arguments
Validates output to match data type of annotation of return argument
Used as a decorator!

Possible Improvements
Validation of ranges of input
Throw warnings instead of errors
Logging

Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
Inspiration is from this blog post.

History
0.1.1 (2019-02-05)
Adding caching to getfullargspec and get_type_hints.
0.1.0 (2019-02-04)
First release on PyPI.
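To make the decorator idea concrete, here is a minimal, self-contained sketch of annotation-based validation; it illustrates the technique only and is not the actual annotation-validation API:

# Illustrative sketch, not the annotation-validation package itself.
import functools
import inspect

def validate(func):
    hints = func.__annotations__
    sig = inspect.signature(func)
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError("argument '%s' must be %s" % (name, expected.__name__))
        result = func(*args, **kwargs)
        expected = hints.get("return")
        if isinstance(expected, type) and not isinstance(result, expected):
            raise TypeError("return value must be %s" % expected.__name__)
        return result
    return wrapper

@validate
def add(a: int, b: int) -> int:
    return a + b

add(1, 2)    # OK
add(1, "2")  # raises TypeError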
|
annotator
|
Annotator Store

This is a backend store for Annotator.

The functionality can roughly be separated into two parts:

An abstraction layer wrapping Elasticsearch, to easily manage annotation
storage. It features authorization to filter search results according to
their permission settings.
A Flask blueprint for a web server that exposes an HTTP API to the annotation
storage. To use this functionality, build this package with the [flask] option.

Getting going

You’ll need a recent version of Python (Python 2 >= 2.6
or Python 3 >= 3.3) and ElasticSearch (>= 1.0.0)
installed.

The quickest way to get going requires the pip and virtualenv tools
(easy_install virtualenv will get them both). Run the
following in the repository root:

virtualenv pyenv
source pyenv/bin/activate
pip install -e .[flask]
cp annotator.cfg.example annotator.cfg
python run.py

You should see something like:

* Running on http://127.0.0.1:5000/
* Restarting with reloader...

If you wish to customize the configuration of the Annotator Store, make
your changes to annotator.cfg or dive into run.py.

Additionally, the HOST and PORT environment variables override
the default socket binding of address 127.0.0.1 and port 5000.

Store API

The Store API is designed to be compatible with the Annotator. The annotation store, a
JSON-speaking REST API, will be mounted at /api by default. See the Annotator
documentation for details.

Running tests

We use nosetests to run tests. You can just pip install -e .[testing], ensure ElasticSearch is running, and
then:

$ nosetests
......................................................................................
----------------------------------------------------------------------
Ran 86 tests in 19.171s
OK

Alternatively (and preferably), you should install Tox, and then run tox. This will run
the tests against multiple versions of Python (if you have them
installed).

Please open an issue if you find that the tests don’t all pass on your machine, making sure to include
the output of pip freeze.

Changelog

All notable changes to this project will be documented in this file. This
project endeavours to adhere to Semantic Versioning.

0.14.2 2015-07-17
FIXED: Annotation.search no longer mutates the passed query.
FIXED/BREAKING CHANGE: Document.get_by_uri() no longer returns a list for
empty resultsets, instead returningNone.0.14.1 2015-03-05FIXED: Document plugin doesn’t drop links without a type. The annotator
client generates a typeless link from the document href. (#116)ADDED: the search endpoint now supports ‘before’ and ‘after query parameters,
which can be used to return annotations created between a specific time
period.0.14 - 2015-02-13ADDED: the search endpoint now supports ‘sort’ and ‘order’ query parameters,
which can be used to control the sort order of the returned results.FIXED: previously only one document was returned when looking for equivalent
documents (#110). Now the Document model tracks all discovered equivalent
documents and keeps each document object up-to-date with them all.BREAKING CHANGE: Document.get_all_by_uris() no longer exists. Use
Document.get_by_uri() which should return a single document containing all
equivalent URIs. (You may wish to update your index by fetching all documents
and resaving them.)FIXED: the search_raw endpoint no longer throws an exception when the
‘fields’ parameter is provided.0.13.2 - 2014-12-03Avoid a confusing error about reindexing when annotator is used as a
library and not a standalone application (#107).0.13.1 - 2014-12-03Reindexer can run even when target exists.0.13.0 - 2014-12-02Slight changes to reindex.py to ease subclassing it.0.12.0 - 2014-10-06A tool for migrating/reindexing elasticsearch (reindex.py) was added (#103).The store returns more appropriate HTTP response codes (#96).Dropped support for ElasticSearch versions before 1.0.0 (#92).The default search query has been changed from a term-filtered “match all” to
a set of “match queries”, resulting in more liberal interpretations of
queries (#89).The default elasticsearch analyzer for annotation fields has been changed to
“keyword” in order to provide more consistent case-sensitivity behaviours
(#73, #88).Made Flask an optional dependency: it is now possible to use the persistence
components of the project without needing Flask (#76).Python 3 compatibility (#72).0.11.2 - 2014-07-25SECURITY: Fixed bug that allowed authenticated users to overwrite annotations
on which they did not have permissions (#82).0.11.1 - 2014-04-09Fixed support for using ElasticSearch instances behind HTTP Basic auth0.11.0 - 2014-04-08Add support for ElasticSearch 1.0Create changelog
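To illustrate the Store API described earlier (a JSON-speaking REST API mounted at /api by default), here is a hypothetical request sketch; the exact routes and parameters should be checked against the Annotator documentation:

import requests

# Assumes the store is running locally on the default address; the /api/search
# route shown here is illustrative, not a guaranteed endpoint name.
resp = requests.get("http://127.0.0.1:5000/api/search", params={"limit": 10})
print(resp.json())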
|
annotator_store
|
annotator_store is a Django application meant for use within a Django project as an annotator.js 2.x annotation
store backend, and implements the Annotator Storage API. annotator_store was originally developed as a component of Readux.

License

This software is distributed under the Apache 2.0 License.

Installation

Use pip to install: pip install

You can also install from GitHub. Use a branch or tag name, e.g. @develop, to install a specific tagged release or branch:

pip install git+https://github.com/Princeton-CDH/django-annotator-store.git@develop#egg=annotator_store

Configuration

Add annotator_store to installed applications and make sure that other
required components are enabled:

INSTALLED_APPS = (
...
'django.contrib.auth',
'django.contrib.admin',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'annotator_store',
...
)Include the annotation storage API urls at the desired base url with the
namespace:from annotator_store import views as annotator_views
urlpatterns = [
# annotations
url(r'^annotations/api/', include('annotator_store.urls', namespace='annotation-api')),
# annotatorjs doesn't handle trailing slash in api prefix url
url(r'^annotations/api', annotator_views.AnnotationIndex.as_view(), name='annotation-api-prefix'),
]

Run migrations to create annotation database tables:

python manage.py migrate

Note: If you want per-object permissions on individual annotations (rather than
the standard django type-based permissions), you must also install django-guardian and include guardian in your INSTALLED_APPS. Per-object permissions must be turned on in Django
settings by setting ANNOTATION_OBJECT_PERMISSIONS to True.

Custom Annotation Model

This module is designed to allow the use of a custom Annotation model, in order
to add functionality or relationships to other models within an application.
To take advantage of this feature, you should extend the abstract model annotator_store.models.BaseAnnotation and configure your model in
Django settings, e.g.:

ANNOTATOR_ANNOTATION_MODEL = 'myapp.LocalAnnotation'

If you want per-object permissions on your annotation model, you should
extend annotator_store.models.AnnotationWithPermissions rather than
the base annotation class.

Note: Per-object permissions require that a permissions plugin be
included when you initialize your annotator.js Annotator object.
That code is currently available as a plugin in the Readux codebase.

Development instructions

This git repository uses git flow branching conventions.

Initial setup and installation:

recommended: create and activate a python virtualenv:

virtualenv annotator-store
source annotator-store/bin/activate

pip install the package with its python dependencies:

pip install -e .

Unit Testing

Unit tests are run with py.test but use
Django test classes for convenience and compatibility with django test suites.
Running the tests requires a minimal settings file for Django required
configurations.

Copy sample test settings and add a SECRET_KEY:

cp ci/testsettings.py testsettings.py

To run the tests, either use the configured setup.py test command:

python setup.py test

Or install test requirements and use py.test directly:

pip install -e '.[test]'
py.test

Sphinx Documentation

To work with the sphinx documentation, install sphinx directly via pip
or via:

pip install -e '.[docs]'

Documentation can be built in the docs directory using:

make html
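For reference, a minimal illustrative settings fragment combining the optional switches described in the Configuration and Custom Annotation Model sections above (not a complete settings.py):

# Illustrative settings.py fragment; adjust the app list and model path to your project.
INSTALLED_APPS += ('guardian',)                       # needed for per-object permissions
ANNOTATION_OBJECT_PERMISSIONS = True                  # enable per-object permissions
ANNOTATOR_ANNOTATION_MODEL = 'myapp.LocalAnnotation'  # custom annotation model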
|
annot-attrs
|
annot_attrsDesigned to get list of annotated but not defined/not used attrs from class (not instance!).may be helpful further in instance to check that really have values.Featuresget set of unused attributes from class(not instance!)work with nested classesget valuesby case insensitive namesby dict key access methodwork on any object (over Obj parameter!)at least for NamedTupleLicenseSee theLICENSEfile for license rights and limitations (MIT).Release historySee theHISTORY.mdfile for release history.Installationpip install annot-attrsImportfromannot_attrsimport*GUIDE1. inheritanceBEST practice - dont mess classes! use as separated object!fromannot_attrsimport*classCls:ATTR1:intATTR2:int=2obj=Cls(1)assertAnnotAttrs().annots_get_set(obj)=={"ATTR1",}assertAnnotAttrs().annots_get_dict(obj)=={"ATTR1":1,}fromannot_attrsimport*classCls(AnnotAttrs):ATTR1:intATTR2:int=2assertCls().annots_get_set()=={"ATTR1",}classCls2(Cls):ATTR1:int=2ATTR3:intassertCls2().annots_get_set()=={"ATTR1","ATTR3",}inst=Cls2()inst.ATTR1=1inst.ATTR2=1inst.ATTR3=1assertCls2().annots_get_set()=={"ATTR1","ATTR3",}assertCls().ATTR2==2assertCls().attr2==2assertCls()["ATTR2"]==2assertCls()["attr2"]==2obj=Cls()try:obj.annots_get_dict()exceptExx_AttrNotExist:passelse:assertFalseobj.ATTR1=1assertobj.annots_get_dict()=={"ATTR1":1}2. Indepandant usagefromannot_attrsimport*try:classCls(AnnotAttrs,NamedTuple):ATTR1:intATTR2:int=2exceptTypeError:# TypeError: can only inherit from a NamedTuple type and Genericpasselse:assertTrueclassCls(NamedTuple):ATTR1:intATTR2:int=2obj=Cls(1)assertAnnotAttrs().annots_get_set(obj)=={"ATTR1",}assertAnnotAttrs().annots_get_dict(obj)=={"ATTR1":1}
|
annotell-auth
|
Annotell Authentication

Python 3 library providing foundations for Annotell Authentication
on top of the requests or httpx libraries.

Install with pip install annotell-auth[requests] or pip install annotell-auth[httpx]

Builds on the standard OAuth 2.0 Client Credentials flow. There are a few ways to provide auth credentials to our api
clients. Annotell Python clients such as in annotell-input-api accept an auth parameter that
can be set explicitly, or you can omit it and use environment variables.

There are a few ways to set your credentials in auth:

Set the environment variable ANNOTELL_CREDENTIALS to point to your Annotell Credentials file.
The credentials will contain the Client Id and Client Secret.
Set to the credentials file path like auth="~/.config/annotell/credentials.json"
Set environment variables ANNOTELL_CLIENT_ID and ANNOTELL_CLIENT_SECRET
Set to credentials tuple auth=(client_id, client_secret)

API clients such as the InputApiClient accept this auth parameter.

Under the hood, they commonly use the AuthSession class, which implements a requests session with automatic token
refresh. An httpx implementation is also available.

from annotell.auth.requests.auth_session import RequestsAuthSession

sess = RequestsAuthSession()
# make call to some Annotell service with your token. Use default requests
sess.get("https://api.annotell.com")

Changelog

2.0.0 (2022-05-02)
Refactor for backend separation, with optional dependencies for either httpx or requests.

1.8.0 (2022-04-12)
Initial support for httpx (BETA). Solves refresh token expiry by reset without the FaultTolerantAuthRequestSession.
The library will be refactored by a breaking 2.0 release, and make the same changes to the requests version.
The authsession module backed by requests is untouched for now.

1.7.0 (2022-04-11)
Fix compatibility issue with authlib >= 1.0.0. Resetting the auth session failed when the refresh token had expired.

1.6.0 (2021-02-21)
Expose underlying requests.Session on FaultTolerantAuthRequestSession
Fix some thread locks

1.5.0 (2020-10-20)
Add FaultTolerantAuthRequestSession that handles token refresh on long running sessions.

1.4.0 (2020-04-16)
Add support for auth parameter, with path to credentials file or AnnotellCredentials object
Drop support for legacy API token
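As a quick reference for the credential options in the usage notes above, the environment variables can be set from a shell like this (values are placeholders):

export ANNOTELL_CREDENTIALS=~/.config/annotell/credentials.json
# ...or provide the client id and secret directly:
export ANNOTELL_CLIENT_ID=my_client_id
export ANNOTELL_CLIENT_SECRET=my_client_secret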
|
annotell-base-clients
|
Annotell Base Clients

Python 3 library providing base clients for interacting with the Annotell platform.

Changelog

All notable changes to this library will be documented in this file.

[0.1.0] - 2022-05-12
Update annotell-auth to 2.0.1
Support for pagination on API requests

[0.0.1] - 2022-04-01
Library created
|
annotell-export
|
Annotell Export API Client

Python 3 library providing access to the Annotell Export API.

To install with pip run: pip install annotell-export

Example

Set env ANNOTELL_CREDENTIALS to the credentials file provided to you by Annotell,
see annotell-auth.

Changelog

All notable changes to this project will be documented in this file.

[0.2.1] - 2022-08-09
Bump annotell-auth to 2.0.1

[0.2.0] - 2020-11-09
Update docs
Bump annotell-auth to 1.5.0

[0.1.0] - 2020-09-28
First draft
|
annotell-input-api
|
Annotell Input API ClientPython 3 library providing access to Annotell Input APITo install with pip runpip install annotell-input-apiDocumentation & Getting Started GuideDocumentation about how to use the library can be foundhereChangelogAll notable changes to this project will be documented in this file.[1.3.1] - 2022-09-13BugfixChanged naming in PrincipalPointDistortionCalibration[1.3.0] - 2022-09-09ChangedAdded PrincipalPointDistortionCalibration[1.2.6] - 2022-08-19BugfixFixed error when using deprecated calibration modelannotell.input_api.model.calibration.sensors.LidarCalibrationWe encourage updating to use the newer, typed calibration models inannotell.input_api.model.calibration.{camera, lidar}.*,refer to the calibration documentation for details.[1.2.5] - 2022-08-18ChangedChanged frominternalIdtosceneUuidon initialize inputBugfixFixed error when using deprecated calibration modelannotell.input_api.model.calibration.sensors.CameraCalibrationWe encourage updating to use the newer, typed calibration models inannotell.input_api.model.calibration.{camera, lidar}.*,refer to the calibration documentation for details.[1.2.4] - 2022-07-06BugfixFix bug where not specifying feature flags led to a runtime error.[1.2.3] - 2022-06-23AddedFeature flags can be specified when creating inputs. This can be used to disable motion compensation.[1.2.2] - 2022-06-20AddedAbility to specify field of view on LIDAR sensor calibrations.[1.2.1] - 2022-06-08AddedAbility to create inputs from a sceneAbility to create pre-annotations (limited support)[1.2.0] - 2022-05-05Changedannotell-base-clientsis now a dependency forannotell-input-api, used for http requests and file uploadsThe generator produced byclient.annotation.get_project_annotationswill now reliably iterate projects with thousands of annotationsAddedIncludeannotation_typesinInputMethod to remove annotation types from an input:remove_annotation_types[1.1.5] - 2022-03-29BugfixAdded 'archived' as ProjectBatchStatus[1.1.4] - 2022-02-17AddedMetaDataContainer added with reserved keywordregion.Added ImageMetaData for shutter start and end times, used for high accuracy multi-sensor projections[1.1.3] - 2022-01-05BugfixFixed download of annotations[1.1.2] - 2021-12-03ChangedRefactor offile_resource_client.py, split upload and download into separate classes.AddedNew parametertimeouttoInputApiClient, which decides what the timeout in seconds is for calls to Annotell API:s and Google Cloud Storage.Retries when aConnectionErroris raised during uploading/downloading of resources to/from Google Cloud Storage.New input statusPending. An input will have this status if the input has been validated, but the server is waiting for the associated data to
be uploaded. When all data is uploaded the status of the input will change toProcessing.[1.1.1] - 2021-11-11ChangedFixed import statement to work with python < 3.9[1.1.0] - 2021-11-03AddedTwo new methods has been added for downloading annotations:client.annotation.get_annotationandclient.annotation.get_project_annotations. These two methods will serve annotations
in the OpenLABEL format. With this change the previous method for fetching annotations,client.annotation.get_annotationshas become deprecated.Stricter typing for the calibrations, specifically the camera calibrations. Each of the supported camera calibration models now have their own class inannotell.input_api.model.calibration.camera. Documentation regarding use can be found here:DocumentationField of View support for camera calibrationsNew Parameter Xi for Fisheye camera calibration modelChangedTwo constructor arguments inInputApiClientandFileResourceClienthave been renamed frommax_upload_retry_attemps,max_upload_retry_wait_timetomax_retry_attempts,max_retry_wait_timerespectively.The old camera calibration class will be deprecated in favour of the new classes[1.0.8] - 2021-09-07AddedA new method has been added,get_inputs_with_uuids, which can fetch inputs using only theinput_uuid.annoutilhas a new flag when fetching inputs,annoutil inputs --uuids <comma_separated_uuids>.lidarsandlidars_sequenceinputs now available through the client.A new method has been added,add_annotation_type, which adds additional annotation types to be performed for an input.Changedclient.calibration.get_calibration()now properly deserializes calibration intoSensorCalibrationEntryinstead of keeping it as a dict.[1.0.7] - 2021-06-11Addedcreatedtimestamp when queryingget_inputsMethodget_annotation_typesChangedinput_list_idreplaced withannotation_typesfor all createable resources (Cameras,CamerasSeq,LidarsAndCameras,LidarsAndCamerasSeq).[1.0.6] - 2021-05-28Addedcalibration_idnow available for created inputs via theclient.input.get_inputsmethod.It is now possible to create your project batches on your own using theclient.project.create_batchmethod. Please contact Annotell's Professional Services
before using. More information available in thedocumentation.[1.0.5] - 2021-05-06ChangedChanged the height/width in the unity calibration created in the examples to match the image/videos.Added new field in the Input class, view_link. If the Input was successfully created it will contain an URL to view the input in the Annotell app.BugfixesFixed issue whereinvalidate_inputsdid not properly discard response content.[1.0.4] - 2021-04-26AddedAdded support for providing metadata in the form of a flat KV-pair both on an input-level for all input types, as well as on a frame-level for all sequential input types.ChangedMade SensorSpecification Optional for all input typesRemovedRemoved sensor_settings from SensorSpecification. The pixel dimensions are now
automatically inferred from videos and images.[1.0.3] - 2021-04-14AddedAdded an example for download_annotationsAdded check so thatinput_list_idandprojectis not used simultaneously when creating inputsChangedMade client and file_client internalFixed bug where client sometimes didn't raise exception when http calls return error codesBugfix where annoutil didn't work due to missing importClarified examples with different images/videos for different sensors and frames.RemovedRemoved unnecessary parametersframe_idandrelative_timestampfromlidars_and_cameras[1.0.1] - 2021-04-06Use backport ofdataclassesto support python 3.6.[1.0.0] - 2021-03-23New major release of client. Reworked to be more internally consistent between input types, and use of project and batch identifiers across methods. Seedocsfor more info.client.lidar_and_cameras.createreplacesclient.create_inputs_point_cloud_with_imagesclient.cameras.createreplacesclient.upload_and_create_images_input_jobclient.annotations.get_annotationsreplacesclient.download_annotations[0.4.4] - 2021-03-02Remove unused dependency on annotell-cloud-storage[0.4.3] - 2021-02-16Fixed import bug in annoutil CLI tool.[0.4.2] - 2021-02-02ChangedChanged url for theget_calibration_datamethod. Does not affect
usage of the method in any way.[0.4.1] - 2021-01-29ChangedRemoved unused propertydeadlinefrom project[0.4.0] - 2021-01-28ChangedRenamed methodupload_and_create_images_input_jobtocreate_inputs_images.Renamed methodlist_projectstoget_projects.Renamed methodlist_project_batchestoget_project_batches.Changed behaviour of methoddownload_annotations. The previously optional argumnetrequest_idhas been removed. Additionally, the return
signature is changed to return a list of annotations for each input, instead of a dict as before.Behaviour ofget_inputshas changed. It now receivesproject(identifier, not numerical id anymore), as well as three optional parametersbatch,external_idsandinclude_invalidated. Returns all inputs belonging to the project, with the option of filtering on batch, external ID and whether or not including invalidated inputs. The returned list of classes had additional fields describing which batch each input belongs to, as well as their status (created,processing,failed,invalidated).Changed name of argumentinput_idstoinput_internal_idsfor methodinvalidate_inputs.Use backport ofdataclassesto support python 3.6.Add missing dependency onpython-dateutil.RemovedMethodscount_inputs_for_external_ids,get_internal_ids_for_external_ids,mend_input_data,remove_inputs_from_input_list,list_input_lists,publish_batch,get_requests_for_request_ids,get_requests_for_input_lists,get_input_status,get_input_jobs_status,get_requests_for_project_id,get_datas_for_inputs_by_internal_idsandget_datas_for_inputs_by_external_idshave all been removed.[0.3.12] - 2021-01-13ChangedRemoved getting started documentation fromREADME.mdand instead link to new docs.[0.3.11] - 2020-12-14ChangedDeserialization bugfix in models forInputBatchandInputBatch.[0.3.10] - 2020-12-01AddedMinor fix in annoutil[0.3.9] - 2020-11-26AddedBump of required python version to >=3.7New explicit models forlidarandcamera calibrationadded.publish_batchwhich accepts project identifier and batch identifier and marks the batch as ready for annotation.ChangedDeprecation warning for the oldlidarandcamera calibrationmodels. No other change in functionality.[0.3.8] - 2020-11-13Addedget_inputswhich accepts a project ID or project identifier (external ID) and returns inputs connected to the project.invalidatedfilter parameter to optionally filter only invalidated inputs. Also exposed in annoutil asannoutil projects 1 --invalidated.Changedinvalidate_inputsnow accepts annotellinternal_ids (UUID)instead of Annotell specific input ids.[0.3.7] - 2020-11-06Changedbug fix related to oauth session[0.3.6] - 2020-11-02ChangedSLAM - add cuboid timespans,dynamic_objectsnot includes bothcuboidsandcuboid_timespans[0.3.5] - 2020-10-19AddedAdd support forprojectandbatchidentifiers for input request.
Specifying project and batch adds input to specified batch.
When only sending project, inputs are added to the latest open batch for the project.Deprecatedinput_list_idwill be removed in the 0.4.x version[0.3.4] - 2020-09-10ChangedSLAM - add requiredsub_sequence_idand optionalsettings[0.3.3] - 2020-09-10ChangedSLAM - add requiredsequence_id[0.3.2] - 2020-09-01ChangedSLAM - startTs and endTs not optional in Slam request[0.3.1] - 2020-07-16ChangedIf the upload of point clouds or images crashes and returns status code 429, 408 or 5xx the script will
retry the upload before crashing. The default settings may be changed when initializing theInputApiClientby specifying values to themax_upload_retry_attemptsandmax_upload_retry_wait_timeparameters.[0.3.0] - 2020-07-03ChangedThe methodcreate_inputs_point_cloud_with_imagesinInputApiClientnow takes an extra parameter:dryrun: bool.
If set toTrueall the validation checks will be run but no inputJob will be created, and
if it is set toFalsean inputJob will be created if the validation checks all pass.BugfixesFixed bug where the uploading of .csv files to GCS crashed if run on some windows machines.[0.2.9] - 2020-07-02AddedNew public method inInputApiClient:count_inputs_for_external_ids.[0.2.8] - 2020-06-30AddedDocstrings for all public methods in theInputApiClientclass[0.2.7] - 2020-06-29AddedRequire time specification to be send when posting slam requests[0.2.6] - 2020-06-26ChangedRemovedCalibrationSpecfromCalibratedSceneMetaDataandSlamMetaData. Updated
so thatcreate_calibration_datainInputApiClientonly takes aCalibrationSpecas parameter.[0.2.5] - 2020-06-22BugfixesFixed issue where a path including a "~" would not expand correctly.[0.2.4] - 2020-06-17ChangedChanged pointcloud_with_images model. Images and point clouds are now represented asImageandPointCloudcontaining filename and source. Consequently,images_to_sourceis removed fromSourceSpecification.Addedcreate Image inputs viacreate_images_input_jobIt's now possible to invalidate erroneous inputs viainvalidate_inputsSupport for removing specific inputs viaremove_inputs_from_input_listSLAM support (not generally available)BugfixesFixed issue where annoutils would not deserialize datas correctly when querying datas by internalId[0.2.3] - 2020-04-21ChangedChanged how timestamps are represented when receiving responses.[0.2.2] - 2020-04-17AddedMethodsget_datas_for_inputs_by_internal_idsandget_datas_for_inputs_by_external_idscan be used to get whichDataare part of anInput, useful in order to check which images, lidar-files have been uploaded. Both are also available in the CLI via :$annoutilinputs--get-datas<internal_ids>$annoutilinputs-externalid--get-datas<external_ids>Support has been added forKannalacamera types. Whenever adding calibration forKannalaundistortion coefficients must also be added.Calibration is now represented as a class and is no longer just a dictionary, making it easier to understand how the Annotell format is structured and used.[0.2.0] - 2020-04-16ChangedChange constructor to disable legacy api token support and only accept anauthparameter[0.1.5] - 2020-04-07AddedMethodget_input_jobs_statusnow accepts lists of internal_ids and external_ids as arguments.
|
annotell-openlabel
|
Annotell OpenLABEL

Installation

Install the Annotell OpenLABEL package from pip with: pip install annotell-openlabel

Serialization and deserialization

Since all models inherit from pydantic's BaseModel, serialization and deserialization from dicts or json strings are relatively straightforward.

data = {"openlabel": {"metadata": {"schema_version": "1.0.0"}}}

import annotell.openlabel.models as OLM

# Deserialize dict
openlabel_annotation = OLM.OpenLabelAnnotation.parse_obj(data)

# Serialize to json
json_data = openlabel_annotation.json(exclude_none=True)

# Deserialize json
openlabel_annotation = OLM.OpenLabelAnnotation.parse_raw(json_data)

# Serialize to dict
dict_data = openlabel_annotation.dict(exclude_none=True)

Further reading

https://www.asam.net/project-detail/asam-openlabel-v100/

Changelog

[0.1.4] - 2022-01-24
Improved serializability for enum classes

[0.1.3] - 2022-01-04
Fixed issues with version 0.1.2

[0.1.2] - 2021-12-29
Updated several fields with multiple types to fix issues with serialization and deserialization.
For example, the coordinates on Poly2d objects were previously always parsed to strings.
This update means that an attempt to parse them to floats is made. If this fails, they will be parsed to strings.

[0.1.1] - 2021-11-24
Updated stream properties model generation to be nicer to work with

[0.1.0] - 2021-11-18
Updated json schema and model to be true to the 1.0.0 release of openlabel. Previously it was based on the release-candidate
|
annotell-query
|
Annotell Query API Client

Python 3 library providing access to the Annotell Query API.

To install with pip run: pip install annotell-query

Set env ANNOTELL_CREDENTIALS, see annotell-auth.

Judgement Query Example

Stream all items matching a query:

from annotell.query.query_api_client import QueryApiClient

query_client = QueryApiClient()
resp = query_client.query_judgements(query_filter="requestId = X")
for item in resp.items():
    print(item)

Change log

2.5.0
A new base Query client that can be inherited for other applications

2.4.1
Use annotell-auth[requests] >= 2.0.0

2.3.0
Use annotell-auth >= 1.6
Remove metadata queries

2.2.0
Use annotell-auth >= 1.5 with fault tolerant auth request session

2.1.0
Use server default query limits

2.0.0
Rename library to annotell-query
Rename QueryApi to QueryApiClient
Add KPI query method

1.3.0
Change constructor for authentication to only accept auth.
|
annotest
|
aNNoTest

aNNoTest is a tool (and an approach) for automatically
generating bug-finding inputs for NN program testing.
The paper "An annotation-based approach for finding bugs in
neural network programs" by
Mohammad Rezaalipour and Carlo A. Furia explains aNNoTest
in detail and provides guidelines on how to use it effectively.

Installation

Run the following command to install aNNoTest:

pip install annotest

We have tested aNNoTest on Python 3.6,
but it should work on Python 3.6+ as well.

Using aNNoTest

aNNoTest is a command line tool.
After annotating your project with
aN (aNNoTest's annotation language)
you can cd to your project directory
and then run aNNoTest:

cd path_to_python_project
annotest

Or you can input the project path to aNNoTest:

annotest path_to_python_project

Examples

To see examples of using aNNoTest, see
the following repository: https://github.com/atom-sw/annotest-subjects

Citations

aNNoTest's Journal Paper:

@article{Rezaalipour:2023,
title = {An annotation-based approach for finding bugs in neural network programs},
journal = {Journal of Systems and Software},
volume = {201},
pages = {111669},
year = {2023},
issn = {0164-1212},
doi = {https://doi.org/10.1016/j.jss.2023.111669},
url = {https://www.sciencedirect.com/science/article/pii/S016412122300064X},
author = {Mohammad Rezaalipour and Carlo A. Furia},
keywords = {Test generation, Neural networks, Debugging, Python}
}

Mirrors

The current repository is a public mirror of
our internal private repository.
We have two public mirrors, which are as follows:
https://github.com/atom-sw/annotest
https://github.com/mohrez86/annotest
|
annotlib
|
annotlib: Simulation of Annotators

Authors: Marek Herde and Adrian Calma

Introduction

annotlib is a Python package for simulating annotators in an active learning setting.
Solving classification problems by using supervised machine learning models requires samples assigned to class labels.
However, labeling these samples incurs costs (e.g. workload, time, etc.), so active learning strategies aim at reducing these costs by selecting the samples that are most useful for training a classifier.

In real-world scenarios, human annotators are often responsible for providing the class labels.
Unfortunately, there is no guarantee that such annotators are omniscient.
Hence, annotators are error-prone, or uncertain, so the class labels assigned to samples may be false.
The labeling performance of an annotator is affected by many factors, e.g. expertise, experience, concentration, level of fatigue, and so on.
Moreover, the difficulty of a sample influences the outcome of a labeling process.

To evaluate an active learning strategy in the setting of uncertain annotators, class labels of these uncertain
annotators are required to be available, but there is a lack of publicly accessible real-world data sets labeled by error-prone annotators.
As a result, recently published active learning strategies are evaluated on simulated annotators, where the simulation techniques used are diverse.
Our annotlib Python package provides a collection of these techniques and implements additional methods, which simulate realistic characteristics of uncertain annotators.
This way, we establish a library simplifying and standardising the evaluation of active learning strategies coping with uncertain annotators.

For more information go to the documentation.
|
annotmerge
|
This library helps in merging multiple Pascal VOC annotation files into a single annotation.

The package contains a single function called merge, which takes a single param: base_directory.
You need to specify the folder name where your multiple annotation files are present; a log
is then created, and an output directory is created with the merged files and folders.

The Structure of folder

Change Log

0.0.1 (12/02/2021)
First Release
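For illustration, a minimal call sketch based on the description above; the import path is an assumption and may differ from the actual package layout:

# Assumed import path for illustration; check the package for the real module name.
from annotmerge import merge

# Folder containing the Pascal VOC annotation files to combine.
merge("path/to/annotation_folder")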
|
annotrack
|
annotrackAnnotrack is a napari plugin for annotating errors in object trajectories. The plugin will help you take a sample of track segments along with a small section of corresponding image and segmentation. Annotrack allows you to annotate three types of errors: (1) ID swap errors (track jumps between objects), (2) false starts (track starts on a pre-existing object) and false terminations (track ends but object still exists). By looking at the combined rates of false starts and false terminations you can assess track discontinutation errors.Please note:Images and segmentations must be in zarr format. Tracks should be in parquet format.InstallationThere are three main ways to install annotrack:Install Using pipType the following into your terminal (MacOS or Ubuntu) or annaconda prompt (windows):pipinstallnapariannotrackInstallType the following into your terminal (MacOS or Ubuntu) or annaconda prompt (windows):pipinstallnapari
napariOnce napari has opened (this may take a second the first time you open it), go to the pannel at the top of the screen and select the 'plugins' dropdown. Then select install/uninstall plugins. A new window will open showing available plugins. Either scroll down to or search 'annotrack' and click 'install'.Install from Source Codeplease use this for nowType the following into your terminal (MacOS or Ubuntu) or annaconda prompt (windows):gitclone<repositoryhttpsorssh>cdannotrack
pipinstall.Opening AnnotrackOnce annotrack is properly installed you will be able to open annotrack by opening napari. You can open napari through the command line (terminal (MacOS or Ubuntu) or annaconda prompt (windows)) as follows:napariYou can find the annotrack widgets by selecting the dropdown 'plugins' at the pannel at the top of the screen and hovering over 'annotrack'.Sample from CSVTo sample your tracks you will need to supply the file paths for the images, segmentations, and tracks.Annotate Now?In the case that we are annotating multiple conditions to compare, we want to show them in the one session in randomised order with the annotator blinded to where the sample has originated from. We want to be able to annotated unannotated data from the sample without having the burden of having to do this all at once. The annotations are therefore saved into the saved sample. A selected number of samples saved from the various tracking experiments can be annotated using the following code. If you re-execute this code, you will only be shown not yet annotated data, unless you request otherwise.Keys to navagate and annotate samples'2' - move to next sample'1' - move to previous sample'y' - annotate as correct (will move to the next sample automatically)'n' - annotate as containing an error (will move to the next sample automatically)'i' - annotate the frame following a ID swap error't' - annotate the fame following an incorrect termination'Shift-t' - annotate the frame containing a false start error's' - annotate an error ('i', 't', or 'Shift-t') as being associated with a segmentation error (merge or split of objects)When an error is associated the specific frame ('i', 't', 'Shift-t', or 's'), the frame number (within the original image) will be added to a list of errors for the sample within the sample's (.smpl) info data frame. E.g., you may have a list of ID swaps for your sampled track segment ([108, 111, 112]) and a corresponding list of segmentation error associations ([108, 112]).Annotate Existing SampleIf you have already saved a sample and want to annotate it, you can load the sample data using theannotate_existing_samplewidget. This might be useful if you want to have several annotators annotate the same sample. To access this widget, open napariContributing and User SupportUser support:If you have an issue with annotrack please add an issue (go to the Issues tab at the top of the GitHub page). If your issue is a bug, please include as much information as possible to help debug the problem. Examples of information include: details about the image and segmentation data (dimensions), number of images, number of samples you are trying to take. If you are requesting an improvement, try to be as clear as possible about what you need.Contributing:If you want to contribute to annotrack, please fork the repo and if you want to make changes make a pull request with as much detail about the change as possible. Please ensure any changes you want to make don't break the existing functions.
|
annots
|
Annots

annots is a package that allows you to use Python 3.6 variable annotations in a handy way. Thanks for inspiration to the attrs library.

When you wrap a class with the annots decorator:

import annot

@annot.s
class Account:
    __tablename__ = 'account'
    username: str
    password: str

Annots adds the class attribute annotations into __init__:

class Account:
    def __init__(self, username, password):
        self.username = username
        self.password = password

Free software: MIT license
Documentation: https://annots.readthedocs.io.

Features
TODO

Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History
0.1.2 (2017-01-09)
Fix value initialization error
Add type annotation for args into __init__ function
Update test

0.1.1 (2017-01-08)
Add few tests

0.1.0 (2017-01-08)
First release on PyPI.
Working version with basic annotation support
|
annot-utils
|
annot_utils

Introduction

annot_utils is software for generating tabix-indexed annotation files, which can be shared by other software by Y.S.
Currently, this software supports only annotation files for hg19 (GRCh37), hg38 (GRCh38) and mm10 (GRCm38).

Dependency

Python packages: pkg_resources
Software: htslib

Install

annot_utils is available through pypi.
To install, type:

pip install annot_utils

When you are not the root user, you may want to type:

pip install annot_utils --user

Alternatively, install from the source code:

wget https://github.com/friend1ws/annot_utils/archive/v0.3.0.tar.gz
tar xzvf v0.3.0.tar.gz
cd annot_utils-0.3.0
python setup.py build install --user

This package has been tested on Python 2.7, 3.5, 3.6.

Update database

Currently, annot_utils already stores annotation files from the UCSC genome browser and several other sources upon installation.
If you want to update the annotation files:

cd annot_utils/resource
bash prep_data.sh

Then, install the software from the source code.

Commands

gene

Generate gene annotation bed files indexed by tabix.

annot_utils gene [-h]
                 [--gene_model {refseq,gencode}] [--grc]
                 [--genome_id {hg19,hg38,mm10}] [--add_ref_id]
                 gene.bed.gz

exon

Generate exon annotation bed files indexed by tabix.

annot_utils exon [-h]
                 [--gene_model {refseq,gencode}] [--grc]
                 [--genome_id {hg19,hg38,mm10}] [--add_ref_id]
                 exon.bed.gz

coding

Generate regional (coding, intronic, 5'UTR, 3'UTR and so on) annotation bed files indexed by tabix.

annot_utils coding [-h]
                   [--gene_model {refseq,gencode}] [--grc]
                   [--genome_id {hg19,hg38,mm10}] [--add_ref_id]
                   coding.bed.gz

junction

Generate annotated splicing junction bed files indexed by tabix.

annot_utils junction
usage: annot_utils junction [-h]
                            [--gene_model {refseq,gencode}] [--grc]
                            [--genome_id {hg19,hg38,mm10}] [--add_ref_id]
                            junction.bed.gz

boundary

Generate exon intron boundary annotation files indexed by tabix.

annot_utils boundary [-h]
                     [--genome_id {hg19,hg38,mm10}] [--grc]
                     [--donor_size donor_size]
                     [--acceptor_size acceptor_size]
                     boundary.bed.gz
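For illustration, a concrete invocation assembled from the usage text above (the output file name is an arbitrary example):

annot_utils gene --gene_model refseq --genome_id hg19 refseq_gene.hg19.bed.gz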
|
annotype
|
Annotype combines Python 3 annotations and Marshmallow for powerful
validation of function arguments.

from annotype import annotyped
from marshmallow import Schema, fields

class PersonSchema(Schema):
    firstname = fields.Str(required=True)
    lastname = fields.Str(required=True)

@annotyped()
def salute(person: PersonSchema):
    print('Hello {} {}'.format(person['firstname'], person['lastname']))

person = dict(firstname='John')
# This will raise a ValidationError because lastname is not defined
salute(person)

@annotyped()
def welcome(firstname: fields.Str(), lastname: fields.Str()):
    print('Welcome {} {}'.format(firstname, lastname))

# This will also raise a ValidationError because lastname is not a string
welcome('Jane', 1)

In short, annotype allows you to validate data using the powerful
marshmallow library and the Python 3 annotations.

Get It Now

$ pip install -U annotype

Documentation

See the marshmallow documentation available here: http://marshmallow.readthedocs.io/.

Requirements

Python >= 3.4
marshmallow >= 3.0.0

License

MIT licensed. See the bundled LICENSE file for more details.
|
annotyped
|
annotyped

Annotyped is a simple library with utilities and decorators for type checking and type casting automatically
on annotated functions at runtime.

pip install annotyped --user

The basics

# Simple typecheck, this uses the instance checker to check equality.
# You can set your own with `@annotyped.checkers.custom(callable)`
@annotyped.check
def add(a: int, b: int) -> int:
    return a + b

add(1, 2)    # 3
add(2, '10') # Param 'b' requires type '<class 'int'>', found '<class 'str'>': '10'

# Simple typecast, this runs the annotation like it were an expression and uses the
# result `annotation(value)`
@annotyped.cast
def add(a: int, b: int) -> str:
    return a + b

add('10', '20') # '30'
add(1, 2)       # '3'
add('1.1', 2)   # Param 'a' could not convert to '<class 'int'>' from '<class 'str'>': invalid literal for int() with base 10: '1.1'

Custom converters / casters

eg: Convert from a tuple or a str into a namedtuple.

import annotyped
import math
from collections import namedtuple

Position = namedtuple('Position', 'x, y')

def position(pos):
    if isinstance(pos, str) and ',' in pos:
        pos = map(int, pos.split(','))
    return Position(*pos)

@annotyped.cast
def diff(p1: position, p2: position):
    return math.sqrt((p2.x - p1.x)**2 + (p2.y - p1.y)**2)

p1 = (10, 20)
p2 = '20,50'
print(diff(p1, p2))
|
annotypes
|
Adding annotations to Python types while still being compatible withmypyandPyCharmYou can write things like:fromannotypesimportAnno,WithCallTypeswithAnno("The exposure time for the camera"):AExposure=floatwithAnno("The full path to the text file to write"):APath=strclassSimple(WithCallTypes):def__init__(self,exposure,path="/tmp/file.txt"):# type: (AExposure, APath) -> Noneself.exposure=exposureself.path=pathor the Python3 alternative:fromannotypesimportAnno,WithCallTypeswithAnno("The exposure time for the camera"):AExposure=floatwithAnno("The full path to the text file to write"):APath=strclassSimple(WithCallTypes):def__init__(self,exposure:AExposure,path:APath="/tmp/file.txt"):self.exposure=exposureself.path=pathAnd at runtime see what you should pass to call it and what it will return:>>>fromannotypes.py2_examples.simpleimportSimple>>>list(Simple.call_types)['exposure', 'path']>>>Simple.call_types['exposure']Anno(name='AExposure', typ=<type 'float'>, description='The exposure time for the camera')>>>Simple.return_typeAnno(name='Instance', typ=<class 'annotypes.py2_examples.simple.Simple'>, description='Class instance')For more examples see thePython 2 examplesorPython 3 examples.InstallationTo install the latest release, type:pip install annotypesTo install the latest code directly from source, type:pip install git+git://github.com/dls-controls/annotypes.gitChangelogSeeCHANGELOGContributingSeeCONTRIBUTINGLicenseAPACHE License. (seeLICENSE)
|
announcementlink
|
A speech system for places like train stations.
|
announcer
|
announcer

This tool:
takes a keepachangelog-style CHANGELOG.md file
extracts all changes for a particular version
and sends a formatted message to a Slack or Microsoft Teams webhook.

It is available as a Python package, or as a Docker container for use in CI.

Installation

Install this tool using pip:

pip install announcer

Tool usage

usage: announce [-h] (--webhook WEBHOOK | --slackhook WEBHOOK) [--target {slack,teams}] --changelogversion CHANGELOGVERSION --changelogfile CHANGELOGFILE --projectname PROJECTNAME
[--username USERNAME] [--compatibility-teams-sections] [--iconurl ICONURL | --iconemoji ICONEMOJI]
Announce CHANGELOG changes on Slack and Microsoft Teams
optional arguments:
-h, --help show this help message and exit
--webhook WEBHOOK The incoming webhook URL
--slackhook WEBHOOK The incoming webhook URL. (Deprecated)
--target {slack,teams}
The type of announcement that should be sent to the webhook
--changelogversion CHANGELOGVERSION
The changelog version to announce (e.g. 1.0.0)
--changelogfile CHANGELOGFILE
The file containing changelog details (e.g. CHANGELOG.md)
--projectname PROJECTNAME
The name of the project to announce (e.g. announcer)
--username USERNAME The username that the announcement will be made as (e.g. announcer). Valid for: Slack
--compatibility-teams-sections
Compatibility option - sends Teams messages in multiple sections
--iconurl ICONURL A URL to use for the user icon in the announcement. Valid for: Slack
--iconemoji ICONEMOJI
An emoji code to use for the user icon in the announcement (e.g. party_parrot). Valid for: Slack

Gitlab Usage

Announcer builds and publishes a Docker image that you can integrate into your .gitlab-ci.yml:

announce:
stage: announce
image: metaswitch/announcer:5.0.0
script:
- announce --webhook <webhook address>
--changelogversion $CI_COMMIT_REF_NAME
--changelogfile <CHANGELOG.md file>
--projectname <Project name>
--iconemoji party_parrot
only:
- tags
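For reference, a direct command-line invocation assembled from the help text above (the webhook URL and version are placeholders):

announce --webhook https://hooks.slack.com/services/XXX/YYY/ZZZ \
  --target slack \
  --changelogversion 1.0.0 \
  --changelogfile CHANGELOG.md \
  --projectname announcer \
  --iconemoji party_parrot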
|
announce-server
|
Announce Server

A Python library that announces a server to a host.

Installation

pip install announce-server

Development

To install the developer dependencies required for testing and publishing:

pip install -e .[dev,pub]

Build

To build the package, run:

rm -rf dist/ build/ .eggs/ .pytest_cache/ src/announce_server.egg-info/
python -m build --sdist --wheel

To publish:

twine upload dist/*

Test

To run the tests, call:

pytest

Usage

from announce_server import register_service

@register_service(name="server_name", ip="server_ip", port=8000, host_ip="host_server_ip", host_port=5000, retry_interval=5)
def your_function():
    pass

Registry

The announce_server CLI provides a simple way to start a registry server. The registry server keeps track of available services and periodically sends heartbeat messages to ensure that registered services are still active.

Command

announce_server start_registry [--address ADDRESS] [--port PORT] [--heartbeat_interval INTERVAL] [--heartbeat_timeout TIMEOUT]

Arguments

--address ADDRESS: The IP address of the server. Default: 0.0.0.0.
--port PORT: The port number of the server. Default: 4999.
--heartbeat_interval INTERVAL: The interval between heartbeat messages in seconds. Default: 5.
--heartbeat_timeout TIMEOUT: The timeout for waiting for a response in seconds. Default: 3.

Example

To start the registry server with the default configuration, run:

announce_server start_registry

The full syntax is equivalent to:

announce_server start_registry --address 0.0.0.0 --port 4999 --heartbeat_interval 5 --heartbeat_timeout 3
|
annovar-tools
|
ANNOVAR tools

This package is a set of tools for processing files used with ANNOVAR.

vcf

Converts an ANNOVAR input file (AVinput) to VCF 4.0 format.
-i/--avinput: input file in AVinput format
-r/--reference: reference genome file such as hg19.fa; please build the index with samtools faidx beforehand
-o/--vcf: output VCF file

zhuy@ubuntu:/projects/example$ annovar_tools.py vcf -i test.avinput -r hg19.fa -o test.vcf

split

Splits ANNOVAR annotation results into multiple rows by gene; only one gene-based database can be split at a time.
-i/--avoutput: input file, an ANNOVAR annotation result such as test.hg19_multianno.txt
-r/--refgenes: the refGene annotation files used when running ANNOVAR, such as hg19_ensGene.txt, hg19_knownGene.txt, hg19_refGeneWithVer.txt
-g/--gene_db: the gene-based database to split, e.g. refGeneWithVer, i.e. the database name passed to ANNOVAR's g parameter, which appears in the ANNOVAR result as "Func.refGeneWithVer, Gene.refGeneWithVer, GeneDetail.refGeneWithVer, ExonicFunc.refGeneWithVer,
AAChange.refGeneWithVer"
-o/--outfile: output file with the split results

zhuy@ubuntu:/projects/example$ annovar_tools.py split \
    -i test.hg19_multianno.txt \
    -r hg19_refGeneWithVer.txt \
    -r hg19_ensGene.txt \
    -r hg19_knownGene.txt \
    -g refGeneWithVer -o test.hg19_multianno.refGeneWithVer.txt
|
annoworkapi
|
annowork-api-python-client

A Python client library for the Annowork Web API.

Requirements

Python 3.8+

Install

$ pip install annoworkapi

Usage

Configuring credentials

$HOME/.netrc:

machine annowork.com
login ${user_id}
password ${password}

Using environment variables:

Set your user ID in the environment variable ANNOWORK_USER_ID and your password in ANNOWORK_PASSWORD.

Basic usage

import annoworkapi

service = annoworkapi.build()
result = service.api.get_my_account()
print(result)
# {'account_id': 'xxx', ... }

Advanced usage

Logging output

import logging

logging_formatter = '%(levelname)-8s : %(asctime)s : %(name)s : %(message)s'
logging.basicConfig(format=logging_formatter)
logging.getLogger("annoworkapi").setLevel(level=logging.DEBUG)

In [1]: c = s.api.get_actual_working_times_by_workspacen_member("a9956d30-b201-418a-a03b-b9b8b55b2e3d", "204bf4d9-4569-4b7b-89b9-84f089201247")
DEBUG : 2022-01-11 17:36:04,354 : api.py : annoworkapi.api : _request_wrapper : Sent a request :: {'request': {'http_method': 'get', 'url': 'https://annowork.com/api/v1/workspacens/a9956d30-b201-418a-a03b-b9b8b55b2e3d/members/204bf4d9-4569-4b7b-89b9-84f089201247/actual-working-times', 'query_params': None, 'header_params': None, 'request_body': None}, 'response': {'status_code': 200, 'content_length': 209988}}

Developer documentation

See README_for_developer.md
|
annoworkcli
|
annowork-cli

A CLI for Annowork.

Requirements

Python 3.8+

Install

$ pip install annoworkcli

Usage

Configuring credentials

.netrc

Add the following to the $HOME/.netrc file:

machine annowork.com
login annowork_user_id
password annowork_password

Environment variables

Environment variables ANNOWORK_USER_ID, ANNOWORK_PASSWORD

Using the annoworkcli annofab command

The annoworkcli annofab command accesses the Annofab web API, so you also need to provide Annofab web API credentials.

Environment variables ANNOFAB_USER_ID, ANNOFAB_PASSWORD

$HOME/.netrc file:

machine annofab.com
login annofab_user_id
password annofab_password

How to use the command

# CSV output
$ annoworkcli actual_working_time list_daily --workspace_id foo \
 --start_date 2022-05-01 --end_date 2022-05-10 --output out.csv
$ cat out.csv
date,job_id,job_name,workspace_member_id,user_id,username,actual_working_hours,notes
2022-05-02,5c39a2e8-90dd-4f20-b0a6-39d7f5129e3d,MOON,52ff73fb-c1d6-4ad6-a185-64386ee7169f,alice,Alice,11.233333333333334,
2022-05-02,5c39a2e8-90dd-4f20-b0a6-39d7f5129e3d,MARS,c66acd58-c893-4908-bdcc-1414978bf06b,bob,Bob,8.0,

Information for developers

https://github.com/kurusugawa-computer/annowork-cli/blob/main/README_for_developer.md
|
annoy
|
NoteFor the latest source, discussion, etc, please visit theGitHub repositoryAnnoyAnnoy (Approximate Nearest NeighborsOh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that aremmappedinto memory so that many processes may share the same data.InstallTo install, simply dopip install--userannoyto pull down the latest version fromPyPI.For the C++ version, just clone the repo and#include "annoylib.h".BackgroundThere are some other libraries to do nearest neighbor search. Annoy is almost as fast as the fastest libraries, (see below), but there is actually another feature that really sets Annoy apart: it has the ability touse static files as indexes. In particular, this means you canshare index across processes. Annoy also decouples creating indexes from loading them, so you can pass around indexes as files and map them into memory quickly. Another nice thing of Annoy is that it tries to minimize memory footprint so the indexes are quite small.Why is this useful? If you want to find nearest neighbors and you have many CPU’s, you only need to build the index once. You can also pass around and distribute static files to use in production environment, in Hadoop jobs, etc. Any process will be able to load (mmap) the index into memory and will be able to do lookups immediately.We use it atSpotifyfor music recommendations. After running matrix factorization algorithms, every user/item can be represented as a vector in f-dimensional space. This library helps us search for similar users/items. We have many millions of tracks in a high-dimensional space, so memory usage is a prime concern.Annoy was built byErik Bernhardssonin a couple of afternoons duringHack Week.Summary of featuresEuclidean distance,Manhattan distance,cosine distance,Hamming distance, orDot (Inner) Product distanceCosine distance is equivalent to Euclidean distance of normalized vectors = sqrt(2-2*cos(u, v))Works better if you don’t have too many dimensions (like <100) but seems to perform surprisingly well even up to 1,000 dimensionsSmall memory usageLets you share memory between multiple processesIndex creation is separate from lookup (in particular you can not add more items once the tree has been created)Native Python support, tested with 2.7, 3.6, and 3.7.Build index on disk to enable indexing big datasets that won’t fit into memory (contributed byRene Hollander)Python code examplefromannoyimportAnnoyIndeximportrandomf=40# Length of item vector that will be indexedt=AnnoyIndex(f,'angular')foriinrange(1000):v=[random.gauss(0,1)forzinrange(f)]t.add_item(i,v)t.build(10)# 10 treest.save('test.ann')# ...u=AnnoyIndex(f,'angular')u.load('test.ann')# super fast, will just mmap the fileprint(u.get_nns_by_item(0,1000))# will find the 1000 nearest neighborsRight now it only accepts integers as identifiers for items. Note that it will allocate memory for max(id)+1 items because it assumes your items are numbered 0 … n-1. If you need other id’s, you will have to keep track of a map yourself.Full Python APIAnnoyIndex(f, metric)returns a new index that’s read-write and stores vector offdimensions. Metric can be"angular","euclidean","manhattan","hamming", or"dot".a.add_item(i, v)adds itemi(any nonnegative integer) with vectorv. Note that it will allocate memory formax(i)+1items.a.build(n_trees,n_jobs=-1)builds a forest ofn_treestrees. More trees gives higher precision when querying. 
After callingbuild, no more items can be added.n_jobsspecifies the number of threads used to build the trees.n_jobs=-1uses all available CPU cores.a.save(fn, prefault=False)saves the index to disk and loads it (see next function). After saving, no more items can be added.a.load(fn, prefault=False)loads (mmaps) an index from disk. Ifprefaultis set toTrue, it will pre-read the entire file into memory (using mmap withMAP_POPULATE). Default isFalse.a.unload()unloads.a.get_nns_by_item(i, n,search_k=-1,include_distances=False)returns thenclosest items. During the query it will inspect up tosearch_knodes which defaults ton_trees * nif not provided.search_kgives you a run-time tradeoff between better accuracy and speed. If you setinclude_distancestoTrue, it will return a 2 element tuple with two lists in it: the second one containing all corresponding distances.a.get_nns_by_vector(v, n,search_k=-1,include_distances=False)same but query by vectorv.a.get_item_vector(i)returns the vector for itemithat was previously added.a.get_distance(i, j)returns the distance between itemsiandj. NOTE: this used to return thesquareddistance, but has been changed as of Aug 2016.a.get_n_items()returns the number of items in the index.a.get_n_trees()returns the number of trees in the index.a.on_disk_build(fn)prepares annoy to build the index in the specified file instead of RAM (execute before adding items, no need to save after build)a.set_seed(seed)will initialize the random number generator with the given seed. Only used for building up the tree, i. e. only necessary to pass this before adding the items. Will have no effect after callinga.build(n_trees)ora.load(fn).Notes:There’s no bounds checking performed on the values so be careful.Annoy uses Euclidean distance of normalized vectors for its angular distance, which for two vectors u,v is equal tosqrt(2(1-cos(u,v)))The C++ API is very similar: just#include "annoylib.h"to get access to it.TradeoffsThere are just two main parameters needed to tune Annoy: the number of treesn_treesand the number of nodes to inspect during searchingsearch_k.n_treesis provided during build time and affects the build time and the index size. A larger value will give more accurate results, but larger indexes.search_kis provided in runtime and affects the search performance. A larger value will give more accurate results, but will take longer time to return.Ifsearch_kis not provided, it will default ton * n_treeswherenis the number of approximate nearest neighbors. Otherwise,search_kandn_treesare roughly independent, i.e. the value ofn_treeswill not affect search time ifsearch_kis held constant and vice versa. Basically it’s recommended to setn_treesas large as possible given the amount of memory you can afford, and it’s recommended to setsearch_kas large as possible given the time constraints you have for the queries.You can also accept slower search times in favour of reduced loading times, memory usage, and disk IO. On supported platforms the index is prefaulted duringloadandsave, causing the file to be pre-emptively read from disk into memory. If you setprefaulttoFalse, pages of the mmapped index are instead read from disk and cached in memory on-demand, as necessary for a search to complete. 
This can significantly increase early search times but may be better suited for systems with low memory compared to index size, when few queries are executed against a loaded index, and/or when large areas of the index are unlikely to be relevant to search queries.How does it workUsingrandom projectionsand by building up a tree. At every intermediate node in the tree, a random hyperplane is chosen, which divides the space into two subspaces. This hyperplane is chosen by sampling two points from the subset and taking the hyperplane equidistant from them.We do this k times so that we get a forest of trees. k has to be tuned to your need, by looking at what tradeoff you have between precision and performance.Hamming distance (contributed byMartin Aumüller) packs the data into 64-bit integers under the hood and uses built-in bit count primitives so it could be quite fast. All splits are axis-aligned.Dot Product distance (contributed byPeter Sobot) reduces the provided vectors from dot (or “inner-product”) space to a more query-friendly cosine space usinga method by Bachrach et al., at Microsoft Research, published in 2014.More infoDirk Eddelbuettelprovides anR version of Annoy.Andy Sloaneprovides aJava version of Annoyalthough currently limited to cosine and read-only.Pishen Tsaiprovides aScala wrapper of Annoywhich uses JNA to call the C++ library of Annoy.Atsushi TatsumaprovidesRuby bindings for Annoy.There isexperimental support for Goprovided byTaneli Leppä.Boris NagaevwroteLua bindings.During part of Spotify Hack Week 2016 (and a bit afterward),Jim KangwroteNode bindingsfor Annoy.Min-Seok Kimbuilt aScala versionof Annoy.hanabi1224built a read-onlyRust versionof Annoy, together withdotnet, jvm and dartread-only bindings.Presentation from New York Machine Learning meetupabout AnnoyAnnoy is available as aconda packageon Linux, OS X, and Windows.ann-benchmarksis a benchmark for several approximate nearest neighbor libraries. Annoy seems to be fairly competitive, especially at higher precisions:Source codeIt’s all written in C++ with a handful of ugly optimizations for performance and memory usage. You have been warned :)The code should support Windows, thanks toQiang KouandTimothy Riley.To run the tests, executepython setup.py nosetests. The test suite includes a big real world dataset that is downloaded from the internet, so it will take a few minutes to execute.DiscussFeel free to post any questions or comments to theannoy-usergroup. I’m@fulhackon Twitter.
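To make the run-time tradeoff described above concrete, here is a small sketch (not taken from the original README; the file-free setup, dimensions and parameter values are arbitrary) that builds an index and then queries it with an explicit search_k and include_distances:
import random
from annoy import AnnoyIndex

f = 40                              # dimensionality of the indexed vectors
index = AnnoyIndex(f, 'euclidean')
for i in range(1000):
    index.add_item(i, [random.gauss(0, 1) for _ in range(f)])
index.build(50)                     # more trees: better accuracy, bigger index

# A larger search_k inspects more nodes, trading query speed for accuracy.
ids, dists = index.get_nns_by_item(0, 10, search_k=5000, include_distances=True)
print(ids, dists)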
|
annoyance
|
annoy for the annotation of cell types. Annoyance is a simple API for classifying cell types as annotated in an adata.obs table.
Installation
pip install annoyance
To install the development version:
git clone https://github.com/mvinyard/annoyance.git
cd ./annoyance/
pip install -e .
Notes
This project uses open-source code from spotify/annoy. However, this repo is in no way affiliated with Spotify.
Interested? Questions and discussion may be directed to Michael Vinyard at: [email protected].
|
annoyclients
|
No description available on PyPI.
|
annoy-dm
|
NoteFor the latest source, discussion, etc, please visit theGitHub repositoryAnnoyAnnoy (Approximate Nearest NeighborsOh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that aremmappedinto memory so that many processes may share the same data.InstallTo install, simply dopip install--userannoyto pull down the latest version fromPyPI.For the C++ version, just clone the repo and#include "annoylib.h".BackgroundThere are some other libraries to do nearest neighbor search. Annoy is almost as fast as the fastest libraries, (see below), but there is actually another feature that really sets Annoy apart: it has the ability touse static files as indexes. In particular, this means you canshare index across processes. Annoy also decouples creating indexes from loading them, so you can pass around indexes as files and map them into memory quickly. Another nice thing of Annoy is that it tries to minimize memory footprint so the indexes are quite small.Why is this useful? If you want to find nearest neighbors and you have many CPU’s, you only need to build the index once. You can also pass around and distribute static files to use in production environment, in Hadoop jobs, etc. Any process will be able to load (mmap) the index into memory and will be able to do lookups immediately.We use it atSpotifyfor music recommendations. After running matrix factorization algorithms, every user/item can be represented as a vector in f-dimensional space. This library helps us search for similar users/items. We have many millions of tracks in a high-dimensional space, so memory usage is a prime concern.Annoy was built byErik Bernhardssonin a couple of afternoons duringHack Week.Summary of featuresEuclidean distance,Manhattan distance,cosine distance,Hamming distance, orDot (Inner) Product distanceCosine distance is equivalent to Euclidean distance of normalized vectors = sqrt(2-2*cos(u, v))Works better if you don’t have too many dimensions (like <100) but seems to perform surprisingly well even up to 1,000 dimensionsSmall memory usageLets you share memory between multiple processesIndex creation is separate from lookup (in particular you can not add more items once the tree has been created)Native Python support, tested with 2.7, 3.6, and 3.7.Build index on disk to enable indexing big datasets that won’t fit into memory (contributed byRene Hollander)Python code examplefromannoyimportAnnoyIndeximportrandomf=40t=AnnoyIndex(f,'angular')# Length of item vector that will be indexedforiinrange(1000):v=[random.gauss(0,1)forzinrange(f)]t.add_item(i,v)t.build(10)# 10 treest.save('test.ann')# ...u=AnnoyIndex(f,'angular')u.load('test.ann')# super fast, will just mmap the fileprint(u.get_nns_by_item(0,1000))# will find the 1000 nearest neighborsRight now it only accepts integers as identifiers for items. Note that it will allocate memory for max(id)+1 items because it assumes your items are numbered 0 … n-1. If you need other id’s, you will have to keep track of a map yourself.Full Python APIAnnoyIndex(f, metric)returns a new index that’s read-write and stores vector offdimensions. Metric can be"angular","euclidean","manhattan","hamming", or"dot".a.add_item(i, v)adds itemi(any nonnegative integer) with vectorv. Note that it will allocate memory formax(i)+1items.a.build(n_trees)builds a forest ofn_treestrees. More trees gives higher precision when querying. 
After callingbuild, no more items can be added.a.save(fn, prefault=False)saves the index to disk and loads it (see next function). After saving, no more items can be added.a.load(fn, prefault=False)loads (mmaps) an index from disk. Ifprefaultis set toTrue, it will pre-read the entire file into memory (using mmap withMAP_POPULATE). Default isFalse.a.unload()unloads.a.get_nns_by_item(i, n,search_k=-1,include_distances=False)returns thenclosest items. During the query it will inspect up tosearch_knodes which defaults ton_trees * nif not provided.search_kgives you a run-time tradeoff between better accuracy and speed. If you setinclude_distancestoTrue, it will return a 2 element tuple with two lists in it: the second one containing all corresponding distances.a.get_nns_by_vector(v, n,search_k=-1,include_distances=False)same but query by vectorv.a.get_item_vector(i)returns the vector for itemithat was previously added.a.get_distance(i, j)returns the distance between itemsiandj. NOTE: this used to return thesquareddistance, but has been changed as of Aug 2016.a.get_n_items()returns the number of items in the index.a.get_n_trees()returns the number of trees in the index.a.on_disk_build(fn)prepares annoy to build the index in the specified file instead of RAM (execute before adding items, no need to save after build)a.set_seed(seed)will initialize the random number generator with the given seed. Only used for building up the tree, i. e. only necessary to pass this before adding the items. Will have no effect after callinga.build(n_trees)ora.load(fn).Notes:There’s no bounds checking performed on the values so be careful.Annoy uses Euclidean distance of normalized vectors for its angular distance, which for two vectors u,v is equal tosqrt(2(1-cos(u,v)))The C++ API is very similar: just#include "annoylib.h"to get access to it.TradeoffsThere are just two main parameters needed to tune Annoy: the number of treesn_treesand the number of nodes to inspect during searchingsearch_k.n_treesis provided during build time and affects the build time and the index size. A larger value will give more accurate results, but larger indexes.search_kis provided in runtime and affects the search performance. A larger value will give more accurate results, but will take longer time to return.Ifsearch_kis not provided, it will default ton * n_trees * Dwherenis the number of approximate nearest neighbors andDis a constant depending on the metric. Otherwise,search_kandn_treesare roughly independent, i.e. the value ofn_treeswill not affect search time ifsearch_kis held constant and vice versa. Basically it’s recommended to setn_treesas large as possible given the amount of memory you can afford, and it’s recommended to setsearch_kas large as possible given the time constraints you have for the queries.You can also accept slower search times in favour of reduced loading times, memory usage, and disk IO. On supported platforms the index is prefaulted duringloadandsave, causing the file to be pre-emptively read from disk into memory. If you setprefaulttoFalse, pages of the mmapped index are instead read from disk and cached in memory on-demand, as necessary for a search to complete. This can significantly increase early search times but may be better suited for systems with low memory compared to index size, when few queries are executed against a loaded index, and/or when large areas of the index are unlikely to be relevant to search queries.How does it workUsingrandom projectionsand by building up a tree. 
At every intermediate node in the tree, a random hyperplane is chosen, which divides the space into two subspaces. This hyperplane is chosen by sampling two points from the subset and taking the hyperplane equidistant from them.We do this k times so that we get a forest of trees. k has to be tuned to your need, by looking at what tradeoff you have between precision and performance.Hamming distance (contributed byMartin Aumüller) packs the data into 64-bit integers under the hood and uses built-in bit count primitives so it could be quite fast. All splits are axis-aligned.Dot Product distance (contributed byPeter Sobot) reduces the provided vectors from dot (or “inner-product”) space to a more query-friendly cosine space usinga method by Bachrach et al., at Microsoft Research, published in 2014.More infoDirk Eddelbuettelprovides anR version of Annoy.Andy Sloaneprovides aJava version of Annoyalthough currently limited to cosine and read-only.Pishen Tsaiprovides aScala wrapper of Annoywhich uses JNA to call the C++ library of Annoy.There isexperimental support for Goprovided byTaneli Leppä.Boris NagaevwroteLua bindings.During part of Spotify Hack Week 2016 (and a bit afterward),Jim KangwroteNode bindingsfor Annoy.Min-Seok Kimbuilt aScala versionof Annoy.Presentation from New York Machine Learning meetupabout AnnoyAnnoy is available as aconda packageon Linux, OS X, and Windows.ann-benchmarksis a benchmark for several approximate nearest neighbor libraries. Annoy seems to be fairly competitive, especially at higher precisions:Source codeIt’s all written in C++ with a handful of ugly optimizations for performance and memory usage. You have been warned :)The code should support Windows, thanks toQiang KouandTimothy Riley.To run the tests, executepython setup.py nosetests. The test suite includes a big real world dataset that is downloaded from the internet, so it will take a few minutes to execute.DiscussFeel free to post any questions or comments to theannoy-usergroup. I’m@fulhackon Twitter.
|
annoy_fixed
|
NoteFor the latest source, discussion, etc, please visit theGitHub repositoryAnnoyAnnoy (Approximate Nearest NeighborsOh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that aremmappedinto memory so that many processes may share the same data.InstallTo install, simply dopip install--userannoyto pull down the latest version fromPyPI.For the C++ version, just clone the repo and#include "annoylib.h".BackgroundThere are some other libraries to do nearest neighbor search. Annoy is almost as fast as the fastest libraries, (see below), but there is actually another feature that really sets Annoy apart: it has the ability touse static files as indexes. In particular, this means you canshare index across processes. Annoy also decouples creating indexes from loading them, so you can pass around indexes as files and map them into memory quickly. Another nice thing of Annoy is that it tries to minimize memory footprint so the indexes are quite small.Why is this useful? If you want to find nearest neighbors and you have many CPU’s, you only need to build the index once. You can also pass around and distribute static files to use in production environment, in Hadoop jobs, etc. Any process will be able to load (mmap) the index into memory and will be able to do lookups immediately.We use it atSpotifyfor music recommendations. After running matrix factorization algorithms, every user/item can be represented as a vector in f-dimensional space. This library helps us search for similar users/items. We have many millions of tracks in a high-dimensional space, so memory usage is a prime concern.Annoy was built byErik Bernhardssonin a couple of afternoons duringHack Week.Summary of featuresEuclidean distance,Manhattan distance,cosine distance,Hamming distance, orDot (Inner) Product distanceCosine distance is equivalent to Euclidean distance of normalized vectors = sqrt(2-2*cos(u, v))Works better if you don’t have too many dimensions (like <100) but seems to perform surprisingly well even up to 1,000 dimensionsSmall memory usageLets you share memory between multiple processesIndex creation is separate from lookup (in particular you can not add more items once the tree has been created)Native Python support, tested with 2.7, 3.6, and 3.7.Build index on disk to enable indexing big datasets that won’t fit into memory (contributed byRene Hollander)Python code examplefromannoyimportAnnoyIndeximportrandomf=40t=AnnoyIndex(f,'angular')# Length of item vector that will be indexedforiinrange(1000):v=[random.gauss(0,1)forzinrange(f)]t.add_item(i,v)t.build(10)# 10 treest.save('test.ann')# ...u=AnnoyIndex(f,'angular')u.load('test.ann')# super fast, will just mmap the fileprint(u.get_nns_by_item(0,1000))# will find the 1000 nearest neighborsRight now it only accepts integers as identifiers for items. Note that it will allocate memory for max(id)+1 items because it assumes your items are numbered 0 … n-1. If you need other id’s, you will have to keep track of a map yourself.Full Python APIAnnoyIndex(f, metric)returns a new index that’s read-write and stores vector offdimensions. Metric can be"angular","euclidean","manhattan","hamming", or"dot".a.add_item(i, v)adds itemi(any nonnegative integer) with vectorv. Note that it will allocate memory formax(i)+1items.a.build(n_trees)builds a forest ofn_treestrees. More trees gives higher precision when querying. 
After callingbuild, no more items can be added.a.save(fn, prefault=False)saves the index to disk and loads it (see next function). After saving, no more items can be added.a.load(fn, prefault=False)loads (mmaps) an index from disk. Ifprefaultis set toTrue, it will pre-read the entire file into memory (using mmap withMAP_POPULATE). Default isFalse.a.unload()unloads.a.get_nns_by_item(i, n,search_k=-1,include_distances=False)returns thenclosest items. During the query it will inspect up tosearch_knodes which defaults ton_trees * nif not provided.search_kgives you a run-time tradeoff between better accuracy and speed. If you setinclude_distancestoTrue, it will return a 2 element tuple with two lists in it: the second one containing all corresponding distances.a.get_nns_by_vector(v, n,search_k=-1,include_distances=False)same but query by vectorv.a.get_item_vector(i)returns the vector for itemithat was previously added.a.get_distance(i, j)returns the distance between itemsiandj. NOTE: this used to return thesquareddistance, but has been changed as of Aug 2016.a.get_n_items()returns the number of items in the index.a.get_n_trees()returns the number of trees in the index.a.on_disk_build(fn)prepares annoy to build the index in the specified file instead of RAM (execute before adding items, no need to save after build)a.set_seed(seed)will initialize the random number generator with the given seed. Only used for building up the tree, i. e. only necessary to pass this before adding the items. Will have no effect after callinga.build(n_trees)ora.load(fn).Notes:There’s no bounds checking performed on the values so be careful.Annoy uses Euclidean distance of normalized vectors for its angular distance, which for two vectors u,v is equal tosqrt(2(1-cos(u,v)))The C++ API is very similar: just#include "annoylib.h"to get access to it.TradeoffsThere are just two main parameters needed to tune Annoy: the number of treesn_treesand the number of nodes to inspect during searchingsearch_k.n_treesis provided during build time and affects the build time and the index size. A larger value will give more accurate results, but larger indexes.search_kis provided in runtime and affects the search performance. A larger value will give more accurate results, but will take longer time to return.Ifsearch_kis not provided, it will default ton * n_trees * Dwherenis the number of approximate nearest neighbors andDis a constant depending on the metric. Otherwise,search_kandn_treesare roughly independent, i.e. a the value ofn_treeswill not affect search time ifsearch_kis held constant and vice versa. Basically it’s recommended to setn_treesas large as possible given the amount of memory you can afford, and it’s recommended to setsearch_kas large as possible given the time constraints you have for the queries.You can also accept slower search times in favour of reduced loading times, memory usage, and disk IO. On supported platforms the index is prefaulted duringloadandsave, causing the file to be pre-emptively read from disk into memory. If you setprefaulttoFalse, pages of the mmapped index are instead read from disk and cached in memory on-demand, as necessary for a search to complete. This can significantly increase early search times but may be better suited for systems with low memory compared to index size, when few queries are executed against a loaded index, and/or when large areas of the index are unlikely to be relevant to search queries.How does it workUsingrandom projectionsand by building up a tree. 
At every intermediate node in the tree, a random hyperplane is chosen, which divides the space into two subspaces. This hyperplane is chosen by sampling two points from the subset and taking the hyperplane equidistant from them.We do this k times so that we get a forest of trees. k has to be tuned to your need, by looking at what tradeoff you have between precision and performance.Hamming distance (contributed byMartin Aumüller) packs the data into 64-bit integers under the hood and uses built-in bit count primitives so it could be quite fast. All splits are axis-aligned.Dot Product distance (contributed byPeter Sobot) reduces the provided vectors from dot (or “inner-product”) space to a more query-friendly cosine space usinga method by Bachrach et al., at Microsoft Research, published in 2014.More infoDirk Eddelbuettelprovides anR version of Annoy.Andy Sloaneprovides aJava version of Annoyalthough currently limited to cosine and read-only.Pishen Tsaiprovides aScala wrapper of Annoywhich uses JNA to call the C++ library of Annoy.There isexperimental support for Goprovided byTaneli Leppä.Boris NagaevwroteLua bindings.During part of Spotify Hack Week 2016 (and a bit afterward),Jim KangwroteNode bindingsfor Annoy.Min-Seok Kimbuilt aScala versionof Annoy.Presentation from New York Machine Learning meetupabout AnnoyAnnoy is available as aconda packageon Linux, OS X, and Windows.ann-benchmarksis a benchmark for several approximate nearest neighbor libraries. Annoy seems to be fairly competitive, especially at higher precisions:Source codeIt’s all written in C++ with a handful of ugly optimizations for performance and memory usage. You have been warned :)The code should support Windows, thanks toQiang KouandTimothy Riley.To run the tests, executepython setup.py nosetests. The test suite includes a big real world dataset that is downloaded from the internet, so it will take a few minutes to execute.DiscussFeel free to post any questions or comments to theannoy-usergroup. I’m@fulhackon Twitter.
|
annoy-gpu
|
Note: This project is derived from Annoy. The original project can use multiple threads to accelerate the build process; in this project, the GPU is used to accelerate the build process. This project is still under development. Currently it only supports the Angular metric.
|
annoying
|
My first Python package with a slightly longer description
|
ann-package
|
Creating my first Python package. This is my first Python package: it implements automatic generation of test cases and implements a Cartesian product algorithm.
|
anns
|
anns
anns is an approximate nearest-neighbor search library for Python.
Installation
pip install anns
Quickstart
import anns
License
anns has a BSD-3-Clause license, as found in the LICENSE file.
Contributing
Changelog
|
ann-solo
|
ANN-SoLoFor more information:Official code websiteANN-SoLo(ApproximateNearestNeighborSpectralLibrary) is a spectral library search engine for fast and accurate open modification searching. ANN-SoLo uses approximate nearest neighbor indexing to speed up open modification searching by selecting only a limited number of the most relevant library spectra to compare to an unknown query spectrum. This is combined with a cascade search strategy to maximize the number of identified unmodified and modified spectra while strictly controlling the false discovery rate and the shifted dot product score to sensitively match modified spectra to their unmodified counterpart.The software is available as open-source under the Apache 2.0 license.InstallSee thewikifor detailed instructions on how to install and run ANN-SoLo.ANN-SoLo requires Python 3.6 or higher. The GPU-powered version of ANN-SoLo can be used on Linux systems with an NVIDIA CUDA-enabled GPU device, while the CPU-only version supports both the Linux and OSX platforms. Please refer to the Faiss installation instructions linked below for more information on OS and GPU support.Installation requirementsNumPyneeds to be available prior to the installation of ANN-SoLo.TheFaissinstallation depends on a specific GPU version. Please refer to theFaiss installation instructionsfor more information.Install ANN-SoLoThe recommended way to install ANN-SoLo is using pip:pip install ann_soloANN-SoLo searchRun ANN-SoLo to search your spectral data directly using on the command line usingann_soloor as a named Python module (if you do not have sufficient rights to install command-line scripts) usingpython -m ann_solo.ann_solo.ANN-SoLo arguments can be specified as command-line arguments or in a configuration file. Argument preference is command-line args > configuration file > default settings.For more information on which arguments are available and their default values runann_solo -h.Most options have sensible default values. Some positional arguments specifying which in- and output files to use are required. Additionally, the precursor and fragment mass tolerances do not have default values as these are data set dependent.Please note that to run ANN-SoLo in cascade search mode two different precursor mass tolerances need to be specified for both levels of the cascade search (precursor_tolerance_(mass|mode)andprecursor_tolerance_(mass|mode)_open).usage: ann_solo [-h] [-c CONFIG_FILE] [--resolution RESOLUTION]
[--min_mz MIN_MZ] [--max_mz MAX_MZ] [--remove_precursor]
[--remove_precursor_tolerance REMOVE_PRECURSOR_TOLERANCE]
[--min_intensity MIN_INTENSITY] [--min_peaks MIN_PEAKS]
[--min_mz_range MIN_MZ_RANGE]
[--max_peaks_used MAX_PEAKS_USED]
[--max_peaks_used_library MAX_PEAKS_USED_LIBRARY]
[--scaling {sqrt,rank}] --precursor_tolerance_mass
PRECURSOR_TOLERANCE_MASS --precursor_tolerance_mode {Da,ppm}
[--precursor_tolerance_mass_open PRECURSOR_TOLERANCE_MASS_OPEN]
[--precursor_tolerance_mode_open {Da,ppm}]
--fragment_mz_tolerance FRAGMENT_MZ_TOLERANCE
[--allow_peak_shifts] [--fdr FDR]
[--fdr_tolerance_mass FDR_TOLERANCE_MASS]
[--fdr_tolerance_mode {Da,ppm}]
[--fdr_min_group_size FDR_MIN_GROUP_SIZE] [--mode {ann,bf}]
[--bin_size BIN_SIZE] [--hash_len HASH_LEN]
[--num_candidates NUM_CANDIDATES] [--batch_size BATCH_SIZE]
[--num_list NUM_LIST] [--num_probe NUM_PROBE] [--no_gpu]
spectral_library_filename query_filename out_filename
ANN-SoLo: Approximate nearest neighbor spectral library searching
=================================================================
Bittremieux et al. Fast open modification spectral library searching through
approximate nearest neighbor indexing. Journal of Proteome Research 17,
3464-3474 (2018).
Bittremieux et al. Extremely fast and accurate open modification spectral
library searching of high-resolution mass spectra using feature hashing and
graphics processing units. Journal of Proteome Research 18, 3792-3799 (2019).
Official code website: https://github.com/bittremieux/ANN-SoLo
Args that start with '--' (eg. --resolution) can also be set in a config file
(config.ini or specified via -c). Config file syntax allows: key=value,
flag=true, stuff=[a,b,c] (for details, see syntax at https://goo.gl/R74nmi).
If an arg is specified in more than one place, then commandline values
override config file values which override defaults.
positional arguments:
spectral_library_filename
spectral library file (supported formats: splib)
query_filename query file (supported formats: mgf)
out_filename name of the mzTab output file containing the search
results
optional arguments:
-h, --help show this help message and exit
-c CONFIG_FILE, --config CONFIG_FILE
config file path
--resolution RESOLUTION
spectral library resolution; masses will be rounded to
the given number of decimals (default: no rounding)
--min_mz MIN_MZ minimum m/z value (inclusive, default: 11 m/z)
--max_mz MAX_MZ maximum m/z value (inclusive, default: 2010 m/z)
--remove_precursor remove peaks around the precursor mass (default: no
peaks are removed)
--remove_precursor_tolerance REMOVE_PRECURSOR_TOLERANCE
the window (in m/z) around the precursor mass to
remove peaks (default: 0 m/z)
--min_intensity MIN_INTENSITY
remove peaks with a lower intensity relative to the
maximum intensity (default: 0.01)
--min_peaks MIN_PEAKS
discard spectra with less peaks (default: 10)
--min_mz_range MIN_MZ_RANGE
discard spectra with a smaller mass range (default:
250 m/z)
--max_peaks_used MAX_PEAKS_USED
only use the specified most intense peaks for the
query spectra (default: 50)
--max_peaks_used_library MAX_PEAKS_USED_LIBRARY
only use the specified most intense peaks for the
library spectra (default: 50)
--scaling {sqrt,rank}
to reduce the influence of very intense peaks, scale
the peaks by their square root or by their rank
(default: rank)
--precursor_tolerance_mass PRECURSOR_TOLERANCE_MASS
precursor mass tolerance (small window for the first
level of the cascade search)
--precursor_tolerance_mode {Da,ppm}
precursor mass tolerance unit (options: Da, ppm)
--precursor_tolerance_mass_open PRECURSOR_TOLERANCE_MASS_OPEN
precursor mass tolerance (wide window for the second
level of the cascade search)
--precursor_tolerance_mode_open {Da,ppm}
precursor mass tolerance unit (options: Da, ppm)
--fragment_mz_tolerance FRAGMENT_MZ_TOLERANCE
fragment mass tolerance (m/z)
--allow_peak_shifts use the shifted dot product instead of the standard
dot product
--fdr FDR FDR threshold to accept identifications during the
cascade search (default: 0.01)
--fdr_tolerance_mass FDR_TOLERANCE_MASS
mass difference bin width for the group FDR
calculation during the second cascade level (default:
0.1 Da)
--fdr_tolerance_mode {Da,ppm}
mass difference bin unit for the group FDR calculation
during the second cascade level (default: Da)
--fdr_min_group_size FDR_MIN_GROUP_SIZE
minimum group size for the group FDR calculation
during the second cascade level (default: 20)
--mode {ann,bf} search using an approximate nearest neighbors or the
traditional (brute-force) mode; 'bf': brute-force,
'ann': approximate nearest neighbors (default: ann)
--bin_size BIN_SIZE ANN vector bin width (default: 0.04 Da)
--hash_len HASH_LEN ANN vector length (default: 800)
--num_candidates NUM_CANDIDATES
number of candidates to retrieve from the ANN index
for each query (default: 1024), maximum 1024 when
using GPU indexing
--batch_size BATCH_SIZE
number of query spectra to process simultaneously
(default: 16384)
--num_list NUM_LIST number of partitions in the ANN index (default: 256)
--num_probe NUM_PROBE
number of partitions in the ANN index to inspect
during querying (default: 128), maximum 1024 when
using GPU indexing
--no_gpu don't use the GPU for ANN searching (default: GPU is
used if available)
Spectrum–spectrum match viewer
Use the ANN-SoLo plotter to visualize spectrum–spectrum matches from your search results. The plotter can be run directly on the command line using ann_solo_plot or as a named Python module (if you do not have sufficient rights to install command-line scripts) using python -m ann_solo.plot_ssm.
The plotter requires as command-line arguments an mzTab identification file produced by ANN-SoLo and the identifier of the query to visualize.
Please note that the spectral library used to perform the search needs to be present in the exact location as specified in the mzTab file.
The plotter will create a PNG file with a mirror plot to visualize the specified spectrum–spectrum match.
usage: ann_solo_plot [-h] mztab_filename query_id
Visualize spectrum–spectrum matches from your ANN-SoLo identification results
positional arguments:
mztab_filename Identifications in mzTab format
query_id The identifier of the query to visualize
optional arguments:
-h, --help show this help message and exit
Contact
For more information you can visit the official code website or send an email to [email protected].
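As a concrete illustration of the search interface documented above, a cascade open-modification search could be launched roughly as follows (the file names and tolerance values here are placeholders for the example, not recommendations):
ann_solo human_library.splib queries.mgf results.mztab \
    --precursor_tolerance_mass 20 --precursor_tolerance_mode ppm \
    --precursor_tolerance_mass_open 300 --precursor_tolerance_mode_open Da \
    --fragment_mz_tolerance 0.02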
|
annt
|
annt
Simple annotated file loader for object detection tasks.
Description
Various tools have been developed so far for object detection tasks.
However, there is no standard for annotation tools and formats, and
developers still write their own JSON or XML parsers for annotation files. annt is an annotation tool that operates on top of cloud services such as Dropbox.
annt provides not only a simple and comfortable annotation experience, but also a powerful library for loading annotated images. This is the Python library which can read images annotated with annt.
You can load annotated images in a simple way and focus on the essential AI development.
Also, this library has a basic build-in preprocessing functions. So you can save time to write extra code.Usage and ExampleExample 1. Load annotated imagesimportannt# annotations is list of annotation dataannotations=annt.load('~/Dropbox/app/project_name')# Display ths information of each annotation file.forainannotations:image=a.image# opencv2 image arrayboxes=a.boxes# list of bounding boxesheight,width,colors=image.shape# you canforboxinboxes:# Tag information (str)print(f'~ tag name : box.tag ~')# You can get coordination information of the box by two methods,# Left Top Style and Edge Style.# Coordination information based on left top of the box. (Left-Top Style)print(f'x :{box.x}')print(f'y :{box.y}')print(f'w :{box.w}')print(f'h :{box.h}')# Coordination information based on the distance from each edge of the image. (Edge Style)print(f'left :{box.left}')print(f'right :{box.right}')print(f'top :{box.top}')print(f'bottom :{box.bottom}')# If you change these coordination properties, all of them will recomputed.box.w=300# This operation will also change box.right property.Example 2. Data augumentationimportanntimportrandom# annotations is list of annotation dataannotations=annt.load('./Dropbox/App/annt/test')sample_n=10# Number of samples from one image# Display ths information of each annotation file.augumented=[]forraw_ainannotations:foriinrange(sample_n):# Rotate imagerot_deg=random.choice([0,90,180,270,360])a=raw_a.rotate(rot_deg)# Tilt imagetilt_deg=random.randint(-8,8)a=a.rotate(tilt_deg)# Flip imageflip_x=random.randint(0,1)flip_y=random.randint(0,1)a=a.flip(flip_x,flip_y)augumented.append(a)# Show first augumented image.augumented[0].show()Getting StartedRegister annt and annotate imaes.Install this libary from pip.Develop you own project.Installyou can install from pip.pip install anntDocumentationsSeehttp://doc.annt.ai/Recent Updates0.0.7: Bug fix.
|
anntonia
|
Artificial Neural Network to Node-link Immersive Analytics (ANNtoNIA)
ANNtoNIA is a framework for building immersive node-link visualizations, designed for Artificial Neural Networks (ANN). It is currently under development and unfinished. For any questions, contact @mbellgardt.
Recommended Setup
Download and install Anaconda, then create an environment by executing:
conda create -c conda-forge --name anntonia --file anntonia-env.txt
in the Anaconda prompt. Activate the environment using:
conda activate anntonia
Afterwards you can run one of the examples, e.g.:
python linear_model_test_server.py
This will start the ANNtoNIA server, which you can then connect to with the ANNtoNIA rendering client.
|
anntonia-keras
|
ANNtoNIA_Keras
ANNtoNIA_Keras is an extension for the ANNtoNIA framework, which allows ANNtoNIA to extract the needed data from Keras models to visualize them.
Using ANNtoNIA_Keras
Instantiate a KerasReader from the path to your Keras model, for example with:
model = KerasReader('path_to_your_model')
Then, you can use this reader for a visualizer, such as the LinearModelVisualizer from ANNtoNIA:
LinearModelVisualizer(model, test_data)
|
anntoolkit
|
AnnToolKit - Image annotation toolkit
Cross-platform, dataset agnostic, "DIY" style image annotation framework
Getting started
Documentation - http://anntoolkit.rtfd.io/
1. Install
pip install anntoolkit
2. Hello world
Subclass from anntoolkit.App. In the init method, load some test image.
import imageio
import anntoolkit

class App(anntoolkit.App):
    def __init__(self):
        super(App, self).__init__(title='Test')
        im = imageio.imread('test_image.jpg')
        self.set_image(im)
Run app:
app = App()
app.run()
|
anntools
|
The anntools package provides various modules to take advantage
of Python 3.0’s new function annotation feature. It supports
validation, conversion and type checking of parameters passed
to functions and their return values. It is useful for adding
security checks and making your code more readable. This package
is useful for Python 2.4 and up, since all functionality is
also provided as keyword arguments for decorators.
Read the INSTALL file or see the home page for more information,
documentation, tests and examples: http://code.google.com/p/anntools/
|
annual-stats
|
The Archive "annual_stats"
This is a quick Python app that generates a Markdown-compatible table of yearly zettelkasting stats for users of The Archive.
Usage (Example)
Change to the directory where you installed the application and run the following command in Terminal:
python main.py
You'll be prompted to enter the 4-digit year of your oldest note.
Earliest year:
Look for the output in the location you specified with the table_output variable.
Sample Output
Licence
MIT
Disclaimer
When it comes to Python, I am just a hobbyist. So it's very likely I made some mistakes. Please bear with me. Let me know and I'll fix things, and in doing so, you'll be teaching me to be a better programmer.
|
annulus
|
Detection of annuli (donuts) in images and recovery of a grid for camera calibration.
|
ann_visualizer
|
ANN Visualizer is a great visualization Python library used to work with Keras. It uses Python's graphviz library to create a presentable graph of the neural network you are building.
Usage:
from ann_visualizer.visualize import ann_viz
# Build your model here
ann_viz(model)
Documentation:
ann_viz(model, view=True, filename="network.gv")
model - The Keras Sequential model
view - If True, it opens the graph preview after execution
filename - Where to save the graph. (.gv file format)
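For instance, a minimal end-to-end sketch could look like the following (assuming Keras and graphviz are installed; the network architecture is arbitrary):
from keras.models import Sequential
from keras.layers import Dense
from ann_visualizer.visualize import ann_viz

# Build a small example network
model = Sequential()
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(3, activation='softmax'))

# Render the network graph to network.gv and open a preview of it
ann_viz(model, view=True, filename="network.gv")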
|
annxious-callback
|
ANNxious bot callback
This is a Keras callback that connects you to the ANNxious Telegram bot, a bot that
lets you know when your model is done with its training.
For more details visit the Github repo.
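The callback's actual class name and configuration live in the Github repo linked above; purely as an illustration of how a Keras callback like this plugs into training, a hypothetical usage might look like this (the import path, class name and arguments below are made up for the example):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from annxious import AnnxiousCallback  # hypothetical import; see the repo for the real one

x_train = np.random.rand(32, 4)
y_train = np.random.rand(32, 1)

model = Sequential([Dense(1, input_dim=4)])
model.compile(optimizer="adam", loss="mse")

# Keras invokes the callback's hooks during fit(); a callback like ANNxious
# would message the Telegram bot when training finishes.
model.fit(x_train, y_train, epochs=5, callbacks=[AnnxiousCallback()])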
|
annxuncements
|
No description available on PyPI.
|
annydict
|
Example Package
This is a simple example package. You can use
Github-flavored Markdown to write your content.
|
anoa
|
No description available on PyPI.
|
anoapycore
|
No description available on PyPI.
|
anoapycore-pkg
|
Failed to fetch description. HTTP Status Code: 404
|
anoapycore-pkg-ah4d1
|
Failed to fetch description. HTTP Status Code: 404
|
anobbs-client
|
AnoBBS Client
A Python wrapper library for the AnoBBS API (the anonymous-board architecture used by A Island). Features are added as personal needs arise.
Note ⚠️: since the project this library was written for has no multi-threading requirements, the library is currently designed for single-threaded use only. Although each request extravagantly creates a brand-new Session, the shared CookieJar is not thread-safe.
Implemented features
View board pages
Board list / board rules / …
Traverse / reverse-traverse thread pages
Traverse board pages
Post threads / replies
Add / remove subscriptions
Load cookies ("饼干")
…
Terminology
"卡页" / "卡99" ("stuck page" / "stuck at 99"): when you request a page of a thread beyond page 100, the response will contain the content of page 100.
Examples
After all, this is only for my own use and I doubt anyone else will be interested, so I won't spend much more time on this part. Below are just the most basic examples; let the source code explain the rest (ゝ∀・)
Create a client
client = anobbsclient.Client(
    # The client's User-Agent
    user_agent='…',
    # Host name of the target server, e.g. 'adnmb3.com'
    host='…',
    # The client's appid; may be `None`
    appid='…',
    # Options related to individual requests; they can be overridden when sending a request
    default_request_options={
        # Appears in the browser as a cookie named userhash; the login credential.
        # This is what you receive when you "claim a cookie".
        # If this value is needed (e.g. to access pages beyond page 100) but missing,
        # an exception is raised directly
        'user_cookie': '…',
        # How to handle login:
        # 'enforce': always include user_cookie in the request.
        #            user_cookie must be provided whether or not the operation requires login
        # 'when_has_cookie': include user_cookie in the request whenever it has been provided
        # 'when_required': include user_cookie only for operations that require login
        # 'always_no': never include user_cookie in the request.
        #              Operations that require login raise an exception directly
        'login_policy': 'when_required',
    },
)
Get thread content
luwei_thread = client.get_thread_page(49607, page=1)
print(luwei_thread.content)
#=> '这是芦苇'
Get board content
g_board = client.get_board_page(4, page=1)
print(g_board[0].user_id)
#=> 'ATM'
Post a reply
try:
    client.reply_thread(
        "Body text",
        to_thread_id=999999999999,
        title="Title (optional)",
        name="Name (optional)",
        email="Email (optional)",
    )
except anobbsclient.ReplyException as e:
    # The server rejected the posted reply
    print(e.raw_error, e.raw_detail)
    raise e
Reverse-traverse thread pages
Q: Why do this?
A: To avoid omissions caused by posts being deleted mid-traversal.
Example omitted; this entry is just to show that the feature exists (ゝ∀・)
|
anobii.api
|
Introduction
============
This is a wrapper for the Anobii APIs.
see: http://api.anobii.com/api/api_home.php
My approach is so similar to the one used by flickrApi that I took its code from http://flickrapi.sf.net/ and adapted it to the Anobii API.
Thanks to Sybren Stuvel for his work.
This product uses the aNobii API but is not endorsed or certified by aNobii.
How to use it
=============
It's quite easy. First of all, instantiate the AnobiiApi passing key and secret:
>>> a = anobii.api.AnobiiAPI(key, secret)
then call the API you need. For example, to call anobii.shelf.getSimpleShelf('user', 'limit') you can call:
>>> a.shelf_getSimpleShelf(user_id='massimoazzolini', limit='3')
Note that "anobii" disappears (a is anobii..) and "shelf." is replaced by "shelf_".
That's it.
Changelog
=========
0.1 - Unreleased
----------------
* Initial release
|
anoctor
|
Failed to fetch description. HTTP Status Code: 404
|
anodb
|
AnoDBConvenient Wrapper aroundaiosqland aDatabase Connection.DescriptionThis class creates a persistent database connection and imports
SQL queries from a file as simple Python functions.If the connection is broken, a new connection is attempted with increasing
throttling delays.Compared toaiosql, the point is not to need to pass a connection
as an argument on each call: TheDBclass embeds both connectionandquery methods.For concurrent programming (threads, greenlets…), a relevant setup
should also consider thread-locals and pooling issues at some higher level.ExampleInstall the module withpip install anodbor whatever method you like.
Once available:importanodb# parameters: driver, connection string, SQL filedb=anodb.DB("sqlite3","test.db","test.sql")db.do_some_insert(key=1,val="hello")db.do_some_update(key=1,val="world")print("data",db.do_some_select(key=1))db.commit()db.close()With filetest.sqlcontaining something like:-- name: do_some_selectSELECT*FROMStuffWHEREkey=:key;-- name: do_some_insert!INSERTINTOStuff(key,val)VALUES(:key,:val);-- name: do_some_update!UPDATEStuffSETval=:valWHEREkey=:key;DocumentationTheanodbmodule provides theDBclass which embeds both aPEP 249database connection
(providing methodscommit,rollback,cursor,closeand
itsconnectcounterpart to re-connect)andSQL queries wrapped
into dynamically generated functions byaiosql.
Such functions may be loaded from a string (add_queries_from_str) or a
path (add_queries_from_path).TheDBconstructor parameters are:dbthe name of the database driver:sqlite3,psycopg,pymysql, seeaiosql documentationfor a list of supported drivers.connan optional connection string used to initiate a connection with the
driver.
For instance,psycopgaccepts alibpq connection stringsuch as:"host=db1.my.org port=5432 dbname=acme user=calvin …".queriesa path name or list of path names from which to read query
definitions.optionsa dictionary or string to pass additional connection parameters.auto_reconnectwhether to attempt a reconnection if the connection is lost.
Default isTrue. Reconnection attempts are throttled exponentially
following powers of two delays from0.001and capped at30.0seconds.kwargs_onlywhether to only accept named parameters to python functions.exceptionfunction to re-process database exceptions.debugwhether to generate debugging messages.
Default isFalse.other named parameters are passed as additional connection parameters.importanodbdb=anodb.DB("sqlite3","acme.db","acme-queries.sql")db=anodb.DB("duckdb","acme.db","acme-queries.sql")db=anodb.DB("psycopg","host=localhost dbname=acme","acme-queries.sql")db=anodb.DB("psycopg",None,"acme-queries.sql",host="localhost",user="calvin",password="...",dbname="acme")db=anodb.DB("psycopg2","host=localhost dbname=acme","acme-queries.sql")db=anodb.DB("pygresql",None,"acme-queries.sql",host="localhost:5432",user="calvin",password="...",database="acme")db=anodb.DB("pg8000",None,"acme-queries.sql",host="localhost",port=5432,user="calvin",password="...",database="acme")db=anodb.DB("MySQLdb",None,"acme-queries.sql",host="localhost",port=3306,user="calvin",password="...",database="acme")db=anodb.DB("pymysql",None,"acme-queries.sql",host="localhost",port=3306,user="calvin",password="...",database="acme")db=anodb.DB("mysql-connector",None,"acme-queries.sql",host="localhost",port=3306,user="calvin",password="...",database="acme")db=anodb.DB("mariadb",None,"acme-queries.sql",host="localhost",port=3306,user="calvin",password="...",database="acme")VersionsSources,documentationandissuesare available onGitHub.Seeall versionsand
get packages from PyPI.
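To make the query-loading mechanism described above concrete, here is a small self-contained sketch (not from the anodb documentation); it assumes that add_queries_from_str takes the SQL text as its argument and that the queries argument of the constructor can be omitted:
import anodb

# In-memory SQLite database, no query file yet
db = anodb.DB("sqlite3", ":memory:")

# Register queries from a string instead of a .sql file
db.add_queries_from_str("""
-- name: create_stuff!
CREATE TABLE Stuff (key INTEGER PRIMARY KEY, val TEXT);
-- name: add_stuff!
INSERT INTO Stuff (key, val) VALUES (:key, :val);
-- name: get_stuff
SELECT val FROM Stuff WHERE key = :key;
""")

db.create_stuff()
db.add_stuff(key=1, val="hello")
print(db.get_stuff(key=1))
db.commit()
db.close()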
|
anodi
|
anodi [1] is a decorator-based backport of PEP 3107,
function annotations, to Python 2.7, along with a limited set of tools
based on those annotations (e.g., anodi.tools.document, which hoists
annotations into the docstring, for the Sphinx autodoc extension to find).
[1] Etymology: The Welsh for annotation is anodi (according to Google Translate). It won out over translations to other
languages because it's short, and phonetically (and, thus,
mnemonically), related. It's also a tribute to my friend, Allen
Briggs, who passed away, unexpectedly, in March of 2012. Allen
was an amateur student of Welsh, when he wasn't busy maintaining
the mac68k port of NetBSD, or, as a consequence of his *BSD
work, being credited in iOS.
|
anodict
|
anodict: annotated dict
Convert a dict to an annotated object.
Usage
Installation
pip install anodict
Example
import anodict

class Person:
    name: str
    age: int

person = anodict.dict_to_class({"name": "bob", "age": 23}, Person)
print("type:", type(person))
print("name:", person.name)
print("age:", person.age)
will give:
type: <class '__main__.Person'>
name: bob
age: 23
|
anodot-monitor
|
No description available on PyPI.
|
anodyne
|
1.0.1
-------------------------------------------------------------------------------
* Remove refs to old package name "tincture".
-------------------------------------------------------------------------------
1.0.0
-------------------------------------------------------------------------------
* Changed how engines module operates.
-------------------------------------------------------------------------------
|
anoens
|
anoens
Here be dragons
|
anoexpress
|
Anopheles gene expression in resistance studies
Authors: Sanjay Curtis Nagi and Victoria A Ingham
A python package, colab notebooks and results from a meta-analysis of RNA-Sequencing studies investigating insecticide resistance in Anopheles gambiae s.l. and Anopheles funestus. Analyses can be launched in Google Colab using the badges below, allowing users to explore gene expression in their own genes of interest.
Documentation: https://sanjaynagi.github.io/AnoExpress/
Contributing datasets
If you would like to contribute a dataset from a major malaria vector, please raise an issue or email me!
|
anoikis
|
UNKNOWN
|
anole
|
Anole
Like an anole, fake everything.
Currently supports user-agent and name faking; more will be coming soon.
Thanks for using it.
How to use
from anole import UserAgent

# Suppose this is the request headers
headers = {"referer": "https://leesoar.com"}

user_agent = UserAgent()
user_agent.fake(headers)
It will check if there is "user-agent" in headers. If not, "user-agent" will be updated with a random value.
from anole import Name

name = Name()
name.fake()
# The whole name's length is between 2 and 3.
name.fake(length=2)
# You can specify the length of the name.
name.fake(surname="李")
# And you can specify the surname.
Use as fake_useragent
from anole import UserAgent

user_agent = UserAgent()
user_agent.random
# or user_agent.chrome
# or other browsers
|
anolis
|
Anolis is an HTML document post-processor that takes an input HTML file, adds section numbers, a table
of contents, and cross-references, and writes the
output to another file.
|
anom
|
https://github.com/Bogdanp/anom-py
|
anomalearn
|
anomalearn: time series anomaly detection library
The current version of the library (first version) is a pre-release because
other content is planned to be added, i.e., the library is currently in
development. However, we feel that people can start to use it and contribute to
it. Please refer to the documentation for contribution and use.
What is it?
anomalearn is a Python package that provides modular and
extensible functionalities for developing anomaly detection methods for time
series data, reading publicly available time series anomaly detection datasets,
creating the loading of data for experiments, and dataset evaluation functions.
Additionally, anomalearn development's plans include the implementation of
several state-of-the-art and historical anomaly detection methods, and the
implementation of objects to automate the training process of methods. See
Discussion and development section for more details.DocumentationEvery functionality in anomalearn is documented. The official documentation is
hosted athttps://marcopetri98.github.io/anomalearn/index.html.Main featuresHere you find a list of the features offered by anomalearn:Implementation of state-of-the-art and historical anomaly detection methods
for time series. The bare models are located inanomalearn.algorithms.models.
Where bare models mean the model without the preprocessing or postprocessing
operations.Implementation of data readers of commonly used publicly accessible time
series anomaly detection datasets. Data readers are all located in the packageanomalearn.readeror inanomalearn.reader.time_series. All data
readers return apandasDataFrame.Implementation of some data analysis functions, such as simplicity scoring
functions, stationarity tests and time series decomposition functions. These
functions are all located inanomalearn.analysis.Implementation of helpers for creating experiments. Currently, only the
helper for data loading has been implemented capable of taking data readers
and returning all or a subset of series with a default or specific split. The
experiment helpers are all located inanomalearn.applications.InstallationThe source code is available atanomalearn github repo.Currently, the library is shipped only to thePython Package Index (PyPI).# install from PyPIpipinstallanomalearn--preInstallation from sourceFirstly, download or clone the repository and place it in any location on your
computer. We will call REPO_PATH. Open the terminal and navigate to the folder:cdREPO_PATHSecondly, install the repository using pip:pipinstall.DependenciesThis repository is strongly based on other existing high-quality Python packages
for machine learning and for general programming:Numpy: adds support for efficient array operations.Scipy: adds support for scientific computing.Numba: adds a Just In Time compiler for functions that have
to be efficient and leaves the package a pure Python package.Pandas: adds support for working with data structures.Scikit-learn: adds support for model development.Scikit-optimize: adds support for searching hyper-parameters
of models.Statsmodels: adds support for statistical tests and
models. Matplotlib: adds support for plotting.
Getting help
For the moment, the suggested way to get help is by posting questions to StackOverflow. Then, until the community grows
bigger, consider sending the URL of the questions to the author via email.BackgroundThis work started with Marco Petri's thesis work. The work initially aimed to
develop new anomaly detection methods for time series to reach new
state-of-the-art performances. However, given the scarcity of tools specifically
aimed for time series anomaly detection, the thesis developed anomalearn and a
way to evaluate the simplicity of a dataset. The very first version of the
library (v0.0.2a1) is the one presented and described on the thesis. From that
point on, the library will receive updates outside the sole scope of the thesis.Discussion and developmentCurrently, the development of the first stable version of anomalearn is ongoing.
If you want to use it, you can help us in testing the functionalities by
providing feedback on the clarity of the documentation, the naming of functions,
ease of use, and in proposing new functionalities to implement.
In the future, once the first stable version is published, a structured and
well-documented guide on how to contribute to the library will be written. For the
moment, all the discussions related to the development, requests and proposals
should be placed in the GitHub discussion page.
Contributing to code
Firstly, download or clone the repository and place it in any location on your
computer. We will call REPO_PATH. Open the terminal and navigate to the folder:cdREPO_PATHThe library usespoetryfor managing dependencies, building,
and publishing. Therefore, it is strongly recommended to carefully read its docs
to be able to contribute and install it from source.Be careful, the installed
version of poetry must be at least 1.4.1.
poetry init
Now, poetry will recognize the project. You can install the library and its
dependencies by using the poetry lock file such that every contributor will use
the exact same versions of packages:
# this command will install the library using the lock file
poetry install
Now, you can add functionalities to the library. To ask for changes to be
merged, create a pull request. However, it is strongly suggested to ask if a
feature can be implemented in anomalearn such that it does not violate any
design choice.
Citation
Currently, neither the thesis nor a paper presenting the
library at a conference or in a journal has been published. I strongly ask you
to wait till the 4th of May 2023 to get the citation (the date on which the
dissertation will happen and the thesis will be published).
|
anomalia
|
No description available on PyPI.
|
anomalib
|
A library for benchmarking, developing and deploying deep learning anomaly detection algorithmsKey Features•Getting Started•Docs•LicenseIntroductionAnomalib is a deep learning library that aims to collect state-of-the-art anomaly detection algorithms for benchmarking on both public and private datasets. Anomalib provides several ready-to-use implementations of anomaly detection algorithms described in the recent literature, as well as a set of tools that facilitate the development and implementation of custom models. The library has a strong focus on image-based anomaly detection, where the goal of the algorithm is to identify anomalous images, or anomalous pixel regions within images in a dataset. Anomalib is constantly updated with new algorithms and training/inference extensions, so keep checking!Key featuresThe largest public collection of ready-to-use deep learning anomaly detection algorithms and benchmark datasets.PyTorch Lightningbased model implementations to reduce boilerplate code and limit the implementation efforts to the bare essentials.All models can be exported toOpenVINOIntermediate Representation (IR) for accelerated inference on intel hardware.A set ofinference toolsfor quick and easy deployment of the standard or custom anomaly detection models.Getting StartedFollowing is a guide on how to get started withanomalib. For more details, look at theDocumentation.Jupyter NotebooksFor getting started with a Jupyter Notebook, please refer to theNotebooksfolder of this repository. Additionally, you can refer to a few created by the community:PyPI InstallYou can get started withanomalibby just using pip.pipinstallanomalibLocal InstallIt is highly recommended to use virtual environment when installing anomalib. For instance, withanaconda,anomalibcould be installed as,yes|condacreate-nanomalib_envpython=3.10
conda activate anomalib_env
git clone https://github.com/openvinotoolkit/anomalib.git
cd anomalib
pipinstall-e.TrainingBy defaultpython tools/train.pyrunsPADIMmodel onleathercategory from theMVTec AD(CC BY-NC-SA 4.0)dataset.pythontools/train.py# Train PADIM on MVTec AD leatherTraining a model on a specific dataset and category requires further configuration. Each model has its own configuration
file,config.yaml, which contains data, model and training configurable parameters. To train a specific model on a specific dataset and
category, the config file is to be provided:pythontools/train.py--config<path/to/model/config.yaml>For example, to trainPADIMyou can usepythontools/train.py--configsrc/anomalib/models/padim/config.yamlAlternatively, a model name could also be provided as an argument, where the scripts automatically finds the corresponding config file.pythontools/train.py--modelpadimwhere the currently available models are:CFACFlowDFKDEDFMDRAEMEfficientAdFastFlowGANomalyPADIMPatchCoreReverse DistillationSTFPMFeature extraction & (pre-trained) backbonesThe pre-trained backbones come fromPyTorch Image Models (timm), which are wrapped byFeatureExtractor.For more information, please check our documentation or thesection about feature extraction in "Getting Started with PyTorch Image Models (timm): A Practitioner’s Guide".Tips:Papers With Code has an interface to easily browse models available in timm:https://paperswithcode.com/lib/timmYou can also find them with the functiontimm.list_models("resnet*", pretrained=True)The backbone can be set in the config file, two examples below.model:name:cflowbackbone:wide_resnet50_2pre_trained:trueCustom DatasetIt is also possible to train on a custom folder dataset. To do so,datasection inconfig.yamlis to be modified as follows:dataset:name:<name-of-the-dataset>format:folderpath:<path/to/folder/dataset>normal_dir:normal# name of the folder containing normal images.abnormal_dir:abnormal# name of the folder containing abnormal images.normal_test_dir:null# name of the folder containing normal test images.task:segmentation# classification or segmentationmask:<path/to/mask/annotations>#optionalextensions:nullsplit_ratio:0.2# ratio of the normal images that will be used to create a test splitimage_size:256train_batch_size:32test_batch_size:32num_workers:8transform_config:train:nullval:nullcreate_validation_set:truetiling:apply:falsetile_size:nullstride:nullremove_border_count:0use_random_tiling:Falserandom_tile_count:16InferenceAnomalib includes multiple tools, including Lightning, Gradio, and OpenVINO inferencers, for performing inference with a trained model.The following command can be used to run PyTorch Lightning inference from the command line:pythontools/inference/lightning_inference.py-hAs a quick example:pythontools/inference/lightning_inference.py\--configsrc/anomalib/models/padim/config.yaml\--weightsresults/padim/mvtec/bottle/run/weights/model.ckpt\--inputdatasets/MVTec/bottle/test/broken_large/000.png\--outputresults/padim/mvtec/bottle/imagesExample OpenVINO Inference:pythontools/inference/openvino_inference.py\--weightsresults/padim/mvtec/bottle/run/openvino/model.bin\--metadataresults/padim/mvtec/bottle/run/openvino/metadata.json\--inputdatasets/MVTec/bottle/test/broken_large/000.png\--outputresults/padim/mvtec/bottle/imagesEnsure that you provide path tometadata.jsonif you want the normalization to be applied correctly.You can also use Gradio Inference to interact with the trained models using a UI. 
Refer to ourguidefor more details.A quick example:pythontools/inference/gradio_inference.py\--weightsresults/padim/mvtec/bottle/run/weights/model.ckptExporting Model to ONNX or OpenVINO IRIt is possible to export your model to ONNX or OpenVINO IRIf you want to export your PyTorch model to an OpenVINO model, ensure thatexport_modeis set to"openvino"in the respective modelconfig.yaml.optimization:export_mode:"openvino"# options: openvino, onnxHyperparameter OptimizationTo run hyperparameter optimization, use the following command:pythontools/hpo/sweep.py\--modelpadim--model_config./path_to_config.yaml\--sweep_configtools/hpo/sweep.yamlFor more details refer theHPO DocumentationBenchmarkingTo gather benchmarking data such as throughput across categories, use the following command:pythontools/benchmarking/benchmark.py\--config<relative/absolutepath>/<paramfile>.yamlRefer to theBenchmarking Documentationfor more details.Experiment ManagementAnomablib is integrated with various libraries for experiment tracking such as Comet, tensorboard, and wandb throughpytorch lighting loggers.Below is an example of how to enable logging for hyper-parameters, metrics, model graphs, and predictions on images in the test data-setvisualization:log_images:True# log images to the available loggers (if any)mode:full# options: ["full", "simple"]logging:logger:[comet,tensorboard,wandb]log_graph:TrueFor more information, refer to theLogging DocumentationNote: Set your API Key forComet.mlviacomet_ml.init()in interactive python or simply runexport COMET_API_KEY=<Your API Key>Community Projects1. Web-based Pipeline for Training and InferenceThis project showcases an end-to-end training and inference pipeline build on top of Anomalib. It provides a web-based UI for uploading MVTec style datasets and training them on the available Anomalib models. It also has sections for calling inference on individual images as well as listing all the images with their predictions in the database.You can view the project onGithubFor more details see theDiscussion forumDatasetsanomalibsupports MVTec AD(CC BY-NC-SA 4.0)and BeanTech(CC-BY-SA)for benchmarking andfolderfor custom dataset training/inference.MVTec AD DatasetMVTec AD dataset is one of the main benchmarks for anomaly detection, and is released under the
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License(CC BY-NC-SA 4.0).Note: These metrics are collected with image size of 256 and seed42. This common setting is used to make model comparisons fair.Image-Level AUCModelAvgCarpetGridLeatherTileWoodBottleCableCapsuleHazelnutMetal NutPillScrewToothbrushTransistorZipperEfficientAdPDN-S0.9820.9821.0000.9971.0000.9861.0000.9520.9500.9520.9790.9870.9600.9970.9990.994EfficientAdPDN-M0.9750.9720.9981.0000.9990.9840.9910.9450.9570.9480.9890.9260.9751.0000.9650.971PatchCoreWide ResNet-500.9800.9840.9591.0001.0000.9891.0000.9900.9821.0000.9940.9240.9600.9331.0000.982PatchCoreResNet-180.9730.9700.9471.0000.9970.9971.0000.9860.9651.0000.9910.9160.9430.9310.9960.953CFlowWide ResNet-500.9620.9860.9621.0000.9990.9931.00.8930.9451.00.9950.9240.9080.8970.9430.984CFAWide ResNet-500.9560.9780.9610.9900.9990.9940.9980.9790.8721.0000.9950.9460.7031.0000.9570.967CFAResNet-180.9300.9530.9470.9991.0001.0000.9910.9470.8580.9950.9320.8870.6250.9940.8950.919PaDiMWide ResNet-500.9500.9950.9421.0000.9740.9930.9990.8780.9270.9640.9890.9390.8450.9420.9760.882PaDiMResNet-180.8910.9450.8570.9820.9500.9760.9940.8440.9010.7500.9610.8630.7590.8890.9200.780DFMWide ResNet-500.9430.8550.7840.9970.9950.9750.9990.9690.9240.9780.9390.9620.8730.9690.9710.961DFMResNet-180.9360.8170.7360.9930.9660.9771.0000.9560.9440.9940.9220.9610.890.9690.9390.969STFPMWide ResNet-500.8760.9570.9770.9810.9760.9390.9870.8780.7320.9950.9730.6520.8250.5000.8750.899STFPMResNet-180.8930.9540.9820.9890.9490.9610.9790.8380.7590.9990.9560.7050.8350.9970.8530.645DFKDEWide ResNet-500.7740.7080.4220.9050.9590.9030.9360.7460.8530.7360.6870.7490.5740.6970.8430.892DFKDEResNet-180.7620.6460.5770.6690.9650.8630.9510.7510.6980.8060.7290.6070.6940.7670.8390.866GANomaly0.4210.2030.4040.4130.4080.7440.2510.4570.6820.5370.2700.4720.2310.3720.4400.434Pixel-Level AUCModelAvgCarpetGridLeatherTileWoodBottleCableCapsuleHazelnutMetal NutPillScrewToothbrushTransistorZipperCFAWide ResNet-500.9830.9800.9540.9890.9850.9740.9890.9880.9890.9850.9920.9880.9790.9910.9770.990CFAResNet-180.9790.9700.9730.9920.9780.9640.9860.9840.9870.9870.9810.9810.9730.9900.9640.978PatchCoreWide ResNet-500.9800.9880.9680.9910.9610.9340.9840.9880.9880.9870.9890.9800.9890.9880.9810.983PatchCoreResNet-180.9760.9860.9550.9900.9430.9330.9810.9840.9860.9860.9860.9740.9910.9880.9740.983CFlowWide ResNet-500.9710.9860.9680.9930.9680.9240.9810.9550.9880.9900.9820.9830.9790.9850.8970.980PaDiMWide ResNet-500.9790.9910.9700.9930.9550.9570.9850.9700.9880.9850.9820.9660.9880.9910.9760.986PaDiMResNet-180.9680.9840.9180.9940.9340.9470.9830.9650.9840.9780.9700.9570.9780.9880.9680.979EfficientAdPDN-S0.9600.9630.9370.9760.9070.8680.9830.9830.9800.9760.9780.9860.9850.9620.9560.961EfficientAdPDN-M0.9570.9480.9370.9760.9060.8670.9760.9860.9570.9770.9840.9780.9860.9640.9470.960STFPMWide ResNet-500.9030.9870.9890.9800.9660.9560.9660.9130.9560.9740.9610.9460.9880.1780.8070.980STFPMResNet-180.9510.9860.9880.9910.9460.9490.9710.8980.9620.9810.9420.8780.9830.9830.8380.972Image F1 ScoreModelAvgCarpetGridLeatherTileWoodBottleCableCapsuleHazelnutMetal NutPillScrewToothbrushTransistorZipperPatchCoreWide 
ResNet-500.9760.9710.9741.0001.0000.9671.0000.9680.9821.0000.9840.9400.9430.9381.0000.979PatchCoreResNet-180.9700.9490.9461.0000.980.9921.0000.9780.9691.0000.9890.9400.9320.9350.9740.967EfficientAdPDN-S0.9700.9661.0000.9951.0000.9751.0000.9070.9560.8970.9780.9820.9440.9840.9880.983EfficientAdPDN-M0.9660.9770.9911.0000.9940.9670.9840.9220.9690.8840.9840.9520.9551.0000.9290.979CFAWide ResNet-500.9620.9610.9570.9950.9940.9830.9840.9620.9461.0000.9840.9520.8551.0000.9070.975CFAResNet-180.9460.9560.9460.9731.0001.0000.9830.9070.9380.9960.9580.9200.8580.9840.7950.949CFlowWide ResNet-500.9440.9720.9321.0000.9880.9671.0000.8320.9391.0000.9790.9240.9710.8700.8180.967PaDiMWide ResNet-500.9510.9890.9301.0000.9600.9830.9920.8560.9820.9370.9780.9460.8950.9520.9140.947PaDiMResNet-180.9160.9300.8930.9840.9340.9520.9760.8580.9600.8360.9740.9320.8790.9230.7960.915DFMWide ResNet-500.9500.9150.8700.9950.9880.9600.9920.9390.9650.9710.9420.9560.9060.9660.9140.971DFMResNet-180.9430.8950.8710.9780.9580.9001.0000.9350.9650.9660.9420.9560.9140.9660.8680.964STFPMWide ResNet-500.9260.9730.9730.9740.9650.9290.9760.8530.9200.9720.9740.9220.8840.8330.8150.931STFPMResNet-180.9320.9610.9820.9890.9300.9510.9840.8190.9180.9930.9730.9180.8870.9840.7900.908DFKDEWide ResNet-500.8750.9070.8440.9050.9450.9140.9460.7900.9140.8170.8940.9220.8550.8450.7220.910DFKDEResNet-180.8720.8640.8440.8540.9600.8980.9420.7930.9080.8270.8940.9160.8590.8530.7560.916GANomaly0.8340.8640.8440.8520.8360.8630.8630.7600.9050.7770.8940.9160.8530.8330.5710.881ReferenceIf you use this library and love it, use this to cite it 🤗@misc{anomalib,
title={Anomalib: A Deep Learning Library for Anomaly Detection},
author={Samet Akcay and
Dick Ameln and
Ashwin Vaidya and
Barath Lakshmanan and
Nilesh Ahuja and
Utku Genc},
year={2022},
eprint={2202.08341},
archivePrefix={arXiv},
primaryClass={cs.CV}}ContributingFor those who would like to contribute to the library, seeCONTRIBUTING.mdfor details.Thank you to all of the people who have already made a contribution - we appreciate your support!
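As a convenience, the timm call mentioned in the feature-extraction section above can be used to browse candidate backbones. This is a minimal sketch, assuming timm is available in the environment (it is installed alongside anomalib):
# list pretrained ResNet backbones exposed by timm
import timm

resnet_backbones = timm.list_models("resnet*", pretrained=True)
print(len(resnet_backbones), resnet_backbones[:5])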
|
anomalies
|
AnomaliesImplement the anomaly free solution ofarXiv:1905.13729[PRL]:Obtain a numpy arrayzofNintegers which satisfy the Diophantine equations>>>z.sum()0>>>(z**3).sum()0The input is two listslandkwith any(N-3)/2and(N-1)/2integers forNodd, orN/2-1andN/2-1forNeven (N>4).
The function is implemented below under the name:free(l,k)Install$ pip install anomaliesUSAGE>>> from anomalies import anomaly>>> anomaly.free([-1,1],[4,-2])array([3, 3, 3, -12, -12, 15])>>> anomaly.free.gcd3>>> anomaly.free.simplifiedarray([1, 1, 1, -4, -4, 5])ExampleA sample for4<N<13with integers until|30|with~400 000chiral solutions can be downloaded from:[JSON]
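The snippet below is a small self-check built only from the usage shown above: it verifies that the returned solution indeed satisfies the two Diophantine (anomaly-free) conditions.
# verify the example solution satisfies z.sum() == 0 and (z**3).sum() == 0
from anomalies import anomaly

z = anomaly.free([-1, 1], [4, -2])
assert z.sum() == 0        # linear condition
assert (z**3).sum() == 0   # cubic condition
print(z, anomaly.free.gcd, anomaly.free.simplified)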
|
anomalo
|
A basic REST API client and command line client forAnomalo
|
anomalous
|
Server anomaly detection.DescriptionContact AuthorityLabs.com for more details.
|
anomalous-diffusion
|
Stochastic Process SimulationRandom generatorsymmetric stable distributiontotally skewed stable distributionpower-law distributiondiscrete finite probability distributionLevy processstable Levy processsubordinatorPoisson processContinuous-time random walk(CTRW)finite and diverging characteristic waiting timefinite and diverging jump length varianceAlternating processtwo-states process with Levy walk and Brownian motionFractional Brownian motionMultiple internal states processfractional compound Poisson processLevy walk
|
anomaly
|
Failed to fetch description. HTTP Status Code: 404
|
anomalyapp
|
Anomaly_Ensemble_AppTable of contentsA statement of needOverview of Anomaly_Ensemble_AppFeaturesTarget audienceFunctionalityModule anomaly_libsModule anomaly_data_preprocessingModule anomaly_modelsModule anomaly_detectionModule anomaly_mainInstallation instructionsPrerequisitesApp installationDemoDataCodeCommunity guidelinesContribute to the softwareReport issues or problems with the softwareSeek supportSoftware licenseA statement of needOverview of Anomaly_Ensemble_AppAnomaly_Ensemble_App is an anomaly detection python library that is based on the ensemble learning algorithm to derive better predictions. It combines multiple independent anomaly detection models such as Isolation Forest, DBSCAN, ThymeBoost, One-Class Support Vector Machine, Local Outlier Factor and TADGAN. Then, it returns the output according as the average prediction of the individual models is above a certain threshold. This package is specially designed for univariate time series.Process Flow Diagram:Performance MetricsA confusion matrix consists of different combinations of predicted and actual anomaly labels.True Positive (TP): Number of instances correctly classified as anomaly by the model. A high value indicates that the model is accurately identifying anomalies.True Negative (TN): Number of instances correctly classified as non anomaly by the model. A high value indicates that the model is accurately identifying non-anomalies.False Positive (FP): Number of instances incorrectly classified as anomaly by the model. A high value indicates that the model is producing a large number of false alarms.False Negative (FN): Number of instances incorrectly classified as non anomaly by the model. A high value indicates that the model is failing to detect many anomalies.Predicted ClassAnomalyNon AnomalyActual ClassAnomalyTrue PositiveFalse NegativeNon AnomalyFalse PositiveTrue NegativeThe confusion matrix helps to calculate important metrics that is used to evaluate anomaly detection models:Accuracy: It measures the overall correctness of the model's predictions, calculated as (TP + TN) / (TP + TN + FP + FN).Precision: It quantifies the proportion of correctly predicted anomalies out of all predicted anomalies, calculated as TP / (TP + FP).Recall/Sensitivity: It represents the proportion of correctly predicted anomalies out of all actual anomalies, calculated as TP / (TP + FN).F1 score: It combines precision and recall into a single metric that is used in case of imbalanced classes (for eg, less anomalies and more non anomalies), calculated as 2 * (precision * recall) / (precision + recall).FeaturesWorks well when the underlying data distribution is unknown.Handles time series data.Deals with trends and seasonality.Provides options to choose anomaly detection models for ensemble learning.Suggests hyperparameters for each anomaly detection model.Provides execution time for each anomaly detection model.Provides more accurate anomaly predictions.Target audienceAnomaly_Ensemble_App should be of interest to readers who are involved in outlier detection for time series data.FunctionalityThe package consists of several modules and each module contains several functions.Package Structure:Module anomaly_libsThis module imports all the needed python libraries that are required to run the package.Module anomaly_data_preprocessingThis module contains the data preprocessing tasks which includes detrending and deseasonalising the time series, as well as finding optimal hyperparameters for the models.output = detrend(df, id_column, 
time_column, time_format)Definition: This function is used to identify trend and then, detrend a time series.Identify trend: Augmented Dickey Fuller Test (ADF test) is used to capture trend of the time series.Detrend: 'detrend' function is used from the 'scipy' module for detrending the time series.Input Parameters:df is the dataset.id_column is the column over which the function will iterate.time_column is the datetime column.time_format is the datetime format of time_column.Output: Returns detrended time series.output = deseasonalise(df, id_column, time_column, time_format)Definition: This function is used to determine seasonality and then, deseasonlise a time series.Determine seasonality: Autocorrelation function is used to check seasonality.Deseasonalise: 'seasonal_decompose' function is used from the 'statsmodel' module for deseasonalising the time series.Input Parameters:df is the dataset.id_column is the column over which the function will iterate.time_column is the datetime column.time_format is the datetime format of time_column.Output: Returns deseasonalised time series.min_samples = find_min_samples(df)Definition: This function is used to find an hyperparameter for DBSCAN.Find min_samples: min_samples is chosen as 2*n, where n = the dimensionlity of the data.Input Parameters: df is the dataset.Output: Returns min_samples.eps = find_eps(df)Definition: This function is used to find an hyperparameter for DBSCAN.Find eps: eps is chosen as the point of maximum curvature of the k-NN distance graph.Input Parameters: df is the dataset.Output: Returns eps.p = find_seasonal_period(df)Definition: This function is used to find an hyperparameter for Thymeboost.Find p: seasonal_period is chosen as the first expected seasonal period at the maximum amplitude, computed using Fast Fourier Transform.Input Parameters: df is the dataset.Output: Returns seasonal_period.(best_nu, best_kernel) = parameters_oc_svm(X, y, trials=10)Definition: This function is used to hyperparameters for One Class SVM.Find best_nu & best_kernel: Best optimal nu and kernal are found using Optuna hyperparameter optimization frameworkInput Parameters:df is the dataset.id_column is the column over which the function will iterate.time_column is the datetime column.time_format is the datetime format of time_column.Output: Returns best_nu and best_kernel.Module anomaly_modelsThis module contains the models which are to be fitted on the data and hence used in anomaly prediction.ifo_labels = fit_iforest(X, ** kwargs)Definition: This function is used to fit isolation forest model to the data and predict anomaly labels.Model: Isolation forest is a decision tree based anomaly detection algorithm.
It isolates outliers by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of that feature.
It isolates the outliers based on path length of the node/data point in the tree structure.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for Isolation Forest from a keyword-based Python dictionary.Output: Returns the anomaly labels.dbscan_labels = fit_dbscan(X, ** kwargs)Definition: This function is used to fit dbscan model to the data and predict anomaly labels.Model: DBSCAN is a density based anomaly detection algorithm.
It groups together the core points in clusters which are in high density regions, surrounded by border points.
It marks the other points as outliers.Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameters for DBSCAN, eps and min_samples.Output: Returns the anomaly labels.tb_labels = fit_ThymeBoost(df, ** kwargs)Definition: This function is used to fit thymeboost model to the data and predict anomaly labels.Model: ThymeBoost is an anomaly detection algorithm which applies gradient boosting on time series decomposition.It is a time series model for trend/seasonality/exogeneous estimation and decomposition using gradient boosting. It classifies a datapoint as outlier when it does not lie within the range of the fitted trend.Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for Thymeboost, seasonal_period.Output: Returns the anomaly labels.ocsvm_labels = fit_oc_svm(df, ** kwargs)Definition: This function is used to fit one-class svm model to the data and predict anomaly labels.Model: One-Class Support Vector Machine (SVM) is an anomaly detection algorithm.
It learns a decision boundary that separates the data from the origin in feature space while staying as close to the data points as possible, effectively fitting a non-linear boundary (a hypersphere in feature space) around the dense region of the data set.
It marks the data points outside the hypersphere as outliers.Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for One Class SVM, best_nu and best_kernel.Output: Returns the anomaly labels.lof_labels = fit_lof(df, k):Definition: This function is used to fit lof model to the data and predict anomaly labels.Model: Local outlier factor (LOF) is an anomaly detection algorithm.
It computes the local density deviation of a data point with respect to its neighbors.
It considers the data points that fall within a substantially lower density range than its neighbors as outliers.Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for LOF, alg.Output: Returns the anomaly labels.tadgan_labels = fit_tadgan(df, k):Definition: This function is used to fit tadgan model to the data and predict anomaly labels.Model:Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for tadgan, epochs.Output: Returns the anomaly labels.Module anomaly_detectionThis module contains the majority vote algorithm.voters = election(voters, n_voters, threshold)Definition: This function is used to find the final anomaly labels of the ensemble method.Model: Ensemble method is a tree based anomaly detection method. It identifies outliers based on majority voting logic. If the average predicted data point of all the models is above a certain threshold, then it is marked as an outlier.Prediction labels: Anomaly marked as 1 and normal as 0.Input Parameters:voters is the dataframe that contains prediction columns of all models that are fit for the run.n_voters is the number of models that are fit for the run.threshold is the limit above which a datapoint is considered an outlier.Output: Returns the anomaly labels.(election_results, models_dict) = get_labels(X, ** kwargs)Definition: This function is used to .Model:Prediction labels: Anomaly marked as 1 and normal as 0.Input Parameters:X is the dataset.** kwargsOutput: Returns the anomaly labels.election_results is the datframe with predicted labels of all models.models_dict is the Python dictionary of the models containing fit function, execution time, labels and parameters.Module anomaly_mainThis module contains the results to be displayed.find_parameters(self)Definition: This function is used to provide parameters of all the models.find_anomalies(self)Definition: This function is used to find the model performance.Input parameters:Installation instructionsPrerequisitesAnomaly_Ensemble_App has been developed and tested in Python Version: v3.7 and requires some libraries.python3 -m pip install -r "topath\requirements.txt"Application installationpython3 -m pip install Anomaly_Ensemble_AppDemoDataUse exemplary datasetsCodefrom anomaly_ensemble_app.anomaly_main import *
import pandas as pd
original_data = "syntethic_original.csv"
original_DF = pd.read_csv(original_data, sep=";")
anomaly_detection_obj = AnomalyDetection(original_DF, 'spare_part_id', 'yyyymm', '%Y%m', models=["full"])
anomaly_detection_labels = anomaly_detection_obj.find_anomalies()  # assumed missing step: run the detection (method documented above); the returned object is assumed to expose the result frames
anomaly_detection_labels.performance_DF
anomaly_detection_labels.final_dfCommunity guidelinesContribute to the softwareTo contribute fixes, feature modifications or enhancements, a pull request can be created in thePull requeststab of the project GitHub repository. When contributing to the software, the folowing should be included.Description of the change;Check that all tests pass;Include new tests to report the change.Report issues or problems with the softwareAny feature request or issue can be submitted to the theIssuestab of the project GitHub repository. When reporting issues with the software, the folowing should be included.Description of the problem;Error message;Python version and Operating System.Seek supportIf any support needed, the authors can be contacted by e-mail @volvo.com.Software licenseAnomaly_Ensemble_App is released under the MIT License.
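For completeness, the confusion-matrix metrics defined above can be reproduced with a few lines of plain Python; the counts below are made-up example values, not outputs of the package.
# accuracy, precision, recall and F1 from confusion-matrix counts
tp, tn, fp, fn = 40, 900, 10, 50

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * (precision * recall) / (precision + recall)
print(accuracy, precision, recall, f1)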
|
anomalydetection
|
No description available on PyPI.
|
anomaly-detection
|
Anomaly Detection PackageThis is an open anomaly detection package.
|
anomaly-detection-framework
|
Anomaly Detection FrameworkAnomaly-Detection-Framework is a platform for time series anomaly detection problems. Give the data to the platform to get the anomaly labels at scheduled time periods. It is as simple as that!
Anomaly-Detection-Framework enables data science communities to easily detect abnormal values in a time series data set. It is a platform that can run on Docker containers as services, or in Python by using its modules. It also has a web interface which allows us to run train, prediction, and parameter tuning jobs easily.Key FeaturesWeb Interface:A web interface which allows us to connect a data source and execute machine learning processes. You may create tasks according to your data set: training models, anomaly detection prediction, and parameter tuning for trained models.MenuData Source ConfigurationsCreate TaskJob CenterDashboardSchedule Machine Learning Jobs:Three main processes can be initialized from the platform: Train, Prediction, and Parameter Tuning. Each process can be scheduled daily, monthly, weekly, hourly, etc. at a given time. In addition, you may run your machine learning processes in real time.Dashboard Visualization:When your data have been connected to a data source and you have assigned the Date Indicator, Model Dimensions (optional), and feature column from theCreate Taskmenu, you may see the dashboard from theDashboardmenu.Data Sources:
Here are the data sources that you can connect to with your SQL queries:MS SQL ServerPostgreSQLAWS RedshiftGoogle BigQuery.csv.jsonpickleModels:There are two anomaly detection algorithms and an FBProphet anomaly detection solution running on the platform.LSTMFBProphetIsolation ForestAPI Services:There are four services running on the platform.Machine Learning Schedule ServicesLSTM Model ServiceFbProphet Model ServiceIsolation Forest Model ServiceDocker Compose Integration (Beta):These four services run as Docker containers.ml_executor-servicesmodel-services-iso_fmodel-services-lstmmodel-services-prophetRunning Platform1. You have to specify your directoryfrom anomaly_detection import ad_execute as ad_exec
ad = ad_exec.AnomalyDetection(path='./Desktop', environment='local')
ad.init(apis=None)Once you have assigned the path, a folder calledAnomaly_Detection_Frameworkwill be created inside of it. This folder includes the models, data, logs, and docs folders.Trained models will be imported to themodelsfolder. Log files for bothml_execute,model_iso_f,model_prophet, andmodel_lstmwill be created in thelogsfolder.
Your.csv,.jsonor.yamldata source file must be copied to the data folder which is at theAnomaly_Detection_Frameworkfolder. If you are connecting to Google Big Query data source, Big Query API (.json file) must be copied into the "data" folder. Once, prediction jobs have been initialized output .csv file is imported to thedatafolder.The given path will be your workspace where all data-source you can store or get from it. By using "AnomalyDetection" module ofpathargument you can specify the path. If there are files which are already exists on the given path, you may remove them by usingremove_existed = True(defaultFalse)anomaly_detection.AnomalyDetection:AnomalyDetectionpath :The location where you are willing to create models and prediction data set.*enviroment :local or dockerhost :local or dockerremove_existed :remove data from the location where you have entered as a path.master_node :ifFalse, you must enter the services of information manually (port host, etc.). This allows the user to initialize each service on different locations or servers or workers.
IfFalse, there will not be a web interface. Once you create a master node, in order to use the other services, you have to specify these services on it. The master node will lead the other services and has an additional web interface service running on it.initThis initializes the folders, checks the available ports for services in the range between6000 - 7000, and updates theapis.yamlif necessary.apis :services = {
'model_iso_f': {'port': 6000, 'host': '127.0.0.1'},
'model_lstm': {'port': 6001, 'host': '127.0.0.1'}
}
ad = ad_exec.AnomalyDetection(path='./Desktop', environment='local', master_node=False)
ad.init(apis=services)The example above initializes themodel_iso_fandmodel_lstmservices. Both will run on the given host with the given ports. However, if the given ports are already in use, other ports will be assigned automatically.2. Run The Platformad.run_platform()This process initializes the platform. Once you have run the code above, you should see that the services are running.
If you assignmaster_node = True, you may enter the web interface fromhttp://127.0.0.1:7002/.
If port7002is used by another application, the platform assigns the next port (7003, 7004, 7005, ...).2. Data SourceYou can connect to a data source fromData Source Configurations.
There are two options to connect to a data source. You can integrate it on the web interface, or you can use theAnomalyDetectionmethod in order to connect to a data source.from anomaly_detection import ad_execute as ad_exec
# create your platform folders.
ad = ad_exec.AnomalyDetection(path='./Desktop', environment='local')
# copy folders
ad.init()
# initialize services
ad.run_platform()
# create data source with Google BigQuery
ad.create_data_source(data_source_type='googlebigquery',
data_query_path="""
SELECT
fullVisitorId,
TIMESTAMP_SECONDS(visitStartTime) as date,
CASE WHEN type = 'PAGE' THEN 'PAGE'
ELSE eventAction END as event_type_and_category,
MIN(time) / 1000 as time_diff
FROM (SELECT
geoNetwork.city as city,
device.browser browser,
device.deviceCategory deviceCategory,
visitStartTime,
hits.eventInfo.eventAction eventAction,
hits.eventInfo.eventCategory eventCategory,
hits.type type,
hits.page.pageTitle pageTitle,
hits.time time,
fullVisitorId as fullVisitorId
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`,
UNNEST(hits) as hits
) as a
WHERE pageTitle != 'Home'
GROUP BY
visitStartTime,
deviceCategory,
browser,
city,
eventAction,
eventCategory,
type,
pageTitle,
eventCategory,
fullVisitorId
ORDER BY fullVisitorId, visitStartTime
""",
db='flash-clover-**********.json',
host=None,
port=None,
user=None,
pw=None)Example above, it is created a connector to Google BigQuery by usingAnomalyDtectionmethod.Connection PostgreSQL - MS SQLConnection .csv - .json - .yamlConnection Google BigQueryCreate TasksModel Dimensions :You may want to Train your model with separated Groups. The platform automatically finds the date part as dimensions from theDate Indicator. However external dimension can be included by assigning from here.Date Indicator :You have to specify the date column from your raw data set.This is a mandatory field.Anomaly Feature :In order to find the anomaly values, you have to specify which column we are investigating for.This is a mandatory field.Train :Choose the schedule time period for train task. The chosen period will be started depending on the time where it is assigned atTrain Job Dates - Start*. IfTrain Job Dates - Start* is not assigned, the job date will automatically assign as the current date and it can be started immediately. Parameter Tunning also runs when train task runs for the first time.Prediction :As like Train Task, Prediction task also be scheduled similar way. However, you have to assign ***Prediction Job Dates - Start *** while you are creating task.Parameter Tuning :Parameter Tuning also is able to be scheduled. However, the starting date is assigning related toTrain Job Dates - Start. Parameter tunning also runs when train task runs for the first time.Here are the schedule options :Daily :Each day, the job process will start with a given time where you assign atTrain Job Dates - Start.only once :It can be triggered just once.Mondays ... Sundays :Assigned day of the week, the job will start.Weekly :Job will run every 7 days after it is started.Every 2 Weeks :14 days of the time period.Monthly :every 30 days of the time period.Every Minute :Every minute job can be triggered.Every Second :Every each second job can be triggered.You can create 3 main Machine Learning task which generally uses for each Data Scientist. You may create a task and schedule them separately. For instance, train can run every week, prediction can create outputs daily, and every each month parameters can be optimized by parameter tunning task.This process is only available after Data Source is created.
Once you create the data source, you can see the column names underModel Dimensions,Date Indicator, andAnomaly Feature.
You can not create tasks separately.Job RunOnce you create tasks, jobs are eligible to run periodically. You can also run the code below rather than using the application interface:ad.manage_train(stop=False)
ad.manage_prediction(stop=False)
ad.manage_parameter_tuning(stop=False)*** AnomalyDetection.manage_train :***- stop :If False stops running training scheduled task.*** AnomalyDetection.manage_prediction :***- stop :If False stops running prediction scheduled task.*** AnomalyDetection.manage_parameter_tuning :***- stop :If False stops running parameter tuning scheduled task.DashboardOnce you assign the data source connection and create the task automatically, the dashboard will be created directly according to the model dimension.
AfterData SourceandCreate Taskare done, you can reset the web application with the code below;ad = ad_exec.AnomalyDetection(path='./Desktop', environment='local')
ad.reset_web_app()
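To tie the pieces together, here is a condensed sketch of the flow described above, using only the calls shown in this README; the path is illustrative and a data source/task is assumed to have already been configured via the web interface.
from anomaly_detection import ad_execute as ad_exec

# create the workspace folders and start the services
ad = ad_exec.AnomalyDetection(path='./Desktop', environment='local')
ad.init()
ad.run_platform()

# start the scheduled jobs programmatically instead of from the web interface
ad.manage_train(stop=False)
ad.manage_prediction(stop=False)
ad.manage_parameter_tuning(stop=False)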
|
anomaly-detection-models
|
anomaly_detection_modelsRepository with some useful anomaly detection model definitions.installfor latest stable versionpip install anomaly_detection_models [--user]for most recent version, usegit clone [email protected]:luclepot/anomaly_detection_models.git
pip install . [--user]with the--userargument specifying local installation.usageimport models directly or subclassanomaly_detection_baseto make a new model (instructions in-source)exampleseedemos/test.ipynbfor an example. general usage is like sklearn, asfrom anomaly_detection_models import SACWoLa
sacwola = SACWoLa(epochs=10, lambda_=1.2)
sacwola.fit(x, y_sim, y_sb)
pred = sacwola.predict(x_test)
|
anomaly-detection-ts
|
No description available on PyPI.
|
anomaly-detect-useready
|
Failed to fetch description. HTTP Status Code: 404
|
anomaly-devosmita
|
IntroductionTODO: Give a short introduction of your project. Let this section explain the objectives or the motivation behind this project.Getting StartedTODO: Guide users through getting your code up and running on their own system. In this section you can talk about:Installation processSoftware dependenciesLatest releasesAPI referencesBuild and TestTODO: Describe and show how to build your code and run the tests.ContributeTODO: Explain how other users and developers can contribute to make your code better.If you want to learn more about creating good readme files then refer the followingguidelines. You can also seek inspiration from the below readme files:ASP.NET CoreVisual Studio CodeChakra Core
|
anomaly-ensemble-app
|
https://github.com/devosmitachatterjee2018/Creating_Python_Libraryhttps://github.com/devosmitachatterjee2018/Anomaly_Ensemble_App/blob/main/README.mdIntroductionTODO: Give a short introduction of your project. Let this section explain the objectives or the motivation behind this project.Getting StartedTODO: Guide users through getting your code up and running on their own system. In this section you can talk about:Installation processSoftware dependenciesLatest releasesAPI referencesBuild and TestTODO: Describe and show how to build your code and run the tests.ContributeTODO: Explain how other users and developers can contribute to make your code better.If you want to learn more about creating good readme files then refer the followingguidelines. You can also seek inspiration from the below readme files:ASP.NET CoreVisual Studio CodeChakra Core
|
anomaly-ensemble-application
|
Anomaly_Ensemble_AppTable of contentsA statement of needOverview of Anomaly_Ensemble_AppFeaturesTarget audienceFunctionalityModule anomaly_libsModule anomaly_data_preprocessingModule anomaly_modelsModule anomaly_detectionModule anomaly_mainInstallation instructionsPrerequisitesApp installationDemoDataCodeCommunity guidelinesContribute to the softwareReport issues or problems with the softwareSeek supportSoftware licenseA statement of needOverview of Anomaly_Ensemble_AppAnomaly_Ensemble_App is an anomaly detection python library that is based on the ensemble learning algorithm to derive better predictions. It combines multiple independent anomaly detection models such as Isolation Forest, DBSCAN, ThymeBoost, One-Class Support Vector Machine, Local Outlier Factor and TADGAN. Then, it returns the output according as the average prediction of the individual models is above a certain threshold. This package is specially designed for univariate time series.Process Flow Diagram:Performance MetricsA confusion matrix consists of different combinations of predicted and actual anomaly labels.True Positive (TP): Number of instances correctly classified as anomaly by the model. A high value indicates that the model is accurately identifying anomalies.True Negative (TN): Number of instances correctly classified as non anomaly by the model. A high value indicates that the model is accurately identifying non-anomalies.False Positive (FP): Number of instances incorrectly classified as anomaly by the model. A high value indicates that the model is producing a large number of false alarms.False Negative (FN): Number of instances incorrectly classified as non anomaly by the model. A high value indicates that the model is failing to detect many anomalies.Predicted ClassAnomalyNon AnomalyActual ClassAnomalyTrue PositiveFalse NegativeNon AnomalyFalse PositiveTrue NegativeThe confusion matrix helps to calculate important metrics that is used to evaluate anomaly detection models:Accuracy: It measures the overall correctness of the model's predictions, calculated as (TP + TN) / (TP + TN + FP + FN).Precision: It quantifies the proportion of correctly predicted anomalies out of all predicted anomalies, calculated as TP / (TP + FP).Recall/Sensitivity: It represents the proportion of correctly predicted anomalies out of all actual anomalies, calculated as TP / (TP + FN).F1 score: It combines precision and recall into a single metric that is used in case of imbalanced classes (for eg, less anomalies and more non anomalies), calculated as 2 * (precision * recall) / (precision + recall).FeaturesWorks well when the underlying data distribution is unknown.Handles time series data.Deals with trends and seasonality.Provides options to choose anomaly detection models for ensemble learning.Suggests hyperparameters for each anomaly detection model.Provides execution time for each anomaly detection model.Provides more accurate anomaly predictions.Target audienceAnomaly_Ensemble_App should be of interest to readers who are involved in outlier detection for time series data.FunctionalityThe package consists of several modules and each module contains several functions.Package Structure:Module anomaly_libsThis module imports all the needed python libraries that are required to run the package.Module anomaly_data_preprocessingThis module contains the data preprocessing tasks which includes detrending and deseasonalising the time series, as well as finding optimal hyperparameters for the models.output = detrend(df, id_column, 
time_column, time_format)Definition: This function is used to identify trend and then, detrend a time series.Identify trend: Augmented Dickey Fuller Test (ADF test) is used to capture trend of the time series.Detrend: 'detrend' function is used from the 'scipy' module for detrending the time series.Input Parameters:df is the dataset.id_column is the column over which the function will iterate.time_column is the datetime column.time_format is the datetime format of time_column.Output: Returns detrended time series.output = deseasonalise(df, id_column, time_column, time_format)Definition: This function is used to determine seasonality and then, deseasonlise a time series.Determine seasonality: Autocorrelation function is used to check seasonality.Deseasonalise: 'seasonal_decompose' function is used from the 'statsmodel' module for deseasonalising the time series.Input Parameters:df is the dataset.id_column is the column over which the function will iterate.time_column is the datetime column.time_format is the datetime format of time_column.Output: Returns deseasonalised time series.min_samples = find_min_samples(df)Definition: This function is used to find an hyperparameter for DBSCAN.Find min_samples: min_samples is chosen as 2*n, where n = the dimensionlity of the data.Input Parameters: df is the dataset.Output: Returns min_samples.eps = find_eps(df)Definition: This function is used to find an hyperparameter for DBSCAN.Find eps: eps is chosen as the point of maximum curvature of the k-NN distance graph.Input Parameters: df is the dataset.Output: Returns eps.p = find_seasonal_period(df)Definition: This function is used to find an hyperparameter for Thymeboost.Find p: seasonal_period is chosen as the first expected seasonal period at the maximum amplitude, computed using Fast Fourier Transform.Input Parameters: df is the dataset.Output: Returns seasonal_period.(best_nu, best_kernel) = parameters_oc_svm(X, y, trials=10)Definition: This function is used to hyperparameters for One Class SVM.Find best_nu & best_kernel: Best optimal nu and kernal are found using Optuna hyperparameter optimization frameworkInput Parameters:df is the dataset.id_column is the column over which the function will iterate.time_column is the datetime column.time_format is the datetime format of time_column.Output: Returns best_nu and best_kernel.Module anomaly_modelsThis module contains the models which are to be fitted on the data and hence used in anomaly prediction.ifo_labels = fit_iforest(X, ** kwargs)Definition: This function is used to fit isolation forest model to the data and predict anomaly labels.Model: Isolation forest is a decision tree based anomaly detection algorithm.
It isolates outliers by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of that feature.
It isolates the outliers based on path length of the node/data point in the tree structure.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for Isolation Forest from a keyword-based Python dictionary.Output: Returns the anomaly labels.dbscan_labels = fit_dbscan(X, ** kwargs)Definition: This function is used to fit dbscan model to the data and predict anomaly labels.Model: DBSCAN is a density based anomaly detection algorithm.
It groups together the core points in clusters which are in high density regions, surrounded by border points.
It marks the other points as outliers.Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameters for DBSCAN, eps and min_samples.Output: Returns the anomaly labels.tb_labels = fit_ThymeBoost(df, ** kwargs)Definition: This function is used to fit thymeboost model to the data and predict anomaly labels.Model: ThymeBoost is an anomaly detection algorithm which applies gradient boosting on time series decomposition.It is a time series model for trend/seasonality/exogeneous estimation and decomposition using gradient boosting. It classifies a datapoint as outlier when it does not lie within the range of the fitted trend.Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for Thymeboost, seasonal_period.Output: Returns the anomaly labels.ocsvm_labels = fit_oc_svm(df, ** kwargs)Definition: This function is used to fit one-class svm model to the data and predict anomaly labels.Model: One-Class Support Vector Machine (SVM) is an anomaly detection algorithm.
It learns a decision boundary that separates the data from the origin in feature space while staying as close to the data points as possible, effectively fitting a non-linear boundary (a hypersphere in feature space) around the dense region of the data set.
It marks the data points outside the hypersphere as outliers.Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for One Class SVM, best_nu and best_kernel.Output: Returns the anomaly labels.lof_labels = fit_lof(df, k):Definition: This function is used to fit lof model to the data and predict anomaly labels.Model: Local outlier factor (LOF) is an anomaly detection algorithm.
It computes the local density deviation of a data point with respect to its neighbors.
It considers the data points that fall within a substantially lower density range than its neighbors as outliers.Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for LOF, alg.Output: Returns the anomaly labels.tadgan_labels = fit_tadgan(df, k):Definition: This function is used to fit tadgan model to the data and predict anomaly labels.Model:Prediction labels: Anomaly marked as -1 and normal as 1.Input Parameters:X is the dataset.** kwargs takes the hyperparameter for tadgan, epochs.Output: Returns the anomaly labels.Module anomaly_detectionThis module contains the majority vote algorithm.voters = election(voters, n_voters, threshold)Definition: This function is used to find the final anomaly labels of the ensemble method.Model: Ensemble method is a tree based anomaly detection method. It identifies outliers based on majority voting logic. If the average predicted data point of all the models is above a certain threshold, then it is marked as an outlier.Prediction labels: Anomaly marked as 1 and normal as 0.Input Parameters:voters is the dataframe that contains prediction columns of all models that are fit for the run.n_voters is the number of models that are fit for the run.threshold is the limit above which a datapoint is considered an outlier.Output: Returns the anomaly labels.(election_results, models_dict) = get_labels(X, ** kwargs)Definition: This function is used to .Model:Prediction labels: Anomaly marked as 1 and normal as 0.Input Parameters:X is the dataset.** kwargsOutput: Returns the anomaly labels.election_results is the datframe with predicted labels of all models.models_dict is the Python dictionary of the models containing fit function, execution time, labels and parameters.Module anomaly_mainThis module contains the results to be displayed.find_parameters(self)Definition: This function is used to provide parameters of all the models.find_anomalies(self)Definition: This function is used to find the model performance.Input parameters:Installation instructionsPrerequisitesAnomaly_Ensemble_App has been developed and tested in Python Version: v3.7 and requires some libraries.python3 -m pip install -r "topath\requirements.txt"Application installationpython3 -m pip install Anomaly_Ensemble_AppDemoDataUse exemplary datasetsCodefrom anomaly_ensemble_app.anomaly_main import *
import pandas as pd
original_data = "syntethic_original.csv"
original_DF = pd.read_csv(original_data, sep=";")
anomaly_detection_obj = AnomalyDetection(original_DF, 'spare_part_id', 'yyyymm', '%Y%m', models=["full"])
anomaly_detection_labels = anomaly_detection_obj.find_anomalies()  # assumed missing step: run the detection (method documented above); the returned object is assumed to expose the result frames
anomaly_detection_labels.performance_DF
anomaly_detection_labels.final_dfCommunity guidelinesContribute to the softwareTo contribute fixes, feature modifications or enhancements, a pull request can be created in thePull requeststab of the project GitHub repository. When contributing to the software, the folowing should be included.Description of the change;Check that all tests pass;Include new tests to report the change.Report issues or problems with the softwareAny feature request or issue can be submitted to the theIssuestab of the project GitHub repository. When reporting issues with the software, the folowing should be included.Description of the problem;Error message;Python version and Operating System.Seek supportIf any support needed, the authors can be contacted by e-mail @volvo.com.Software licenseAnomaly_Ensemble_App is released under the MIT License.
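The majority-vote logic of the anomaly_detection module described above can be illustrated with a small, self-contained sketch; this is not the package's own code, and the per-model labels are hypothetical and already mapped to 1 = anomaly, 0 = normal.
# average the model votes and flag points whose mean vote exceeds the threshold
import pandas as pd

voters = pd.DataFrame({
    "iforest": [1, 0, 1, 0],
    "dbscan":  [1, 0, 0, 0],
    "ocsvm":   [1, 1, 1, 0],
})
threshold = 0.5
election_results = (voters.mean(axis=1) > threshold).astype(int)
print(election_results.tolist())  # [1, 0, 1, 0]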
|
anomalyHTM
|
Numenta Platform for Intelligent Computing: a machine intelligence platform that implements the HTM learning algorithms. HTM is a detailed computational theory of the neocortex. At the core of HTM are time-based continuous learning algorithms that store and recall spatial and temporal patterns. NuPIC is suited to a variety of problems, particularly anomaly detection and prediction of streaming data sources.For more information, seehttp://numenta.orgor the NuPIC wiki athttps://github.com/numenta/nupic/wiki.
|
anomalytics
|
AnomalyticsYour Ultimate Anomaly Detection & Analytics ToolIntroductionanomalyticsis a Python library that aims to implement all statistical methods for the purpose of detecting any sort of anomaly e.g. extreme events, high or low anomalies, etc. This library utilises external dependencies such as:Pandas 2.1.1NumPy 1.26.0SciPy 1.11.3Matplotlib 3.8.2Pytest-Cov 4.1.0.Black 23.10.0Isort 5.12.0MyPy 1.6.1Bandit 1.7.5anomalyticssupports the following Python's versions:3.10.x,3.11.x,3.12.0.InstallationTo use the library, you can install as follow:# Install without openpyxl$pip3installanomalytics# Install with openpyxl$pip3install"anomalytics[extra]"As a contributor/collaborator, you may want to consider installing all external dependencies for development purposes:# Install bandit, black, isort, mypy, openpyxl, pre-commit, and pytest-cov$pip3install"anomalytics[codequality,docs,security,testcov,extra]"Use Caseanomalyticscan be used to analyze anomalies in your dataset (both aspandas.DataFrameorpandas.Series). To start, let's follow along with this minimum example where we want to detect extremely high anomalies in our dataset.Read the walkthrough below, or the concrete examples here:Extreme Anomaly Analysis - DataFrameBattery Water Level Analysis - Time SeriesAnomaly Detection via theDetectorInstanceImportanomalyticsand initialise our time series of 100_002 rows:importanomalyticsasaticsdf=atics.read_ts("./ad_impressions.csv","csv")df.head()datetimexandrgamadobe02023-10-1809:01:0052.48357171.02113135.68191512023-10-1809:02:0049.30867873.65199660.34724622023-10-1809:03:0053.23844365.69081348.12080532023-10-1809:04:0057.61514980.94439359.55077542023-10-1809:05:0048.82923376.44509926.710413Initialize the needed detector object. Each detector utilises a different statistical method for detecting anomalies. In this example, we'll use POT method and a high anomaly type. Pay attention to the time period that is directly created where thet2is 1 by default because "real-time" always targets the "now" period hence 1 (sec, min, hour, day, week, month, etc.):pot_detector=atics.get_detector(method="POT",dataset=ts,anomaly_type="high")print(f"T0:{pot_detector.t0}")print(f"T1:{pot_detector.t1}")print(f"T2:{pot_detector.t2}")pot_detector.plot(ptype="line-dataset-df",title=f"Page Impressions Dataset",xlabel="Minute",ylabel="Impressions",alpha=1.0)T0:42705T1:16425T2:6570The purpose of using the detector object instead the standalone is to have a simple fix detection flow. In case you want to customize the time window, we can call thereset_time_window()to resett2value, even though that will beat the purpose of using a detector object. 
Pay attention to the period parameters because the method expects a percentage representation of the distribution of period (ranging 0.0 to 1.0):pot_detector.reset_time_window("historical",t0_pct=0.65,t1_pct=0.25,t2_pct=0.1)print(f"T0:{pot_detector.t0}")print(f"T1:{pot_detector.t1}")print(f"T2:{pot_detector.t2}")pot_detector.plot(ptype="hist-dataset-df",title="Dataset Distributions",xlabel="Distributions",ylabel="Page Impressions",alpha=1.0,bins=100)T0:65001T1:25001T2:10000Now, we can extract exceedances by giving the expectedquantile:pot_detector.get_extremes(0.95)pot_detector.exeedance_thresholds.head()xandrgamadobedatetime058.22465385.17702960.3623062023-10-1809:01:00158.22465385.17702960.3623062023-10-1809:02:00258.22465385.17702960.3623062023-10-1809:03:00358.22465385.17702960.3623062023-10-1809:04:00458.22465385.17702960.3623062023-10-1809:05:00Let's visualize the exceedances and its threshold to have a clearer understanding of our dataset:pot_detector.plot(ptype="line-exceedance-df",title="Peaks Over Threshold",xlabel="Minute",ylabel="Page Impressions",alpha=1.0)Now that we have the exceedances, we can fit our data into the chosen distribution, in this example the "Generalized Pareto Distribution". The first couple rows will be zeroes which is normal because we only fit data that are greater than zero into the wanted distribution:pot_detector.fit()pot_detector.fit_result.head()xandr_anomaly_scoregam_anomaly_scoreadobe_anomaly_scoretotal_anomaly_scoredatetime01.0871470.0000000.0000001.0871472023-11-1700:46:0010.0000000.0000000.0000000.0000002023-11-1700:47:0020.0000000.0000000.0000000.0000002023-11-1700:48:0030.0000001.8158750.0000001.8158752023-11-1700:49:0040.0000000.0000000.0000000.0000002023-11-1700:50:00
...Let's inspect the GPD distributions to get the intuition of our pareto distribution:pot_detector.plot(ptype="hist-gpd-df",title="GPD - PDF",xlabel="Page Impressions",ylabel="Density",alpha=1.0,bins=100)The parameters are stored inside the detector class:pot_detector.params{0:{'xandr':{'c':-0.11675297447288158,'loc':0,'scale':2.3129766056305603,'p_value':0.9198385927065513,'anomaly_score':1.0871472537998},'gam':{'c':0.0,'loc':0.0,'scale':0.0,'p_value':0.0,'anomaly_score':0.0},'adobe':{'c':0.0,'loc':0.0,'scale':0.0,'p_value':0.0,'anomaly_score':0.0},'total_anomaly_score':1.0871472537998},1:{'xandr':{'c':0.0,'loc':0.0,'scale':0.0,'p_value':0.0,'anomaly_score':0.0},'gam':{'c':0.0,'loc':0.0,'scale':0.0,'p_value':0.0,
...'scale':0.0,'p_value':0.0,'anomaly_score':0.0},'total_anomaly_score':0.0},
...}Last but not least, we can now detect the extremely large (high) anomalies:pot_detector.detect(0.95)pot_detector.detection_result16425False16426False16427False16428False16429False...22990False22991False22992False22993False22994False
Name:detecteddata,Length:6570,dtype:boolNow we can visualize the anomaly scores from the fitting with the anomaly threshold to get the sense of the extremely large values:pot_detector.plot(ptype="line-anomaly-score-df",title="Anomaly Score",xlabel="Minute",ylabel="Page Impressions",alpha=1.0)Now what? Well, while the detection process seems quite straight forward, in most cases getting the details of each anomalous data is quite tidious! That's whyanomalyticsprovides a comfortable method to get the summary of the detection so we can see when, in which row, and how the actual anomalous data look like:pot_detector.detection_summary.head(5)rowxandrgamadobexandr_anomaly_scoregam_anomaly_scoreadobe_anomaly_scoretotal_anomaly_scoreanomaly_threshold2023-11-2812:06:005922564.11713576.42592547.77292921.4457590.0000000.00000021.44575919.6898852023-11-2812:25:005924440.51341594.52602165.9216440.00000019.5579622.68533722.24329919.6898852023-11-2812:45:005926452.36203954.19171979.9728600.0000000.00000072.31327372.31327319.6898852023-11-2816:48:005950764.75320370.34414242.54016832.5430210.0000000.00000032.54302119.6898852023-11-2816:53:005951235.91222152.57293975.6210030.0000000.00000022.19950522.19950519.689885In every good analysis there is a test! We can evaluate our analysis result with "Kolmogorov Smirnov" 1 sample test to see how far the statistical distance between the observed sample distributions to the theoretical distributions via the fitting parameters (the smaller thestats_distancethe better!):pot_detector.evaluate(method="ks")pot_detector.evaluation_resultcolumntotal_nonzero_exceedancesstats_distancep_valueclocscale0xandr33110.0129010.635246-0.12856102.3290051gam32790.0110060.817674-0.14047903.8525742adobe32980.0194790.161510-0.13301906.007833If 1 test is not enough for evaluation, we can also visually test our analysis result with "Quantile-Quantile Plot" method to observed the sample quantile vs. the theoretical quantile:# Use the last non-zero parameterspot_detector.evaluate(method="qq")# Use a random non-zero parameterspot_detector.evaluate(method="qq",is_random=True)Anomaly Detection via Standalone FunctionsYou have a project that only needs to be fitted? To be detected? Don't worry!anomalyticsalso provides standalone functions as well in case users want to start the anomaly analysis from a different starting points. It is more flexible, but many processing needs to be done by you. LEt's take an example with a different dataset, thistime the water level Time Series!Importanomalyticsand initialise your time series:importanomalyticsasaticsts=atics.read_ts("water_level.csv","csv")ts.head()2008-11-0306:00:000.2192008-11-0307:00:00-0.0412008-11-0308:00:00-0.2822008-11-0309:00:00-0.3682008-11-0310:00:00-0.400
Name:WaterLevel,dtype:float64Set the time windows t0, t1, and t2 to compute the dynamic expanding period for calculating the threshold via quantile:t0,t1,t2=atics.set_time_window(total_rows=ts.shape[0],method="POT",analysis_type="historical",t0_pct=0.65,t1_pct=0.25,t2_pct=0.1)print(f"T0:{t0}")print(f"T1:{t1}")print(f"T2:{t2}")T0:65001T1:25001T2:10000Extract the exceedances, indicating the"high"anomaly type and thequantile:pot_thresholds=get_threshold_peaks_over_threshold(dataset=ts,t0=t0,anomaly_type="high",q=0.90)pot_exceedances=atics.get_exceedance_peaks_over_threshold(dataset=ts,threshold_dataset=pot_thresholds,anomaly_type="high")pot_exceedances.head()2008-11-0306:00:000.8592008-11-0307:00:000.8592008-11-0308:00:000.8592008-11-0309:00:000.8592008-11-0310:00:000.859
Name:WaterLevel,dtype:float64Compute the anomaly scores for each exceedance and initialize a params dictionary for further analysis and evaluation:params={}anomaly_scores=atics.get_anomaly_score(exceedance_dataset=pot_exceedances,t0=t0,gpd_params=params)anomaly_scores.head()2016-04-0315:00:000.02016-04-0316:00:000.02016-04-0317:00:000.02016-04-0318:00:000.02016-04-0319:00:000.0
Name:anomalyscores,dtype:float64
...Inspect the parameters:params{0:{'index':Timestamp('2016-04-03 15:00:00'),'c':0.0,'loc':0.0,'scale':0.0,'p_value':0.0,'anomaly_score':0.0},1:{'index':Timestamp('2016-04-03 16:00:00'),
...'c':0.0,'loc':0.0,'scale':0.0,'p_value':0.0,'anomaly_score':0.0},
...}Detect anomalies:anomaly_threshold=get_anomaly_threshold(anomaly_score_dataset=anomaly_scores,t1=t1,q=0.90)detection_result=get_anomaly(anomaly_score_dataset=anomaly_scores,threshold=anomaly_threshold,t1=t1)detection_result.head()2020-03-3119:00:00False2020-03-3120:00:00False2020-03-3121:00:00False2020-03-3122:00:00False2020-03-3123:00:00False
Name:anomalies,dtype:boolFor the test, kolmogorov-smirnov and qq plot are also accessible via standalone functions, but the params need to be processed so it only contains a non-zero parameters since there are no reasons to calculate a zero 😂nonzero_params=[]forrowinrange(0,t1+t2):if(params[row]["c"]!=0orparams[row]["loc"]!=0orparams[row]["scale"]!=0):nonzero_params.append(params[row])ks_result=atics.evals.ks_1sample(dataset=pot_exceedances,stats_method="POT",fit_params=nonzero_params)ks_result{'total_nonzero_exceedances':[5028],'stats_distance':[0.0284]'p_value':[0.8987],'c':[0.003566],'loc':[0],'scale':[0.140657]}Visualize via qq plot:nonzero_exceedances=exceedances[exceedances.values>0]visualize_qq_plot(dataset=nonzero_exceedances,stats_method="POT",fit_params=nonzero_params,)Sending Anomaly NotificationWe have anomaly you said? Don't worry,anomalyticshas the implementation to send an alert via E-Mail or Slack. Just ensure that you have your email password or Slack webhook ready. This example shows both application (please read the comments 😎):Initialize the wanted platform:# Gmailgmail=atics.get_notification(platform="email",sender_address="[email protected]",password="AIUEA13",recipient_addresses=["[email protected]","[email protected]"],smtp_host="smtp.gmail.com",smtp_port=876,)# Slackslack=atics.get_notification(platform="slack",webhook_url="https://slack.com/my-slack/YOUR/SLACK/WEBHOOK",)print(gmail)print(slack)'Email Notification''Slack Notification'Prepare the data for the notification! If you use standalone, you need to process thedetection_resultto become a DataFrame withrow, ``# Standalonedetected_anomalies=detection_result[detection_result.values==True]anomalous_data=ts[detected_anomalies.index]standalone_detection_summary=pd.DataFrame(index=anomalous.index.flatten(),data=dict(row=[ts.index.get_loc(index)+1forindexinanomalous.index],anomalous_data=[datafordatainanomalous.values],anomaly_score=[scoreforscoreinanomaly_score[anomalous.index].values],anomaly_threshold=[anomaly_threshold]*anomalous.shape[0],))# Detector Instancedetector_detection_summary=pot_detector.detection_summaryPrepare the notification payload and a custome message if needed:# Emailgmail.setup(detection_summary=detection_summary,message="Extremely large anomaly detected! From Ad Impressions Dataset!")# Slackslack.setup(detection_summary=detection_summary,message="Extremely large anomaly detected! From Ad Impressions Dataset!")Send your notification! Beware that the scheduling is not implemented since it always depends on the logic of the use case:# Emailgmail.send# Slackslack.send'Notification sent successfully.'Check your email or slack, this example produces the following notification via Slack:ReferenceNakamura, C. (2021, July 13). On Choice of Hyper-parameter in Extreme Value Theory Based on Machine Learning Techniques. arXiv:2107.06074 [cs.LG].https://doi.org/10.48550/arXiv.2107.06074Davis, N., Raina, G., & Jagannathan, K. (2019). LSTM-Based Anomaly Detection: Detection Rules from Extreme Value Theory. In Proceedings of the EPIA Conference on Artificial Intelligence 2019.https://doi.org/10.48550/arXiv.1909.06041Arian, H., Poorvasei, H., Sharifi, A., & Zamani, S. (2020, November 13). The Uncertain Shape of Grey Swans: Extreme Value Theory with Uncertain Threshold. arXiv:2011.06693v1 [econ.GN].https://doi.org/10.48550/arXiv.2011.06693Yiannis Kalliantzis. (n.d.). Detect Outliers: Expert Outlier Detection and Insights. 
Retrieved [23-12-04T15:10:12.000Z], fromhttps://detectoutliers.com/Wall of FameI am deeply grateful to have met and been guided by wonderful people who inspired me to finish my capstone project for my study at CODE University of Applied Sciences in Berlin (2023). Thank you so much for being you!Sabrina LindenbergAdam RoeAlessandro DolciChristian LeschinskiJohanna KokocinskiPeter Krauß
|
anomaly-toolbox
|
Anomaly ToolboxDescriptionAnomaly Toolbox Powered by GANs.This is the accompanying toolbox for the paper "A
Survey on GANs for Anomaly Detection" (https://arxiv.org/pdf/1906.11632.pdf).The toolbox is meant to be used by the user to explore the performance of different GAN based
architectures (in our work aka "experiments"). It also already provides some datasets to
perform experiments on:MNIST,Corrupted MNIST,Surface Cracks(https://www.kaggle.com/arunrk7/surface-crack-detection),MVTec AD(https://www.mvtec.com/fileadmin/Redaktion/mvtec.
com/company/research/datasets/mvtec_ad.pdf).We provided theMNISTdataset because the original works extensively use it. On the other hand,
we have also added the previously listed datasets both because they are used by particular
architectures and because they provide a good benchmark for the models we have implemented.All the architectures were tested on commonly used datasets such asMNIST,FashionMNIST,CIFAR-10, andKDD99. Some of them were even tested on more specific datasets, such as an
X-Ray dataset that, however, we could not provide because we were unable to obtain the
data (privacy reasons).The user can create their own dataset and use it to test the models.Quick StartFirst things first, install the toolboxpipinstallanomaly-toolboxThen you can choose what experiment to run. For example:Run the GANomaly experiment (i.e., the GANomaly architecture) with hyperparameters tuning
enabled, the pre-defined hyperparameters filehparams.jsonand theMNISTdataset:anomaly-box.py--experimentGANomalyExperiment--hps-pathpath/to/config/hparams.json--datasetMNISTOtherwise, you can run all the experiments using the pre-defined hyperparameters filehparams.
jsonand theMNISTdataset:anomaly-box.py--run-all--hps-pathpath/to/config/hparams.json--datasetMNISTFor any other information, feel free to check the help:anomaly-box.py--helpContributionThis work is completely open source, andwe would appreciate any contribution to the code.
Any merge request to enhance, correct or expand the work is welcome.NotesThe structures of the models inside the toolbox come from their respective papers. We have tried to
respect them as much as possible. However, sometimes, due to implementation issues, we had to make
some minor-ish changes. For this reason, you could find out that, in some cases, some features
such as the number of layers, the size of kernels, or other such things may differ from the
originals.However, you don't have to worry. The heart and purpose of the architectures have remained intact.Installationpip install anomaly-toolboxUsageOptions:
--experiment [AnoGANExperiment|DeScarGANExperiment|EGBADExperiment|GANomalyExperiment]
Experiment to run.
--hps-path PATH When running an experiment, the path of the
JSON file where all the hyperparameters are
located. [required]
--tuning BOOLEAN If you want to use hyperparameters tuning,
use 'True' here. Default is False.
--dataset TEXT The dataset to use. Can be a ready to use
dataset, or a .py file that implements the
AnomalyDetectionDataset interface
[required]
--run-all BOOLEAN Run all the available experiments
--help Show this message and exit.Datasets and Custom DatasetsThe provided datasets are:MNISTCorrupted MnistSurface Crack (https://www.kaggle.com/arunrk7/surface-crack-detection)MVTec AD (https://www.mvtec.com/fileadmin/Redaktion/mvtec.com/company/research/datasets/mvtec_ad.pdf)and are automatically downloaded when the user makes a specific choice: ["MNIST",
"CorruptedMNIST", "SurfaceCracks","MVTecAD"].The user can also add its own specific dataset. To do this, the new dataset should inherit from
theAnomalyDetectionDatasetabstract class implementing its ownconfiguremethod. For a more
detailed guide, the user can refer to theREADME.mdfile inside thesrc/anomaly_toolbox/datasetsfolder. Moreover, in theexamplesfolder, the user can find adummy.pymodule with the basic skeleton code to implement a dataset.ReferencesGANomaly:Paper:https://arxiv.org/abs/1805.06725Code:https://github.com/samet-akcay/ganomalyEGBAD (BiGAN):Paper:https://arxiv.org/abs/1802.06222Code:https://github.com/houssamzenati/Efficient-GAN-Anomaly-DetectionAnoGAN:Paper:https://arxiv.org/abs/1703.05921Code (not official):https://github.com/LeeDoYup/AnoGANCode (not official):https://github.com/tkwoo/anogan-kerasDeScarGAN:Paper:https://arxiv.org/abs/2007.14118Code:https://github.com/JuliaWolleb/DeScarGAN
|
anomalytronic
|
No description available on PyPI.
|
anomasota
|
ANOMAly detection in State-Of-The-Art (anomasota)
|
anomatools
|
anomatoolsanomatoolsis a small Python package containing recentanomaly detection algorithms.
Anomaly detection strives to detectabnormaloranomalousdata points from a given (large) dataset.
The package contains several state-of-the-art semi-supervised and unsupervised anomaly detection algorithms.InstallationInstall the package directly from PyPi with the following command:pipinstallanomatoolsOR install the package using thesetup.pyfile:pythonsetup.pyinstallOR install it directly from GitHub itself:pipinstallgit+https://github.com/Vincent-Vercruyssen/anomatools.git@masterContents and usageSemi-supervised anomaly detectionGiven a dataset with attributesXand labelsY, indicating whether a data point isnormaloranomalous, semi-supervised anomaly detection algorithms are trained using all the instancesXand some of the labelsY.
Semi-supervised approaches to anomaly detection generally outperform the unsupervised approaches, because they can use the label information to correct the assumptions on which the unsupervised detection process is based.
Theanomatoolspackage implements two recent semi-supervised anomaly detection algorithms:TheSSDO(semi-supervised detection of outliers) algorithm first computes an unsupervised prior anomaly score and then corrects this score with the known label information [1].TheSSkNNO(semi-supervised k-nearest neighbor anomaly detection) algorithm is a combination of the well-knownkNNclassifier and thekNNO(k-nearest neighbor outlier detection) method [2].Given a training datasetX_trainwith labelsY_train, and a test datasetX_test, the algorithms are applied as follows:fromanomatools.modelsimportSSkNNO,SSDO# traindetector=SSDO()detector.fit(X_train,Y_train)# predictlabels=detector.predict(X_test)Similarly, the probability of each point inX_testbeing normal or anomalous can also be computed:probabilities=detector.predict_proba(X_test,method='squash')Sometimes we are interested in detecting anomalies in the training data (e.g., when we are doing a post-mortem analysis):# traindetector=SSDO()detector.fit(X_train,Y_train)# predictlabels=detector.labels_Unsupervised anomaly detection:Unsupervised anomaly detectors do not make use of label information (user feedback) when detecting anomalies in a dataset. Given a dataset with attributesXand labelsY, the unsupervised detectors are trained using onlyX.
Theanomatoolspackage implements two recent unsupervised anomaly detection algorithms:ThekNNO(k-nearest neighbor outlier detection) algorithm computes for each data point the anomaly score as the distance to its k-nearest neighbor in the dataset [3].TheiNNE(isolation nearest neighbor ensembles) algorithm computes for each data point the anomaly score roughly based on how isolated the point is from the rest of the data [4].Given a training datasetX_trainwith labelsY_train, and a test datasetX_test, the algorithms are applied as follows:fromanomatools.modelsimportkNNO,iNNE# traindetector=kNNO()detector.fit(X_train,Y_train)# predictlabels=detector.predict(X_test)Package structureThe anomaly detection algorithms are located in:anomatools/models/For further examples of how to use the algorithms see the notebooks:anomatools/notebooks/DependenciesTheanomatoolspackage requires the following Python packages to be installed:Python 3NumpyScipyScikit-learnContactContact the author of the package:[email protected][1] Vercruyssen, V., Meert, W., Verbruggen, G., Maes, K., Bäumer, R., Davis, J. (2018)Semi-Supervised Anomaly Detection with an Application to Water Analytics.IEEE International Conference on Data Mining (ICDM), Singapore. p527--536.[2] Vercruyssen, V., Meert, W., Davis, J. (2020)Transfer Learning for Anomaly Detection through Localized and Unsupervised Instance Selection.AAAI Conference on Artificial Intelligence, New York.
|
anomeda
|
Introduction to Anomedaanomeda package helps you analyze non-aggregated time-series data with Python and quickly indentify important changes of your metric.Here is a brief example of howanomedacan work for you."Why has the number of our website visits decreased a week ago? What kind of users caused that?" - anomeda will answer such questions quickly by processingnon-aggregatedvisits of your website.It will show you, for instance, that users from the X country using the Y device suddenly stopped visiting your website. Not only that, even if you are not aware of any significant change of the number of visits, anomeda will highlight the cluster of events where it happened.Is it fraudulent activity, a paused marketing campaign or technical issues? It's up to you to investigate.The package is easy-to-use and adjustable enough to meet a wide range of real scenarios. The basic object,anomeda.DataFrame, inheritspandas.DataFrame, so you will find the API familiar. In addition, there are different options for fine-tuning alghorithms used under the hood.Some of whatanomedacan do for yournon-aggregated data:Highlight time points and clusters when the trend, mean or variance changedFit trends for any cluster considering the points where trends changeHighlight time points and clusters if the anomalies were observed, considering trend at that momentCompare time periods and find clusters changing the metricFind the project in itsGitHub repo.Explore theDocumentation of anomeda.Quick startLet's imagine you oversee the number of visits of a website.You have a table with visits. Typically you just aggregate them by a datetime column and monitor from 3 to 5 dashboards with overall number of visits, as well as visits of important pages, visits from specific systems, visits of specific users clustes, etc. Here is what you would do withanomeda.Let's define an anomeda object.importanomedaanomeda_df=anomeda.DataFrame(df,# pandas.DataFramemeasures_names=['country','system','url','duration'],# columns represending measures or characteristics of your eventsmeasures_types={'categorical':[;'country','system','url'],'continuous':['duration']# measures can also be continuous - anomeda will take care of clustering them properly},index_name='date',metric_name='visit',# dummy metric, always 1agg_func='sum'# function that is used to aggregate metric)anomeda.DataFrameinheritspandas.DataFrame, so you can treat them similarly.NOTESomepandasmethods are not yet adapted foranomeda. They return a newpandas.DataFrameinstead of aanomeda.DataFrame. You just need to initialize ananomedaobject with a returned object in that case.Let's try to extract trends for important clusters from the data.trends=anomeda.fit_trends(anomeda_df,trend_fitting_conf={'max_trends':'auto','min_var_reduction':0.75},# set the number of trends automatically,# try to reduce error variance compared to error of estimating values by 1-line trend by 75%breakdown='all-clusters',# fit trends for clusters extracted from all possible sets of measuresmettic_propagte='zeros',# if some index values are missed after aggregation for a cluster, fill them with zerosmin_cluster_size=3# skip small clusters, they all will be combined into 'skipped' cluster)Typically you will see something like this:You can then plot the trends using theplot_trendsmethod. 
You can choose a specific cluster or plot them all together.anomeda.plot_trends(anomeda_df,clusters=['`country`=="Germany"'])The output will look like this:Of course, you may have no idea which cluster caused the problem and what to plot. Almost always you know only that there is a decrease of an overall metric and you need to find the culprits. Let's utilize another method --anomeda.compare_clusters.anomeda.compare_clusters(anomeda_df,period1='date < "2024-01-30"',period2='date >= "2024-01-30"')You see the clusters you fitted before and comparison between their characteristics. The result is quite hefty, but you can easily add your own metrics and sort clusters so that the cluster you are looking for will be on top. For example, look at how different means in the second cluster are. The second cluster corresponds to Germany (the first cluster consists of all events, so we are not interested in it now).Finally, you can check if there are any point anomalies present in any of your clusters.anomeda.find_anomalies(anomeda_df,anomalies_conf:{'p_large':1,'p_low':1,'n_neighbors':3})The output will look like this:If you plot the metric with its clusters, it would look quite reasonable.There are some nuances of how to useanomedawisely and powerfully. For example, you may use same anomeda methods simply with numpy arrays, without creating DataFrame's! See fullDocumentationfor more details and hints.Installinganomedais availale from PyPI. You may run apip installcommand:pip install anomedaAlso, theGitHub repocontains the source and built distribution files indistfolder.You must have such packages be installed:pandasnumpysklearnscipymatplotlibContributionYou are very welcome to participate in developing to the project. You may solve the current issues or add new functionality - it is up for you to decide.Here is how your flow may look like:Preparing your ForkClick ‘Fork’ on Github, creating e.g. yourname/theproject.Clone your project: git [email protected]:yourname/theproject.cd theprojectCreate and activate a virtual environment.Install the development requirements: pip install -r dev-requirements.txt.Create a branch: git checkout -b my_branchMaking your ChangesMake the changesWrite tests checking your code works for different scenariousRun tests, make sure they pass.Commit your changes: git commit -m "Foo the bars"Creating Pull RequestsPush your commit to get it back up to your fork: git push origin HEADVisit Github, click handy “Pull request” button that it will make upon noticing your new branch.In the description field, write down issue number (if submitting code fixing an existing issue) or describe the issue + your fix (if submitting a wholly new bugfix).Hit ‘submit’!Reporting issuesTo report an issue, you should useIssues sectionof the project's page on Github. We will try to solve the issue as soon as possible.ContactsIf you have any questions related toanomedaproject, feel free reaching out to the author.
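As a companion to the Quick Start above, here is a hedged sketch of the kind of pandas DataFrame `df` that the anomeda.DataFrame example assumes. The column names mirror the measures, index, and dummy metric used in the example; the rows themselves are invented for illustration only.

```python
# Hypothetical toy visits table matching the columns used in the Quick Start above.
# Column names come from the example; the data is made up.
import pandas as pd

df = pd.DataFrame(
    {
        "date": pd.to_datetime(
            ["2024-01-28", "2024-01-28", "2024-01-29", "2024-01-30", "2024-01-30"]
        ),
        "country": ["Germany", "France", "Germany", "Germany", "France"],
        "system": ["iOS", "Android", "Windows", "iOS", "Android"],
        "url": ["/home", "/home", "/pricing", "/home", "/pricing"],
        "duration": [12.4, 3.1, 45.0, 7.8, 21.3],  # continuous measure
        "visit": [1, 1, 1, 1, 1],  # dummy metric, always 1
    }
)
# This is the `df` that would be passed to anomeda.DataFrame(...) in the Quick Start.
```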
|
anomix
|
anomixWhat is it?anomixis a Python package for estimating and simulating univariate mixture models.
We primarily useExpectation Maximization (EM)for
parameter estimation.anomixis specifically adapted to anomaly detection as well,
estimating probabilities of observing given data, relying on the component distributions.anomixwas primarily built with anomaly detection in mind, to uncover samples in data that
appear to be unlikely given the data modeled as mixtures of given univariate distributions.The models have built-in plotting mechanisms once trained on the data that can be extended
to support more specific figure requirements.Why?EMExpectation Maximization has some nice properties, with a guarantee of convergence to a (local) maximum of the likelihood of
the parameters. Also, for completeness in the Python ecosystem, there are several Bayesian mixture modeling packages but
none seem to rely on EM. There also seems to be a similar package inmixem,
which implements much of the same EM fitting architecture.Why anomalies?Unsupervised anomaly detection is an increasingly important domain within the larger ML and statistical learning
literature (citation). There is a statistical literature that we can explore to construct well founded
probability estimates of the tails and the anomalies. This work extends previous work in open source python packages
for EM models into the domain of anomaly detection.ExampleA simple example would be to imagine the sampled heights of 18 year-olds, and of 5 year-olds. The heights can be expected
to be well represented as a mixture of two normals, with location parameters of 43 and 67 (inches),
and standard deviations of 1.5 and 3.fromanomix.models.modelsimportNormalMixtureModelheight_model=NormalMixtureModel()height_model.preset(weights=[.5,.5],loc=[43,67],scale=[1.5,3])OR by estimationfromanomix.models.modelsimportNormalMixtureModelfromnumpy.randomimportnormalheight_model=NormalMixtureModel()data=normal(loc=[43,67],scale=[1.5,3],size=500).flatten()height_model.fit(data)f,ax=height_model.plot_pdf()Then, we observe a new batch of individuals - a 5th grade classroom, with an average of 55 and a standard deviation of 3.
We can test to see which of these new heights are anomalous given our model.new_data=normal(loc=55,scale=3,size=30)anomalous=height_model.predict_anomaly(new_data,threshold=.95)And we can overlay this on our pdf:f,axes=plt.subplots(1,2,sharey=True,figsize=(15,8))f,ax1=height_model.plot_pdf(show=False,fig_ax=(f,axes[0]))f,ax2=height_model.plot_pdf(show=False,fig_ax=(f,axes[1]))_,_,s0=ax2.hist(new_data,density=True,alpha=.5)s1=ax2.scatter(x=new_data[anomalous],y=np.zeros_like(new_data[anomalous]),c='red',marker=2,s=100,label='Anomalous')s2=ax2.scatter(x=new_data[~anomalous],y=np.zeros_like(new_data[~anomalous]),c='green',marker=2,s=100,label='Non-Anomalous')ax2.legend([s0,s1,s2],['new-data','Anomaly','Non-Anomaly'])plt.show()Distributions SupportedNormalLogNormalExponentialCauchy(*)Students T (*)BinomialPoissonGeometricZeroInflatedNormalZeta/Zipf(*)(*) means non-EM based parameter estimationInstallationCompile from sourcegit clone <this url>pip install . -e anomixDownload from pypi and install usingpippip install anomixTODO: Register on pypiContributingWe want to continue to add new models. Just replicate the model structures within 'univariate', implement all abstract classes.We are considering mixtures with implementing multivariate data. See the branch 'multivariate' for the work that was started thereFuture improvementsmore anomaly prediction optionsmore tests and code coveragemore docstravis yaml? (not sure who this is but i see it on many projects its useful haha)add [smm] option to pip install, in case user does not want the Students T Mixture Modelpip install anomix[em] maybe installs only the EM ones? (aka not the cauchy, zeta, smm)other potential methods of verifying the estimates:variance of parameter estimate is approx normal with variance ~ 1/ncould run a bunch of data simulations and estimations to observe the variance of the estimator is normal around the
true estimate
|
anon
|
title: anon
...anonTable of contentsInstallationInstallationThebaseanon package can be installed from a terminal with the following command:$pipinstallanonThis installation includes basic tools for composing "neural network" -like models along with some convenient IO utilities. However, both automatic differentiation and JIT capabilities require Google's Jaxlib module which is currently in early development and only packaged for Ubuntu systems. On Windows systems this can be easily overcome by downloading the Ubuntu terminal emulator from Microsoft's app store and enabling the Windows Subsystem for Linux (WSL). The following extended command will install anon along with all necessary dependencies for automatic differentiation and JIT compilation:$pipinstallanon[jax]The in-development version can be installed the following command:$pipinstallhttps://github.com/claudioperez/anon/archive/master.zipChangelog0.0.0 (2021-01-14)First release on PyPI.
|
anon-ai-toolbelt
|
The Anon AI Toolbelt is a command line interface (CLI) tool for managing
and anonymising data with theAnon AI web service.It’s developed in Python and the code is published under theMIT
Licenseatgithub.com/anon-ai/toolbelt.InstallationInstall usingpipinto a Python3 environment:pipinstallanon-ai-toolbeltNote that the toolbelt only works with Python3 and installs dependencies
including thePython Cryptography
Toolkit.UsageThe primary workflow is for a data controller topushdata into the
system and then for data processors topullthe data down in
anonymised form.anon loginanon push INPUT_FILE RESOURCEanon pull RESOURCE OUTPUT_FILEanon pipe URL OUTPUT_FILELoginLogin with your API credentials (writes to~/.config/anon.ai/config.json):anonlogin>key:...>secret:...PushPush a data snapshot up to ingest and store it.anonpushfoo.dumpmydbWhen ingesting structured data you should specify the data format:anonpushfoo.dumpmydb--formatpostgresIn this example,mydbis an arbitrary resource name that you use to
identify this ingested data source. Subsequent pushes to the same name
are usually used to store a new snapshot of the same file or database.The stored data is encrypted using AES-256 with a per-account encryption
key that lives in (and never leaves) asecure
vault. You can also optionally provide
your own encryption key:anonpushfoo.dumpmydb--encryption-keyLONG_RANDOM_STRINGNote that:your encryption key isnever persistedin our system – so you
have to manage it and give it to any users that you want to share
anonymised data withthere’s no strict requirement on length or format for your encryption
key value (we SHA-256 hash it along with your per-account encryption
key) but we recommend at least 16 bytes entropyPullPull down an anonymised copy of an ingested data snapshot:anonpullmydbfoo.dumpOptionally provide an encryption key (to decrypt the stored data with)
and / or configure how you’d like it anonymised:anonpullmydbfoo.dump--configconfig.json--encryption-key...PipePipe data through to anonymise it:anonpipehttp://humanstxt.org/humans.txt/tmp/humans.anon.txtThis parses, analyses and anonymises the data on the fly, i.e.: without
persisting it. The data source must currently be a URL.VersionsYou canpullspecific snapshot versions by targeting them by name:anonpullmydb--snapshotsomeidYou can alsopushsnapshots up with a specific name:anonpushfoo.sqlmydb--snapshotsomeidTab completionEnablebashcompletion by adding the following to your.bashrc:eval"$(_ANON_COMPLETE=sourceanon)"If you usezsh, you can emulate bash completion by first addingbashcompinitto your.zshrc:autoloadbashcompinitbashcompiniteval"$(_ANON_COMPLETE=sourceanon)"For more information seeAnon AI.
|
anonapi
|
AnonAPIClient and tools for working with the IDIS web APIFree software: MIT licenseDocumentation:https://anonapi.readthedocs.io.FeaturesInteract with IDIS anonymization server web API via httpsCreate, modify, cancel anonymization jobsCLI (Command Line Interface) for quick overview of jobs and cancel/restart job.Python code with examples for fully automated interactionCreditsThis package was originally created withCookiecutterand theaudreyr/cookiecutter-pypackage
|
anonchat
|
anonchat-api.pyPython implementation of anonchat APIInstallation$pipinstallanonchatBuildingYou need to sync up this repo using git:https://github.com/anonchat-org/anonchat-api.pyFirst of all, install 'build' package$py-mpipinstall--upgradebuildThen, build this package. You need to be in root of directory, not in src and etc.$py-mbuildIt will give you two packages:dist/
anonchat-0.0.1-py3-none-any.whl
anonchat-0.0.1.tar.gzInstall the built package:pip install ./dist/anonchat-0.0.1-py3-none-any.whlUsageImport everything from anonchat.clientfromanonchat.clientimport*You can check the version, if you want.print(AnonClient._VERSION)Now, let's create your client.Basic events and connectionAnonClient(ip: str, - The IP of the server
port: int, - Port of the server
name: str - Bot name
)bot=AnonClient("IP",port,"ExampleNickname")# Change this to your info!By default, bot uses API v2. API v1 is supported, but not tested.
To change the API version, after bot creation, change this:bot.version=2# Set API 2. This is set by default.# And to API 1bot.version=1# Set deprecated API 1.Let's send a message about the bot connection. You need this decorator:@[email protected]_connectdefon_connect():print(f"Bot{bot.username}connected!")You can send messages with the function bot.sendbot.send(
text: str
)Add this to our on_connect [email protected]_connectdefon_connect():print(f"Bot{bot.username}connected!")bot.send("I am connected!")This function is called when bot is fully connected.If you want to send message on bot disconnect, you can also add this to your [email protected][email protected]_disconnectdefon_disconnect():bot.send("See you next time!")print("Bot disconnecting...")This function will be called before bot disconnect, so you can send messages.Thats all. Lets connect our bot. Write this function after ALL code. Or, it won't be called.bot.connect()So, our bot is working. It can be disconnected using another function.It it normal, if you get an error here. This is because of closed socket.bot.close()Lets go to another part.Message processingIf you want to get all messages, set your custom message event [email protected][email protected]_messagedefon_message(message):All messages, which passed to on_message, will be V1Message or V2Message class objects, depending on the selected API versionV2MessageVariables:.contents: str - Message contents.author: str - Message author.time: datetime.now - Time, when message was recieved by client..me: bool - Is this my message? But, this is not accurate, because anyone can set your name..bot: class - The bot object.Functions:.reply(text: str - Reply text
) - Reply to messageCan be converted to:bytes - Dumped Encoded JSONstr - Dumped JSONThe message author is not available on API1, so there is no .authorV1MessageVariables:.contents: str - Message contents.time: datetime.now - Time when the message was received by the client..me: bool - Is this my message? But this is not accurate, because anyone can set your name..bot: class - The bot object.Functions:.reply(text: str - Reply text
) - Reply to messageCan be converted to:bytes - Encoded message.contentsstr - message.contentsIf the server (as the server on Dart does) sends a message to a client with API 2 of API 1 standard, the client will automatically adapt it to API 2 and the one who sent the message will be named "V1-Package". Since the function which adjusts for API 2 is local, it is possible to change the name to something else:bot.v1_client="V1MSG"# Or something else, if you like.If the official Python server is used, the server will do it automatically by itself, with exactly the same name "V1-Package", it cannot be changed.Next code is only for API2.Lets write basic on_message function, which will be detecting, if there is 'Hello' at start, and if message is not from our [email protected]_messagedefon_message(message):ifmessage.contents.startswith("Hello")andnotmessage.me:# If message has 'Hello' at start, and this is not our message.message.reply(f"Hello, dear{message.author}!")# Reply to message.This function will be called all time when the message is recieved.API1/API2 CodeIf you want to do some processing before message sending, there is also a [email protected]_sendThis function is called before message send, so it uses another objects.RequestV2MessageVariables:.contents: str - Message contents.author: str - Message authorCan be converted to:bytes - Dumped Encoded JSONstr - Dumped JSONRequestV1MessageVariables:.contents: str - Message contentsCan be converted to:bytes - Encoded message.contentsstr - message.contentsThere is no .bot, .me, .time and .reply, because this message is not sent. Of course, it is our message.And example code:@bot.event_senddefon_send(message):print(f"Bot will send message with text '{message.contents}'")ErrorsThere is three type of errors you can get.anonchat.SendErrorYou can get this while sending message in closed/disconnected socket.anonchat.SocketErrorYou can get this while trying to connect to bad server adress or offline server.RuntimeErrorYou can get this if there is an error in your code.Good luck!Thats all you need to know.
This example can be found in the examples dir
Good luck writing a bot/client for your server!
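Putting the walkthrough above together, a minimal end-to-end bot might look like the sketch below. Every call used here appears earlier in this description; the server address, port, and bot name are placeholders, not defaults of the library.

```python
# Minimal bot assembled from the anonchat API shown above.
# "127.0.0.1", 6969 and "ExampleBot" are placeholders - use your own server details.
from anonchat.client import *

bot = AnonClient("127.0.0.1", 6969, "ExampleBot")
bot.version = 2  # API v2 is the default; shown here only for clarity


@bot.event_connect
def on_connect():
    bot.send("Hello, I am online!")


@bot.event_message
def on_message(message):
    # Greet anyone who says hello, ignoring our own messages.
    if message.contents.startswith("Hello") and not message.me:
        message.reply(f"Hello, dear {message.author}!")


@bot.event_disconnect
def on_disconnect():
    bot.send("See you next time!")


# connect() must be called after all handlers are defined (see the note above).
bot.connect()
```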
|