package (string, 1 to 122 characters)
package-description (string, 0 to 1.3M characters)
ambisafe
Install
Use pip: pip install ambisafe

Usage

Create client
Import Client and create a client object:
    from ambisafe import Client
    client = Client(ambisafe_server_url, secret, api_key, api_secret)
You can set a prefix for account ids:
    client = Client(ambisafe_server_url, secret, api_key, api_secret, account_id_prefix='prefix')
You can provide read and connect timeouts (in seconds):
    client = Client(ambisafe_server_url, secret, api_key, api_secret, connect_timeout=2.5, read_timeout=5)

Create account
Simple security schema:
    account = client.create_simple_account(account_id, currency='BTC')
Wallet4 security schema. Generate the operator container using the secret, and create the user container from public_key, data (encrypted private key), iv and salt:
    from ambisafe import Container
    operator_container = Container.generate(client.secret)
    user_container = Container(public_key, data, iv, salt)
Create an account for the "Wallet4" security schema and the "BTC" currency:
    account = client.create_wallet4_account(account_id, user_container=user_container, operator_container=operator_container, currency='BTC')

Update Wallet4 account
Create new containers and update the account:
    account = client.update_wallet4_account(account_id, user_container=user_container, operator_container=operator_container, currency='BTC')

Get balance
Get the balance as a float:
    balance = client.get_balance(account_id, 'BTC')

Get account
    account = client.get_account(account_id, 'BTC')

Make payment
For a Simple account, build and submit the transaction:
    transaction = client.build_transaction(account_id, 'BTC', address, amount)
    result = client.submit(account_id, transaction, 'BTC')
For Wallet4 accounts, build the transaction:
    transaction = client.build_transaction(account_id, 'BTC', address, amount)
Sign this transaction by the user, then sign by the operator and submit it:
    transaction = client.sign_wallet4_transaction(transaction, account_id, 'BTC')
    client.submit(account_id, transaction, 'BTC')
    # or
    result = client.cosign_wallet4_and_submit(transaction, account_id, 'BTC')

Build recovery transaction
    transaction = client.build_recovery_transaction(account_id, currency, old_address)

Disclaimer
The library is still in BETA. There can be changes without backward compatibility.
ambisafe-tenant
No description available on PyPI.
ambisync
Define methods that can dynamically shift between synchronous and async. Please star the repo on GitHub!
ambit
ambit is a Python library for interacting with PaletteGear and MonogramCC devices, a graphical simulator for device-free development, and an accompanying set of configurable end-user tools and demos.
ambit-fe
README
Ambit is an open-source multi-physics finite element solver written in Python, supporting solid and fluid mechanics, fluid-structure interaction (FSI), and lumped-parameter models. It is tailored towards solving problems in cardiac mechanics, but may also be used for more general nonlinear finite element analysis. It uses the finite element backend FEniCS and the linear algebra library PETSc.
https://github.com/marchirschvogel/ambit/assets/52761273/a438ff55-9b37-4572-a1c5-499dd3cfba73
Heart cycle simulation of a generic bi-ventricular heart model coupled to a closed-loop circulation model.
https://github.com/marchirschvogel/ambit/assets/52761273/8e681cb6-7a4f-4d1f-b34a-cefb642f44b7
FSI simulation (Turek benchmark, FSI2 case) of an elastic flag in incompressible channel flow.

The following is supported:
Solid mechanics
- Finite strain elastodynamics, implementing a range of hyperelastic isotropic and anisotropic as well as viscous constitutive laws
- Active stress for modeling of cardiac contraction mechanics
- Quasi-static, generalized-alpha, or one-step theta time integration
- Nearly incompressible as well as fully incompressible formulations (the latter using pressure dofs)
- Prestressing using the MULF method in displacement formulation
- Volumetric growth & remodeling
Fluid dynamics
- Incompressible Navier-Stokes/Stokes equations, either in nonconservative or conservative formulation
- Navier-Stokes/Stokes flow in an Arbitrary Lagrangian Eulerian (ALE) reference frame
- One-step theta or generalized-alpha time integration
- SUPG/PSPG stabilization for equal-order approximations of velocity and pressure
Lumped (0D) models
- Systemic and pulmonary circulation flow models
- 2-element as well as 4-element Windkessel models
- Signalling network model
Coupling of different physics:
- Fluid-solid interaction (FSI): monolithic FSI in ALE formulation using a Lagrange multiplier
- Monolithic coupling of 3D solid/fluid/ALE-fluid with lumped 0D flow models
- Multiscale-in-time analysis of growth & remodeling (staggered solution of the 3D-0D coupled solid-flow0d and G&R solid problem)
- Fluid-reduced-solid interaction (FrSI): boundary subspace-projected physics-reduced solid model (incl. hyperelastic, viscous, and active parts) in an ALE fluid reference frame
POD-based model order reduction (MOR)
- Projection-based model order reduction applicable to the main fluid or solid field (also in a coupled problem), by either projecting the full problem or a boundary to a lower-dimensional subspace spanned by POD modes

author: Dr.-Ing. Marc Hirschvogel, [email protected]

Still experimental / to-do:
- Finite strain plasticity
- Electrophysiology/scalar transport
- ... whatever might be wanted in some future ...

Documentation
Documentation can be viewed at https://ambit.readthedocs.io

Installation
In order to use Ambit, you need to install FEniCSx.
- Latest Ambit-compatible dolfinx release version: v0.6.0
- Latest tested Ambit-compatible dolfinx development version dating to 19 Aug 2023
Ambit can then be installed using pip, either the current release
    python3 -m pip install ambit-fe
or the latest development version:
    python3 -m pip install git+https://github.com/marchirschvogel/ambit.git
Alternatively, you can pull a pre-built Docker image with FEniCSx and Ambit installed:
    docker pull ghcr.io/marchirschvogel/ambit:latest
If a Docker image for development is desired, the following image contains all dependencies needed to install and run Ambit (including the dolfinx mixed branch):
    docker pull ghcr.io/marchirschvogel/ambit:devenv

Usage
Check out the examples for the basic problem types in demos to quickly get started running solid, fluid, or 0D model problems. Further, you can have a look at input files in ambit/tests and the file ambit_template.py in the main folder as an example of all available input options.
Best, check if all testcases run and pass, by navigating to ambit/tests and executing
    ./runtests.py
Build your input file and run it with the command
    mpiexec -n <NUMBER_OF_CORES> python3 your_file.py
ambition
ambition
Installation
To install the latest release, type:
    pip install ambition
To install the latest code directly from source, type:
    pip install git+git://github.com/ambitioninc/ambition.git
Documentation
Full documentation is available at http://ambition.readthedocs.org
License
MIT License (see LICENSE)
ambition-ae
ambition-ae: Adverse event handling
ambition-auth
ambition-auth
ambition-dashboard
ambition-dashboard: Dashboard classes for Ambition project
ambition-django-cachalot
Caches your Django ORM queries and automatically invalidates them. Documentation: available on Read The Docs. Discussion: on our public gitter chat room.
ambition-django-cachebuster
DJANGO 1.4+ USERS:You should probably look to use Django’s includedstatictemplate tag rather than use django-cachebuster. Why include a 3rd party app when you can use the built-in functionality?django-cachebuster – A backwards-compatible Django 1.3+ ready set of cache-busting template tagsOverviewdjango-cachebusteris a Django app containing two template tags:staticandmedia. Each tag will use the file last modified timestamp by default to ‘bust’ web browser file caches.staticis meant for your site’s JavaScript, CSS and standard images.mediais intended for user uploaded content like avatars, videos and other files. CloudFront and other content delivery networks are supported.DescriptionAll of the existing file cache busting techniques seem to be Django versions 1.2.x and lower oriented - meaning they don’t support the newdjango.contrib.staticfilesparadigm. This app addresses this functionality gap.Additionally, there are some optimizations (seeConfigurationbelow) that can be enabled to minimize file modification date disk reads.RequirementsPython 2.6 (May work with prior versions, but untested - please report)Django 1.2.x, 1.3.x (May work with prior versions, but untested - please report)InstallationCopy or symlink the ‘cachebuster’ package into your django project directory or install it by running one of the following commands:python setup.py installpip install django-cachebustereasy_install django-cachebusterNow, addcachebusterto yourINSTALLED_APPSin your project’ssettings.pymodule.Template UsageTo use these cache-busting template tags, you’ll need to load the template tag module at the top of each template with{% load cachebuster %}. Alternatively, as these tags will most likely be used in most of a project’s templates, you can tell Django to auto-load them without the requisite{% load cachebuster %}by adding the following to yoursettings.py:from django.template.loader import add_to_builtins add_to_builtins('cachebuster.templatetags.cachebuster'){% static filename %}attempts to use theCACHEBUSTER_UNIQUE_STRING(seeConfigurationbelow) setting to get a cached value to append to your static URLs (ie.STATIC_URL). IfCACHEBUSTER_UNIQUE_STRINGis not set, it falls back to the last date modified of the file. IfCACHEBUSTER_UNIQUE_STRINGis used, you can force last-date-modified behaviour by addingTrueinto the tag statement like so:{% static filename True %}. For example<link rel="stylesheet" href="{% static css/reset.css %}" type="text/css"> <link rel="stylesheet" href="{% static css/fonts.css True %}" type="text/css">This would yield something along the lines of:<link rel="stylesheet" href="/static/css/reset.css?927f6b650afce4111514" type="text/css"> <link rel="stylesheet" href="/static/css/fonts.css?015509150311" type="text/css">{% media filename %}is similar but has slightly different behaviour, as the file content has a different origin (user uploaded content like avatars, videos, etc.) and cannot depend on any git comment hash. This is why there is no behaviour other than the last modified date method for MEDIA_URL files.<img src='{% media uploads/uid1-avatar.jpg %}' />would result in something like this:<img src='/media/uploads/uid1-avatar.jpg?034511190510' />Configurationdjango-cachebustersupports two methods of ‘busting’. Appending a unique string (by default, this is the last modified datetime of the file) as a query string parameter is the easy, default behaviour. 
For more advanced requirements such as content distribution network (CDN, such as CloudFront) scenarios, there is also the ability to prepend the unique string.To start using it in it’s simplest form right now, see theTemplate Usagesection. Want more fromdjango-cachebuster? Read on.CACHEBUSTER_UNIQUE_STRING:optional; defaults to the file’s last modified timestamp.This is a simple performance optimization that minimizes accessing the file system to get a file’s last modified timestamp. This optimization is only used for static files (not media/user-generated files) as only static files are usually version-controlled.To setCACHEBUSTER_UNIQUE_STRING, you would mostly likely use a provided ‘detector’ or write your own (please contribute new ones!). For example, if you use Git as your source control, you can use the providedgitdetector. It simply traverses the Django project’s path looking for the.gitfolder, and then extracts the current commit hash. This hash is cached and used for subsequent cache-busting. In your settings.py:from cachebuster.detectors import git CACHEBUSTER_UNIQUE_STRING = git.unique_string(__file__)or if you wanted it to be a short busting string:from cachebuster.detectors import git CACHEBUSTER_UNIQUE_STRING = git.unique_string(__file__)[:8]__file__must be passed in so thatdjango-cachebusteroperates in the context of the Django project’s settings.py file. If it wasn’t passed in, django-cachebuster would only have its own context from which to grab the.gitdirectory, not that of the user’s project. (An alternative to this is to use Python’sinspectmodule - but there are some warnings around using it.)CACHEBUSTER_PREPEND_STATIC:optional; defaults toFalse.CACHEBUSTER_PREPEND_MEDIA:optional; defaults toFalse.If CloudFront or another CDN that ignores query string parameters is used,CACHEBUSTER_PREPEND_STATICwill need to be set toTrue. For static files, this prepends the unique string instead of appending it as a query string parameter.CACHEBUSTER_PREPEND_MEDIAdoes the same for media files. For example, withCACHEBUSTER_PREPEND_STATICset to True, the rendered output becomes:<link rel="stylesheet" href="/static/927f6b650afce4111514/css/reset.css" type="text/css">WithCACHEBUSTER_PREPEND_STATICset to False:<link rel="stylesheet" href="/static/css/reset.css?927f6b650afce4111514" type="text/css">Using this prepending method raises a couple of development environment issues, however. Assuming Django 1.3 or higher is used,./manage.py runserverwill automatically attempt to serve static (not media, however) files on its own without any urls.py changes; this standard method of serving does not work in this scenario. 
To prevent this default Django behaviour, the development server should be started with the following command:./manage.py runserver --nostaticAlso when using the prepending method in a development environment, to support serving files from both{% static %}and{{ STATIC_URL }}(as well as{% media %} and``{{MEDIA_URL }}), Django’s defaultserveviews need to be replaced with the following in yoururls.py:if settings.DEBUG: urlpatterns += patterns('', url(r'^static/(?P<path>.*)$', 'cachebuster.views.static_serve', {'document_root': settings.STATIC_ROOT,}), url(r'^media/(?P<path>.*)$', 'cachebuster.views.media_serve', {'document_root': settings.MEDIA_ROOT,}), )This is because both the prepended and the non-prepended paths need to be tested to support the above-mentioned scenarios.TroubleshootingMy date-based cache-busting unique strings keep updating even though my assets aren’t changingAre you deploying your assets from a source control system such as Subversion or Git? By default, those systems set the last modified date of checked-out files to their check-out dates,notthe original files’ last modified dates. To fix this on Subversion, setuse-commit-times=truein your Subversion config. In Git this is a little harder; it requires adding a Git post-checkout hook (or updating your deployment script). For more instructions on doing this, see the answers tothis question on Stack Overflow.NotesPlease feel free to send a pull request with fixes and in particular, additionaldetectorsto improve the usefulness of this app. Maybe forsvn,hg, etc?SourceThe latest source code can always be found here:github.com/ambitioninc/django-cachebusterCreditsdjango-cachebuster is maintained byJames Addison.Licensedjango-cachebuster is Copyright (c) 2011, James Addison. It is free software, and may be redistributed under the terms specified in the LICENSE file.Questions, Comments, Concerns:Feel free to open an issue here:github.com/ambitioninc/django-cachebuster/issues
ambition-django-db-readonly
About
A way to globally disable writes to your database. This works by inserting a cursor wrapper between Django's CursorWrapper and the database connection's cursor wrapper.
Installation
The library is hosted on PyPI, so you can grab it there with:
    pip install django-db-readonly
Then add readonly to your INSTALLED_APPS:
    INSTALLED_APPS = (
        # ...
        'readonly',
        # ...
    )
Usage
You need to add this line to your settings.py to make the database read-only:
    # Set to False to allow writes
    SITE_READ_ONLY = True
When you do this, any write action to your databases will generate an exception. You should catch this exception and deal with it somehow, or let Django display an error 500 page. The exception you will want to catch is readonly.exceptions.DatabaseWriteDenied, which inherits from django.db.utils.DatabaseError.
There is also a middleware class that will catch the exceptions and attempt to handle them as explained below. To enable the middleware, add the following line to your settings.py:
    MIDDLEWARE_CLASSES = (
        # ...
        'readonly.middleware.DatabaseReadOnlyMiddleware',
        # ...
    )
This will then catch DatabaseWriteDenied exceptions. If the request is a POST request, we will redirect the user to the same URL, but as a GET request. If the request is not a POST (i.e. a GET), we will just display an HttpResponse with text telling the user the site is in read-only mode.
In addition, the middleware class can add an error-type message using the django.contrib.messages module. Add:
    # Enable
    DB_READ_ONLY_MIDDLEWARE_MESSAGE = True
to your settings.py, and then on POST requests that generate a DatabaseWriteDenied exception, we will add an error message informing the user that the site is in read-only mode.
For additional messaging, there is a context processor that adds SITE_READ_ONLY into the context. Add the following line in your settings.py:
    TEMPLATE_CONTEXT_PROCESSORS = (
        # ...
        'readonly.context_processors.readonly',
        # ...
    )
And use it as you would any boolean in the template, e.g.
    {% if SITE_READ_ONLY %} We're down for maintenance. {% endif %}
Testing
There aren't any tests included, yet. Run it at your own risk.
Caveats
This will work with Django Debug Toolbar. In fact, I was inspired by DDT's sql panel when writing this app. However, in order for both DDT and django-db-readonly to work, you need to make sure that you have readonly before debug_toolbar in your INSTALLED_APPS. Otherwise, you are responsible for debugging what is going on. Of course, I'm not sure why you'd be running DDT in production and running django-db-readonly in development, but whatever, I'm not you.
More generally, if you have any other apps that modify either django.db.backends.util.CursorWrapper or django.db.backends.util.CursorDebugWrapper, you need to make sure that readonly is placed before those apps in INSTALLED_APPS.
The Nitty Gritty
How does this do what it does? Well, django-db-readonly sits between Django's own cursor wrapper at django.db.backends.util.CursorWrapper and the database-specific cursor at django.db.backends.*.base.*CursorWrapper. It overrides two specific methods: execute and executemany. If the site is in read-only mode, the SQL is examined to see if it contains any write actions (defined in readonly.ReadOnlyCursorWrapper.SQL_WRITE_BLACKLIST). If a write is detected, an exception is raised.
License
Uses the MIT license.
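As an illustration of the exception handling described above, here is a minimal sketch (not taken from the package's docs) of catching DatabaseWriteDenied in a view. Only the readonly.exceptions import path comes from the description; the Note model and view name are hypothetical.
    from django.http import HttpResponse
    from readonly.exceptions import DatabaseWriteDenied  # import path given above

    from myapp.models import Note  # hypothetical model, for illustration only

    def save_note(request):
        try:
            # Any write hits the wrapped cursor and is blocked when SITE_READ_ONLY = True.
            Note.objects.create(text=request.POST.get("text", ""))
        except DatabaseWriteDenied:
            # Degrade gracefully instead of letting Django return an error 500 page.
            return HttpResponse("The site is currently in read-only mode.", status=503)
        return HttpResponse("Saved.")
The middleware described above handles this globally, so a per-view try/except like this is only needed where custom behaviour is wanted.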
ambition-django-timezone-field
A Django app providing database and form fields forpytztimezone objects.ExamplesDatabase Fieldimportpytzfromdjango.dbimportmodelsfromtimezone_fieldimportTimeZoneFieldclassMyModel(models.Model):timezone1=TimeZoneField(default='Europe/London')# defaults supportedtimezone2=TimeZoneField()timezone3=TimeZoneField()my_inst=MyModel(timezone1='America/Los_Angeles',# assignment of a stringtimezone2=pytz.timezone('Turkey'),# assignment of a pytz.DstTzInfotimezone3=pytz.UTC,# assignment of pytz.UTC singleton)my_inst.full_clean()# validates against pytz.common_timezonesmy_inst.save()# values stored in DB as stringstz=my_inst.timezone1# values retrieved as pytz objectsrepr(tz)# "<DstTzInfo 'America/Los_Angeles' PST-1 day, 16:00:00 STD>"Form Fieldfromdjangoimportformsfromtimezone_fieldimportTimeZoneFormFieldclassMyForm(forms.Form):timezone=TimeZoneFormField()my_form=MyForm({'timezone':'America/Los_Angeles',})my_form.full_clean()# validates against pytz.common_timezonestz=my_form.cleaned_data['timezone']# values retrieved as pytz objectsrepr(tz)# "<DstTzInfo 'America/Los_Angeles' PST-1 day, 16:00:00 STD>"InstallationFrompypiusingpip:pipinstalldjango-timezone-fieldAddtimezone_fieldto yoursettings.INSTALLED_APPS:INSTALLED_APPS=(...'timezone_field',...)Changelog2.0 (2016-01-31)Drop support for django 1.7, add support for django 1.9Drop support for python 3.2, 3.3, add support for python 3.5Remove tests from source distribution1.3 (2015-10-12)Drop support for django 1.6, add support for django 1.8Variousbug fixes1.2 (2015-02-05)For form field, changed default list of accepted timezones frompytz.all_timezonestopytz.common_timezones, to match DB field behavior.1.1 (2014-10-05)Django 1.7 compatibilityAdded support for formatingchoiceskwarg as[[<str>, <str>], …], in addition to previous format of[[<pytz.timezone>, <str>], …].Changed default list of accepted timezones frompytz.all_timezonestopytz.common_timezones. If you have timezones in your DB that are inpytz.all_timezonesbut not inpytz.common_timezones, this is a backward-incompatible change. Old behavior can be restored by specifyingchoices=[(tz, tz) for tz in pytz.all_timezones]in your model definition.1.0 (2013-08-04)Initial release astimezone_field.Running the TestsInstalltox.From the repository root, runtoxPostgres will need to be running locally, and sqlite will need to be installed in order for tox to do its job.Found a Bug?To file a bug or submit a patch, please head over todjango-timezone-field on github.CreditsOriginally adapted fromBrian Rosner’s django-timezones. The full list of contributors is available ongithub.
ambition-django-uuidfield
Provides a UUIDField for your Django models.
Installation
Install it with pip (or easy_install):
    pip install ambition-django-uuidfield
Usage
First you'll need to attach a UUIDField to your class. This acts as a char(32) to maintain compatibility with SQL versions:
    from uuidfield import UUIDField

    class MyModel(models.Model):
        uuid = UUIDField(auto=True)
Check out the source for more configuration values. Enjoy!
ambition-edc
Ambition Edc (P.I. Joe Jarvis)
http://www.isrctn.com/ISRCTN72509687
See the requirements and docs folders.
ambition-export
ambition-export: Export data from the Ambition EDC
ambition-form-validators
ambition-form-validators
ambition-inmemorystorage
dj-inmemorystorage
An in-memory data storage backend for Django. Compatible with Django's storage API.
Supported Versions
Python 2.6/2.7 with Django 1.4+; Python 3.2/3.3/3.4 with Django 1.5+
Usage
In your test settings file, add
    DEFAULT_FILE_STORAGE = 'inmemorystorage.InMemoryStorage'
By default, the InMemoryStorage backend is non-persistent, meaning that writes to it from one section of your code will not be present when reading from another section of your code, unless both are sharing the same instance of the storage backend.
If you need your storage to persist, you can add the following to your settings:
    INMEMORYSTORAGE_PERSIST = True
Differences
This library is based on django-inmemorystorage by Cody Soyland, with modifications made by Seán Hayes with support for the url method, and additional support from Tore Birkeland for writing to the file.
Wave's modifications include packaging, and test modifications such that python setup.py test works. This version also bumps the version to 1.0.0 and renames it to dj-inmemorystorage such that it doesn't conflict on PyPI.
The biggest difference is that this package works with Django 1.4 now (previously only 1.5+). It also supports Python 2.6/2.7 with Django 1.4+ and Python 3.2/3.3/3.4 with Django 1.5+.
Contributing
Ensure that you open a pull request. All feature additions/bug fixes MUST include tests.
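To make the storage-API compatibility above concrete, here is a minimal sketch (not from the package's own docs) of a test that exercises the backend through Django's standard default_storage interface; it assumes the test settings set DEFAULT_FILE_STORAGE as shown above.
    from django.core.files.base import ContentFile
    from django.core.files.storage import default_storage
    from django.test import TestCase

    class UploadTest(TestCase):
        def test_write_and_read_in_memory(self):
            # With DEFAULT_FILE_STORAGE = 'inmemorystorage.InMemoryStorage',
            # nothing below touches the real filesystem.
            name = default_storage.save("hello.txt", ContentFile(b"hi"))
            with default_storage.open(name) as fh:
                self.assertEqual(fh.read(), b"hi")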
ambition-labs
ambition-labs
ambition-lists
List models for the Ambition trial
ambition-metadata-rules
ambition-metadata-rules: See also edc-metadata and edc-metadata-rules.
ambition-permissions
ambition-permissions: See also edc-permissions.
ambition-pharmacy
ambition-pharmacy: Protocol-specific pharmacy configurations
ambition-prn
ambition-prn: PRN forms such as offstudy, protocol violation, etc.
ambition-rando
ambition-rando: Randomization class and model for Ambition
To load the randomization list:
    python manage.py import_randomization_list
To rebuild the records in RandomizationList:
    from django.core.exceptions import ObjectDoesNotExist
    from django.contrib.sites.models import Site
    from edc_registration.models import RegisteredSubject
    from ambition_rando.models import RandomizationList

    current_site = Site.objects.get_current()
    for obj in RegisteredSubject.on_site.all():
        try:
            randobj = RandomizationList.objects.get(sid=obj.sid)
        except ObjectDoesNotExist:
            print(f'missing for {obj.subject_identifier}, {obj.sid}.')
        else:
            randobj.alocated_site = current_site
            randobj.subject_identifier = obj.subject_identifier
            randobj.allocated_datetime = obj.consent_datetime
            randobj.allocated = True
            randobj.save()
ambition-reference
ambition-reference: Reference model configurations for Ambition
ambition-reports
ambition-reports: PDF Reports for the Ambition Trial
ambition-screening
ambition-screening: Subject Screening models and views
ambition-sites
ambition-sites: Site configurations
ambition-subject
ambition-subject: Subject/Participant models
ambition-utils
Requirements
- Python 3.6+
- Django 1.11+
- Postgres 9.5+
Installation
To install the latest release, type:
    pip install ambition-utils
To install the latest code directly from source, type:
    pip install git+git://github.com/ambitioninc/ambition-utils.git
Documentation
Note: As of version 0.5.0, this project only supports Python 3.6+. If you need Python 2 support, pin to ambition-utils==0.4.0.
Full documentation is available at http://ambition-utils.readthedocs.org
License
MIT License (see LICENSE)
ambition-validators
ambition-validators
ambition-visit-schedule
ambition-visit-schedule: Ambition Visit Schedule
ambiverseclient
Note: This client is not affiliated with Ambiverse or the Max-Planck-Institute.
Table of Contents: Install, Usage, License
Install
Install with pip:
    pip install ambiverse-client
Usage
KnowledgeGraph Client
    from ambiverseclient.clients import KnowledgeGraph

    kg = KnowledgeGraph(API_ENDPOINT_HOST, port=API_ENDPOINT_PORT)
    entity_list = ["http://www.wikidata.org/entity/Q104493", "http://www.wikidata.org/entity/Q103968"]
    result = kg.entities(entity_list)
AmbiverseNLU Client
    from ambiverseclient.clients import AmbiverseNLU
    from ambiverseclient.models import AnalyzeInput

    ac = AmbiverseNLU(API_ENDPOINT_HOST, port=API_ENDPOINT_PORT)
    request_doc = AnalyzeInput(docId="test", language="en")
    request_doc.text = """Brexit: UBS to move London jobs to Europe as lack of transition deal forces 'significant changes' Swiss banking giant expects to merge UK entity with its German-headquartered European ..."""
    ac.analyze(request_doc)
License
This code is distributed under the terms of the GPLv3 license. Details can be found in the file LICENSE in this repository.
Package Author
Linus Kohl, <[email protected]>
ambivert
AmBiVErT - AMplicon BInning Variant caller with ERror Truncation. For calling variants in amplicon based sequencing experiments
ambix
No description available on PyPI.
amboseli
amboseli
ambra-clone-eyork-rad
Ambra_Clone
The Clone account class can be used to copy Ambra data from one account into another. These accounts can be in different Ambra instances. This module is intended to be used within the Ambra scripting platform (future release) but can be downloaded and run locally as well. The data that can be copied at this time is limited to the corresponding Ambra API calls and can be specified with the clone_items parameter.
Get started with cloning Ambra data from a local machine:
    pip install Ambra_Clone
Import the Clone class from the file labeled Clone.py:
    from Ambra_Clone.Clone import Clone
Create a cloning instance. See initialization parameters for more details. For this example a new account is created, although you can copy/update records to an existing account by specifying its uuid.
    example_clone = Clone(sid="e9d41c71-b8fb-4119-8456-xxxxxxxxxx",
                          original_url="https://uat.ambrahealth.com/api/v3/",
                          original_account_uuid="",
                          clone_items="all")
    example_clone.run()
Once satisfied with the Clone parameters (see the Methods section for other customizable parameters), use the .run() method to perform the cloning task.
Initialization parameters
- sid (str): session id for original_url
- original_url (str): the original url with "api/v3" included, i.e. https://access.ambrahealth.com/api/v3
- original_account_uuid (str): original account uuid
- clone_items (str/list, optional): list of items to clone. Defaults to "all". ["customfield", "role", "group", "location", "account/users", "webhook", "route", "group_location_users", "namespace_settings", "terminology", "hl7/template", "hl7/transform", "dictionary", "mail_templates", "account_settings", "radreport/template", "share_settings", "site"]
- copy_url_base (str, optional): [description]. Defaults to None.
- copy_account_uuid (str, optional): If unspecified a new account will be created. The uuid of the duplicate account if it already exists. Defaults to None.
- sid_copy (str, optional): If unspecified, same as the original session id. The session id for the duplicate account. Defaults to None.
- account_name (str, optional): Account name. If unspecified or "", the account name will be "Copy of ".
Methods
add_special_field: Instead of copying the value for the specified field and apistr, default a preset value. For example, if you wanted to suspend all webhooks in the newly cloned account: apistr='webhook', field_key='suspended', value=1.
    Args:
    - field_key (str): field key for the specified value. Use : to separate fields in a seeded json.
    - apistr (str): api endpoint string, i.e. webhook, route, customfield, etc.
    - value (any): value to default for the api endpoint value
add_special_uuid: For a specific uuid, instead of copying the original field data, use the specified value.
    Args:
    - uuid (str): uuid of the original item with a special field
    - field_key (str): field key for the special value. Separate seeded json fields by a :
    - value (any): value of the uuid field
update_uuid_map: Ambra items are generally mapped to their new uuid by the name of the record. However, you can manually map records with this parameter instead.
    Args:
    - dict (dictionary): map of uuids {original_uuid: new_uuid}
Architectural Design
Records are copied from original_account to copy_account. Items are mapped by name unless the item uuid is manually mapped prior to running. When the .run() method is used, the script first prints out the cloning information to the terminal, then for each clone_item listed a corresponding clone method is called. These methods all operate similarly and follow this procedure:
- map records to the uuid_map parameter: original_account uuids are the keys, copy_account uuids are the values
- add records that don't exist and add them to the uuid_map
- update mapped records to match the original
uuid substitutions are automatic and rely on the uuid_map. If a uuid cannot be found, the item is skipped and an error is printed.
Requirements
The clone script requires up-to-date versions of the python requests package and the ambra-sdk package:
    pip install requests
    pip install ambra-sdk
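Building on the example_clone instance created above, a minimal sketch of the customization hooks from the Methods section might look as follows. Keyword arguments are used because only the parameter names, not their order, are documented above; the uuids are placeholders.
    # Suspend all webhooks in the newly cloned account (example values taken
    # from the add_special_field description above).
    example_clone.add_special_field(field_key="suspended", apistr="webhook", value=1)

    # Manually map one original record to an existing record in the copy account,
    # overriding the default name-based mapping (placeholder uuids).
    example_clone.update_uuid_map({"<original_uuid>": "<copy_uuid>"})

    example_clone.run()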
ambra-sdk
Ambra-SDK
Welcome to the ambra-sdk library for interacting with the AmbraHealth service and storage API.
Quickstart
    pip install ambra-sdk
Running
    from ambra_sdk.api import Api
    from ambra_sdk.models import Study
    from ambra_sdk.service.filtering import Filter, FilterCondition
    from ambra_sdk.service.sorting import Sorter, SortingOrder

    # Usually, the URL has the form:
    # url = https://ambrahealth_host/api/v3
    # username and password - ambrahealth credentials.
    api = Api.with_creds(url, username, password)
    user_info = api.Session.user().get()

    studies = api \
        .Study \
        .list() \
        .filter_by(
            Filter(
                'phi_namespace',
                FilterCondition.equals,
                user_info.namespace_id,
            ),
        ) \
        .only([Study.study_uid, Study.image_count]) \
        .sort_by(
            Sorter(
                'created',
                SortingOrder.ascending,
            ),
        ) \
        .all()

    for study in studies:
        print(study.study_uid, study.image_count)
License
Ambra-SDK is licensed under the terms of the Apache-2.0 License (see the file LICENSE).
Read the docs
Documentation: https://dicomgrid.github.io/sdk-python/index.html
ambra-ts-tools
public-ambra-ts-scripts
ambrogio
AmbrogioAmbrogio is a framework to create and run procedures.InstallationTo install Ambrogio, run the following command:pipinstallambrogioUsageCreate a new projectTo create a new Ambrogio project runambrogioin CLI and, if no project can be found in the current folder, you will be prompted to confirm you want to create one and to enter its name.This will create a new folder with the following structure:. ├──ambrogio.ini └──proceduresTheambrogio.inifile is the configuration file for the project. It contains the following sections:[settings]procedure_module=proceduresTheprocedure_moduleis the name of the folder where the procedures are stored.Create a new procedureTo create a new procedure runambrogioin CLI and if no procedure can be found in the project, you will be prompted to confirm you want to create one, otherwise you can select the option to create a new procedure. In both cases, you will be prompted to enter the name and the type of the procedure.This will create a new file in theproceduresusing the name you entered and the procedure structure from a template.Run a procedureTo run a procedure runambrogioin CLI and select the procedure you want to run. You will be prompted to enter the parameters of the procedure if any.Procedure typesBasic procedureA basic procedure is a procedure that contains a single execution function.Here is an example of a basic procedure:fromambrogio.procedures.basicimportBasicProcedureclassMyProcedure(BasicProcedure):name='My Procedure'defexecute(self):print('Hello World!')Step procedureA step procedure is a procedure that contains multiple execution functions. Each execution function is called a step.When a step is added to a procedure using theadd_stepmethod, it can take the following arguments:function: the function to execute.name: the name of the step. If not specified, the name of the function will be used.parallel: if set toTrue, the step will be executed in parallel with the previous step. If set toFalse, the step will be executed sequentially after the previous step. Default value isFalse.blocking: if set toTrue, the procedure will stop if the step fails. If set toFalse, the procedure will continue to execute the next steps. 
Default value isTrue.params: an optionaldictcontaining the parameters to pass to the step function.Here is an example of a step procedure:fromambrogio.procedures.stepimportStepProcedureclassMyStepProcedure(StepProcedure):name='My Step Procedure'defstep_1(self,name:str):print(f'Hello{name}!')defstep_2(self):print('Step 2')defstep_3(self):print('Step 3')defstep_4(self):print('Step 4')defstep_5(self):print('Step 5')defset_up(self):self.add_step(self.step_1,params={'name':'World'})self.add_step(self.step_2,parallel=True)self.add_step(self.step_3,parallel=True)self.add_step(self.step_4,parallel=True)self.add_step(self.step_5)deftear_down(self):print('Done!')This procedure will execute as follow:┌─ step_2 ─┐ set_up ─ step_1 ─┼─ step_3 ─┼─ step_5 ─ tear_down └─ step_4 ─┘As you can see,set_upandstep_1are executed sequentially, thenstep_2,step_3andstep_4are executed in parallel and finallystep_5andtear_downare executed sequentially.When a sequential step follows some parallel steps, the sequential step will be executed after all the previous parallel steps have finished.If add_step is called during a step execution, the step will be appended to the end of the step list.Procedure parametersA procedure parameter is a parameter that can be passed to a procedure when it is executed.When you create a new procedure, you can define the parameters it can take. Here is an example of a procedure with two parameters:fromambrogio.procedures.basicimportBasicProcedurefromambrogio.procedures.paramimportProcedureParamclassMyProcedure(BasicProcedure):name='My Procedure'params=[ProcedureParam(name='name',type=str,value='World'),ProcedureParam(name='times',type=int,value=1),]defexecute(self):name=self.get_param('name').valuetimes=self.get_param('times').valueforiinrange(times):print(f'Hello{name}!')When you run this procedure, you will be prompted to enter the values of the parameters:Enter the value for 'name' (str): World Enter the value for 'times' (int): 3Then the procedure will be able to access the values of the parameters:name=self.get_param('name').valuetimes=self.get_param('times').valueParameters can be of the following types:boolintfloatstrPath(frompathlib)Procedure prompt and logYou can use thepromptandloggerproperties of a procedure to prompt the user and log messages during the execution of the procedure.You should avoid using thepromptduring the execution of a parallel step, as it runs in a different thread and the prompt will not work properly.Here is an example of a procedure that uses thepromptandloggerproperties:fromambrogio.procedures.basicimportBasicProcedureclassMyProcedure(BasicProcedure):name='My Procedure'defexecute(self):name=self.prompt.text('Enter your name:')self.logger.info(f'Hello{name}!')Available prompt methods:confirm: prompt the user to confirm an action. ReturnsTrueif the user confirms,Falseotherwise.text: prompt the user to enter a text. Returns the text entered by the user.editor: prompt the user to enter a text using an editor. Returns the text entered by the user.path: prompt the user to enter a path. Returns a pathlibPathobject.password: prompt the user to enter a password. Returns the password entered by the user.checkbox: prompt the user to select one or more options from a list passed using thechoicesargument. Returns a list of the selected options.list: prompt the user to select one option from a list passed using thechoicesargument. Returns the selected option.checkboxandlistcan be used to select from a list of options passed using thechoicesargument. 
The options can be a list of strings or a list of tuples containing the label and the value of the option. The label is the string displayed to the user and the value is the value returned by the prompt method.All prompt methods can take the following arguments:default: the default value to return if the user does not enter anything.validate: a function that takes the value entered by the user as argument and returnsTrueif the value is valid,Falseotherwise.Available logger methods:debug: log a debug message.info: log an info message.warning: log a warning message.error: log an error message.critical: log a critical message.
ambrose-distributions
No description available on PyPI.
ambrosia
Ambrosia is a Python library for A/B test design, split and effect measurement. It provides a rich set of methods for conducting a full A/B testing pipeline. The project is intended for use in research and production environments based on data in pandas and Spark format.
Key functionality
- Pilots design 🛫
- Multi-group split 🎳
- Matching of a new control group to the existing pilot 🎏
- Experiment result evaluation as p-value, point estimate of effect and confidence interval 🎞
- Data preprocessing ✂️
- Experiments acceleration 🎢
Documentation
For more details, see the Documentation and Tutorials.
Installation
You can always get the newest Ambrosia release using pip. A stable version is released on every tag to the main branch.
    pip install ambrosia
Starting from version 0.4.0, the ability to process PySpark data is optional and can be enabled using pip extras during the installation.
    pip install ambrosia[spark]
Usage
The main functionality of Ambrosia is contained in several core classes and methods, which are autonomous for each stage of an experiment and have a very intuitive interface. Below is a brief overview example of using a set of three classes to conduct a simple experiment.
Designer
    from ambrosia.designer import Designer
    designer = Designer(dataframe=df, effects=1.2, metrics='portfel_clc')  # 20% effect, and loaded data frame df
    designer.run('size')
Splitter
    from ambrosia.splitter import Splitter
    splitter = Splitter(dataframe=df, id_column='id')  # loaded data frame df with id column 'id'
    splitter.run(groups_size=500, method='simple')
Tester
    from ambrosia.tester import Tester
    tester = Tester(dataframe=df, column_groups='group')  # loaded data frame df with groups info 'group'
    tester.run(metrics='retention', method='theory', criterion='ttest')
Development
You must have python3 and poetry installed.
- To install all requirements run make install
- For autoformatting run make autoformat
- For linters check run make lint
- For tests run make test
- For coverage run make coverage
- To remove the virtual environment run make clean
Authors
Developers and evangelists: Bayramkulov Aslan, Khakimov Artem, Vasin Artem
ambrosio
Ambrosio
Ambrosio helps accomplish tasks in the command line.
Requirements
You will need the following installed on your machine:
- Poetry
- Poetry Plugin: Export
- Poetry Plugin: up
How to update all dependencies?
Just run in your project directory:
    poetry up --pinned --latest
ambrozia
Ambrosiais a Python library for A/B tests design, split and effect measurement. It provides rich set of methods for conducting full A/B test pipeline.An experiment design stage is performed using metrics historical data which could be processed in both forms of pandas and spark dataframes with either theoretical or empirical approach.Group split methods support different strategies and multi-group split, which allows to quickly create control and test groups of interest.Final effect measurement stage is conducted via testing tools that are able to return relative and absolute effects and construct corresponding confidence intervalsfor continious and binary variables. Testing tools as well as design ones support significant number of statistical criteria, like t-test, non-parametric, and bootstrap.For additional A/B tests support library provides features and tools for data preproccesing and experiment acceleration.Key functionalityPilots design ✈Multi-group split 🎳Matching of new control group to the existing pilot 🎏Getting the experiments result evaluation as p-value, point estimate of effect and confidence interval 🎞Experiments acceleration 🎢DocumentationFor more details, see theDocumentationandTutorials.InstallationStable version is released on every tag tomainbranch.pipinstallambroziaAmbrosia requires Python 3.7+UsageDesignerfromambrozia.designerimportDesignerdesigner=Designer(dataframe=df,effects=1.2,metrics='portfel_clc')# 20% effect, and loaded data frame dfdesigner.run('size')Splitterfromambrozia.splitterimportSplittersplitter=Splitter(dataframe=df,id_column='id')# loaded data frame df with column with id - 'id'splitter.run(groups_size=500,method='simple')Testerfromambrozia.testerimportTestertester=Tester(dataframe=df,column_groups='group')# loaded data frame df with groups info 'group'tester.run(metrics='retention',method='theory',criterion='ttest')DevelopmentTo install all requirements runmakeinstallYou must havepython3andpoetryinstalled.For autoformatting runmakeautoformatFor linters check runmakelintFor tests runmaketestFor coverage runmakecoverageTo remove virtual environment runmakecleanCommunicationDevelopers and evangelists:Bayramkulov AslanKhakimov ArtemVasin Artem
ambryfdw
UNKNOWN
ambsql
AmbSQL
AmbSQL is a Relational Database Management System created with a focus on speed and ease of use. Made with ❤ in Python3.
Documentation
Please refer to the documentation at https://github.com/ambujraj/AmbSQL/wiki/Documentation
Compatibility
This program is compatible with python 3.x.
Installation
For Command-line Interface: Download the AmbSQL.exe file from https://github.com/ambujraj/AmbSQL/releases and run it on your PC. AmbSQL can also be downloaded from https://ambujraj.github.io/AmbSQL/download/.
For Python Package: You can use one of the below methods to download and use this repository.
Using pip:
    $ pip install ambsql
Manually using CLI:
    $ git clone https://github.com/ambujraj/AmbSQL.git
    $ cd AmbSQL
    $ sudo python3 setup.py install  (Linux and MacOS)
    $ python setup.py install  (Windows)
Manually using UI: Go to the repo on github => Click on 'Clone or Download' => Click on 'Download ZIP' and save it on your local disk.
Usage
If installed as the CLI, open the AmbSQL.exe file and get to work.
If installed using pip or CLI:
    $ python  (Windows)  or  $ python3  (Linux or MacOS)
    >>> from ambsql import *
If installed using UI, unzip the downloaded file, go to the 'AmbSQL' directory and use one of the below commands:
    $ python3 AmbSQL.py  (Linux or MacOS)  or  $ python AmbSQL.py  (Windows)
Examples
If you installed the package using pip or CLI, below is the sample code:
    from ambsql import *
    createtable('studenttable', 'name', 'age')
    insertvalues('studenttable', 'Jack', 21)
    showvalues('studenttable')
If you installed AmbSQL.exe, below is the sample code:
    > connect
    > createtable(studenttable, name, age)
    > insertvalues(studenttable, Jack, age)
    > showvalues(studenttable)
Contributors
Check the list of contributors here.
Help Us Improve
You can suggest new improvements by creating a new Issue here.
License
MIT License
ambs-realpython-reader
Real Python Feed ReaderThe Real Python Feed Reader is a basicweb feedreader that can download the latest Real Python tutorials from theReal Python feed.For more information see the tutorialHow to Publish an Open-Source Python Package to PyPIon Real Python.InstallationYou can install the Real Python Feed Reader fromPyPI:pip install realpython-readerThe reader is supported on Python 2.7, as well as Python 3.4 and above.How to useThe Real Python Feed Reader is a command line application, namedrealpython. To see a list of thelatest Real Python tutorialssimply call the program:$ realpython The latest tutorials from Real Python (https://realpython.com/) 0 How to Publish an Open-Source Python Package to PyPI 1 Python "while" Loops (Indefinite Iteration) 2 Writing Comments in Python (Guide) 3 Setting Up Python for Machine Learning on Windows 4 Python Community Interview With Michael Kennedy 5 Practical Text Classification With Python and Keras 6 Getting Started With Testing in Python 7 Python, Boto3, and AWS S3: Demystified 8 Python's range() Function (Guide) 9 Python Community Interview With Mike Grouchy 10 How to Round Numbers in Python 11 Building and Documenting Python REST APIs With Flask and Connexion – Part 2 12 Splitting, Concatenating, and Joining Strings in Python 13 Image Segmentation Using Color Spaces in OpenCV + Python 14 Python Community Interview With Mahdi Yusuf 15 Absolute vs Relative Imports in Python 16 Top 10 Must-Watch PyCon Talks 17 Logging in Python 18 The Best Python Books 19 Conditional Statements in PythonTo read one particular tutorial, call the program with the numerical ID of the tutorial as a parameter:$ realpython 0 # How to Publish an Open-Source Python Package to PyPI Python is famous for coming with batteries included. Sophisticated capabilities are available in the standard library. You can find modules for working with sockets, parsing CSV, JSON, and XML files, and working with files and file paths. However great the packages included with Python are, there are many fantastic projects available outside the standard library. These are most often hosted at the Python Packaging Index (PyPI), historically known as the Cheese Shop. At PyPI, you can find everything from Hello World to advanced deep learning libraries. [... The full text of the article ...]You can also call the Real Python Feed Reader in your own Python code, by importing from thereaderpackage:>>> from reader import feed >>> feed.get_titles() ['How to Publish an Open-Source Python Package to PyPI', ...]
ambulance_game
Ambulance Decision Game: A python library that attempts to explore a game theoretic approach to the EMS - ED interface.
Table of Contents: About The Project, Installation, Tests, Roadmap, License, Contact
About The Project
This project revolves around modelling the interaction between the Emergency Medical Service (EMS) and Emergency Departments. The ambulance_game library is used to model:
- a discrete event simulation model of a queueing system with two waiting spaces, where individuals can be blocked,
- the equivalent Markov chain model,
- a game theoretic model between 3 players: 2 queueing systems (EDs) and a distributor (EMS).
Installation
Install a development version of this library with the command:
    $ python -m pip install flit
    $ python -m flit install --symlink
Tests
Run all tests developed by first installing and then running tox:
    $ python -m pip install tox
    $ python -m tox
Roadmap
See the open issues for a list of proposed features (and known issues).
License
Distributed under the MIT License. See LICENSE for more information.
Contact
Your Name - @[email protected]
Project Link: AmbulanceDecisionGame
ambush
This is a debug toolbox.
Free software: 3-clause BSD license
Documentation: (COMING SOON!) https://ke-zhang-rd.github.io/ambush
Features
TODO
ambushed
This is a package that has useful machine vision for use with cars.
CHANGE LOG
RELEASE 1 - 0.0.0.1 (7/27/2021): First Release
RELEASE 2 - 0.0.0.2 (7/27/2021): Just wanted to test some things out; really didn't do anything
RELEASE 3 - 0.0.0.3 (7/27/2021): Just wanted to test some things out; really didn't do anything
amc
In quantum many-body theory, one often encounters problems with rotational symmetry. While methods are most conveniently derived in schemes that do not exploit the symmetry, a symmetry-adapted formulation can lead to orders of magnitude savings in computation time. However, actually reducing the formulas of a many-body method to symmetry-adapted form is tedious and error-prone.
The AMC package aims to help practitioners by automating the reduction process. The unreduced (m-scheme) equations can be entered via an easy-to-use language. The package then uses Yutsis graph techniques to reduce the resulting network of angular-momentum variables to irreducible Wigner 6j and 9j symbols, and outputs the reduced equations as a LaTeX file. Moreover, the package is based on abstract representations of the unreduced and reduced equations in the form of syntax trees, which enable other uses such as automatic generation of code that evaluates the reduced equations.
Installation
Install amc using the pip package manager.
    pip install amc
Usage
Prepare a file with the properties of the tensors and the equations to reduce. For example, second-order many-body perturbation theory can be reduced in this way:
    # mbpt.amc
    declare E2 {
        mode=0,
        latex="E^{(2)}_{0}",
    }

    # Hamiltonian
    declare H {
        mode=4,
        latex="H",
    }

    E2 = 1/4 * sum_abij(H_abij * H_ijab);
Then run the amc program on the input:
    amc -o mbpt.tex mbpt.amc
The result is
    E^{(2)}_{0} = \frac{1}{4} \sum_{a b i j {J}_{0}} \hat{J}_{0}^{2} H_{a b i j}^{{J}_{0} {J}_{0} 0} H_{i j a b}^{{J}_{0} {J}_{0} 0}
See the User's Guide for details.
Citing
Releases of this code are deposited to the Zenodo repository. If you use it in research work please cite the version used. Go to the Zenodo record to find bibliographic information for each release.
If you use this code in research work please also cite the following publication:
A. Tichai, R. Wirth, J. Ripoche, T. Duguet. Symmetry reduction of tensor networks in many-body theory I. Automated symbolic evaluation of SU(2) algebra. arXiv:2002.05011 [nucl-th]
Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
License
GPLv3
amc2moodle
amc2moodleamc2moodle, is a suite of tools to convert multiple choice questionnaires (MCQ)fromauto-multiple-choiceLaTeX quizzestomoodle questions (XML format), see details in theamc2moodle README file. Sinceamc2moodleuseLaTeXMLtoexpands LaTeX commands, it could beeasilyadapted to support most of LaTeX capabilities.frommoodle questions (XML format)toauto-multiple-choiceLaTeX quizzes, see details in themoodle2amc README file.The conversion supports equations, tables, figures and standard text formatting. This software is written in python and in XSLT, thus the conversion step is OS independent. It has been tested for moodle 3.x and auto-multiple-choice (v1.0.3-v1.5) and the conversion step is OS independent.Note thatauto-multiple-choice(amc) LaTeX format is very convenient, and can be used for preparing multiple choice questions off-line and avoiding moodle web GUI.InstallationA Docker image withamc2moodleand its dependencies is available atghcr.io/nennigb/amc2moodle. Once the docker software is installed, this image allows to useamc2moodleonwindows plateformsor to add the resource isolation features on linux or macOS. For more information, see theamc2moodle' docker README. To installamc2moodleas a python package on linux or macOS platform, follow the steps below.Before installing amc2moodle:install python (version >=3.5)installimageMagick, useful to convert image files (*.eps, *.pdf, ...) into pngUbuntu:sudo apt-get install imagemagickMacOS:brew install imagemagick(seeImageMagickwebsitefor more details )installLaTeXML[tested with version 0.8.1] This program does the first step of the conversion into XMLUbuntu:sudo apt-get install latexmlsee alsoLaTeXML wikiorinstall notesthat all the dependencies are installed (perl, latex, imagemagick).installxmlindent[optional]. This program can be used to indent well the XML fileUbuntu:sudo apt-get install xmlindentMacOS: not necessary. Script will usexmllintnatively available on MacOS.For MacOS users, most dependencies can be installed withbrewbutLaTeXMLinstallation can failed for some version. Please see the steps given in the install scriptworkflow.Install with pipRunpip install amc2moodlepip will download automatically the required files.or if you have download the sources, runpip install .in the root folder (wheresetup.pyis). This will automatically install other dependencies i.e.,lxml, andWand. Alternatively, you can runpip install -e .to install it in editable mode, useful if git is used.Note: for Ubuntu users usepip3instead ofpipfor python3.UninstallationRunpip uninstall amc2moodleConversionThe program can be run in a shell terminal, for instance to convert anamc LaTeX file to moodle XMLamc2moodle input_Tex_file.tex -o output_file.xml -c catnameHelp and options can be obtained usingamc2moodle -hThen on moodle, go to the courseadministration\question bank\importand choose 'moodle XML format' and tick:If your grade are not conform to that you must use: 'Nearest grade if not listed' in import option in the moodle question bank(see below for details). Examples of theamc2moodlepossibilities are given atQCM.pdfIf your original exam usesAMC-TXT syntax, you must first convert it to LaTeX before feeding it toamc2moodle. To convert an AMC-TXT file to LaTeX, generate the exam documents with AMC graphical interface as usual. 
AMC will generate a LaTeX version of your exam calledDOC-filtered.texinside the project directory, which you can pass toamc2moodle.In the same way, conversion frommoodle XML to amc LaTeX file, runmoodle2amc input_XML_file.xmlHelp and options can be obtained usingmoodle2amc -hThen the output LaTeX can be edited and included for creating amc exams. Examples of themoodle2amcpossibilities are givenhere.TroubleshootingIn case of problem, do not hesitate to ask for help ondiscussionsor to create anissues. Both binaries (amc2moodleandmoodle2amc) write full log in log files based on the name of the input file (_amc2moodle.logand_amc2moodle.logsuffixes are added on these files).'convert: not authorized..' see ImageMagick policy.xml file seeherebugs with tikz-LaTeXML in texlive 2019/2020: please update the followingperlmodulesParse::RecDescent,XML::LibXMLandXML::LibXSLTherewithcpanorcpanmin CLI.If LaTeXML doesn't know some LaTeX package and returnWarning:missing_file:package-name Can't find binding for package package-name, you can try to invoqueamc2moodlewith--includestylesflag.Related Projectauto-multiple-choice, is a piece of software that can help you creating and managing multiple choice questionnaires (MCQ), with automated marking.TeX2Quiz, is a similar project to translate multiple choice quiz into moodle XML, without connection with AMC.moodle- Generating Moodle quizzes via LaTeX. A package for writing Moodle quizzes in LaTeX. In addition to typesetting the quizzes for proofreading, the package compiles an XML file to be uploaded to a Moodle server.moodle-mod-automultiplechoice- An interface to use AMC within Moodle.flatex- A Python script for "flattening" a nested LaTeX document by pulling in all the \input files. Part of this project has been reused in amc2moodle.pyexams, It allows to eval code inside any jupyter kernel (like Sagemath, sympy, ...) and to export them in the moodle XML format.How to contribute ?If you want to contribute toamc2moodle, your are welcomed! Don't hesitate toask for help or share some tips ondiscussionsreport bugs, installation problems onissuespropose some enhancements in the code or in documentation throughpull requests(PR)create a moodle plugin for importsupport new kind of questionsadd support for other language (French and English are present) in AMC command...To ensure code homogeneity among contributors, we use a source-code analyzer (e.g.pylint). Before submitting a PR, run the tests suite.LicenseThis file is part of amc2moodle, a tool to convert automultiplechoice quizzes to moodle questions. amc2moodle is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. amc2moodle is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with amc2moodle. If not, seehttps://www.gnu.org/licenses/.
amcache2
amcache2.py creates a bodyfile from AmCache.hve.

Installation

I recommend using pipenv instead of venv, because with venv I had problems with https://github.com/construct/construct/pull/930

pipenv install amcache2

Usage

usage: amcache2.py [-h] registry_hive

Parse program execution entries from the Amcache.hve Registry hive

positional arguments:
  registry_hive  Path to the Amcache.hve hive to process

options:
  -h, --help     show this help message and exit

Example

pipenv run amcache2.py Amcache.hve | mactime -d -b -
amcat4
No description available on PyPI.
amcat4annotator
No description available on PyPI.
amcat4apiclient
No description available on PyPI.
amcat4py
No description available on PyPI.
amcatclient
No description available on PyPI.
amc-cropper
AMC Cropping Tool

Author: Nikhil Reji

Crops AMC files to a desired length based on the provided fps and whole-second start and end times. Works through the command line. For the asf/amc v1.1 standard.

Features:

Crop an AMC file based on seconds.
Save to a new parsed AMC file.
Thorough validation of all arguments.
Soft mode: will not overwrite existing files.
Command line driven.

Dependencies:

AsfAmc-Parser
Python 3.6+

Compatibility

Only tested on Windows 10.

Example

Example working directory the program will execute in:

Documents
    test.amc
    cropped
    README.md

In an open command line in the preferred working directory:

python -m amccrop -i test -o testcropped -fps 30 -s 2 -e 5

where:
-m : tells python to find the installed module.
-i : input amc relative filepath including filename, excluding format.
-o : output amc relative filepath including filename, excluding format.
-fps : frame rate per second.
-s : start seconds, whole integer.
-e : end seconds, whole integer.
amcess
Atomic and Molecular Cluster Energy Surface Sampler (AMCESS)Exploration of the Potential Energy Surface (PES) of molecules or atoms clusters is a crucial step to analyze physical–chemistry properties and processes. The Atomic and Molecular Energy Surface Sampler (AMCESS) is an end-to-end package implemented in Python 3.9 to generate candidate structures for the critical points sampling of the PES. The amcess main purpose is to be a user friendly package, easy to install, import, and run, available in most platforms and open-source. As a Python module, amcess can be integrated into any workflow. This package has code reviews with unit testing and continuous integration, code coverage tools, and automatically keeps documentation up–to–date.Molecular cluster of ibuprofen and six water molecules [doi: 10.1063/1.4874258]DescriptionThe amcess package uses simple input files and automates common procedures to explore the PES using the Simulated Annealing, Simplicial Homology Global Optimiza- tion (SHGO), and Bayesian Optimization to generate candidate structures for any kind of critical point, such as local minima or transition states. The package also allows the user to perform local searches around defined regions. The PES is generated computing the electronic energy using standard and powerful quantum chemistry packages such as PySCF and Psi4, also implemented in Python.Technical DocumentationTechnical documents behind this project can be accessedhere.RequirementsFirst you should install the required python packages- attrs==21.2 - scipy==1.7.1 - numpy==1.21.2 - pyscf==1.7.6.post1 - h5py==3.1.0 - pyberny==0.6.3 - geomeTRIC==0.9.7.2 - GPyOpt==1.2.6 - pyDOE==0.3.8 - matplotlib==3.4.2check the filerequirements.txt. For developer, you should installrequirements_dev.txt.InstallationAMCESS isPython 3.9packageInstall virtual environment:python -m venv venvActivate virtual environment:source venv/bin/activateInstall the packages:pip install amcessRun AMCESS (check some examples below)For developer only, install dependencies:pip install -r requirements.txt -r requirements_dev.txtRun all test:tox==3.24.3UsageA detail workflow is provide intoworkflowdirectory. It has a list of Jupyter notebook with detail examples about AMCESS tools and capabilities.Workflow:Getting starting with atoms and molecules properties.Notebook (binder):01_importing_atoms_and_molecules.ipynbTranslating and rotating atoms and molecules.Notebook (binder):02_move_rotate_molecules.ipynbMoving Molecules randomly from a Cluster.Notebook (binder):03_move_rotate_cluster.ipynbFreezing any molecule and redefine its sphere center.Notebook (binder):04_freeze_molecule_redefine_center.ipynbInitialize a cluster avoiding atomic overlappingNotebook (binder):05_initialize_cluster_and_move_molecule.ipynbRoadmapSome of the ideas to keep growing are:Integration withRDKit(multiple format input)Results: geometrical analysis (clustering, k-nearest, k-means, etc.)ContributingThe easiest way to get help with the project is to join the #amcess channel on Discord.We hang out there and you can get real-time help with your projects. 
The other good way is to open an issue on GitLab.Discord:https://discord.gg/vxQQCjpgGitLab:https://gitlab.com/ADanianZE/amcess/issuesLicenceGNU General Public License v3 (GLPv3)Authors and AcknowledgmentMain authors: Alejandra Mendez, Juan Jose Aucar, Daniel Bajac, César Ibargüen, Andy Zapata, Edison Florez ([email protected])Project StatusUnder developmentASCEC (FORTRAN 77 version)A previous version of AMCESS, called ASCEC[1](spanish acronym Annealing Simulado con Energía Cuántica) was written in FORTRAN77 and was successfully used in the wide range of research and academic applications. From atomic cluster to molecular cluster, the ASCEC package has produced novel results (structure never seen before) published in the literature. Read more onASCEC publications.You could check the directoryASCECV3ASCECV3/ |---papers/ |---p_ascec/ |---examples/ |---adf |---dalton |---g03 |---gamess |---nwchemReferences[1]J Pérez and A Restrepo. Ascec v–02: annealing simulado con energía cuántica. Property, development and implementation: Grupo de Química–Física Teórica, Instituto de Química, Universidad de Antioquia: Medellín, Colombia, 2008.
amcheck
amcheckamcheckis the program (and a library) to check, based on the symmetry arguments, whether a given material is an altermagnet.Altermagnet is a collinear magnet with symmetry-enforced zero net magnetization, where the opposite spin sublattices are coupled by symmetry operations that are not inversions or translations.The user is supposed to provide a crystal structure and a magnetic pattern to describe the material of interest. It is implicitly assumed that the net magnetic moment is zero, i.e. if user types in a ferromagnet, the program will return an error. The underlying idea is that some pre-classification was already done and the user wants to figure out if the given material is an altermagnet or not, thus an antiferromagnet.InstallationThe code is written inpythonand can be installed usingpip:pip install amcheckIt has the following packages among its dependencies:ase,spglibanddiophantine.UsageTo use it as a command line tool, one provides one or more structure files (the code will internally loop over all listed files) and, when prompted, types in spin designation for each atom: 'u' or 'U' for spin-up, 'd' or 'D' for spin-down and 'n' or 'N' if the atom is non-magnetic. All atoms will be grouped into sets of symmetry-related atoms (orbits), and the user will need to provide spin designations per such a group. To mark the entire group as non-magnetic, one can use the 'nn' or 'NN' designation. Note that here, we treat spins as pseudoscalars (up and down, black and white), not as pseudovectors, and thus, no spatial anisotropy for spins is assumed.ExamplesChecking if a given material is altermagnetic$ amcheck FeO.vasp MnTe.vasp ========================================================== Processing: FeO.vasp ---------------------------------------------------------- Spacegroup: P6_3/mmc (194) Writing the used structure to auxiliary file: check FeO.vasp_amcheck.vasp. Orbit of Fe atoms at positions: 1 (1) [0.33333334 0.66666669 0.25 ] 2 (2) [0.66666663 0.33333331 0.75 ] Type spin (u, U, d, D, n, N, nn or NN) for each of them (space separated): u d Orbit of O atoms at positions: 3 (1) [0. 0. 0.] 4 (2) [0. 0. 0.5] Type spin (u, U, d, D, n, N, nn or NN) for each of them (space separated): n n Group of non-magnetic atoms (O): skipping. Altermagnet? False ========================================================== Processing: MnTe.vasp ---------------------------------------------------------- Spacegroup: P6_3/mmc (194) Writing the used structure to auxiliary file: check MnTe.vasp_amcheck.vasp. Orbit of Mn atoms at positions: 1 (1) [0. 0. 0.] 2 (2) [0. 0. 0.5] Type spin (u, U, d, D, n, N, nn or NN) for each of them (space separated): u d Orbit of Te atoms at positions: 3 (1) [0.33333334 0.66666669 0.25 ] 4 (2) [0.66666663 0.33333331 0.75 ] Type spin (u, U, d, D, n, N, nn or NN) for each of them (space separated): nn Group of non-magnetic atoms (Te): skipping. Altermagnet? 
TrueUsing as a libraryHere is a code snippet providing an example on how to use theamcheckas a library:importnumpyasnpfromamcheckimportis_altermagnetsymmetry_operations=[(np.array([[-1,0,0],[0,-1,0],[0,0,-1]],dtype=int),np.array([0.0,0.0,0.0])),# for compactness reasons,# other symmetry operations are omitted# from this example]# positions of atoms in NiAs structure: ["Ni", "Ni", "As", "As"]positions=np.array([[0.00,0.00,0.00],[0.00,0.00,0.50],[1/3.,2/3.,0.25],[2/3.,1/3.,0.75]])equiv_atoms=[0,0,1,1]# high-pressure FeO: Fe at As positions, O at Ni positions => afmchem_symbols=["O","O","Fe","Fe"]spins=["n","n","u","d"]print(is_altermagnet(symops,positions,equiv_atoms,chem_symbols,spins))# MnTe: Mn at Ni positions, Te at As positions => amchem_symbols=["Mn","Mn","Te","Te"]spins=["u","d","n","n"]print(is_altermagnet(symops,positions,equiv_atoms,chem_symbols,spins))Determining the form of Anomalous Hall coefficient$ amcheck --ahc RuO2.vasp ========================================================== Processing: RuO2.vasp ---------------------------------------------------------- List of atoms: Ru [0. 0. 0.] Ru [0.5 0.5 0.5] O [0.30557999 0.30557999 0. ] O [0.19442001 0.80558002 0.5 ] O [0.80558002 0.19442001 0.5 ] O [0.69441998 0.69441998 0. ] Type magnetic moments for each atom ('mx my mz' or empty line for non-magnetic atom): 1 1 0 -1 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 Assigned magnetic moments: [[1.0, 1.0, 0.0], [-1.0, -1.0, 0.0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]] Magnetic Space Group: {'uni_number': 584, 'litvin_number': 550, 'bns_number': '65.486', 'og_number': '65.6.550', 'number': 65, 'type': 3} Conductivity tensor: [[ 'xx' 'xy' '-yz'] [ 'xy' 'xx' 'yz'] [ 'yz' '-yz' 'zz']] The antisymmetric part of the conductivity tensor (Anomalous Hall Effect): [['0' '0' '-yz'] ['0' '0' 'yz'] ['yz' '-yz' '0']] Hall vector: ['-yz', '-yz', '0']ContributorsAndriy Smolyanyuk[1], Libor Šmejkal[2] and Igor I. Mazin[3][1] Institute of Solid State Physics, TU Wien, 1040 Vienna, Austria[2] Johannes Gutenberg Universität Mainz, Mainz, Germany[3] George Mason University, Fairfax, USAHow to citeIf you're using theamcheckpackage, please cite the manuscript describing the underlying ideas:A tool to check whether a symmetry-compensated collinear magnetic material is antiferro- or altermagnetic.@misc{smolyanyuk2024tool,title={A tool to check whether a symmetry-compensated collinear magnetic material is antiferro- or altermagnetic},author={Andriy Smolyanyuk and Libor \v{S}mejkal and Igor I. Mazin},year={2024},eprint={2401.08784},archivePrefix={arXiv},primaryClass={cond-mat.mtrl-sci}}
amclient
amclientAMClient is an Archivematica API client library and Python package for making it easier to talk to Archivematica from your Python scripts. AMClient also acts as a command line application which can easily be combined with shell-scripts to perform the same functions as a Python script might.AMClient brings together the majority of the functionality of the two primary Archivematica components:Archivematica APIStorage Service APIBasic usage:amclient.py <subcommand> [optional arguments] <positional argument(s)>E.g.:amclient.pyclose-completed-transfers\--am-user-nametest234deffdf89d887a7023546e6bc0031167cedf6To see a list of all commands and how they are used, then runamclient.pywithout any arguments.To understand how to use an individual subcommand, simply run:amclient.py <subcommand>, the output will describe the input parameters for that command:E.g.:usage:amclientextract-file[-h][--ss-user-nameUSERNAME][--ss-urlURL][--directoryDIR][--saveas-filenameSAVEASFILENAME]ss_api_keypackage_uuidrelative_pathCalling the module from Python:E.g.:Python3.6.7(default,Oct222018,11:32:17)[GCC8.2.0]onlinuxType"help","copyright","credits"or"license"formoreinformation.>>>fromamclientimportAMClient>>>am=AMClient()>>>am.ss_url="http://127.0.0.1:62081">>>am.ss_user_name="test">>>am.ss_api_key="test">>>am.list_storage_locations()...jsonisoutputhere...CONTRIBUTINGFor information about contributing to this project please see the AMClientCONTRIBUTING.md
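Building on the interpreter session above, the short sketch below configures an AMClient for the Storage Service and pretty-prints whatever list_storage_locations() returns. The URL and credentials are placeholders taken from the example session, and the exact layout of the returned JSON is not documented here, so the snippet avoids assuming any particular keys.

```python
import json

from amclient import AMClient

am = AMClient()
am.ss_url = "http://127.0.0.1:62081"   # placeholder Storage Service URL
am.ss_user_name = "test"               # placeholder username
am.ss_api_key = "test"                 # placeholder API key

# list_storage_locations() returns decoded JSON from the Storage Service;
# print it verbatim rather than assuming a particular structure.
locations = am.list_storage_locations()
print(json.dumps(locations, indent=2))
```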
amcommon
Failed to fetch description. HTTP Status Code: 404
amconll
No description available on PyPI.
amcp-pylib
Python AMCP Client Libraryv0.2.1IntroductionWelcome to the AMCP client library repository for Python! The goal of this library is to provide simple and understandable interface for communication with CasparCG server.Installationpip install amcp_pylibUsage examplesBelow you can see various usage examples.Connecting to serverfromamcp_pylib.coreimportClientclient=Client()client.connect("caspar-server.local",6969)# defaults to 127.0.0.1, 5250Built-in support forasynciomodule:importasynciofromamcp_pylib.coreimportClientAsyncclient=ClientAsync()asyncio.new_event_loop().run_until_complete(client.connect("caspar-server.local",6969))Sending commandsfromamcp_pylib.coreimportClientfromamcp_pylib.module.queryimportVERSION,BYEclient=Client()client.connect()response=client.send(VERSION(component="server"))print(response)response=client.send(BYE())print(response)<SuccessResponse(data=['2.0.7.e9fc25a Stable'],code=201,code_description='VERSION')> <InfoResponse(data=['SERVER SENT NO RESPONSE'],code=0,code_description='EMPTY')>All supported protocol commands are listed and documented on CasparCG'swiki pages.Some commands may not be supported yet (in that case, please create issue (or pull ;) request).
amcrest
A Python 2.7/3.x module forAmcrest Camerasusing the SDK HTTP API. Amcrest and Dahua devices share similar firmwares. Dahua Cameras and NVRs also work with this module.Documentation:http://python-amcrest.readthedocs.io/InstallationPyPI$pipinstallamcrest--upgrade$eval"$(register-python-argcompleteamcrest-cli)"# To enable amcrest-cli autocomplete in the system:$echo'eval "$(register-python-argcomplete amcrest-cli)"'>/etc/profile.d/[email protected]:tchellomello/python-amcrest.git$./autogen.sh$makerpm$dnf/yuminstallamcrest-cli-NVR.rpmpythonX-amcrest-NVR.rpmUsagefromamcrestimportAmcrestCameracamera=AmcrestCamera('192.168.0.1',80,'admin','password').camera#Check software informationcamera.software_information'version=2.420.AC00.15.R\r\nBuildDate=2016-09-08'#Capture snapshotcamera.snapshot(0,"/home/user/Desktop/snapshot00.jpeg")<requests.packages.urllib3.response.HTTPResponseobjectat0x7f84945083c8>#Capture audiocamera.audio_stream_capture(httptype="singlepart",channel=1,path_file="/home/user/Desktop/audio.aac")CTRL-Ctostopthecontinuousaudiofloworuseatimer#Move camera downcamera.ptz_control_command(action="start",code="Down",arg1=0,arg2=0,arg3=0)#Record realtime stream into a filecamera.realtime_stream(path_file="/home/user/Desktop/myvideo")CTRL-CtostopthecontinuousvideofloworuseatimerCommand Line$manamcrest-clior$amcrest-cli--help# Saving credentials to file.$vim~/.config/amcrest.conf[patio]hostname:192.168.0.20username:adminpassword:123456port:80[living_room]hostname:192.168.0.21username:adminpassword:secretport:80$amcrest-cli--cameraliving_room--version-http-apiversion=1.40Text User Interface (TUI)Configure amcrest.conf and trigger amcrest-tui, make sure the user triggering amcrest-tui have access to framebuffer device or use sudo.NOTE:Execute it from console logins, like /dev/ttyX (Non X Window). Pseudo-terminals like xterm, ssh, screen and others WONT WORK.$vim~/.config/amcrest.conf[patio]hostname:192.168.0.20username:adminpassword:123456port:80[living_room]hostname:192.168.0.21username:adminpassword:secretport:80$amcrest-tuiSupportability MatrixCamerasModelTestedStatusResults/IssuesIPM-721YesworkingIPM-HX1YesworkingIP2M-841YesworkingIP2M-842YesworkingIP3M-941YesworkingIP3M-943YesworkingIP3M-956YesworkingIP3M-956EYesworkingIP3M-HX2YesworkingIP4M-1026BYesworkingIP4M-1041BYesworkingIP4M-1051BYesworkingIP5M-1176EBYesworkingIP8M-2496EBYesworkingIP8M-T2499EW-28MYesworkingNetwork Video Recorders (NVR)ModelTestedStatusResults/IssuesXVR DAHUA 5104SYesworkingIf you have different model, feel fee to contribute and report your results.HelpIf you need any help, please join our community on the Gitter channels available atGitter.
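As a hedged illustration of the Python usage shown above, the sketch below connects to a camera, prints its software information and saves one snapshot. It only reuses calls that appear in the examples (AmcrestCamera(...).camera, software_information, snapshot); the address, credentials and output path are placeholders.

```python
from amcrest import AmcrestCamera

# Placeholders: substitute your camera's address, port and credentials.
camera = AmcrestCamera("192.168.0.1", 80, "admin", "password").camera

# Firmware/software details reported by the device.
print(camera.software_information)

# Save a snapshot from channel 0 to disk.
camera.snapshot(0, "/tmp/amcrest_snapshot.jpeg")
```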
amcs
No description available on PyPI.
amd2pdf
This is another Markdown-to-PDF conversion tool. It supports the TOC markdown tag, and the generated output is page-indexed.
amd64
Failed to fetch description. HTTP Status Code: 404
amdahl
What does this package do?

This python module contains a pseudo-application that can be used as a black box to reproduce Amdahl's Law. It does not do real calculations, nor any real communication, so it can easily be overloaded. The application is installed as a python module with a shell script wrapper. The only requirement is MPI4PY.

Background

Amdahl's law posits that some unit of work comprises a proportion $p$ that benefits from parallel resources, and a proportion $s$ that is constrained to execute in serial. The theoretical maximum speedup achievable for such a workload is

$$ S = \frac{1}{s + p/N} $$

where $S$ is the speedup relative to performing all of the work in serial and $N$ is the number of parallel workers.

(ASCII plot omitted here: speedup $S$ versus number of workers $N$ for $p = 0.89$, $s = 0.11$; the curve rises from $S = 1$ at $N = 1$ to roughly $S = 5$ at $N = 10$, staying well below the dotted ideal-scaling line.)

"Ideal scaling" ( $p = 1$ ) would be the line $y = x$ (or $S = N$), represented in the plot by the dotted line.

This graph shows there is a speed limit for every workload, and diminishing returns on throwing more parallel processors at a problem. It is worth running a "scaling study" to assess how far away that speed limit might be for the given task.
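To make the formula concrete, here is a small stand-alone sketch (not part of the amdahl package, and no MPI involved) that evaluates the theoretical speedup for the $p = 0.89$ example used in the plot.

```python
def amdahl_speedup(p: float, n_workers: int) -> float:
    """Theoretical speedup S = 1 / (s + p/N), with s = 1 - p."""
    s = 1.0 - p
    return 1.0 / (s + p / n_workers)

p = 0.89  # parallel fraction from the example above
for n in (1, 2, 4, 8, 10, 100):
    print(f"N = {n:>3}: S = {amdahl_speedup(p, n):.2f}")

# As N grows without bound, S approaches 1/s, about 9.09 for p = 0.89:
# the "speed limit" referred to above.
```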
amdapi
AMDAPi Python SDKThe AMDAPi Python SDK package serves as a native python interface allowing for easy consumption of AMDAPi API services.InstallationQuick-Start GuideCreating a ClientAnalyzing a CallCall ParamsExampleRetrieving A CallSearching for Multiple CallsSearch paramsDefault SearchDeleting CallsReference DocsInstallationThe AMDAPi SDK can be installed from pip as follows:pipinstallamdapiQuick-Start GuideIn order to create a client and use the AMDAPi services, you must first be granted credentials fromAMDAPi.Once you have these credentials this Quick-Start guide can be followed to quickly understand the main functionality of the SDK.If further explanation is required the full documentation can be foundhere.Creating a ClientfromamdapiimportClientamdapi_id="XXXXXXXXXXXXXXXXXXXXXXXXXX"amdapi_secret="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"client=Client(amdapi_id,amdapi_secret)or credentials can be loaded into local environment variablesimportosfromamdapiimportClientfromamdapi.configsimportCLIENT_ID_ENV_NAME,CLIENT_SECRET_ENV_NAME# Client looks for the ID @ AMDAPI-CLIENT-IDos.environ[CLIENT_ID_ENV_NAME]="XXXXXXXXXXXXXXXXXXXXXXXXXX"# Client looks for the Secret @ AMDAPI-CLIENT-SECRETos.environ[CLIENT_SECRET_ENV_NAME]="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"client=Client()Output:<amdapi.Client|ClientID:XXXXXXXXXXXXXXXXXXXXXXXXXX|LastTokenRefresh:2022-04-2908:43:49>Analyzing a CallTo send a call to the AMDAPi Backend the following parameters are required.Note: Currently the audio file must be in a.wavformat.Call ParamsNameTypeAllowed ValuesDescriptioncall_idstrNAID of Call from your DBclient_idintNAID of Client from your DBagent_idintNAID of Agent from your DBcustomer_idintNAID of Customer from your DBsummaryboolNAWheather the call should be summarised during analysis.filenamestrNAFilename of the audio file.originstr['Inbound','Outbound']Defines whether the call was Outbound or Inbound.languagestr['en','en-in','fr']Primary language of the audio sent for analysis.agent_channelint[0,1]⚠ Required for stereo audio to specify the channel for the agent audio. 
Will be ignored in the case of mono-channel audio.ExampleThe following code segment demonstrates how you would analyze a call:filename="6c89833033cd57c3cfeb1ad8445821a6714d9bf6cd3613b723ac1cfb"file_path=f"{filename}.wav"params={"client_id":12345,"agent_id":12345,"filename":filename,"call_id":12345,"customer_id":12345,"origin":"Inbound","language":"en","summary":True,"agent_channel":1# <- Indicates that the audio is stereo and that the agent is on channel 1}withopen(file_path,'rb')asfile:call=client.analyze_call(file,**params)ACallobject containing the meta-data provided and the automatically generated uuid will be returned.Please allow a few minutes for the analysis of the call to be complete, and retrieve the analyzed call using the call UUID.Output:<amdapi.Call|UUID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX|Analyzed:False>Retrieving A CallRetrieving a call via UUID can be achieved as follows:call=client.get_call(call.uuid)Output:<amdapi.Call|UUID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX|Analyzed:True>Searching for Multiple CallsThe SDK allows for searching for all previously uploaded calls and will return results in a paginated format with max 350 calls per page.Search paramsCurrently these search parameters are supported, they may be leftNoneto get all calls.NameTypeAllowed ValuesDescriptionpage_numberintNAThe page numberagent_idintNAID of Client from your DBclient_idintNAID of Agent from your DBstart_datestr|datetime.datetime"DD/MM/YYYY"- If string is used.The start date of your search.end_datestr|datetime.datetime"DD/MM/YYYY"- If string is usedThe end date of your search.Examplesearch_params={"page_number":123,"agent_id":123,"client_id":123,"start_date":25/07/2021,"end_date":datetime.now(),}search=client.search_calls(**search_params)Output [No Calls Found]:<amdapi.SearchResult|current_page:None|is_last_page:None|n_calls0>Output [Calls Found]:<amdapi.SearchResult|current_page:1|is_last_page:False|n_calls350>Default SearchIf no Search Parameters are provided then all calls will be returned.search=client.search_calls()Output:<amdapi.SearchResult|current_page:1|is_last_page:False|n_calls350>Deleting Calls⚠WARNING - This action is irreversibleYou can delete a call and all analysis from the AMDAPi servers with the following:msg=client.delete_call(call.uuid)Output:'Call XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX Deleted Successfully'Reference DocsFind our full documentationhere.
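Because analysis takes a few minutes, a common pattern is to poll for the call until it has been analyzed. The hedged sketch below only reuses Client(...) and get_call() from the examples above; the credentials and UUID are placeholders, and the retry count and sleep interval are arbitrary choices.

```python
import time

from amdapi import Client

client = Client("XXXXXXXXXXXXXXXXXXXXXXXXXX", "XXXXXXXXXXXX")  # placeholder credentials
call_uuid = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"             # UUID returned by analyze_call()

for attempt in range(5):
    call = client.get_call(call_uuid)
    # The repr printed by the SDK shows whether analysis is done ("Analyzed: True").
    print(f"attempt {attempt}: {call}")
    time.sleep(60)
```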
amdapy
amdapy

Python package for AMDA. amdapy is a python package for accessing heliophysics data stored on the AMDA platform.

Installation

Install with pip:

pip3 install amdapy
amdfan
AmdFanIs a fork of amdgpu-fan, with security updates and added functionality. This is intended for stand-alone GPU's and not integrated APU's.Why fork?alternatives abandonedlacking required featuressecurity fixes not addressedbasic functionality not workingAmdgpu_fan abandonedAs of a couple years ago, and is no longer applying any security fixes to their project or improvements. There were also some bugs that bothered me with the project when I tried to get it up and running. Features missingThere are a number of features that I wanted, but were not available.Thresholds allow temperature range before changing fan speedFrequency setting to allow better controlMonitoring to be able to see real-time fan speeds and temperatureSecurity FixesThere are some un-addressed pull requests for some recent YAML vulnerabilities that are still present in the old amdgpu_fan project, that I’ve addressed in this fork.Basic functionalitySetting the card to system managed using the amdgpu_fan pegs your GPU fan at 100%, instead of allowing the system to manage the fan settings. I fixed that bug as well in this release.These are all addressed in Amdfan, and as long as I’ve still got some AMD cards I intend to at least maintain this for myself. And anyone’s able to help out since this is open source. I would have liked to just contribute these fixes to the main project, but it’s now inactive.DocumentationUsageUsage:amdfan.py[OPTIONS]Options:--daemonRunasdaemonapplyingthefancurve--monitorRunasamonitorshowingtempandfanspeed--manualManuallysetthefanspeedvalueofacard--configurationPrintsoutthedefaultconfigurationforyoutouse--servicePrintsouttheamdfan.servicefiletousewithsystemd--helpShowthismessageandexit.DaemonAmdfan is also runnable as a systemd service, with the providedamdfan.service.MonitorYou can use Amdfan to monitor your AMD video cards using the--monitorflag.ManualAlternatively if you don't want to set a fan curve, you can just apply a fan setting manually. Also allows you to revert the fan control to the systems default behavior by using the "auto" parameter.ConfigurationThis will dump out the default configuration that would get generated for/etc/amdfan.ymlwhen you first run it as a service. This allows you to configure the settings prior to running it as a daemon if you wish.Runningamdfan --configurationwill output the following block to STDOUT.#Fan Control Matrix.# [<Temp in C>,<Fanspeed in %>]speed_matrix: -[4,4]-[30,33]-[45,50]-[60,66]-[65,69]-[70,75]-[75,89]-[80,100]# Current Min supported value is 4 due to driver bug## Optional configuration options## Allows for some leeway +/- temp, as to not constantly change fan speed# threshold: 4## Frequency will change how often we probe for the temp# frequency: 5## While frequency and threshold are optional, I highly recommend finding# settings that work for you. I've included the defaults I use above.## cards:# can be any card returned from `ls /sys/class/drm | grep "^card[[:digit:]]$"`# - card0You can use this to generate your configuration by doingamdfan --configuration > amdfan.yml, you can then modify the settings and place it in/etc/amdfan.ymlfor when you would like to run it as a service.ServiceThis is just a convenience method for dumping out theamdfan.servicethat would get installed if you used a package manager to install amdfan. 
Useful if you installed the module viapip,pipenvorpoetry.Runningamdfan --servicewill output the following block to STDOUT.[Unit]Description=amdfancontroller[Service]ExecStart=/usr/bin/amdfan--daemonRestart=always[Install]WantedBy=multi-user.targetNoteMonitoring fan speeds and temperatures can run with regular user permissions.rootpermissions are required for changing the settings / running as a daemon.Recommended settingsBelow is the settings that I use on my machines to control the fan curve without too much fuss, but you should find a frequency and threshold setting that works for your workloads./etc/amdfan.ymlspeed_matrix: -[4,4]-[30,33]-[45,50]-[60,66]-[65,69]-[70,75]-[75,89]-[80,100]threshold:4frequency:5Installing the systemd serviceIf you installed via the AUR, the service is already installed, and you just need tostart/enableit. If you installed via pip/pipenv or poetry, you can generate your systemd service file with the following command.amdfan--service>amdfan.service&&sudomvamdfan.service/usr/lib/systemd/system/Starting the systemd serviceTo run the service, you can run the following commands tostart/enablethe service.sudosystemctlstartamdfan sudosystemctlenableamdfanAfter you've started it, you may want to edit the settings found in/etc/amdfan.yml. Once your happy with those, you can restart amdfan with the following command.sudosystemctlrestartamdfanChecking the statusYou can check the systemd service status with the following command:systemctlstatusamdfanBuilding Arch AUR packageBuilding the Arch package assumes you already have a chroot env setup to build packages.gitclonehttps://aur.archlinux.org/amdfan.gitcdamdfan/ makechrootpkg-c-r$HOME/$CHROOTInstalling the Arch packagesudopacman-U--asdepsamdfan-*-any.pkg.tar.zstInstalling from PyPiYou can also install amdfan from pypi using something like poetry.poetryinit poetryaddamdfan poetryrunamdfan--helpBuilding Python packageRequirespoetryto be [email protected]:mcgillij/amdfan.gitcdamdfan/ poetrybuild
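To illustrate how a speed_matrix like the one documented above maps a temperature to a fan percentage, here is a stand-alone sketch. It is not amdfan's actual daemon logic (which also handles threshold, frequency and the hwmon writes); linear interpolation between curve points is simply an assumption made for this illustration.

```python
# speed_matrix pairs are [<temp in C>, <fan speed in %>], as in /etc/amdfan.yml.
speed_matrix = [
    [4, 4], [30, 33], [45, 50], [60, 66],
    [65, 69], [70, 75], [75, 89], [80, 100],
]

def curve_speed(temp_c: float, matrix=speed_matrix) -> float:
    if temp_c <= matrix[0][0]:
        return matrix[0][1]
    for (t1, s1), (t2, s2) in zip(matrix, matrix[1:]):
        if t1 <= temp_c <= t2:
            # Linear interpolation between neighbouring curve points (an assumption).
            return s1 + (s2 - s1) * (temp_c - t1) / (t2 - t1)
    return matrix[-1][1]  # clamp above the hottest point

for t in (25, 50, 72, 85):
    print(f"{t}°C -> {curve_speed(t):.0f}% fan")
```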
amdfans
README athttps://github.com/jorrinpollard/amdfans
amdgpu-fan
[![CircleCI](https://circleci.com/gh/chestm007/amdgpu-fan.svg?style=svg)](https://circleci.com/gh/chestm007/amdgpu-fan)

# Fan controller for amdgpus

If you experience problems please create an issue.

## installation:

### pip

`sudo pip3 install .`

### Arch linux

Available in the aur as `amdgpu-fan`

## usage:

`sudo amdgpu-fan`

## configuration:

```
# /etc/amdgpu-fan.yml
# eg:
speed_matrix:  # -[temp(*C), speed(0-100%)]
- [0, 0]
- [40, 30]
- [60, 50]
- [80, 100]

# optional
# cards:  # can be any card returned from
#         # ls /sys/class/drm | grep "^card[[:digit:]]$"
# - card0
```
amdgpu-fan-ctrl
AMD GPU fan control for Linux

This software has been developed and tested on Debian 10 Buster with the 4.19.0-6-amd64 Linux kernel and a Radeon™ RX 590 GPU. It requires the amdgpu kernel module to be loaded.

Why this package?

automatic fan control provided by the GPU hardware keeps the fans of my RX 590 running all the time (albeit at a very low speed)
after some years of continued use, even at low speeds, fans develop irritating noises
most of the time I use my computer in a way that requires minimal GPU usage (writing code, browsing the web, etc)
without fans, the temperature of my RX 590 rarely rises above 50°C when doing non-GPU-intensive activities, even during the hottest summer days

Disclaimer

USE OF THIS SOFTWARE IS ENTIRELY YOUR RESPONSIBILITY. HARDWARE DAMAGE MAY RESULT FROM HIGH TEMPERATURES AS A RESULT OF USING THIS SOFTWARE.

Warning

This software puts your GPU into manual fan control (manual means not controlled directly by hardware). If you stop this software you should reboot your computer or manually reset your GPU fan control to automatic mode.

Installing this software

If you want to run the software manually or develop your own software making use of functions from the amdgpu_fan_ctrl Python module, you may install it with pip3 or simply copy the file into some directory of your choice. The most common use case will be to run this software as a systemd service, started at boot. To install this service, run the script ./install-systemd-service.sh as root.

License

This software is licensed under the MIT license.
amdgpu-pptable
AMDGPU PowerPlay table parserA Python library that converts AMDGPU PowerPlay tables to ctypes structs.Uses code generated from MIT-licensed AMDGPU Linux driver headers.For a Qt GUI editor seeamdgpu-pptable-editor-qtGenerating ctypes structsGenerated code is tracked in git, it is located insrc/amdgpu_pptable/generated.To re-generate it (with, maybe, different kernel sources):$ tox -e generate-ctypes -- -k path/to/kernel/sources
amdgpu-stats
amdgpu_stats

A Python module/TUI for AMD GPU statistics. Tested only on RX6000 series cards and (less so) with Ryzen CPU iGPUs. Please file an issue if you find an incompatibility!

Screenshots

Main screen / stats
Usage graphs
Logs

Installation

pip install amdgpu-stats

To use the TUI, run amdgpu-stats in your terminal of choice. For the module, see below!

Module

Introduction:

In [1]: import amdgpu_stats.utils

In [2]: amdgpu_stats.utils.AMDGPU_CARDS
Out[2]: {'card0': '/sys/class/drm/card0/device/hwmon/hwmon9'}

In [3]: amdgpu_stats.utils.get_core_stats('card0')
Out[3]: {'sclk': 640000000, 'mclk': 1000000000, 'voltage': 0.79, 'util_pct': 65}

In [4]: amdgpu_stats.utils.get_clock('core', format_freq=True)
Out[4]: '659 MHz'

For more information on what the module provides, please see:

ReadTheDocs
help('amdgpu_stats.utils') in your interpreter
The module source

Feature requests are encouraged 😀

Requirements

Only Linux is supported. Information is completely sourced from interfaces in sysfs. It may be necessary to update the amdgpu.ppfeaturemask parameter to enable metrics. This is assumed present for control over the elements being monitored. Untested without. See this Arch Wiki entry for context.
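As a small extension of the interpreter session above, the sketch below loops over every detected card and prints its core stats. It only uses AMDGPU_CARDS and get_core_stats() as shown; the conversion of the raw sclk/mclk values to MHz assumes they are reported in Hz, which matches the session output.

```python
import amdgpu_stats.utils as agpu

for card, hwmon_path in agpu.AMDGPU_CARDS.items():
    stats = agpu.get_core_stats(card)
    print(f"{card} ({hwmon_path})")
    print(f"  core clock : {stats['sclk'] / 1_000_000:.0f} MHz")   # assumes Hz
    print(f"  mem clock  : {stats['mclk'] / 1_000_000:.0f} MHz")   # assumes Hz
    print(f"  voltage    : {stats['voltage']} V")
    print(f"  utilisation: {stats['util_pct']} %")
```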
amdinfer
The AMD Inference Server is an open-source tool to deploy your machine learning models and make them accessible to clients for inference. Out-of-the-box, the server can support selected models that run on AMD CPUs, GPUs or FPGAs by leveraging existing libraries. For all these models and hardware accelerators, the server presents a common user interface based on community standards so clients can make requests to any using the same API. The server provides HTTP/REST and gRPC interfaces for clients to submit requests. For both, there are C++ and Python bindings to simplify writing client programs. You can also use the server backend directly using the native C++ API to write local applications.FeaturesSupports client requests usingHTTP/REST, gRPC and websocket protocolsusing an API based onKServe’s v2 specificationCustom applications can directly call the backend bypassing the other protocols using thenative C++ APIC++ library with Python bindingsto simplify making requests to the serverIncoming requests are transparentlybatchedbased on the user specificationsUsers can define how many models, and how many instances of each, torun in parallelThe AMD Inference Server is integrated with the following libraries out of the gate:TensorFlow and PyTorch models withZenDNNon CPUs (optimized for AMD CPUs)ONNX models withMIGraphXon AMD GPUsXModel models withVitis AIon AMD FPGAsA graph of computation including as pre- and post-processing can be written usingAKSon AMD FPGAs for end-to-end inferenceQuick Start Deployment and InferenceThe following example demonstrates how to deploy the server locally and run a sample inference. This example runs on the CPU and does not require any special hardware. You can see a more detailed version of this example in thequickstart.# Step 1: Download the example files and create a model repositorywgethttps://github.com/Xilinx/inference-server/raw/main/examples/resnet50/quickstart-setup.shchmod+x./quickstart-setup.sh./quickstart-setup.sh# Step 2: Launch the AMD Inference Serverdockerrun-d--net=host-v${PWD}/model_repository:/mnt/models:rwamdih/serve:uif1.1_zendnn_amdinfer_0.3.0amdinfer-server--enable-repository-watcher# Step 3: Install the Python client librarypipinstallamdinfer# Step 4: Send an inference requestpython3tfzendnn.py--endpointresnet50--image./dog-3619020_640.jpg--labels./imagenet_classes.txt# Inference should print the following: # # Running the TF+ZenDNN example for ResNet50 in Python # Waiting until the server is ready... # Making inferences... # Top 5 classes for ../../tests/assets/dog-3619020_640.jpg: # n02112018 Pomeranian # n02112350 keeshond # n02086079 Pekinese, Pekingese, Peke # n02112137 chow, chow chow # n02113023 Pembroke, Pembroke Welsh corgiLearn moreThe documentation for the AMD Inference Server is availableonline.Check out thequickstartonline to help you get started.SupportRaise issues if you find a bug or need help. Refer toContributingfor more information.LicenseThe AMD Inference Server is licensed under the terms of Apache 2.0 (seeLICENSE). The LICENSE file contains additional license information for third-party files distributed with this work. 
More license information can be seen in thedependencies.IMPORTANT NOTICE CONCERNING OPEN-SOURCE SOFTWAREMaterials in this release may be licensed by Xilinx or third parties and may be subject to the GNU General Public License, the GNU Lesser General License, or other licenses.Licenses and source files may be downloaded from:libamdinferlibbrotlicommonlibbrotlideclibbrotlienclibcareslibcom_errlibcryptolibdrogonlibefswlibgssapi_krb5libjsoncpplibk5cryptolibkeyutilslibkrb5libkrb5supportlibopencv_corelibopencv_imgcodecslibopencv_imgproclibossp-uuidlibpcrelibprometheus-cpp-corelibprotobuflibselinuxlibssllibtrantorNote: You are solely responsible for checking the header files and other accompanying source files (i) provided within, in support of, or that otherwise accompanies these materials or (ii) created from the use of third party software and tools (and associated libraries and utilities) that are supplied with these materials, because such header and/or source files may contain or describe various copyright notices and license terms and conditions governing such files, which vary from case to case based on your usage and are beyond the control of Xilinx. You are solely responsible for complying with the terms and conditions imposed by third parties as applicable to your software applications created from the use of third party software and tools (and associated libraries and utilities) that are supplied with the materials.
amdnet
AMDNetCode base for AMDNet described inhttps://doi.org/10.1126/sciadv.abf1754DescriptionIncorporation of physical principles in a machine learning (ML) architecture is a fundamental step toward the continued development of artificial intelligence for inorganic materials. As inspired by the Pauling's rule, we propose that structure motifs in inorganic crystals can serve as a central input to a machine learning framework. We demonstrated that the presence of structure motifs and their connections in a large set of crystalline compounds can be converted into unique vector representations using an unsupervised learning algorithm. To demonstrate the use of structure motif information, a motif-centric learning framework is created by combining motif information with the atom-based graph neural networks to form an atom-motif dual graph network (AMDNet), which is more accurate in predicting the electronic structures of metal oxides such as bandgaps. The work illustrates the route toward fundamental design of graph neural network learning architecture for complex materials by incorporating beyond-atom physical principles.Architecture:AMDNet architecture and materials property predictions. (A) Demonstration of the learning architecture of the proposed atom-motif dual graph network (AMDNet) for the effective learning of electronic structures and other material properties of inorganic crystalline materials. (B) Comparison of predicted and actual bandgaps [from density functional theory (DFT) calculations] and (C) comparison of predicted and actual formation energies (from DFT calculations) in the test dataset with 4515 compounds.The code is partially base from the paper "Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals" by Chen et al.https://github.com/materialsvirtuallab/megnetUsageTo get started, make sure you are using the same tensorflow and keras versions described in requirements.txt. Furthermore, you should manually download the data files because of the large file sizes.To train AMDNet from scratch, run python train_AMDnet.py --material_file data/material_data.pkl --motif_file data/motif_graph.pkl --save_name save/new_model.hdf5To test the pretrained network, run python evaluate_AMDnet.py --material_file data/material_data.pkl --motif_file data/motif_graph.pkl --load_name save/new_model.hdf5Other parameters:--material_file: dataset with material information--motif_file: motif information for each material--save_name: where to save the model--predict: attribute to predict (band_gap or formation_energy_per_atom)--epochs: maximum numbers of epochs--patience: stop training if no improvement for number of epochs--learning_rate: learning rate in training--batch_size: batch size during training--atom_cutoff: cutoff for atom distance that are considered connected in the graph--motif_cutoff: cutoff for motif distance that are considered connected in the graph--rbf_edge_dim_atom: dimension of RBF (radial basis function) for atoms--rbf_edge_dim_motif: dimension of RBF (radial basis function) for motifsDue to version changes and limited compatibility to older versions of tensorflow and keras, we can not provide the models used to recreate the results in the publication. However, the provided AMD model performs better than the one used in the publication with the same train/validation/test split. We observe an MAE on the test set of 0.41 (an improvement over the published 0.44).In some cases, the training does not converge and stops after NaN. 
In this case, the learning rate is reduced and training proceeds from the best solution (this is the same as in the original source code from MEGNet). In cases where this stops the training early (after less than 200 epochs), we recommend reducing the learning rate and retrying from scratch.
ame
Program for matrix analysis of structures based on the displacement method.
ameba
UNKNOWN
amebo
AMEBOAmebo is the simplest pubsub server to use, deploy, understand or maintain. It was built to enable communication between applications i.e. microservices or modules (if you are using monoliths), and hopes to be able to serve as a small but capable and simpler alternative to Kafka, RabbitMQ, SQS/SNS.STILL IN DEVELOPMENT: NOT READY FOR Production UseHow It WorksAmebo has only 4 concepts (first class objects) to understand or master.1. Microservices/ModulesThese are applications or modules you register on amebo - they send and receive events ;-)2. EventsSomething that can happen in an application, they are registered on Amebo by Microservices/Modules3. ActionsA HTTP request sent to Amebo to signal an event (action on event) has occured. Usually has a payload that is forwarded by Amebo to all applications subscribed to the offending event4. SubscribersHTTP endpoints registered to watch specific/particular events (by Microservices/Modules)GETTING STARTEDThis assumes you haveinstalledAmebo on your machine. Amebo requiresPython3.6+# the easy pathpipinstallamebo amebo--workers2--address0.0.0.0:8701# the hardway (manual installation) BUT not the only way... Sorry, I couldn't resist the pun ;-)gitclonehttps://github.com/tersoo/amebo mvamebo/to/a/directory/of/your/choosingexport$PATH=$PATH:/to/a/directory/of/your/choosing/amebo# add amebo location to your pathambeo-w2-a0.0.0.0:87011st : Tell Amebo about all your microservices or applicationsendpoint: /v1/microservicesSchemaExample Payload{"$schema":"","type":"object","properties":{"microservice":{"type":"string"},"passkey":{"type":"string","format":"ipv4 | ipv6 | hostname | idn-hostname"},"location":{"type":"web"}},"required":["microservice","passkey","location"]}{"microservice":"customers","passkey":"some-super-duper-secret-of-the-module-or-microservice","location":"http://0.0.0.0:3300"}2nd : Register events that can happen in the registered microservicesendpoint: /v1/eventsEndpoint JSON SchemaExample Payload{"type":"object","properties":{"event":{"type":"string"},"microservice":{"type":"string"},"schemata":{"type":"object","properties":{"type":{"type":"string"},"properties":{"type":"object"},"required":{"type":"array"}}}},"required":["event","microservice","schemata"]}{"event":"customers.v1.created","microservice":"customers","schemata":{"$id":"https://your-domain/customers/customer-created-schema-example.json","$schema":"https://json-schema.org/draft/2020-12/schema","type":"object","properties":{"customer_id":{"type":"number"},"first_name":{"type":"string"},"last_name":{"type":"string"},"email":{"type":"string","format":"email"}},"required":["customer_id","email"]}}3rd : Tell Amebo when an event occurs i.e. create an actionendpoint: /v1/actionsKeyDescriptioneventIdentifier name of the event. (As registered in the previous step.)deduperDeduplication string. 
used to prevent the same action from being registered twicepayloadJSON data (must confirm to the schema registerd with the event)Endpoint JSON SchemaExample Payload{"type":"object","properties":{"event":{"type":"string"},"microservice":{"type":"string"},"schemata":{"type":"string","format":"ipv4 | ipv6 | hostname | idn-hostname"},"location":{"type":"web"}},"required":["microservice","passkey","location"]}{"event":"customers.v1.created","microservice":"customers","schema":{"$id":"https://your-domain/customers/customer-created-schema-example.json","$schema":"https://json-schema.org/draft/2020-12/schema","type":"object","properties":{"event":{"type":"string"},"microservice":{"type":"string"},"schemata":{"type":"string","format":"ipv4 | ipv6 | hostname | idn-hostname"},"location":{"type":"web"}},"required":["microservice","passkey","location"]},"location":"http://0.0.0.0:3300"}Finally: Create an endpoint to receive action notifications from AmeboOther applications/modules within the same monolith can create handler endpoints that will be sent the payload with optional encryption if an encryption key was provided by the subscriber when registering for the event.Why? Advantages over traditional Message Oriented Middleware(s)Amebo comes complete with a Schema Registry, ensuring actions conform to event schema, and makes it easy for developers to search for events by microservice with commensurate schema (i.e. what is required, what is optional) as opposed to meetings with team mates continually.GUI for tracking events, actions, subscribers. Easy discovery of what events exist, what events failed and GUI retry for specific subscribersGossiping is HTTP native i.e. subscribers receive http requests automatically at pre-registered endpointsInfinite retries (stop after $MAX_RETRIES and $MAX_MINUTES coming soon)TriviaThe wordamebois a West African (Nigerian origin - but used in Ghana, Benin, Cameroon etc.) slang used to describe anyone that never keeps what you tell them to themselves. A talkative, never mind their business individual (a chronic gossip).
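To tie the three steps above together, here is a hedged sketch of step 3: signalling that a customers.v1.created event has occurred by POSTing an action to /v1/actions. The address comes from the getting-started command (0.0.0.0:8701); whether the endpoint requires authentication and what the response body looks like are not covered in this README, so both are left as assumptions, and the payload values are placeholders shaped to the example schema from step 2.

```python
import requests

action = {
    "event": "customers.v1.created",     # event registered in step 2
    "deduper": "customer-1-created",     # deduplication string
    "payload": {                         # must conform to the registered event schema
        "customer_id": 1,
        "first_name": "Ada",
        "last_name": "Lovelace",
        "email": "ada@example.com",
    },
}

response = requests.post("http://0.0.0.0:8701/v1/actions", json=action)
print(response.status_code, response.text)
```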
amee-distributions
No description available on PyPI.
amegaparser
No description available on PyPI.
ameilisearch
ameiliSearch

Asynchronous MeiliSearch API client that is 100% compatible with MeiliSearch Python.

upstream commit hash: e665923efc9735fd09994b0f01395ceb29051c71

Getting Started

Add Documents

import asyncio
import ameilisearch

async def main():
    documents = [
        {'id': 1, 'title': 'Carol', 'genres': ['Romance', 'Drama']},
        {'id': 2, 'title': 'Wonder Woman', 'genres': ['Action', 'Adventure']},
        {'id': 3, 'title': 'Life of Pi', 'genres': ['Adventure', 'Drama']},
        {'id': 4, 'title': 'Mad Max: Fury Road', 'genres': ['Adventure', 'Science Fiction']},
        {'id': 5, 'title': 'Moana', 'genres': ['Fantasy', 'Action']},
        {'id': 6, 'title': 'Philadelphia', 'genres': ['Drama']},
    ]
    async with ameilisearch.Client("http://127.0.0.1:7700", 'masterKey') as client:
        async with client.index("movies") as index:
            # If the index 'movies' does not exist, MeiliSearch creates it when you first add the documents.
            await index.add_documents(documents)  # => { "updateId": 0 }

asyncio.get_event_loop().run_until_complete(main())

Differences from synchronous clients

Existing API clients worked with requests; ameilisearch works with aiohttp.
Users need to manage client sessions. The http instance is in two places: Client and Index.
Use the async with syntax to close the session immediately after use, or close the session manually with await <client_or_index_instance>.http.session.close() once you are finished with it.
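Since ameiliSearch advertises full compatibility with the MeiliSearch Python client, a search against the index created above should follow the same shape as the upstream client's index.search(); note that the exact method name and its awaitable form are an assumption here, not something shown in this README.

```python
import asyncio

import ameilisearch

async def main():
    async with ameilisearch.Client("http://127.0.0.1:7700", "masterKey") as client:
        async with client.index("movies") as index:
            # search() mirrors the upstream MeiliSearch Python client (assumed awaitable here).
            results = await index.search("wonder")
            print(results)  # expected to include the "Wonder Woman" document

asyncio.get_event_loop().run_until_complete(main())
```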
ameise
AMEISE DatasetDev-Kit forProject AMEISErecords:DescriptionThe ameise-dataset is a development kit for the AMEISE dataset. It provides functionalities to unpack AMEISE record files and extract meta information and frames. The core functionality revolves around handling AMEISE record files with the .4mse extension.InstallationTo install the ameise-dataset, you can use pip:pip install ameiseGetting StartedTo get started with the dataset, you can refer to theGettingStarted.ipynbnotebook provided in the repository. This notebook will guide you through the basic functionalities and usage of the dataset.
ameliabot
No description available on PyPI.
amelpersik-custom-serializer
No description available on PyPI.
amen
No description available on PyPI.
amendements2json
Failed to fetch description. HTTP Status Code: 404
amendment-back-up
A Python class for file comparison and new file backup.Author: Yu Sun at University of SydneyEmail:[email protected]:https://github.com/sunyu0410/AmendmentBackUpMotivationsWhen it comes to backing up a large amoumt of data, it is often preferable to only copy the modified and new files, rather than simply coping the whole directory. TheAmendmentBackUp(ABU) class provides a simple interface to do that. No dependencies are required apart from the Python 3 standard library.The designSay we have two folders, a source folderdir1which you have your most recent files and a reference folderdir2which holds some of your previous backup. What theABUdoes is to compare all files indir1with those indir2, and copy the files to a third destination folderdst.A quick examplefromAmendmentBackUpimport*createDemo()abu=AmendmentBackUp(dir1=r"demo/dir1",dir2=r"demo/dir2",dst=r"demo/dst")abu.compare()abu.backup()ExplanationsSay you have thedir1anddir2(along with adstto copy the files to) with the following tree structures:dir1 (source, recently updated) | file1.txt | file2.txt (modified) | file3.txt (new) | +---subfolder1 | file4.txt | +---subfolder2 | file5.txt | file6.txt (modified) | \---subfolder3 (new) anyfile.txt dir2 (reference, e.g. a previous backup) | file1.txt | file2.txt | file7.txt | +---subfolder1 | file4.txt | \---subfolder2 file5.txt file6.txt dst (destination)In this case, we want to copy the modified and new file(s) indir1:file2.txt file3.txt subfolder2/file6.txtand new folder(s):subfolder3You can initiate anABUobject by callingabu=AmendmentBackUp(dir1=r'path_to/dir1',dir2=r'path_to/dir2',dst=r'path_to/dst')By the way, thecreateDemo()will create a demo folder with structures shown above. After initiation, call the followingABUmethods to proceed:abu.compare(): Compare files by walking through all files and folders indir1and check the existence of the corresponding counterparts indir2.If negative, it then adds the file or folder to the copy list;If positive, it compares two corresponding files (fromdir1anddir2respectively, shallow comparison using the time stamp and the file size);If two files don’t match, it will add the file to the copy list;Otherwise, it will continue to the next one.abu.backup(): Copy the files and folders in the copy list.Folders will be copied first. If the parent folder has been copied, any child folder will be skipped;Files will copied next. If the file falls under any folder copied in the previous step, it will be skipped.The metadata of the backup process will be stored in a folder called_abuwith a time stamp (year-month-day-hour-minute-second) in thedstfolder. These include- abu_log.txt Log file - abu_obj.pickle ABU object of this backup task - dir1_tree.txt Tree structure of dir1 (source) - dir2_tree.txt Tree structure of dir2 (reference) - dst_tree.txt Tree structure of dst (destination)ResultsHere is the tree structure ofdstafter the backup:dst | file2.txt | file3.txt | +---subfolder2 | file6.txt | +---subfolder3 | anyfile.txt | \---_abu_20190717101307 abu_log.txt abu_obj.pickle dir1_tree.txt dir2_tree.txt dst_tree.txtIf you want to add the files to the previous backdir2, you can simply setdsttodir2.LimitationsTheABUis best suited when the source folderdir1is a natural growth of the reference folderdir2. Whatnatural growthmeans is that there should not be too much renaming or move of the subfolders fromdir2todir1. Otherwise, using a version control system is probably a better option sinceABUwon’t track the history of any folder or file.
amendment-forecast
forecast_ensemble

forecast_ensemble combines multiple models into a single forecast and returns a combined package of models and their evaluated results, including a composite score.

Usage

The forecast is run using main.py with the following options.

Request Requirements

The required input format is a JSON with the following fields:

Required:

data_filepath: filepath of the time series data to run the ensemble on
target_column: name of the input column
aggregate_operation: string operation to apply to target_column to create the time series
output_directory: directory to return the result JSON to
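The fields above describe the request JSON, but no complete example is given, so here is a minimal sketch that writes one out. The file name, the field values and the way the file is handed to main.py (no flag is documented here) are all placeholders or assumptions.

```python
import json

request = {
    "data_filepath": "data/sales.csv",   # placeholder time series input
    "target_column": "revenue",          # placeholder column to forecast
    "aggregate_operation": "sum",        # operation applied to target_column
    "output_directory": "results/",      # where the result JSON is returned
}

with open("forecast_request.json", "w") as fh:
    json.dump(request, fh, indent=2)
```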
amendments2json
Failed to fetch description. HTTP Status Code: 404
amendments-bundle-parser
No description available on PyPI.
amenparallel
No description available on PyPI.
ament-black
The ability to check code against style conventions using black and generate xUnit test result files.
ament-clang-tidy
The ability to check code against style conventions using clang-tidy and generate xUnit test result files.
ament-lint
Providing common API for ament linter packages, e.g. thelintermarker for pytest.