package | package-description |
---|---|
addroot | addrootThis file will become your README and also the index of your
documentation.InstallpipinstalladdrootHow to useFill me in please! Don’t forget code examples:1+12 |
add_root_to_sys_path | No description available on PyPI. |
addrparser | Address parserSimple address parser with localization support.Note:This library is meant to be simple, light weight and easy to adapt. This is not the best and most optimized address parser out there.
Forstate of the artparser you should probably look athttps://github.com/openvenues/pypostalDocumentation:https://gispocoding.github.io/addr-parserGitHub:https://github.com/gispocoding/addr-parserPyPI:https://pypi.org/project/addrparser/Free software: MITSupported countriesCountryDescriptionDocumentationSuomi - FinlandSuomalaisten osoitteiden osoiteparserihttps://gispocoding.github.io/addr-parser/locales/fiInstallationpip install addrparserSetting up a development environmentSee instructions inCONTRIBUTING.mdUsageCommand line tool$addr-parse--help
Usage:addr-parse[OPTIONS]ADDRESSClitoolforparsingtextaddresses.Args:address(str):addresstext
Options:-l,--localeTEXTCountrycodeintwo-letterISO3166--helpShowthismessageandexit.$addr-parser"Iso Maantie 12b B 7"{"input":"Iso Maantie 12b B 7","result":{"street_name":"Iso Maantie","house_number":"12b","entrance":"B","apartment":"7"}}Library>>>fromaddrparserimportAddressParser>>>parser=AddressParser('fi')>>>address=parser.parse('Iso Maantie 12b B 7')>>>addressAddress(street_name='Iso Maantie',house_number='12b',entrance='B',apartment='7',post_office_box=None,zip_number=None,zip_name=None)CreditsThis project was created with inspiration fromwaynerv/cookiecutter-pypackageproject template. |
addrpy | No description available on PyPI. |
addr_seeker | UNKNOWN |
adds | No description available on PyPI. |
addscicrunch | This package provides a tool for adding SciCrunch terms to XML data. |
add-scihub-links | Add links to Sci-Hub in the reference section of a scientific articleContextPublishers increasingly include hypertext links in the reference section of scientific articles, which is a first step in improving the readability of scientific articles. However, most of these references are behind a paywall.This script attempts to add links to Sci-Hub next to the original links towards the publisher website.At this stage this will only work for PDFs with hypertext links for the references, using the DOI system. This is hopefully what all publishers will be doing in the near future. PLoS and eLIFE have done it systematically for recent articles, so it should work with their articles.InstallUsing pip (preferred):pip install add_scihub_linksOr clone the repository and runadd_scihub_links.py(requires pdfrw)Should work with both Python 2 and 3UseBasic use:add_scihub_links input.pdf output.pdfSee full documentation withadd_scihub_links -hExampleSee an example of the resulthere, obtained fromthis article. |
add_service | Effortlessly create and manage systemd startups with just one command. |
add-soham | IntroductionThis is a module which adds stuff. |
addsPy | No description available on PyPI. |
addsshkeys | Version: 0.5Released: 2023-04-20AddSSHkeysadds all of your keys to SSH Agent in one operation.
It is helpful if you routinely add more than one key to your agent.
It can work withAvendesorato keep your
passphrases secure.Please report all bugs and suggestions [email protected] StartedDownload and installAddSSHkeyswith:pip install addsshkeysOnce installed, you will need at least one configuration file.
Configurations are placed in: ~/.config/addsshkeys.
They areNestedTextfiles.
The default configuration isconfig; the default file isconfig.nt.The following settings may be given in your config files.ssh_addThe name of the command that adds keys to your SSH agent.
By default, ‘ssh-add’ is used.ssh_keysThis setting is required.
It contains a dictionary of dictionaries that contains information about each
key.
The primary dictionary contains a name and the values for each key.
The values are held in a dictionary that may contain three fields:pathsThis is required and contains the paths to one or more SSH private key files.
It may be a list of strings, or a single string that is split.
If a relative path is given, it is relative to ~/.ssh.accountThis gives the name of the Avendesora account that holds passphrase for the
keys.
If present, Avendesora will be queried for the passphrase.passphraseThis is required ifaccountis not given, otherwise it is optional.
Ifaccountis given, it is the name of the passphrase field in Avendesora,
which defaults to ‘passcode’.
If account is not given, it is the passphrase itself.
In this case, the settings file should only be readable by the user.config_file_maskAn integer that determines if a warning should be printed about the config file
permissions being too loose.
The permissions are only checked if the file is found to contain a passphrase.
Default is 077.
Set to 000 to disable the warning.
Set to 077 to generate a warning if the configuration directory is readable or
writable by the group or others.
Set to 007 to generate a warning if the directory is readable or writable by
others.auth_sock_pathIf given, the value of $SSH_AUTH_SOCKET is written to the specified path.
This can be useful when running SSH related commands in cron and anacron
scripts.Here is an example configuration file:ssh_keys:
primary:
paths: primary-ed25519 primary-rsa
account: primary-ssh-key
digitalocean:
paths: digitalocean
account: digitalocean-ssh-key
github:
paths: github
passphrase: canard apply trousseau forgive
backups:
paths: dumper
account: dumper-ssh-key
# assure config file is only readable by me
config_file_mask: 077
# used to provide path to SSH authorization socket to scripts run by cron
auth_sock_path: ~/.ssh/auth-sockUnderscores can be replaced by spaces in all keys.Running AddSSHkeysOnce configured, you can runAddSSHkeyswith the default configuration using:addsshkeysAnd you can run it with a particular configuration using:addsshkeys <config>where<config>is the name of the configuration you wish to use (no need to
give the .nt suffix).
In this way you can have several bundles of keys that you can load as needed.ReleasesLatest Development Version:Version: 0.5Released: 2023-04-200.5 (2023-04-20)addedauth_sock_path.0.4 (2020-10-19)fixconfig_file_mask.0.3 (2020-10-19)allow config file to end with .nt suffix.0.2 (2020-10-14)update to latest version of NestedText0.1 (2020-08-31)convert to NestedText for settings file. |
addstartup | Add StartupHi . This tool is designed to make it easy for you to add files to your Windows startupInformationOur social network addressYoutube :https://www.youtube.com/channel/UCGsKXfbCyhZoLIRukYUQyYQInstagram :https://www.instagram.com/mdtrackers/RunYou need to have winreg Library installed by default in PythonThen you have to add your file to the Windows startup with this command :importaddstartup
addstartup.add("Custom name","Your file address.exe")Author : MD Trackers |
add_staves | Analytical Stave AppenderAdd analytical staves to a score.How to install?Conveniently install the tool withpipand use it from your command line:pip install add_stavesWindowsMake surepythonis installed on your machine. This can be tested from the
terminal with:python --versionIf it isn't installed yet, you will be prompted to install it from the store.
After the installation, pip will output a warning if its installation directory
is not available in the global PATH.[!Note]
I don't have access to a Windows machine. I hope it works in most cases.Add a path to PATHOpen the Start menu and search for “Edit the system environment variables”, or type “Environment Variables” into the search bar and select “Edit the system environment variables” from the results.In the System Properties window, click on the “Environment Variables…” button.In the Environment Variables window, under the “System variables” section, locate the variable named “Path” and select it. Then, click on the “Edit…” button.In the Edit Environment Variable window, click on the “New” button.Enter the path you want to add in the provided field. Make sure to type the directory containing the executable files you want to access globally.Click “OK” to close each of the open windows.Restart your shell, in order for the new path to be picked up.BrissBriss depends on a Java runtime. When you open it you will be prompted to install it (if it's not already on your system).Usageadd-staves path/to/your/score.pdfFurther information about its usage can be found in its help:add-staves --help.Separate a score into systemsYou can use any program of your choice in order to separate a score into systems.[!Important]
Make sure that each system or part you want to append analytical staves to,
is its own page in the PDF.The following tool works quite well, but any other suggestions are
much appreciated.BrissBRISSis a cross-platform application for
cropping PDF-files. By default, it will try to find common areas on all the
pages and overlay them in the interface, so that you only have to declare
the area to crop once and not on all pages. However, this doesn't work
particularly well for scores. This behaviour can be circumvented by passing
a range from the first to the last page (e.g. "1-4") to the dialog showing
immediately after loading a document.InstallationOn a Mac,brisscan be installed and started from the command line with:brew install briss
briss path/to/score.pdfFor usage on Windows, an executable can be downloadedhere. After unzipping, double-clicking will
launch Briss.[!Note]
Briss needs a Java runtime to be installed on the system. You will be prompted to
install it, if it doesn't exist. |
addsub | My first Python package with a slightly longer description |
addsum | No description available on PyPI. |
add-swap | No description available on PyPI. |
add-swap-space | add-swap-spaceA Python package to add swap space to a Linux systemDescriptionThis package provides a command-line interface to add swap space to a Linux system.It prompts the user for the desired swap size in GB and then adds the swap space using the dd and mkswap commands. After the swap space has been added, it updates/etc/fstabto make the swap permanent.Download & Installpip install add-swap-spaceNote:If pip is unrecognized try pip3 instead.Usagesudo python3 -m add_swap_spaceNote:Make sure you have pip install under root user as updating /etc/fstab requires root permission.ScreenshotFeedback/IssueHave a feedback, feature request, known bug, please report it at thisissue page |
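Below is a rough, illustrative sketch of the manual steps that the add-swap-space description above automates (dd, mkswap, swapon, then an /etc/fstab entry); it is not the package's actual code, and the swapfile path and block size are assumptions.

```python
# Illustrative only: the manual swap-creation flow described above (run as root on Linux).
import subprocess

def add_swap(size_gb: int, swapfile: str = "/swapfile") -> None:
    # allocate the file with dd (1 MiB blocks)
    subprocess.run(["dd", "if=/dev/zero", f"of={swapfile}", "bs=1M",
                    f"count={size_gb * 1024}"], check=True)
    subprocess.run(["chmod", "600", swapfile], check=True)  # swap files must not be world-readable
    subprocess.run(["mkswap", swapfile], check=True)        # format the file as swap
    subprocess.run(["swapon", swapfile], check=True)        # enable it immediately
    # append an fstab entry so the swap survives reboots
    with open("/etc/fstab", "a") as fstab:
        fstab.write(f"{swapfile} none swap sw 0 0\n")

if __name__ == "__main__":
    add_swap(int(input("Swap size in GB: ")))
```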
addTest | No description available on PyPI. |
add-test-20231130 | add_test_20231130 套件使用說明這是一個簡單的 Python 套件,提供一個函式add(a, b),可以將兩個數字相加並返回結果。您也可以直接從命令行執行這個套件,例如:python3 -m add_test_20231130 2 5這將輸出7。 |
add-testing | test-publishThis is a testing repository for publish, release, changelog github actions. |
addthalesnumbers | addthalesnumbersadd number for thalesFree software: MIT licenseDocumentation:https://addthalesnumbers.readthedocs.io.FeaturesTODOCreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template.History0.1.0 (2022-11-28)First release on PyPI. |
addthis | A Python wrapper for theAddThis Analytics API.RequirementsPython 2.6, 2.7 or 3.2+python-requestslibraryInstallationInstall from PyPI:pip install addthisUsagefrom addthis import Addthis
# create an AddThis instance using userid and password from your AddThis account and optionally provide a pubid
addthis = Addthis(userid="YOUR_USER_ID", password="YOUR_PASSWORD", pubid="YOUR_PUB_ID")
# get the number of shares for the last day
print addthis.shares.day()
# get the number of shares by day for the last week
print addthis.shares.day(period="week")You can see a full description of all supported metrics and dimensions athttp://support.addthis.com/customer/portal/articles/381264-addthis-analytics-apiA few more examplesHow many times was my content shared on Twitter, by day, over the last week?>>> addthis.shares.day(period="week", service="twitter")What were my top shared urls for the pubid=”MY_PUB_ID”?>>> addthis.shares.url(pubid="MY_PUB_ID")How many users shared my content this month, broken down by their interests?>>> addthis.sharers.interest(period="month")Which sharing services sent the most clicks back to my site this week?>>> addthis.clicks.service(period="week")ExceptionsAddthisValidationErrorAddthis object expects to be called with 2 parameters - “metric” and “dimension”:addthis.<metric>.<dimension>()For example:>>> addthis.shares.day() # "shares" is a metric and "day" is a dimensionIf it gets another number of parameters (e.g. addthis.shares() or addthis.shares.day.week()) it will raise anAddthisValidationError.from addthis import Addthis, AddthisValidationError
addthis = Addthis(userid="YOUR_USER_ID", password="YOUR_PASSWORD", pubid="YOUR_PUB_ID")
try:
addthis.shares()
except AddthisValidationError as e:
print e # "Incorrect number of parameters are given. Expected 2 but got 1."AddthisErrorAddthisErroris raised when AddThis service returns a response with a HTTP status code other than 200. The exception object has 4 attributes:status_code: Code from the HTTP response.code,message,attachment: Error attributes from the AddThis response body. (see the “Error” section in theAddThis Analytics API documentationfor more information).from addthis import Addthis, AddthisError
addthis = Addthis(userid="INCORRECT_USER_ID", password="INCORRECT_PASSWORD", pubid="INCORRECT_PUB_ID")
try:
addthis.shares.day()
except AddthisError as e:
print e # "401 Error (code = '80', message='authentication failed', attachment='{u'nonce': None, u'realm': u'AddThis', u'opaque': None})'."
print e.status_code # 401
print e.code # 80
print e.message # "authentication failed"
print e.attachment # {u'nonce': None, u'realm': u'AddThis', u'opaque': None} |
addthree | travis-practicetravis ci practice |
add-three-nevergonna | No description available on PyPI. |
addthreenumbers | No description available on PyPI. |
addtobuildlist | UNKNOWN |
addtopath | addtopathIntroductionaddtopathis a CLI program which allows you to easily add a directory to your PATH using the terminal on Windows.RequirementsYou need to have Python 3.6 or higher installed. This will allow you to installaddtopathwith Python's package manager,pip.How to installTo installaddtopathwithpip:pipinstalladdtopathHow to useAfter installation withpip, theaddtopathexecutable should be available on the PATH. It's very easy to use: just run it with a directory as an argument, to add that directory to the PATH.To add the current working directory to the PATH, run:addtopath.It works with relative paths:addtopath..It also works with the~symbol in Powershell, for example:addtopath~/scriptsYou can, of course, supply the absolute path to your target directory:addtopath"C:\Program Files\SomeProgram"User and system PATHsaddtopathadds to theuserpath by default. This doesn't require admin permissions, and is usually sufficient. However, you can instead add to thesystempath using the-sor--systemflag.For example:addtopath.-sNote:This requires an administrator Powershell or Command Prompt. |
add-trailing-comma | add-trailing-commaA tool (and pre-commit hook) to automatically add trailing commas to calls and
literals.Installationpipinstalladd-trailing-commaAs a pre-commit hookSeepre-commitfor instructionsSample.pre-commit-config.yaml:-repo:https://github.com/asottile/add-trailing-commarev:v3.1.0hooks:-id:add-trailing-commamulti-line method invocation style -- why?# Sample of *ideal* syntaxfunction_call(argument,5**5,kwarg=foo,)the initial paren is at the end of the lineeach argument is indented one level further than the function namethe last parameter (unless the call contains an unpacking
(*args/**kwargs)) has a trailing commaThis has the following benefits:arbitrary indentation is avoided:# I hear you like 15 space indents# oh your function name changed? guess you get to reindent :)very_long_call(arg,arg,arg)adding / removing a parameter preservesgit blameand is a minimal diff:# with no trailing commasx(- arg+ arg,+ arg2)# with trailing commasx(arg,+ arg2,)Implemented featurestrailing commas for function callsx(arg,- arg+ arg,)trailing commas for tuple / list / dict / set literalsx = [- 1, 2, 3+ 1, 2, 3,]trailing commas for function definitionsdef func(arg1,- arg2+ arg2,):async def func(arg1,- arg2+ arg2,):trailing commas forfromimportsfrom os import (path,- makedirs+ makedirs,)trailing comma for class definitionsclass C(Base1,- Base2+ Base2,):passtrailing comma for with statementwith (open('f1', 'r') as f1,- open('f2', 'w') as f2+ open('f2', 'w') as f2,):passtrailing comma for match statementmatch x:case A(1,- 2+ 2,):passcase (1,- 2+ 2,):passcase [1,- 2+ 2,]:passcase {'x': 1,- 'y': 2+ 'y': 2,}:passtrailling comma for PEP-695 type aliasesdef f[- T+ T,](x: T) -> T:return xclass A[- K+ K,]:def __init__(self, x: T) -> None:self.x = xtype ListOrSet[- T+ T,] = list[T] | set[T]unhug trailing parenx(arg1,- arg2)+ arg2,+)unhug leading paren-function_name(arg1,- arg2)+function_name(+ arg1,+ arg2,+)match closing brace indentationx = [1,2,3,- ]+]remove unnecessary commasyes yes, I realize the tool is calledadd-trailing-comma:laughing:-[1, 2, 3,]-[1, 2, 3, ]+[1, 2, 3]+[1, 2, 3] |
addtwo | Add Two Numbers DemoA Python package to add two numbersUsagedef sum(a,b):
c=a+b
return c |
add-two-num | A package to perform arithmetic operations |
addtwonumber | Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content. |
add-two-numbers | No description available on PyPI. |
addu | A.D.D.U - A Dumb Docker UserADDU is a CLI tool for building ROS docker images and managing docker development environments.FeaturesCreate a new workspace from an available docker imageAutomatically build a new docker image for a workspaceInstall a code editorManage your workspacesInstallationYou can install addu using pip:pipinstalladduUsageBasic use of addu:addu-cliA terminal interface will appear; you can navigate through the options by inputting one of the available options.Future workAs it is, ADDU is not really a command line interface, but a cute terminal app; in any case addu should be available to
use as a true CLI, so future work will be to implement the same app with argparse or click like:adducreateaddurun<workspace_name>
addurm<workspace_name>
addulsaddu--help |
addup | No description available on PyPI. |
adduserpath | Ever wanted to release a cool new app but found it difficult to add its
location to PATH for users? Me too! This tool does that for you on all
major operating systems and does not require elevated privileges!Fear not, this only modifies the user PATH; the system PATH is never
touched nor even looked at!Table of ContentsInstallationCommandsAPILicenseInstallationadduserpath is distributed onPyPIas a universal
wheel and is available on Linux/macOS and Windows and supports
Python 2.6-2.7/3.3+ and PyPy.$pipinstalladduserpathCommandsOnly 3!$userpath-hUsage:userpath[OPTIONS]COMMAND[ARGS]...Options:--versionShowtheversionandexit.-h,--helpShowthismessageandexit.Commands:appendAppendstotheuserPATHprependPrependstotheuserPATHverifyChecksiflocationsareintheuserPATHAPI>>>importuserpath>>>location=r'C:\Users\Ofek\Desktop\test'>>>>>>userpath.in_current_path(location)False>>>userpath.in_new_path(location)False>>>userpath.append(location)True>>>userpath.in_new_path(location)True>>>userpath.need_shell_restart(location)TrueLicenseuserpath is distributed under the terms of bothMIT LicenseApache License, Version 2.0at your option. |
addwrohan | No description available on PyPI. |
addydaddy | No description available on PyPI. |
addy-mpc | Multi-Party Computation Cryptography - Final Projectfinish writing the development process api and demo and screenshots adding testsfix the SSD documenthttps://docs.google.com/document/d/1oWhDiyfaaCHef23QWVzcB4Yah-8EvjGF3wODgSoSex4/edit#Multi-Party Computation Cryptography - Final ProjectAboutWhat is Multi-Party Computation (MPC)?AuthorsProject OverviewProject GoalIntroductionMethods & AlgorithmsDesign ConsiderationsSelected approachPseudo Code Bit ORProcedure PrivacyPreservingBitOr:InfrastractureUser InterfaceDevelopment proccessAPI ReferenceGet itemadd(num1, num2)AppendixLicenseTech StackUsage/ExamplesDemoEnvironment VariablesRun LocallyFiles & Project structureTable of contents generated with markdown-tocAboutThis project is an implementation of a secure multi-party protocol for the secure set-union problem and the secure all-pairs shortest path problem. The protocol is devised from existing literature and is tailored for enhanced efficiency in a semi-honest setting with a dishonest majority.What is Multi-Party Computation (MPC)?Multi-Party Computation (MPC)is a subfield of cryptography that enables multiple entities to jointly compute a function over their inputs while keeping those inputs private. In the context of this project, we focus on a 2-party computation, where both entities share inputs and follow the MPC protocol, ensuring the privacy of their inputs.Without the intervention of a server (Third party) in the proccess.Authors@Dolev Dublon@Yakov Khodorkovski@Daniel Zaken@Aviad GilboaProject OverviewProject GoalThe primary objective of this project is to implement a secure multi-party protocol that is specifically designed for the secure set-union problem and the secure all-pairs shortest path problem. Our protocol aims to achieve greater efficiency than generic MPC protocols, especially in semi-honest settings with a dishonest majority. We base our approach on existing research by Justin Brickell and Vitaly Shmatikov, which you can accesshere.IntroductionOur protocol deals with two semi-honest groups. Since the late 1980s, general protocols have theoretically allowed secure computation in polynomial time and with a security parameter, enabling both players to compute safely under computational complexity assumptions. While these general protocols are theoretically efficient, they are not always practically efficient. Therefore, people have been trying to create specific security protocols for specific functions that are more efficient than the general protocols.The use of various generic libraries, such as YAO, and GMW, has proven to be less efficient, prompting efforts to develop more efficient approaches. We will implement the All-Pairs Shortest Path functionality to contribute to the ecosystem of implementations, aiming to create more efficient implementations in this domain.Methods & AlgorithmsDesign ConsiderationsThere were two algorithms for the set union to implement in our protocol:A provided pseudocode that utilized YAO and GMW for the calculation of the minimum using a generic library. 
However, this did not fit with our chosen programming language.A tree pruning method that utilized ElGamal and BitOr to reveal information securely.Selected approachWe have decided to implement the BitOr operation to achieve a union without relying on a generic library.ImageCryptographerLinkElgamalWikipediaYaoWikipediabecause the iterative method required using a generic library to calculate the minimum in a secure way.Pseudo Code Bit ORProcedure PrivacyPreservingBitOr:Alice initializes:Selects cyclic group $G$ of prime order $q$Chooses $g$ (quadratic residue) and large prime p $(p=2q+1)$Chooses private key $k ∈ {0, ..., q-1}$Picks random $r ∈ {2, ..., q-1}$If Alice's bit is $0$, calculates $C_a = (g^r, g^{(kr)})$If Alice's bit is $1$, calculates $C_a = (g^r, g\cdot g^{(kr)})$Alice sends $(C_a, q, g, g^k)$ to Bob, keeping k privateBob receives $(C_a, q, g, g^k)$ and unpacks $C_a$ into $(α, β)$Picks random $r' ∈ {2, ..., q-1}$If Bob's bit is $0$, calculate C_b = (α^r', β^r')If Bob's bit is $1$, calculate C_b = (α^r', g^r'*β^r')Bob sends $C_b$ back to AliceAlice receives $C_b$, unpacks it into $(γ, δ)$Calculates $b = \frac{δ}{γ^k}$If $b = 1$, returns $0$If $b ≠ 1$, returns $1$
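To make the steps above concrete, here is a toy Python sketch of the PrivacyPreservingBitOr procedure, run over a small safe prime so the numbers stay readable. It only illustrates the protocol logic described above; the parameter sizes, function names and in-process "message passing" are assumptions, not the project's actual implementation.

```python
# Toy sketch of PrivacyPreservingBitOr (ElGamal-based OR of two private bits).
import random

def alice_encrypt(bit, q, g, p, k):
    """Alice encrypts her bit as C_a = (g^r, g^bit * g^(k*r))."""
    r = random.randint(2, q - 1)
    alpha = pow(g, r, p)
    beta = pow(g, k * r, p)
    if bit == 1:
        beta = (g * beta) % p
    return alpha, beta

def bob_rerandomize(bit, c_a, q, g, p):
    """Bob raises both components to r' and, if his bit is 1, multiplies in g^r'."""
    alpha, beta = c_a
    r2 = random.randint(2, q - 1)
    gamma = pow(alpha, r2, p)
    delta = pow(beta, r2, p)
    if bit == 1:
        delta = (pow(g, r2, p) * delta) % p
    return gamma, delta

def alice_decrypt(c_b, p, k):
    """Alice computes b = delta / gamma^k; b == 1 means both bits were 0."""
    gamma, delta = c_b
    b = (delta * pow(gamma, -k, p)) % p  # modular inverse needs Python 3.8+
    return 0 if b == 1 else 1

# Demo with a toy safe prime p = 2q + 1 (p = 23, q = 11) and quadratic residue g = 4.
p, q, g = 23, 11, 4
k = random.randint(0, q - 1)  # Alice's private key
for a_bit in (0, 1):
    for b_bit in (0, 1):
        c_a = alice_encrypt(a_bit, q, g, p, k)
        c_b = bob_rerandomize(b_bit, c_a, q, g, p)
        assert alice_decrypt(c_b, p, k) == (a_bit | b_bit)
```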
insert diagram hereInfrastractureUsing Flask and Gunicorn servers on cloud platforms of Microsoft Azure. we represent parties involved in the secure computation.User InterfaceWe built a library mainly for software developers, but included visual aids and infrastructure for easier understanding. It's designed to show anyone how our system works, especially in Multi-party computation. Our simple interface gives a clear view of the protocol's progress and even includes a log output to follow the entire process.Insert screens here[ QR code for GitHub repo on bottom-right ]Development proccessAt first the Development process was to read a lot and get deep into the article of Justin Brickell and Vitaly Shmatikov, which you can accesshere.
and we met every week from the beginning reading it, and also reading the articleA Proof of Security of Yao’s Protocol for Two-Party Computationby Yehuda Lindell and Benny Pinkas, and we had to learn a lot about the secure computation proofs and theorySemi-Honest Adversariesand this lecture as well.
This first step is to get into the field so we can understand the problem more deeply.
then we wrote theSynchronization was one of our biggest obstacle for us as a team and for the threads in the program between the clientsAPI ReferenceGET /api/datapoint| Parameter | Type | Description || :-------- | :------- | :------------------------- ||api_key|string|Required. Your API key |Get itemGET /api/items/${id}| Parameter | Type | Description || :-------- | :------- | :-------------------------------- ||id|string|Required. Id of item to fetch |add(num1, num2)Takes two numbers and returns the sum.AppendixAny additional information goes hereLicenseMITTech StackClient:HTML CSS JAVASCRIPT, Jinja engine for flaskServer:PYTHON FLASKUsage/ExamplesimportComponentfrom'my-project'functionApp(){return<Component/>}DemoInsert gifandimage for the video demo on youtubeDeploymentget a user on Microsoft AzureTo deploy this project rungunicornEnvironment VariablesTo run this project, you will need to add the following environment variables to your .env fileAPI_KEYANOTHER_API_KEYRun LocallyClone the projectgitclonehttps://link-to-projectGo to the project directorycdmy-projectInstall dependenciespipinstallflask.....Start the servernpmrunstartFiles & Project structureDocumentsVision Statement - MPC protocol implementationVision StatementSRD DocumentPrivacy-Preserving Graph Algorithms in the Semi-honest Modelby Justin Brickell and Bitaly Shmatikov
The University of Texas at Austin, Austin TX 78712, USAA Proof of Security of Yao’s Protocol for Two-Party Computationby Yehuda Lindell∗ Benny Pinkas June 26, 2006for original code go herehttps://github.com/Dolev-Dublon/Final-Project-Multiple-Party-Computation-Cryptograpynotesnotes for final projecthttps://docs.google.com/document/d/1d35ExjbP7p1KzuKcKIswkkOI2wedrJmxIr1OTyWHQyg/edit#vistion statements noteshttps://docs.google.com/document/d/1xL3wtaWKGzi0FGTweE9COot-9TdP_mHv4BdzA2mkEAg/edit?usp=sharingSSDhttps://docs.google.com/document/d/1oWhDiyfaaCHef23QWVzcB4Yah-8EvjGF3wODgSoSex4/edit?usp=sharingSRDhttps://docs.google.com/document/d/1w5ZWddqB6iOTN4Ku2wOdZLmmxYpQKhgURkkDfFiViZY/edit?usp=sharing |
addyson | Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content. |
addytool | addytool: An API Wrapper for AddigyBeware:addytoolis a beta tool with the capacity to do great harm to your Addigy environment. You are responsible for your own actions!To more easily leverage Addigy's API,addytoolabstracts the details of the API to allow admins to treat endpoints as Python objects.Requirementsaddytoolstores your API creds in macOS's keychain usingkeyring. You'll also needrequests. Both can be acquired throughpip, untiladdytoolitself is in pip.To getpip, runsudo easy_install pip.Then...pip install --user keyringpip install --user requests |
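Since the entry above says addytool keeps API credentials in the macOS keychain via keyring, the following tiny sketch shows that underlying mechanism using the keyring library directly; the service and account names are made up here and this is not addytool's own code.

```python
import keyring

# Store an API credential in the system keychain (service/account names are hypothetical).
keyring.set_password("addigy-api", "client-id", "SUPER-SECRET-TOKEN")

# Later, retrieve it without keeping secrets in plain-text config files.
token = keyring.get_password("addigy-api", "client-id")
print(token is not None)
```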
ade | Performs the Differential Evolution (DE) algorithm
asynchronously. With a multiprocess evaluation function running on a
multicore CPU or cluster,adecan get the DE processing done several
times faster than standard single-threaded DE. It does this without
departing in any way from the numeric operations performed by the
classic Storn and Price algorithm. You can use either a randomly
chosen candidate or the best available candidate.You get a substantial multiprocessing speed-up and the
well-understood, time-tested behavior of the classic DE/rand/1/bin or
DE/best/1/bin algorithm. (You can pick which one to use, or, thanks to
a specialadefeature, pick a probabilistic third version that
effectively operates at a selected midpoint between the extremes of
"random" and "best.") The underlying numeric recipe is not altered at
all, but everything runs a lot faster.Theadepackage also does simple and smart population initialization,
informative progress reporting, adaptation of the vector differential
scaling factorFbased on how much each generation is improving, and
automatic termination after a reasonable level of convergence to the
best solution.Comes with a couple of small and informativeexample
files, which you can install
to anade-examplessubdirectory of your home directory by typingade-examplesas a shell command.For a tutorial and more usage examples, see theproject
pageatedsuom.com. |
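For readers unfamiliar with the underlying recipe, here is a minimal NumPy sketch of one generation of the classic DE/rand/1/bin algorithm that ade parallelizes. It does not use ade's API or its asynchronous machinery, and the population size, F and CR values are just illustrative defaults.

```python
# Minimal, single-threaded DE/rand/1/bin generation step (illustrative only).
import numpy as np

def de_rand_1_bin_step(population, fitness, func, F=0.8, CR=0.9, rng=None):
    """Produce the next generation for a minimization problem."""
    rng = np.random.default_rng() if rng is None else rng
    n_pop, n_dim = population.shape
    new_pop, new_fit = population.copy(), fitness.copy()
    for i in range(n_pop):
        # pick three distinct candidates, none of them the target i
        choices = [j for j in range(n_pop) if j != i]
        a, b, c = population[rng.choice(choices, size=3, replace=False)]
        mutant = a + F * (b - c)                 # scaled vector differential
        cross = rng.random(n_dim) < CR           # binomial crossover mask
        cross[rng.integers(n_dim)] = True        # guarantee at least one mutated component
        trial = np.where(cross, mutant, population[i])
        f_trial = func(trial)
        if f_trial <= fitness[i]:                # greedy selection
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit

# Toy usage: minimize the sphere function in 3 dimensions.
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 3))
fit = np.array([sphere(x) for x in pop])
for _ in range(50):
    pop, fit = de_rand_1_bin_step(pop, fit, sphere, rng=rng)
print(fit.min())
```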
ade-cli | The ADE Development Environment (ADE) uses docker and gitlab to
manage environments of per project development tools and optional
volume images. Volume images may contain additional development tools
or released software versions. It enables easy switching of branches
for all images.For a public project using ADE as an example, seeAutoware.Auto. |
adecty-design | Adecty Design1. AboutAdecty Design is a module with which you can greatly simplify the development of the UI.2. Versions0.1.0 (2023-01-12):application structure;basic elements.3. License(c) 2022, Yegor YakubovichLicensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. |
adede | adedeA Wagtail app for wrapping our custom JavaScript library for frontend user components. Please bear with us while we prepare more detailed documentation.Compatibilityadede' major.minor version number indicates the Wagtail release it is compatible with. Currently this is Wagtail 4.1.xInstallationInstall usingpip:pipinstalladedeAddadedeto yourINSTALLED_APPSsetting:INSTALLED_APPS=[# ...'adede'# ...] |
adeem-adil-khatri-very-simple-dictionary | No description available on PyPI. |
ade-enseirb | ADE API for ENSEIRB-MATMECA students |
adeft | AdeftAdeft (Acromine based Disambiguation of Entities From Text context) is a
utility for building models to disambiguate acronyms and other abbreviations of
biological terms in the scientific literature. It makes use of an
implementation of theAcrominealgorithm developed by theNaCTeMat the
University of Manchester to identify possible longform expansions for
shortforms in a text corpus. It allows users to build disambiguation models to
disambiguate shortforms based on their text context. A growing number of
pretrained disambiguation models are publicly available to download through
adeft.CitationIf you use Adeft in your research, please cite the paper in the Journal of
Open Source Software:Steppi A, Gyori BM, Bachman JA (2020). Adeft: Acromine-based Disambiguation of
Entities from Text with applications to the biomedical literature.Journal of
Open Source Software,5(45), 1708,https://doi.org/10.21105/joss.01708InstallationAdeft works with Python versions 3.5 and above. It is available on PyPi and can be installed with the command$ pip install adeftAdeft's pretrained machine learning models can then be downloaded with the command$ python -m adeft.downloadIf you choose to install by cloning this repository$ git clone https://github.com/indralab/adeft.gitYou should also run$ python setup.py build_ext --inplaceat the top level of your local repository in order to build the extension module
for alignment based longform detection and scoring.Using AdeftA dictionary of available models can be imported withfrom adeft import available_modelsThe dictionary maps shortforms to model names. It's possible for multiple equivalent
shortforms to map to the same model.Here's an example of running a disambiguator for ER on a list of textsfromadeft.disambiguateimportload_disambiguatorer_dd=load_disambiguator('ER')...er_dd.disambiguate(texts)Users may also build and train their own disambiguators. See the documentation
for more info.DocumentationDocumentation is available athttps://adeft.readthedocs.ioJupyter notebooks illustrating Adeft workflows are available undernotebooks:IntroductionModel buildingTestingAdeft usespytestfor unit testing, and uses Github Actions as a
continuous integration environment. To run tests locally, make sure
to install the test-specific requirements listed in setup.py aspipinstalladeft[test]and download all pre-trained models as shown above.
Then runpytestin the top-leveladeftfolder.FundingDevelopment of this software was supported by the Defense Advanced Research
Projects Agency under awards W911NF018-1-0124 and W911NF-15-1-0544, and the
National Cancer Institute under award U54-CA225088. |
adegoke | Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content. |
adela-pack1.0 | No description available on PyPI. |
adelapdf | This is the homepage of our project. |
adelecv | Auto DEap LEarning Computer VisionPython library and dashboard for hyperparameter search and model training for computer vision tasks
based onPyTorch,Optuna,FiftyOne,Dash,Segmentation Model Pytorch.The main features of this library are:Fiftyone dataset integration with prediction visualizationUploading your dataset in one of the popular formats, currently supported - 2Adding your own python class for convert datasetDisplaying training statistics in tensorboardSupport for all samples from optunaSegmentation use smp: 9 model architectures, popular losses and metrics, seedoc smpConvert weights to another format, currently supported - 1 (onnx)📚 Project Documentation 📚VisitRead The Docs Project Pageor read following README to know more about Auto Deap Learning Computer Vision (AdeleCV for short) library📋 Table of contentExamplesInstallationInstruction DashboardArchitectureCitingLicense💡 ExamplesExample apinotebookSeevideoon the example of using dashboard🛠 InstallationInstall torch cuda if not installed:$pipinstalltorchtorchvisiontorchaudio--extra-index-urlhttps://download.pytorch.org/whl/cu116PyPI version:$pipinstalladelecvPoetry:$poetryaddadelecv📜 Instruction DashboardCreate .env file.Seedocs.Notification_LEVEL: DEBUG | INFO | ERRORExample:TMP_PATH='./tmp'
DASHBOARD_PORT=8080
FIFTYONE_PORT=5151
TENSORBOARD_PORT=6006
NOTIFICATION_LEVEL=DEBUGRun (about 30 seconds (I'm working on acceleration)).adelecv_dashboard--envfile.envHelpadelecv_dashboard--help🏰 ArchitectureThe user can use the api or dashboard(web app).
The api is based on 5 modules:data: contains an internal representation of the dataset, classes for converting datasets, fiftyone dataset_models: torch model, its hyperparams, functions for trainingoptimize: set of hyperparams, optuna optimizermodification model: export and conversion of weightslogs: python loggingThe Dash library was used for dashboard. It is based on components and callbacks on these component elements.📝 Citing@misc{Mamatin:2023,
Author = {Denis Mamatin},
Title = {AdeleCV},
Year = {2023},
Publisher = {GitHub},
Journal = {GitHub repository},
Howpublished = {\url{https://github.com/AsakoKabe/AdeleCV}}
}🛡️ LicenseProject is distributed underMIT License |
adeles | This is aSageMathpackage for computing with
adèles and idèles. It is based on and part of the master’s thesis [Her2021].[Her2021] Mathé Hertogh, Computing with adèles and idèles, master’s thesis,
Leiden University, 2021.In the root of this repository you can find [Her2021] as a PDF-file.Contents of the packageThe package can be seen to consist out of four parts.Part 1 corresponds to Chapters 3–6 of [Her2021] and provides the functionality
to compute with adèles and idèles over number fields. It consists out of these
files:profinite_integer.py– profinite integers over number fieldsprofinite_number.py– profinite numbers over number fieldscompletion.py– infinite completions of number fieldsadele.py– adèles over number fieldsmultiplicative_padic.py– multiplicativep-adicsidele.py– idèles over number fieldsray_class_group.py- ray class groups of number fieldsPart 2 corresponds to Chapter 7 of [Her2021] and implements profinite graphs,
which visualize graphs of functions from and to the ring of rational profinite
integers. In particular, the profinite Fibonacci function is implemented. Part 2
consists of out two files:profinite_function.py– profinite functions, including Fibonacciprofinite_graph.py– graphs of profinite functionsPart 3 corresponds to Chapter 8 of [Her2021] and implements the adèlic matrix
factorization algorithms discussed there. This resides in the file:matrix.py– adèlic matrix factorization algorithmsPart 4 corresponds to Chapter 9 of [Her2021] and implements the computation of
Hilbert class fields of imaginary quadratic number fields using Shimura’s
reciprocity law. It consists of the files:modular.py– modular functions and their actionsshimura.py– Shimura’s connecting homomorphismhilbert.py– example hilbert class field computationsGetting acquainted with the packageInstead of browsing through the source code files, we recommend browsing the
documentation, which is nicer formatted. It contains many examples to illustrate
the functionality.DocumentationThe documentation resides in the folderdocsand is also hosted online at
the following webpage:https://mathehertogh.github.io/adeles.Installing the packageFirst of all you should make sure you have a recent version ofSageMathinstalled, specificallySageMath
version 9.2 or newer.Now run the command$ sage -pip install adelesTo use the package, from anywhere on your computer, opensage$ sageand within thesageprompt, load the package:sage: from adeles.all import *Now you will have all functionality available, for example:sage: Adeles(QQ)
Adèle Ring of Rational FieldUpdating the packageTo update to the latest stable version of this package, run$ sage -pip install --upgrade adelesIt might be the case that the GitHub repositoryhttps://github.com/mathehertogh/adelescontains an ever newer version.
To install that version, clone the repository$ git clone https://github.com/mathehertogh/adeles.gitchange to the root directory of the package$ cd adelesand build the package using$ makeBackground informationFor more detailed information on this implementation of adèles and idèles, we
refer to [Her2021]. There we elaborate on properties of our representations of
adèles and idèles, design choices we made and implementation details.For questions you can contact the author via email (see below).Copyright# **************************************************************************
# Copyright (C) 2021 Mathé Hertogh <[email protected]>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
# https://www.gnu.org/licenses/
# ************************************************************************** |
adelie | AdelieA fast, flexible Python package for group elastic net.TODO: details.InstallationUsers must haveOpenMPinstalled in their system as a prerequisite.For MacOS/Linux users usinggcccompiler,
there is nothing to do sinceOpenMPis already shipped withgcc.For MacOS users usingclangcompiler, the simplest method is to install withbrew:brew install libompInstalladelie(stable) withpip:pip install adelieFor the bleeding edge version, follow the instructions inDeveloper Installation.Developer InstallationInstall conda. We recommend installingMambaforge, which is a conda installation withmambainstalled by default and set to useconda-forgeas the default set of package repositories.Clone the git repo:git clone [email protected]:JamesYang007/adelie.gitSet upadelieconda environment. The list of packages that will be installed in the environment
is inpyproject.toml.mamba update -y conda mamba
mamba env create
conda activate adelie
poetry config virtualenvs.create false --local
poetry install --no-rootInstalladeliein editable mode:pip install -e .ReferencesStrong Rules for Discarding Predictors in Lasso-type Problemssparsegl: An R Package for Estimating Sparse Group LassoHybrid safe-strong rules for efficient optimization in lasso-type problemsA Fast Iterative Shrinkage-Thresholding Algorithm
for Linear Inverse ProblemsSTANDARDIZATION AND THE GROUP LASSO PENALTYMany works explicitly do not sphere (Puig, Wiesel, and Hero (2009), Foygel and Drton (2010), Jacob, Obozinski, and Vert (2009), Hastie, Tibshirani, and Friedman (2008), among others), and many more make no mention of normalization.Successive over-relaxation (SOR)! |
adelin | Adelin DatabaseIn many cases, we don't need a complex database infrastructure for the applications we want to build. In such situations, Adel is very suitable for creating a simple and manageable database.FeaturesWritten pure pythonSave and load from fileYou can automatically add a date and an ID number for each entered data.The saved information, along with the Base64-encoded version in JSON format, is stored in .adl format.This way, it prevents ordinary users from tampering with and manipulating the JSON file.Each saved piece of information is stored in .adl format under a folder with a name determined by you.InstallpipinstalladelinUsageInit DatafromadelinimportMakedata# As a preliminary setup, we enter the row keys.fruits=MakeData("Name","Price_USD","Units_KG","Color")# Later on, we specify the name of the section under which we want to collect our data in our inherited object, and then we enter the values corresponding to the row keys.# "Fruit" is our main heading.fruits("Fruit","Apple",5,200,"Red")# Let's enter a new piece of data.fruits("Fruit","Apple",4,150,"Green")# Now, the prepared data is ready to be saved.Save and Read Data# We complete the saving process by using the `save_db` method, where we first specify the folder name we want to save and then the file name.# This way, we intend to establish a more organized recording system to prevent future complexity.# The saved file is located under the "FRUITS" folder and is recorded in the "product.adl" file.fruits.save_db("FRUITS","product")# Reading operations can also be easily performed by using the `read_db` method, where you specify the folder and file you want to read from, following the same approach.print(fruits.read_db("FRUITS","product"))# Here is the result{'Fruit':[{'Name':'Apple','Price_USD':5,'Units_KG':200,'Color':'Red'},{'Name':'Apple','Price_USD':4,'Units_KG':150,'Color':'Green'}]}Extra FeaturesSometimes, we may want to add an ID number and a date to the data we want to enter.
Let's now create the same example by adding an ID number and a Date.fruits=MakeData("Name","Price_USD","Units_KG","Color",id=True,date=True)fruits("Fruit","Apple",5,200,"Red")fruits("Fruit","Apple",4,150,"Green")fruits.save_db("FRUITS","product")print(fruits.read_db("FRUITS","product"))Here is the result:{'Fruit':[{'Name':'Apple','Price_USD':5,'Units_KG':200,'Color':'Red','Id':'da8bcabb','Date':'26/09/2023'},{'Name':'Apple','Price_USD':4,'Units_KG':150,'Color':'Green','Id':'da8cc555','Date':'26/09/2023'}]}Sometimes, we may prefer all entered headings to be in uppercase.fruits=MakeData("Name","Price_USD","Units_KG","Color",column_up=True)fruits("Fruit","Apple",5,200,"Red")fruits("Fruit","Apple",4,150,"Green")fruits.save_db("FRUITS","product")print(fruits.read_db("FRUITS","product")){'FRUIT':[{'Name':'Apple','Price_USD':5,'Units_KG':200,'Color':'Red'},{'Name':'Apple','Price_USD':4,'Units_KG':150,'Color':'Green'}]}Delete DataIn this current version, only data with entered ID numbers can be deleted, but this functionality can be further enhanced.# The `del_with_id` method will delete the information of the data with the specified "xxx" ID number when provided with the folder name and the file name where the data with the "xxx" ID number is located.fruits.del_with_id("FRUITS","product","c43c6881")Fetch Data from FileYou can obtain the data collected under each heading in a list format by entering the desired row values.foods=MakeData("Name","Price_USD","Units_KG","Color",id=True,date=True,column_up=True)foods("Fruit","Apple",5,200,"Red")foods("Fruit","Apple",4,150,"Green")foods.save_db("Fruits","product")foods("xxVegetable","cucumber",2,300,"Green")foods("xxVegetable","tomato",1,350," Red")foods.save_db("Vegetables","salad")foods("Eggs","quail egg",0.5,750,"patchy brown")foods("Eggs","chicken egg",0.1,1800,"White")foods("Eggs","chicken egg",0.1,3200,"Brown")foods.save_db("Eggs","eggs")print(foods.read_db("Eggs","eggs"))Result:{'EGGS':[{'Name':'quailegg','Price_USD':0.5,'Units_KG':750,'Color':'patchybrown','Id':'60799066','Date':'26/09/2023'},{'Name':'chickenegg','Price_USD':0.1,'Units_KG':1800,'Color':'White','Id':'60799067','Date':'26/09/2023'},{'Name':'chickenegg','Price_USD':0.1,'Units_KG':3200,'Color':'Brown','Id':'6079b783','Date':'26/09/2023'}]}print(foods.fetchdata("Vegetables","salad","xxVegetable","Name","Id"))# result ['cucumber', 'd48fa160', 'tomato', 'd48fc737'] |
adelphi | Adelphi toolingIntroductionA tool for interacting with the DataStaxAdelphiproject. This package provides the "adelphi" application which in turn provides the following features:Extraction of schemas for one or more keyspaces from a running Apache Cassandra™ clusterOptionally anonymizing these schemasFormatting these schemas as CQL statements or as JSON documentsAutomatic generation of a nosqlbench configuration for an input schemaDisplaying these formatted schemas on standard out or writing them to the filesystemAutomate a workflow for contributing anonymized schemas to the publicAdelphi schema repositoryThe anonymization process replaces all keyspace names, table names and table column names with a generic identifier. You can use the "adelphi" application to extract, format and display schemas from your Cassandra clusters without contributing these schemas to the Adelphi project, and for this use case anonymization is not required. Anonymizationisrequired anytime you contribute schemas to the Adelphi project.All the schemas in our repository are publicly visible so to avoidanypossible leakage of proprietary information we can only accept schemas which have been anonymized.This package supports Python 2.7.x as well as Python 3.5 through 3.9.InstallationWe recommend using pip for installation:pip install adelphiCommandsThe functionality of the "adelphi" tool is divided into several different commands. Details on each command are provided below.export-cqlThis command extracts schemas for the specified keyspaces from a Cassandra instance and then displays the CQL commands necessary to generate them to standard out. You can optionally specify an output directory, in which case the CQL commands are written to files within that directory, one file for each keyspace.The following will display the schemas for the keyspaces "foo" and "bar" on standard out:adelphi --keyspaces=foo,bar export-cqlIf you wish to store the schemas in a directory "baz" you could use the following instead:adelphi --keyspaces=foo,bar --output-dir=baz export-cqlexport-geminiThis command is similar to the "export-cql" command. Schemas are extracted from a Cassandra instance and formatted for use with Scylla'sGeminitool.To display Gemini-formatted schemas for the keyspaces "foo" and "bar" use the following:adelphi --keyspaces=foo,bar export-geminiAnd to store these schemas in a directory "baz":adelphi --keyspaces=foo,bar --output-dir=baz export-geminiexport-nbThis command is also similar to the "export-cql" command. Schemas are extracted from a Cassandra instance and used to generate anosqlbenchconfiguration for the database.To generate a nosqlbench config for the keyspaces "foo" and "bar" use the following:adelphi --keyspaces=foo,bar export-nbAnd to store these configs in a directory "baz":adelphi --keyspaces=foo,bar --output-dir=baz export-nbThe number of cycles for use in the rampup and main scenarios can be specified by command-specific flags:adelphi --keyspaces=foo,bar export-nb --rampup-cycles=10000 --main-cycles=10000The command will use the current Cassandra database to generate sequences and/or distributions (as appropriate) in the nosqlbench configuration for a randomly-selected table within the specified keyspace. Most single-valued CQL data types are supported, although we do not yet have support for any of the following data types:CountersFrozen typesCollection types (list, map, set)UDTscontributeThis command automates the workflow of contributing one or more schemas to the Adelphi project. 
TheAdelphi schema repositoryis implemented as a Github repository and contributions to this repository take the form of pull requests. The workflow implemented by this command includes the following steps:Fork the Adelphi schema repository into the Github workspace for the specified user
** If the user has already forked the schema repository that fork will be re-usedCreate a branch in the forked repositoryExtract and anonymize schemas from the specified Cassandra instanceAdd files representing the contents of these schemas to the branch in the forked repsitoryCreate a pull request on the Adelphi schema repository for the newly-created branch and filesThe syntax for using this command looks very similar to the export commands above. The following will create a pull request to contribute schemas for the keyspaces "foo" and "bar" to Adelphi:adelphi --keyspaces=foo,bar contributeAuthentication to Github is performed by way of apersonal access token. You must create a token for your Github user before you can contribute your schema(s) to Adelphi. The token can be provided to the command at execution time using a command-line argument but this is discouraged for security reasons. Instead we recommend using an environment variable, in this case theADELPHI_CONTRIBUTE_TOKENenvironment variable. We discuss using environment variables to pass command-line arguments in more detail below.OptionsThe "adelphi" application supports several command-line arguments. The full list of arguments can be accessed via the following:adelphi --helpThe output of this command provides a brief summary of each argument:$ adelphi --help
Usage: adelphi [OPTIONS] COMMAND [ARGS]...
Options:
--hosts TEXT Comma-separated list of contact points [default:
127.0.0.1]
--port INTEGER Database RPC port [default: 9042]
--username TEXT Database username
--password TEXT Database password
--keyspaces TEXT Comma-separated list of keyspaces to include. If not
specified all non-system keypaces will be included
--rf INTEGER Replication factor to override original setting.
Optional.
--no-anonymize Disable schema anonymization
--output-dir TEXT Directory schema files should be written to. If not
specified, it will write to stdout
--purpose TEXT Comments on the anticipated purpose of this schema.
Optional.
--maturity TEXT The maturity of this schema. Sample values would include
'alpha', 'beta', 'dev', 'test' or 'prod'. Optional.
--help Show this message and exit.
Commands:
contribute Contribute schemas to Adelphi
export-cql Export a schema as raw CQL statements
export-gemini Export a schema in a format suitable for use with the the...
export-nb Export a schema in a format suitable for use with the the...Individual commands may have their own options and/or help text. For example the help for the "contribute" command is as follows:$ adelphi contribute --help
Usage: adelphi contribute [OPTIONS]
Contribute schemas to Adelphi
Options:
--token TEXT Personal access token for Github user
--help Show this message and exit.A quick note on keyspacesNone of the commands aboverequireyou to specify keyspaces for export. If you do not supply the "--keyspaces" argument thenallkeyspaces will be considered for export. In either case the application will prune system keyspaces before performing the export.Both the "export-gemini" and "export-nb" commands can only operate against a single keyspace. Therefore these commands must be run against a Cassandra instance containing a single keyspace or the user must leverage the "--keyspaces" flag to specify only a single keyspce. If multiple keyspaces are selected the program will exit with an error message.A quick note on anonymizationThe anonymization process can be explicitly disabled using the "--no-anonymize" argument.Note that since all contributed schemasmustbe anonymized the "--no-anonymize" argument cannot be used when contributing schemas to Adelphi. Supplying this argument when attempting to contribute one or more schemas will cause the application to exit with an error message.Parameters via environment variablesValues for individual arguments can also be specified using corresponding environment variables. The name of the environment variable to use takes the form "ADELPHI_ARGUMENT" where "ARGUMENT" is the uppercase name of the argument. So for example the following is equivalent to the first example in the "export-cql" section above:export ADELPHI_KEYSPACES=foo,bar
adelphi export-cqlTo supply a value for a command-specific parameter use an environment variable of the form "ADELPHI_COMMAND_ARGUMENT" where "COMMAND" is an the uppercase name of the command and "ARGUMENT" the uppercase name of the argument. As mentioned above this feature becomes quite useful for providing the Github personal access token. Using theADELPHI_CONTRIBUTE_TOKENenvironment variable removes the need to specify any security materials when invoking the application. |
adels-dsnd-probability | No description available on PyPI. |
adem | This is package is under testing.
You might experience a lot of bugs, but rest assured that we are working on them.A stable version will be released soon, with all info.
Contact [email protected] for any info and enquiries.This package containsMatrix: Giving python the ability to work directly on Matrices explicitly
Expressions: Algebraic Manipulations in the form of Polynomials for python
Equations: Solving expressions, including Simultaneous equations
Fundamentals: Comprising basic math functionsThis is package is under testing.
You might experience a lot of bugs; but be rest assured that we are working on them.A stable version will be released soon, with all infos
[email protected] any info and enquiries-------------------Version 1.0.0------------------------- |
adengine | ADEngine

About
This is a Python engine for Activity Data Extractor. It is a special Python module, designed to extract data from multiple activity files at the same time.

Installation
- Download and install Cisco Packet Tracer
- Download and install Activity Data Extractor
- Download and install Python
- Install the engine from PyPI using the command python3 -m pip install adengine
Now the engine should be installed and ready to go.

Requirements
- Cisco Packet Tracer 7.3.0, 7.3.1 or 8.0.0. It is possible that the engine would run on higher versions of Packet Tracer, but this is untested functionality.
- Activity Data Extractor 1.0.3 and higher.
- Python 3.8 or higher.
- OS depends on the version of Packet Tracer: Ubuntu 18.04 for Cisco Packet Tracer 7.3.*, Ubuntu 20.04 for Cisco Packet Tracer 8.0.0.
- Java 8 or higher.
- xvfb and xvfb-run, if you want to run the app using a virtual display.

Usage
There are several possible ADEngine use cases:

Use ADEngine from Python code. Up-to-date documentation for the API can be found here.

Run the ADEngine server using the command python3 -m adengine.server. Additional parameters can be listed using the command python3 -m adengine.server --help. Parameters can be passed to ADEngine using command line parameters or via environment variables. Command line parameters take precedence.

| Parameter | Description | Using cmd | Using env | Possible values | Default |
| --- | --- | --- | --- | --- | --- |
| Log level | Level of logs | - | ADENGINE_LOG_LEVEL | DEBUG, INFO, WARNING, ERROR | INFO |
| Queue size | Size of queue with tasks | --queue-size | ADENGINE_QUEUE_SIZE | positive integer | 20 |
| Read file timeout | Timeout for ADE to read file in seconds | --read-file-timeout | ADENGINE_READ_FILE_TIMEOUT | positive integer in range 5..60 | 10 |
| Use virtual display | Enable using xvfb to run Packet Tracer | --use-virtual-display | ADENGINE_USE_VIRTUAL_DISPLAY | 0 or 1 for env, specified or not specified for cmd | Disabled |
| Result TTL | Result time to live | --result-ttl | ADENGINE_RESULT_TTL | hh:mm:ss | 00:05:00 |
| Tasks before session restart | Num of tasks before session restart | --tasks-before-session-restart | ADENGINE_TASKS_BEFORE_SESSION_RESTART | positive integer | 100 |
| Unix socket | Path to unix socket | --unix-socket | ADENGINE_UNIX_SOCK | absolute path to unix socket (non existent) | /tmp/adengine.sock |
| Max connections | Num of maximum connections to server | --max-connections | ADENGINE_MAX_CONNECTIONS | positive integer | 10 |

Run the ADEngine client using the command python3 -m adengine.client. Additional parameters can be listed using the command python3 -m adengine.client --help. Parameters can be passed to ADEngine using command line parameters or via environment variables. Command line parameters take precedence.

| Parameter | Description | Using cmd | Using env | Possible values | Default |
| --- | --- | --- | --- | --- | --- |
| Activity | Absolute path to activity file | activity | - | absolute path to file | - |
| Password | Password to activity file | --password | - | string | None |
| Net stabilization delay | Simulated network stabilization delay | --net-stabilization-delay | - | non-negative integer in range 0..600 | 0 |
| Socket | Absolute path to unix socket to connect to ADEngine server | --socket | ADENGINE_UNIX_SOCK | absolute path to file | /tmp/adengine.sock |
adenine | <p align="center"><img src="http://www.slipguru.unige.it/Software/adenine/_static/ade_logo_bitmap.png"><br><br></p>-----------------# Adenine: A data exploration pipeline**adenine** is a machine learning and data mining Python library for exploratory data analysis.The main structure of **adenine** can be summarized in the following 4 steps.1. **Imputing:** Does your dataset have missing entries? In the first step you can fill the missing values choosing between different strategies: feature-wise median, mean and most frequent value or k-NN imputing.2. **Preprocessing:** Have you ever wondered what would have changed if only your data have been preprocessed in a different way? Or is it data preprocessing a good idea after all? **adenine** includes several preprocessing procedures, such as: data recentering, Min-Max scaling, standardization and normalization. **adenine** also allows you to compare the results of the analysis made with different preprocessing strategies.3. **Dimensionality Reduction:** In the context of data exploration, this phase becomes particularly helpful for high dimensional data. This step includes manifold learning (such as isomap, multidimensional scaling, etc) and unsupervised feature learning (principal component analysis, kernel PCA, etc) techniques.4. **Clustering:** This step aims at grouping data into clusters in an unsupervised manner. Several techniques such as k-means, spectral or hierarchical clustering are offered.The final output of **adenine** is a compact, textual and graphical representation of the results obtained from the pipelines made with each possible combination of the algorithms selected at each step.**adenine** can run on multiple cores/machines* and it is fully `scikit-learn` compliant.## Installation**adenine** supports Python 2.7.### Pip installation`$ pip install adenine`### Installing from sources```bash$ git clone https://github.com/slipguru/adenine$ cd adenine$ python setup.py install```## Try Adenine### 1. Create your configuration fileStart from the provided template and edit your configuration file with your favourite text editor```bash$ ade_run.py -c my-config-file.py$ vim my-config-file.py...``````pythonfrom adenine.utils import data_source# -------------------------- EXPERMIENT INFO ------------------------- #exp_tag = '_experiment'output_root_folder = 'results'plotting_context = 'notebook' # one of {paper, notebook, talk, poster}file_format = 'pdf' # or 'png'# ---------------------------- INPUT DATA ---------------------------- ## Load an example dataset or specify your input data in tabular formatX, y, feat_names, index = data_source.load('iris')# ----------------------- PIPELINES DEFINITION ------------------------ ## --- Missing Values Imputing --- #step0 = {'Impute': [True, {'missing_values': 'NaN','strategy': ['nearest_neighbors']}]}# --- Data Preprocessing --- #step1 = {'MinMax': [True, {'feature_range': [(0, 1)]}]}# --- Unsupervised feature learning --- #step2 = {'KernelPCA': [True, {'kernel': ['linear', 'rbf', 'poly']}],'Isomap': [False, {'n_neighbors': 5}],'MDS': [True, {'metric': True}],'tSNE': [False],'RBM': [True, {'n_components': 256}]}# --- Clustering --- ## affinity ca be precumputed for AP, Spectral and Hierarchicalstep3 = {'KMeans': [True, {'n_clusters': [3, 'auto']}],'Spectral': [False, {'n_clusters': [3]}],'Hierarchical': [False, {'n_clusters': [3],'affinity': ['euclidean'],'linkage': ['ward', 'average']}]}```### 2. Run the pipelines```bash$ ade_run.py my-config-file.py```### 3. 
Automatically generate beautiful publication-ready plots and textual results```bash$ ade_analysis.py results/ade_experiment_<TODAY>```## *Got Big Data?**adenine** takes advantage of `mpi4py` to distribute the execution of the pipelines on HPC architectures```bash$ mpirun -np <MPI-TASKS> --hosts <HOSTS-LIST> ade_run.py my-config-file.py``` |
adenosine | adenosine: enthusiast-grade implementation of atproto.com in PythonStatus:it doesn't really work yet and will eat your dataThis is a hobby project to implement components of the proposed Bluesky AT
Protocol (atproto.com) for federated social media, as
initially announced in Fall 2022. This might be interesting for other folks to
take a spin with, but isn't intended to host real content from real people. The
goal is to think through how the protocol might work by implementing it.The intent is for this to roughly track theadenosineRust implementation.
This will probably be just a Python library, not a CLI or PDS implementation.DisclaimerIn addition to the below standard Free Software disclaimer from the LICENSE
file, note that this project is likely to be out of sync with upstream protocol
specifications; is not intended for real-world use; is entirely naive about
abuse, security, and privacy; will not have an upgrade/migration path; etc.

[CONTRIBUTORS] PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

HOWTO: Release pypi.org Package

Go to test.pypi.org (and then later pypi.org) and generate an API token for
your account (assuming you already have an account).

Install deps (on Debian/Ubuntu): sudo apt install python3-build python3-twine

Build this package: python3 -m build

Run the upload:

# for test.pypi.org
python3 -m twine upload --repository testpypi dist/*
# for regular pypi.org
python3 -m twine upload dist/* |
adeploy | adeploy
We build adeploy, a universal deployment tool for Kubernetes that supports rendering and deployment of lightweight Jinja-templated k8s manifests and also Helm charts.
We've added support for:
- using Jinja variables from per-cluster, per-namespace or per-release configuration
- easy secret management based on Gopass or other command-line-based password managers
- running deployment tests in CI/CD pipelines
- previewing and patching upstream Helm Charts before deploying
- extending upstream Helm Charts with custom Jinja-templated manifests
- handy templating for labels, annotations, probes, resource limits and other metadata
... and even more to make your daily work with k8s easier.
Documentation & Support
adeploy is Open Source and hosted on GitHub: https://github.com/awesome-it/adeploy. You can report issues on GitHub: https://github.com/awesome-it/adeploy/issues. Find the documentation at https://awesome-it.de/docs/adeploy/latest.
Examples
This is how you can render, test (preview) and deploy a Helm Chart, or render, test (preview) and deploy Jinja-templated manifests. You'll find some examples in the example directory.
or upgradeadeployusingpip:$pipinstalladeployOr usepipxto install, upgrade and runadeployin an isolated environment:$pipxinstalladeploy
$pipxupgradeadeployYou should now be able to runadeployfrom the command line:adeploy--helpYou can now start to useadeploy.See theusage documentationto start usingadeploy.Read Morehttps://awesome-it.de/2020/09/11/adeploy-an-universal-deployment-tool-for-kubernetes/ |
adept-augmentations | Adept AugmentationsWelcome to Adept Augmentations, can be used for creating additional data in Few Shot Named Entity Recognition (NER) setting!Adept Augmentation is a Python package that provides data augmentation functionalities for NER training data using thespacyanddatasetspackages. Currently, we support one augmentorEntitySwapAugmenter, however, we plan onadding some more.EntitySwapAugmentertakes either adatasets.Datasetor aspacy.tokens.DocBin. Additionally, it is optional to provide a set oflabelsto be included in the augmentations. It initially created a knowledge base of entities belonging to a certain label. When runningaugmenter.augment()forNruns, it then createsNnew sentences with random swaps of the original entities with an entity of the same corresponding label from the knowledge base.For example, assuming that we have knowledge base for PERSONS and LOCATIONS and PRODUCTS. We can then create additional data for the sentence "Momofuko Ando created instant noodles in Osaka." usingaugmenter.augment(N=2), resulting in "David created instant noodles in Madrid." or "Tom created Adept Augmentations in the Netherlands".Adept Augmentation works for NER labels using the IOB, IOB2, BIOES and BILUO tagging schemes, as well as labels not following any tagging scheme.UsageDatasetsfromdatasetsimportload_datasetfromadept_augmentationsimportEntitySwapAugmenterdataset=load_dataset("conll2003",split="train[:3]")augmenter=EntitySwapAugmenter(dataset)aug_dataset=augmenter.augment(N=4)forentryinaug_dataset["tokens"]:print(entry)# ['EU', 'rejects', 'British', 'call', 'to', 'boycott', 'British', 'lamb', '.']# ['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'German', 'lamb', '.']# ['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']# ['Peter', 'Blackburn']# ['BRUSSELS', '1996-08-22']spaCyimportspacyfromspacy.tokensimportDocBinfromadept_augmentationsimportEntitySwapAugmenternlp=spacy.load("en_core_web_sm")# Create some example training dataTRAIN_DATA=["Apple is looking at buying U.K. startup for $1 billion","Microsoft acquires GitHub for $7.5 billion",]docs=nlp.pipe(TRAIN_DATA)# Create a new DocBindoc_bin=DocBin(docs=docs)doc_bin=EntitySwapAugmenter(doc_bin).augment(4)fordocindoc_bin.get_docs(nlp.vocab):print(doc.text)# GitHub is looking at buying U.K. startup for $ 7.5 billion# Microsoft is looking at buying U.K. startup for $ 1 billion# Microsoft is looking at buying U.K. startup for $ 7.5 billion# GitHub is looking at buying U.K. startup for $ 1 billion# Microsoft acquires Apple for $ 7.5 billion# Apple acquires Microsoft for $ 1 billion# Microsoft acquires Microsoft for $ 7.5 billion# GitHub acquires GitHub for $ 1 billionPotential performance gainsData augmentation can significantly improve model performance in low-data scenarios.
To showcase this, we trained aSpanMarkerNER model on
the 50, 100, 200, 400 and 800 firstCoNLL03training samples.The augmented dataset is generated like so:# Select N (50, 100, 200, 400 or 800) samples from the gold training datasettrain_dataset=dataset["train"].select(range(N))# Generate augmented dataset, with 4 * N samplesaugmented_dataset=Augmenter(train_dataset).augment(N=4)# Combine the original with the augmented to produce the full dataset# to produce a dataset 5 times as big as the originaltrain_dataset=concatenate_datasets([augmented_dataset,train_dataset])Note that the baseline uses 5 epochs. This way, the training time and steps are identical between the two experiments. All scenarios are executed 5 times,
and we report means and standard errors.

|       | Original - 5 Epochs | Augmented - 1 Epoch |
| ----- | ------------------- | ------------------- |
| N=50  | 0.387 ± 0.042 F1    | 0.484 ± 0.054 F1    |
| N=100 | 0.585 ± 0.070 F1    | 0.663 ± 0.038 F1    |
| N=200 | 0.717 ± 0.053 F1    | 0.757 ± 0.025 F1    |
| N=400 | 0.816 ± 0.017 F1    | 0.826 ± 0.011 F1    |
| N=800 | 0.859 ± 0.004 F1    | 0.862 ± 0.002 F1    |

(Note: These results are not optimized and do not indicate maximum performances with SpanMarker.)
From these results, it is clear that performing data augmentation using adept_augmentations can heavily improve performance in low-data settings.
Implemented Augmenters: EntitySwapAugmenter, KnowledgeBaseSwapAugmenter, CoreferenceSwapAugmenter, SyntaticTreeSwapAugmenter
Potential integrations: Potentially, we can look into integrations of other augmentation packages that do not preserve gold standard knowledge. Good sources for inspiration are: https://github.com/KennethEnevoldsen/augmenty, https://kennethenevoldsen.github.io/augmenty/tutorials/introduction.html, https://github.com/QData/TextAttack, https://github.com/infinitylogesh/mutate
adeptRL | adept is a reinforcement learning framework designed to accelerate research
by providing:a modular interface for using custom networks, agents, and environmentsbaseline reinforcement learning models and algorithms for PyTorchmulti-GPU supportaccess to various environmentsbuilt-in tensorboard logging, model saving, reloading, evaluation, and
renderingproven hyperparameter defaultsThis code is early-access, expect rough edges. Interfaces subject to change.
We're happy to accept feedback and contributions.Read MoreInstallationQuickstartFeaturesPerformanceDocumentationArchitecture OverviewModularNetwork OverviewResume trainingEvaluate a modelRender environmentExamplesCustom Network (stub| example)Custom SubModule (stub|example)Custom Agent (stub|example)Custom Environment (stub|example)InstallationDependencies:gymPyTorch 1.xPython 3.5+We recommend CUDA 10, pytorch 1.0, python 3.6From source:Follow instructions forPyTorch(Optional) Follow instructions forStarCraft 2gitclonehttps://github.com/heronsystems/adeptRLcdadeptRL# Remove mpi, sc2, profiler if you don't plan on using these features:pipinstall.[mpi,sc2,profiler]From docker:docker instructionsQuickstartTrain an AgentLogs go to/tmp/adept_logs/by default. The log directory contains the
tensorboard file, saved models, and other metadata.# Local Mode (A2C)# We recommend 4GB+ GPU memory, 8GB+ RAM, 4+ Corespython-madept.applocal--envBeamRiderNoFrameskip-v4# Distributed Mode (A2C, requires NCCL)# We recommend 2+ GPUs, 8GB+ GPU memory, 32GB+ RAM, 4+ Corespython-madept.appdistrib--envBeamRiderNoFrameskip-v4# IMPALA (requires mpi4py and is resource intensive)# We recommend 2+ GPUs, 8GB+ GPU memory, 32GB+ RAM, 4+ Corespython-madept.appimpala--agentActorCriticVtrace--envBeamRiderNoFrameskip-v4# StarCraft 2 (IMPALA not supported yet)# Warning: much more resource intensive than Ataripython-madept.applocal--envCollectMineralShards# To see a full list of options:python-madept.app-h
python-madept.apphelp<command>Use your own Agent, Environment, Network, or SubModule"""my_script.pyTrain an agent on a single GPU."""fromadept.scripts.localimportparse_args,mainfromadept.networksimportNetworkModule,NetworkRegistry,SubModule1Dfromadept.agentsimportAgentModule,AgentRegistryfromadept.environmentsimportEnvModule,EnvRegistryclassMyAgent(AgentModule):pass# ImplementclassMyEnv(EnvModule):pass# ImplementclassMyNet(NetworkModule):pass# ImplementclassMySubModule1D(SubModule1D):pass# Implementif__name__=='__main__':agent_registry=AgentRegistry()agent_registry.register_agent(MyAgent)env_registry=EnvRegistry()env_registry.register_env(MyEnv,['env-id-1','env-id-2'])network_registry=NetworkRegistry()network_registry.register_custom_net(MyNet)network_registry.register_submodule(MySubModule1D)main(parse_args(),agent_registry=agent_registry,env_registry=env_registry,net_registry=network_registry)Call your script like this:python my_script.py --agent MyAgent --env env-id-1 --custom-network MyNetYou can see all the argshereor how to implement
the stubs in the examples section above.FeaturesScriptsLocal (Single-node, Single-GPU)Best place tostartif you're trying to understand code.Distributed (Multi-node, Multi-GPU)Uses NCCL backend to all-reduce gradients across GPUs without a parameter
server or host process.Supports NVLINK and InfiniBand to reduce communication overheadInfiniBand untested since we do not have a setup to test on.Importance Weighted Actor Learner Architectures, IMPALA (Single Node, Multi-GPU)Our implementation uses GPU workers rather than CPU workers for forward
passes.On Atari we achieve ~4k SPS = ~16k FPS with two GPUs and an 8-core CPU."Note that the shallow IMPALA experiment completes training over 200
million frames in less than one hour."IMPALA official experiments use 48 cores.Ours: 2000 frame / (second * # CPU core) DeepMind: 1157 frame / (second * # CPU core)Does not yet support multiple nodes or direct GPU memory transfers.AgentsAdvantage Actor Critic, A2C (paper|code)Actor Critic Vtrace, IMPALA (paper|code)NetworksModular Network Interface: supports arbitrary input and output shapes up to
4D via a SubModule API.Stateful networks (ie. LSTMs)Batch normalization (paper)EnvironmentsOpenAI GymStarCraft 2 (unstable)Performance~ 3,000 Steps/second = 12,000 FPS (Atari)Local Mode64 environmentsGeForce 2080 TiRyzen 2700x 8-coreUsed to win aDoom competition(Ben Bell / Marv2in)Trained for 50M Steps / 200M FramesUp to 30 no-ops at start of each episodeEvaluated on different seeds than trained onArchitecture:Four Convs(F=32)
followed by anLSTM(F=512)Reproduce withpython -m adept.app local --logdir ~/local64_benchmark --eval -y --nb-step 50e6 --env <env-id>AcknowledgementsWe borrow pieces of OpenAI'sgymandbaselinescode. We indicate where this
is done. |
adeqt | Console plugin for Python Qt applicationsAdeqt gives you a Python shell inside your Qt applications using PyQt or PySide.
You can use this for simple debugging or as a 'power user' feature.How to useInstall the adeqt package:pip install adeqt.If you don't want to add any dependencies, you can copyadeqt.pyinto your own
project instead. You might also want to change the imports to use your chosen
Python Qt package directly (it normally uses theQtPycompatibility layer).Connect up a menu entry or a keyboard shortcut to open the Adeqt window like
this:fromPyQt5.QtCoreimportQtfromPyQt5.QtGuiimportQKeySequencefromPyQt5.QtWidgetsimportQAction,QMainWindow,QShortcutclassMainWindow(QMainWindow):def__init__(self,parent=None):super().__init__(parent)# ... Other application setup ...# Menu entryadeqt_action=QAction("Python console",self)adeqt_action.triggered.connect(self.show_adeqt)some_menu.addAction(adeqt_action)# Keyboard shortcut (here F12)adeqt_shortcut=QShortcut(QKeySequence(Qt.Key_F12),self)adeqt_shortcut.activated.connect(self.show_adeqt)adeqt_window=Nonedefshow_adeqt(self):# Change to 'from .adeqt ...' if you copy adeqt into your applicationfromadeqtimportAdeqtWindowifself.adeqt_windowisNone:self.adeqt_window=AdeqtWindow({'window':self},parent=self)self.adeqt_window.show()The dictionary you pass toAdeqtWindowdefines variables that will be
available in the console. This will normally have at least the main
window/application object, and any other objects you want convenient access to.When using the console window:Ctrl-Enter executes the existing codeTab shows available completionsCtrl-W closes the console windowDesign & limitationsAdeqt is deliberatelysimple, providing a basic console experience. It's
meant to be easy to copy into your project and easy to modify as required.Itdoesn't protect anything from malicious users. Users running a Python
application can probably do anything anyway, but Adeqt makes it very easy.
If you need to restrict what users can do, think about security at other levels.User code runs in the main thread. This makes it easy to safely call Qt
methods, but if you run something slow from the console, the GUI locks up until
it finishes.AlternativesTheJupyter Qt Consolecan beembedded
in an application.
This is a much more featureful console - with rich output, syntax highlighting,
better tab completions, etc. - but it's designed to run code in a separate
'kernel' process. Running the code in the same process as the console
('inprocess') is possible, but not well supported. It also needs quite a few
dependencies.Debuggers can pause your code during execution and give you a place to run
commands and explore the stack. Some modern debuggers can also 'attach' to a
process which wasn't started in a debugger. A good debugger is strictly more
powerful than Adeqt, but that power also makes it trickier to use. |
ader | A Python implementation of the ADER method for solving any (potentially very
stiff) hyperbolic system of PDEs of the following form: ∂Q/∂t + ∇·F(Q) + B(Q)·∇Q = S(Q), with flux F, non-conservative matrix B, and source terms S as defined below.An arbitrary number of spatial domains can be used, and this implementation is
capable of solving the equations to any order of accuracy. Second-order
parabolic PDEs will be implemented soon.InstallationRequires Python 3.6. Runpip install aderThe following dependencies will be installed:NumPy 1.14.5SciPy 1.1.0Tangent 0.1.9BackgroundGiven cell-wise constant initial data defined on a computational grid, this
program performs an arbitrary-order polynomial reconstruction of the data in
each cell, according to a modified version of the WENO method, as presented in
[1].To the same order, a spatio-temporal polynomial reconstruction of the data is
then obtained in each spacetime cell by the Discontinuous Galerkin method,
using the WENO reconstruction as initial data at the start of the timestep
(see [2,3]).Finally, a finite volume update step is taken, using the DG reconstruction to
calculate the values of the intercell fluxes and non-conservative intercell
jump terms, and the interior source terms and non-conservative terms (see [3]).The intercell fluxes and non-conservative jumps are calculated using either a
Rusanov-type flux [4], a Roe-type flux [5], or an Osher-type flux [6].UsageDefining the System of EquationsThe user must define the flux function (returning a NumPy array):F(Q, d, model_params)Given a vector of conserved variablesQ,Fmust return the flux in
directiond(whered=0,1,...corresponds to the x-,y-,… axes).model_paramsshould be an object containing any other parameters required
by the model in the calculation of the flux (for example, the heat capacity
ratio in the Euler equations, or the viscosity in the Navier-Stokes equations).model_paramsdoes not have to be used, but it must be contained in the
signature ofF.Similarly, if required, the nonconservative matrix B(Q) in direction d must be
defined as a square NumPy array, thus:B(Q, d, model_params)If required, the source terms must be defined as a NumPy array thus:S(Q, model_params)Note that the governing system of PDEs can be written as:where the system Jacobian M corresponds to:If an analytical form for M is known, it should be defined thus:M(Q, d, model_params)If an analytical form for the eigenvalue of M with the largest absolute value
is known, it should be defined thus:max_eig(Q, d, model_params)The system Jacobian and largest absolute eigenvalue are required for the ADER
method. If no analytical forms are available, they will be derived from the
supplied flux (and nonconservative matrix, if supplied) usingautomatic
differentiation.
This will introduce some computational overhead, however, and some
considerations need to be made when defining the flux function (see the
dedicated section below).Solving the EquationsSuppose we are solving thereactive Euler equationsin 2D. We defineFandSas above (but notB, as the equations are
conservative). We also definemodel_paramsto hold the heat capacity ratio
and reaction constants. These equations contain 5 conserved variables. To solve
them to order 3, with 4 CPU cores, we set up the solver object thus:from ader import Solver
solver = Solver(nvar=5, ndim=2, F=F, B=None, S=S, model_params=model_params, order=3, ncore=4)Analytical forms of the system Jacobian and the eigenvalue of largest absolute
size exist for the reactive Euler equations. If we define these inMandmax_eigwe may instead create the solver object thus:solver = Solver(nvar=5, ndim=2, F=F, B=None, S=S, M=M, max_eig=max_eig, model_params=model_params, order=3, ncore=4)To solve a particular problem, we must define the initial state,initial_grid. This must be a NumPy array with 3 axes, such thatinitial_grid[i,j,k]is equal to the value of the kth conserved variable in
cell (i,j). We must also define listdX=[dx,dy]where dx,dy are the grid
spacing in the x and y axes. To solve the problem to a final time of 0.5, with
a CFL number of 0.9, while printing all output, we call:solution = solver.solve(initial_grid, 0.5, dX, cfl=0.9, verbose=True)Advanced UsageThe Solver class has the following additional arguments:riemann_solver(default ‘rusanov’): Which Riemann solver should be
used. Options: ‘rusanov’, ‘roe’, ‘osher’.stiff_dg(default False): Whether to use a Newton Method to solve the
root finding involved in calculating the DG predictor.stiff_dg_guess(default False): Whether to use an advanced initial
guess for the DG predictor (only for very stiff systems).newton_dg_guess(default False): Whether to compute the advanced
initial guess using a Newton Method (only for very, very stiff systems).DG_TOL(default 6e-6): The tolerance to which the DG predictor is
calculated.DG_MAX_ITER(default 50): Maximum number of iterations attempted if
solving the DG root finding problem iteratively (not with a Newton Method)WENO_r(default 8): The WENO exponent r.WENO_λc(default 1e5): The WENO weighting of the central stencils.WENO_λs(default 1): The WENO weighting of the side stencils.WENO_ε(default 1e-14): The constant used in the WENO method to avoid
numerical issues.The Solver.solve method has the following additional arguments:boundary_conditions(default ‘transitive’): Which kind of boundary
conditions to use. Options: ‘transitive’, ‘periodic’,func(grid, N, ndim). In the latter case, the user defines a function
with the stated signature. It should return a NumPy array with the same
number of axes as grid, but withNmore cells on either side of the grid
in each spatial direction, whereNis equal to the order of the method
being used. These extra cells are required by an N-order method.callback(default None): A user-defined callback function with signaturecallback(grid, t, count)wheregridis the value of the
computational grid at timet(and timestepcount).ExamplesCheck out example.py to see a couple of problems being solved for the GPR model
and the reaction Euler equations.NotesSpeedThis implementation is pretty slow. It’s really only intended to be used only
for academic purposes. If you have a commercial application that requires a
rapid, bullet-proof implementation of the ADER method or the GPR model, then
get in touch ([email protected]).Automatic DifferentiationThe automatic differentiation used to deriveMandmax_eigis
performed usingGoogle’s Tangent library.
Although it’s great, this library is quite new, and it cannot cope with all
operations that you may use in your fluxes (although development is proceeding
quickly). In particular, it will never be able to handle closures, and classes
are not yet implemented. Some NumPy functions such asinvhave not yet been
implemented. If you run into issues, drop me a quick message and I’ll let you
know if I can make it work.ReferencesDumbser, Zanotti, Hidalgo, Balsara -ADER-WENO finite volume schemes with
space-time adaptive mesh refinementDumbser, Castro, Pares, Toro -ADER schemes on unstructured meshes for
nonconservative hyperbolic systems: Applications to geophysical flowsDumbser, Hidalgo, Zanotti -High order space-time adaptive ADER-WENO finite
volume schemes for non-conservative hyperbolic systemsToro -Riemann Solvers and Numerical Methods for Fluid Dynamics: A
Practical IntroductionDumbser, Toro -On Universal Osher-Type Schemes for General Nonlinear
Hyperbolic Conservation LawsDumbser, Toro -A simple extension of the Osher Riemann solver to
non-conservative hyperbolic systems |
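To make the interfaces described under Usage and Advanced Usage concrete, here is a hedged sketch of a toy scalar linear-advection setup (not one of the bundled examples). The function signatures (F, M, max_eig, the custom boundary_conditions function and the callback) follow the descriptions above; whether ndim=1 with a two-axis initial grid is accepted exactly as written, and whether the source term S may be omitted, are assumptions extrapolated from the 2D description.

```python
# Toy sketch: dQ/dt + a * dQ/dx = 0 in 1D, wired up with the documented signatures.
import numpy as np
from ader import Solver


class Params:
    a = 1.0  # advection speed (assumed model parameter for this toy problem)


def F(Q, d, model_params):
    # Flux in direction d (only d=0 exists in 1D).
    return model_params.a * Q


def M(Q, d, model_params):
    # System Jacobian as a square array: for scalar advection, simply [[a]].
    return model_params.a * np.eye(1)


def max_eig(Q, d, model_params):
    # Eigenvalue of M with the largest absolute value.
    return abs(model_params.a)


def periodic_bcs(grid, N, ndim):
    # Custom boundary conditions: pad the single spatial axis with N cells on
    # either side, wrapping around periodically, as the documented signature requires.
    return np.concatenate([grid[-N:], grid, grid[:N]], axis=0)


def progress(grid, t, count):
    # Callback invoked with the grid at time t and timestep count.
    print(f"step {count}: t = {t:.4f}")


nx, dx = 100, 0.01
x = (np.arange(nx) + 0.5) * dx
initial_grid = np.sin(2 * np.pi * x).reshape(nx, 1)  # one conserved variable per cell

solver = Solver(nvar=1, ndim=1, F=F, B=None, M=M, max_eig=max_eig,
                model_params=Params(), order=3, ncore=1)
solution = solver.solve(initial_grid, 0.5, [dx], cfl=0.9,
                        boundary_conditions=periodic_bcs, callback=progress, verbose=True)
```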
adeskForgeWrapper | No description available on PyPI. |
adeso | A.D.E.S.O.Application forDecryption,Encryption andSteganographicOperationsSummary:Steganographyis the practice of concealing information within another message to avoid detection.Encryptionis the process of converting plaintext into ciphertext, which is unreadable.This application provides both of these functionalities. Users can first encrypt their data, via
a password using an AES-128 algorithm in CBC mode.Then, users can hide the encrypted data (ciphertext)
within an image of their choice via steganography.Key Features:A web interface is provided to paste large plaintext objects without a terminal buffer limit.All encryption, decryption, encoding and decoding is done in memory.Tools Used:Front end:SvelteusingSvelteKitAPI:Python 3.11.1usingFlask 2.3.2Cryptography:cryptography 41.0.1Steganography usingLSBGotchas:Large ciphertext and images can result in API lag or lockups for steganography.The decode operation is done on the file selected, not the image displayed on the UI.Links:DocumentationPyPi ProductionPyPi TestInstallation:pip install adeso |
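As an illustration of the encryption step described above (password-based AES-128 in CBC mode), here is a hedged, self-contained sketch using the cryptography library. It is not adeso's actual implementation; the key-derivation parameters and the salt/IV layout are assumptions made for the example.

```python
# Hypothetical sketch, not the package's code: derive a 128-bit key from a password
# and encrypt plaintext with AES-128 in CBC mode.
import os

from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def encrypt(plaintext: bytes, password: str) -> bytes:
    salt = os.urandom(16)
    iv = os.urandom(16)
    # Derive a 16-byte (128-bit) key from the password.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16, salt=salt, iterations=480_000)
    key = kdf.derive(password.encode())
    # Pad the plaintext to the AES block size, then encrypt with AES-128-CBC.
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    # Prepend salt and IV so the data can later be decrypted with the same password.
    return salt + iv + ciphertext
```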
adestis-netbox-plugin-account-management | No description available on PyPI. |
a-detect | Time series bottom-up segmentation code. Change Log: 0.0.1 (29/06/2022) - First Release
adetfs | ADETfs - Automated Data Extraction Tool for Fitbit serverPython software for automated extraction of user data from Fitbit server and saving it as csv. Software also saves all the sleep data as JSON file(s) which can be used for example for sleep graphics. This software can also be combined with import_to_redcap module, which can be used to transfer the data to REDCap.Installationpip install adetfsRequirementsSee the requirements.txtUsageSoftware is easy-to-use but there are few tricks you must do.This software is mainly intented to use in a research purposes but can also be used and modified to other purposes (see the license file). This software allows easy way to automate the data extraction from the Fitbit server. Fitbit server is not "open-source database" so you can only use this software with user(s) content.This software can be run from the command-prompt using: python -m adetfs
Before launching, navigate to the folder that contains properties.ini fileIt is also the responsibility of the person who uses this software to follow the local regulations according to any data protection and privacy laws.Properties.ini: This file contains all the parameters for the software to run. You only have to change the values of the parameters to match yours. Use only plain text, no quotations are needed. This file need to placed in the ADETfs folder path. Example of properties.ini file is available with the source code. Parameters in the file are:EMAIL:user = senders gmail address without '@gmail.com' (We suggest creating one for this purpose).password = gmail app password (https://support.google.com/accounts/answer/185833?hl=en)to = email address of receiver
usernames = filepath to txt file that contains json type dictionary with Fitbit userid and the name user want to use in email alert
Example: {"1XX23": "Test user 1", "9BB34":"Test user 2", "8CC35": "userid"}CR:id = client_id (can be obtained from Fitbit by registering an application)secret = client_secret (can be obtained from Fitbit by registering an application)TOKENS:token_file = path to token file. Create a file "filename.txt" and provide the path here. Then use fetch_tokens_to_file module for saving the tokens inside the file.REFRESH_TOKEN:url_path = Fitbit API Refresh token URL pathSLEEP_STATS:api_version = Fitbit API version for sleep statsFOLDER_PATHfolder_path = Folder path in which the data will be saved. Software will create a structure '\data\user_id' in which the user files will be saved
Execute.log and data.log are saved under this path. EXTRACTION_LOG: EXTRACTION_LOG_PATH: Path to the extraction log file that contains JSON type information: {USER_ID:last_extraction_date,USER_ID2:last_extraction_date}
This file is used to see if there is new data available for the user.
Extraction is being done until two days before last sync time to not miss any sleep dataComplementary part for REDCap projectsREDCap:token = REDCap project tokenurl = REDCap project URLactivate_main.bat: You can download this file from the project homepageThis file is for automation of the script. Automation will use the time delta between last extraction and last synctime. As a default the tool is meant to be run once a week. If run more seldom, make sure that all the data is being collected (Fitbit has rate limit of 150 queries per hour).In this Batch file, you will need to add the directory of your Python virtualenvironment (if any), your python.exe file for that environment and the directory of this ADETfs main file.You can use Windows Task Scheduler to schedule this file to be run once a week. At the beginning the file has simple check to verify that internet connection is on. If not it will sleep 10 minutes as a default before trying again. Loop will run 5 times and if no connection is established the execution will end and report can be find from the error.log.Client ID and Client Secret:NEVER EXPOSE THE CLIENT ID AND SECRET!Client refers to Fitbit Application you will have to create. Your application will have to be server type and follow the requirements and rules of Fitbit. The author(s) of this software, according to the license, are not in any legal responsibility for your use case.Your application can use this tool as it is or make changes in accordance with the license.When you have created and registered your App, you will have to save the credentials (client id and secret) into the corresponding parameters of Properties.ini file for later use.Be aware, that if ever you will have to revoke the client secret, you will have to save the new secret inside the properties.ini.Tokens for each user:NEVER EXPOSE THE USER TOKENS!You must save your user(s) Fitbit id, expiration of authentication, access token and refresh token into the tokens text file. This have to be done manually using Fitbit Authentication. For this you can use fetch_tokens_to_file module. This module can be downloaded from the project homepage. There is also a copy of gather_keys_oauth2 module (original can be found fromhttps://github.com/orcasgit/python-fitbit) After running the fetch_tokens_to_file module, a browser will open to Fitbit login page and ask the user to login. After login user will be asked the consent. After confirmation, the tokens will be saved in the tokens text file. Before fetching tokens for the next user, you will have to open Fitbit website and log out with the current user. Otherwise you will automatically fetch new tokens for the logged in user.This part is most time consuming as it can not be automated. After doing this for each user, you should have a token file with each row having:
user_id,expires_at,access_token,refresh_tokenMake sure that the file does not contain empty line at the end!CSV and JSON output:Software will save one CSV file for each day for each user with the extracted data. Non-existing data will be empty. In the case of sleep data, if any, then software will save also JSON file for each day for each user. Each CSV file will be named as 'userid_YYYY_MM_DD_all_data.csv' and JSON file as 'sleep_stats_userid_YYYY_MM_DD.json'. If file is already existing name will contain '_copy' at the end. All files will be saved to folder called 'data\user_id'. Path for this folder can be configured inside properties.iniError handling:For fatal errors, software is using "execute.log" to save them. For non-fatal errors, for example non-existing sleep data, errors will be logged in "data_log.log".Automation:Software comes with .bat-file which can be used for automation with most Windows OS. For Linux or other OS, please refer the OS help..bat-file, among other files like properties.ini, can be found from the source code or project website.Email alert:Software is sending email alert after execution with information of possible errors and issues listed in the email.Tests:Software includes few simple tests, which can be found from the tests folder. These can be used to check the correct functioning of the software.ContributingIdeas how to improve the software, add new features or error fix are welcome.
Please open an issue first to discuss what you would like to change or add. Citation: When used in research, please see the citation file and cite accordingly. Acknowledgment: This tool benefits from the following projects:
Fitbit API Python Client Implementation ,https://github.com/orcasgit/python-fitbitCherryPy Object-Oriented HTTP framework,https://www.cherrypy.orgLicenseBSD 3-ClauseThis software includes third party open source software components: Pandas, CherryPy, Fitbit API Python Client Implementation , tqdm, Requests, urllib3, oauthlib, toml, and setuptools.Each of these software components have their own license. Please see ./LICENSES/PANDAS_LICENSE, ./LICENSES/CHERRYPY_LICENSE, ./LICENSES/FITBITAPIPYTHONCLIENTIMPLEMENTATION_LICENSE,
./LICENSES/TQDM_LICENSE, ./LICENSES/OAUTHLIB_LICENSE, ./LICENSES/SETUPTOOLS_LICENSE,
./LICENSES/URLLIB3_LICENSE, and ./LICENSES/REQUESTS_LICENSE., /LICENSES/TOML_LICENSE. |
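As a side note, a small hypothetical helper (not part of adetfs) showing how the two plain-text inputs described in the configuration section above could be read: the tokens file with one user_id,expires_at,access_token,refresh_token line per user, and the usernames JSON dictionary. The file paths are placeholders.

```python
# Hypothetical helpers based on the file formats described above.
import json


def read_tokens(token_path: str) -> dict:
    tokens = {}
    with open(token_path) as fh:
        for line in fh:
            line = line.strip()
            if not line:  # the README warns against a trailing empty line
                continue
            user_id, expires_at, access_token, refresh_token = line.split(",")
            tokens[user_id] = {
                "expires_at": expires_at,
                "access_token": access_token,
                "refresh_token": refresh_token,
            }
    return tokens


def read_usernames(usernames_path: str) -> dict:
    # e.g. {"1XX23": "Test user 1", "9BB34": "Test user 2"}
    with open(usernames_path) as fh:
        return json.load(fh)
```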
adevinta-yapo-bi-configuration | No description available on PyPI. |
adevinta-yapo-bi-connect-db | No description available on PyPI. |
adevinta-yapo-bi-postgresql | No description available on PyPI. |
adevinta-yapo-bi-pySpark-postgresql | No description available on PyPI. |
adevinta-yapo-bi-read-params | No description available on PyPI. |
adeweb-docker-scripts | Various Docker scripts to facilitate server management and app debugging. Disclaimer: this is intended for internal use and may not be generic enough for you.
adex | ADEX is currently under development.Check out the official documentation:https://adex.readthedocs.io/en/latestExplore the open source code repository:https://github.com/vitorinojoao/adexInstall it from the Python Package Index:https://pypi.org/project/adex |
adext | adextadextis a small package that extendsalarmdecoderto include some additional methods forHome Assistant.Specifically, the following methods have been added:arm_homearm_awayarm_nightEach method accepts the arguments described below to determine which key sequences are used to arm a panel based on factors like panel brand and user config settings.Arguments:code: (Noneorstr) - the code used to arm a panel (i.e.'1234')auto_bypass: (bool) - for Honeywell only. set toTrueto prefix an arming sequence with<code> + 6#in order to automatically bypass any faulted zones. This will require a code to be entered even ifcode_arm_requiredis set tofalse."code_arm_required: (bool) - set toFalseto enable arming without a code. seeArming Key Sequencesbelow.alt_night_mode: (bool) - For Honeywell systems, set totrueto enableNight-Staymode instead ofInstantmode for night arming. For DSC systems, set totrueto enableNo-Entrymode instead ofStaymode for night arming. For both systems, whenever this option is set totrue, a code will be required for night armingregardless of thecode_arm_requiredsetting.SeeArming Key Sequencessection below for more information.Arming Key SequencesThe tables below show the key press sequences used for arming for the different panel brands and configuration setting combinations.Honeywellcode_arm_required = true (default)ModeKey Sequencealarm_arm_homecode+3alarm_arm_awaycode+2alarm_arm_night(alt_night_mode=false, default)code+7alarm_arm_night(alt_night_mode=true)code+33code_arm_required = falseModeKey Sequencealarm_arm_home#3alarm_arm_away#2alarm_arm_night(alt_night_mode=false, default)#7alarm_arm_night(alt_night_mode=true)code+33DSCcode_arm_required = true (default)ModeKey Sequencealarm_arm_homecodealarm_arm_awaycodealarm_arm_night(alt_night_mode=false, default)codealarm_arm_night(alt_night_mode=true)*9+codecode_arm_required = falseThechr(4)andchr(5)sequences below are equivalent to pressing theStayandAwaykeypad keys respectively (as outlined in theAlarmDecoder documentation).ModeKey Sequencealarm_arm_homechr(4)+chr(4)+chr(4)alarm_arm_awaychr(5)+chr(5)+chr(5)alarm_arm_night(alt_night_mode=false, default)chr(4)+chr(4)+chr(4)alarm_arm_night(alt_night_mode=true)*9+code |
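For illustration, a hedged sketch of how the added methods might be called. Only the method names and the arguments (code, auto_bypass, code_arm_required, alt_night_mode) come from the description above; how the extended AlarmDecoder object ("client" below) is created and wired up is an assumption and is not documented in this entry.

```python
# Hypothetical usage sketch for the methods adext adds.

def arm_for_the_night(client, code: str) -> None:
    # Honeywell: alt_night_mode=True sends <code> + 33 (Night-Stay) instead of <code> + 7 (Instant),
    # per the key-sequence tables above.
    client.arm_night(code=code, auto_bypass=False, code_arm_required=True, alt_night_mode=True)


def arm_away_without_code(client) -> None:
    # With code_arm_required=False on a Honeywell panel this corresponds to the #2 sequence above.
    client.arm_away(code=None, auto_bypass=False, code_arm_required=False, alt_night_mode=False)
```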
adf | Table of contentsOverviewFlow configuration documentationImplementer configuration documentationAPI documentationProcessing functionsOverviewInstallIt is highly advised to install the ADF framework in a virtual env runningpython3.7, as this is (currently) the only supported python version for the AWS implementer. In addition, make sure to properly set yourPYSPARK_PYTHONpath for full spark support :mkvirtualenv adf -p `which python3.7`
export PYSPARK_PYTHON=`which python3`
pip install adfADF in a nutshellAbstract Data Flows (ADF)is a framework that provides data platform automation without infrastructure commitment. Data processing flows are defined in an infrastructure agnostic manner, and are then plugged into anyimplementerconfiguration of your choice. This provides all the major benefits of automation (namely, instantly deployable production ready infrastructure) while avoiding its major pitfall : being tied to your choice of infrastructure.Getting startedFor an easy-to-follow tutorial, please refer to the accompanyingadf_appsister repository and its associated README.Flow configuration documentationGlobal configurationModulesFlowsStarting stepsLanding stepCombination stepReception stepNon-starting stepMetadata configurationEach flow configuration file defines anADF collection. The configuration can be broken down into 3 categories of parameters.Global configurationParameterObligationDescriptionnameREQUIREDThe name for the ADF collection.BATCH_ID_COLUMN_NAMEOPTIONAL, advised not to changeColumn name to store batch IDs.SQL_PK_COLUMN_NAMEOPTIONAL, advised not to changeColumn name to store a PK if needed.TIMESTAMP_COLUMN_NAMEOPTIONAL, advised not to changeColumn name to store timestamps.For example :name:collection-nameBATCH_ID_COLUMN_NAME:modified-batch-id-column-nameSQL_PK_COLUMN_NAME:modified-sql-pk-column-nameTIMESTAMP_COLUMN_NAME:modified-timestamp-column-nameModulesModules are listed under themodulesparameter. Each module must define the following parameters :ParameterObligationDescriptionnameREQUIREDAlias to refer to this module.import_pathREQUIREDValid python import path.For example :modules:-name:module-aliasimport_path:package.moduleFlowsThe actual data flows are defined under theflowsparameter, as a list of named flows, each containing a list of named steps.ParameterObligationDescriptionnameREQUIREDUnique Identifier.stepsREQUIREDList of steps.For example :collection:collection-namemodules:-[...]-[...]flows:name:flow-namesteps:-[...]# Step config-[...]# Step config-[...]# Step configStarting stepsThe first step in a flow is known as a starting step. There are 3 types of starting steps.Landing stepThere are passive steps that define where input data is expected to be received. 
As such, they cannot define a processing function, metadata, or any custom flow control mechanisms.ParameterObligationDescriptionstartREQUIREDMust be set tolanding.layerREQUIREDData layer name.nameREQUIREDIdentifier unique within this flow.versionOPTIONALData version ID.funcBANNEDCannot define a processing function.func_kwargsBANNEDCannot define function keywords.metaBANNEDCannot define custom metadata.sequencerBANNEDCannot define a custom sequencer.data_loaderBANNEDCannot define a custom data loader.batch_dependencyBANNEDCannot define custom batch dependency.For example :name:collection-nameflows:-name:flow-namesteps:-start:landinglayer:layer-namename:landing-step-nameversion:source-data-version-nameCombination stepThese steps define the start of a flow that takes as input multiple previous steps.ParameterObligationDescriptionstartREQUIREDMust be set tocombination.layerREQUIREDData layer name.nameREQUIREDIdentifier unique within this flow.input_stepsREQUIREDInput steps to combine.versionOPTIONALData version ID.funcOPTIONALThe processing function.func_kwargsOPTIONALExtra kwargs to pass to the function.metaOPTIONALMetadata constraints on output.sequencerOPTIONALDefined for the step itself.data_loaderOPTIONALDefined for the step and the input steps.batch_dependencyOPTIONALDefined only for the input steps.A minimal working example :name:collection-namemodules:-name:module-aliasimport_path:package.moduleflows:-name:flow-name-0steps:-start:landinglayer:layer-namename:landing-step-name-name:flow-name-1steps:-start:landinglayer:layer-namename:landing-step-name-name:combination-flowsteps:-start:combinationlayer:layer-namename:combination-step-nameinput_steps:-flow_name:flow-name-0step_name:landing-step-flow_name:flow-name-1step_name:landing-stepfunc:# REQUIRED if there are more than one input stepsload_as:moduleparams:module:module-aliasname:processing_function_nameAn example with all optional configurations :name:collection-namemodules:-name:module-aliasimport_path:package.moduleflows:-name:flow-name-0steps:-start:landinglayer:layer-namename:landing-step-name-name:flow-name-1steps:-start:landinglayer:layer-namename:landing-step-name-name:combination-flowsteps:-start:combinationlayer:layer-namename:combination-step-nameversion:version-nameinput_steps:-flow_name:flow-name-0step_name:landing-stepdata_loader:module:module-aliasclass_name:DataLoaderClassNameparams:[...]# class init paramsbatch_dependency:module:module-aliasclass_name:BatchDependencyClassNameparams:[...]# class init params-flow_name:flow-name-1step_name:landing-stepdata_loader:module:module-aliasclass_name:DataLoaderClassNameparams:[...]# class init paramsbatch_dependency:module:module-aliasclass_name:BatchDependencyClassNameparams:[...]# class init paramsfunc:load_as:moduleparams:module:module-aliasname:processing_function_namefunc_kwargs:[...]# kwargs dictionarymeta:[...]# metadata for outputsequencer:module:module-aliasclass_name:SequencerClassNameparams:[...]# class init paramsdata_loader:module:module-aliasclass_name:DataLoaderClassNameparams:[...]# class init paramsReception stepA reception step is used when we want a processing step to output more than one data structure. When a processing step is hooked to reception steps, no data will actually be saved at the processing step itself. Instead, the reception steps will serve as storage steps. 
As a result, much like landing steps, they cannot define a processing function or any custom flow control mechanisms, but they can define metadata.ParameterObligationDescriptionstartREQUIREDMust be set toreception.layerREQUIREDData layer name.nameREQUIREDIdentifier unique within this flow.keyREQUIREDKeyword by which to specify this step.input_stepsREQUIREDUpstream steps to store results of.metaOPTIONALMetadata constraints on incoming data.versionBANNEDUses input step version.funcBANNEDCannot define a processing function.func_kwargsBANNEDCannot define function keywords.sequencerBANNEDCannot define a custom sequencer.data_loaderBANNEDCannot define a custom data loader.batch_dependencyBANNEDCannot define custom batch dependency.For example :name:collection-namemodules:-name:module-aliasimport_path:package.moduleflows:-name:flow-namesteps:-start:landinglayer:layer-namename:landing-step-name-layer:layer-namename:processing-stepfunc:# see below for example functionload_as:moduleparams:module:module-aliasname:multiple_outputs-name:reception-flow-0steps:-start:receptionlayer:layer-namename:reception-step-namekey:reception-key-0input_steps:-flow_name:flow-namestep_name:processing-step-name:reception-flow-1steps:-start:receptionlayer:layer-namename:reception-step-namekey:reception-key-1input_steps:-flow_name:flow-namestep_name:processing-stepHooking a processing step into a reception step changes the expected output signature of the processing function. Instead of returning a single ADS, it must now return a dictionary whose keys correspond to the reception step keys. In our case :defmultiple_outputs(ads:AbstractDataStructure,)->Dict[str,AbstractDataStructure]:return{"reception-key-0":ads[ads["col_0"]==0],"reception-key-1":ads[ads["col_0"]!=0],}Non-starting stepA non-starting step is any step that is not the first step in a flow. It can customize any and all flow control mechanisms, as well as define a processing function and define metadata.ParameterObligationDescriptionlayerREQUIREDData layer name.nameREQUIREDIdentifier unique within this flow.versionOPTIONALData version ID.funcOPTIONALThe processing function.func_kwargsOPTIONALExtra kwargs to pass to the function.metaOPTIONALMetadata constraints on output.sequencerOPTIONALDefines batch sequencing.data_loaderOPTIONALDefines data loading.batch_dependencyOPTIONALDefines batch dependency.startBANNEDMust not be set.A minimal working example :name:collection-nameflows:-name:flow-name-0steps:-start:landinglayer:layer-namename:landing-step-name-layer:layer-namename:processing-step-namefunc:# if not set, the input data is merely copiedload_as:evalparams:expr:'lambdaads:ads[ads["col_0"==0]]'An example with all optional configurations :name:collection-namemodules:-name:module-aliasimport_path:package.moduleflows:-name:flow-name-0steps:-start:landinglayer:layer-namename:landing-step-name-layer:layer-namename:processing-step-nameversion:version-namefunc:load_as:evalparams:expr:'lambdaads:ads[ads["col_0"==0]]'func_kwargs:[...]# kwargs dictionarymeta:[...]# metadata for outputsequencer:module:module-aliasclass_name:SequencerClassNameparams:[...]# class init paramsdata_loader:module:module-aliasclass_name:DataLoaderClassNameparams:[...]# class init paramsbatch_dependency:module:module-aliasclass_name:BatchDependencyClassNameparams:[...]# class init paramsMetadata configurationMetadata is configured by specifying column names, data types, and requested behavior when missing. 
You can also set the default missing column behavior, as well as what to do with extra columns. All metadata parameters except for the column name are optional.ParameterValuesDescriptioncolumn.nameAnyName of the columncolumn.caststr,int,float,complex,datetime,date,timedeltaData typecolumn.on_missingignore,fail,fillMissing column behaviorcolumn.fill_valAny, defaults toNoneValue to fill column if missingin_partitiontrue,falseWhether to use in partitionon_missing_defaultignore,fail,fillDefault missing column behavioron_extraignore,fail,cutWhat to do with extra columnsFor example :name:collection-nameflows:-name:flow-namesteps:-start:landinglayer:layer-namename:landing-step-layer:layer-namename:meta-stepmeta:columns:-name:essential_columncast:stron_missing:failin_partition:true-name:integer_columncast:inton_missing:fillfill_val:"FILL_VALUE"-name:weakly_defined_columnon_missing_default:ignoreon_extra:cutImplementer configuration documentationLocal ImplementerAWS ImplementerManaged infrastructure AWS ImplementerPrebuilt infrastructure AWS ImplementerWhile implementer configurations are allowed to vary freely, there is one parameter they must all contain to actually specify which implementer they are destined for. Its value must be a valid python import path.ParameterDescriptionimplementer_classModule path followed by implementer class nameFor example, if the implementer class is defined in the modulepackage.module, and the implementer class name isImplementerClass, the corresponding configuration would be :implementer_class:package.module.ImplementerClassLocal ImplementerThe local implementer requires a root path, as well as a list of layer names to associate to each layer type. The available layer types are :3 file based CSV data layers, each of which manipulates data differently to perform computation :list_of_dicts: Loads data as a list of dictionaries to perform computation.pandas: Loads data as pandas DataFrame to perform computation.spark: Loads data as a pyspark DataFrame to perform computation.2 database backed data layers, each of which uses a different database engine to perform storage and computation :sqlite: Uses an sqlite database to store and process data.postgres: Uses a postgresql database to store and process data.If at least onepostgreslayer is used, then connection information must also be passed. 
Admin credentials may also be passed if one wishes for the implementer to create the database and technical user in question.ParameterObligationDescriptionimplementer_classREQUIREDModule path followed by implementer class nameroot_pathREQUIREDRoot path to store data and state handlerextra_packagesOPTIONALList of local paths to any packages requiredlayers.list_of_dictsOPTIONALList of list of dict based layerslayers.pandasOPTIONALList of pandas based layerslayers.sparkOPTIONALList of pyspark based layerslayers.sqliteOPTIONALList of sqlite based layerslayers.postgresOPTIONALList of postgres based layerspostgres_configOPTIONALhost,port,db,user,pw,admin_user,admin_pwFor example, to configure a local implementer without any postgres based layers :implementer_class:ADF.components.implementers.MultiLayerLocalImplementerextra_packages:[.]root_path:path/to/datalayers:pandas:-pandas-layer-name-0-pandas-layer-name-1spark:-spark-layer-name-0-spark-layer-name-1sqlite:-sqlite-layer-name-0-sqlite-layer-name-1To be able to include postgres based layers, one must add connection information :implementer_class:ADF.components.implementers.MultiLayerLocalImplementerextra_packages:[.]root_path:path/to/datapostgres_config:db:adf_db# Requireduser:adf_user# Requiredpw:pw# Requiredhost:localhost# Optional, defaults to localhostport:5432# Optional, defaults to 5432admin_user:postgres# Optional, will be used to create db and user if neededadmin_pw:postgres# Optional, will be used to create db and user if neededlayers:pandas:-pandas-layer-name-0-pandas-layer-name-1spark:-spark-layer-name-0-spark-layer-name-1postgres:-postgres-layer-name-0-postgres-layer-name-1AWS ImplementerManaged infrastructure AWS ImplementerThe AWS implementer configuration file is similar in structure to that of the local implementer. When ADF is given free rein over infrastructure deployment, individual layers carry sizing information. 
In addition, the state handler configuration and sizing must also be specified.ParameterObligationDescriptionimplementer_classREQUIREDModule path followed by implementer class namemodeREQUIREDSet tomanagedto tell your implementer to handle infrastructure deploymentnameREQUIREDAn identifier for the implementerlog_folderREQUIREDA local folder in which to store subcommand logsbucketREQUIREDThe S3 bucket used for data storages3_prefixREQUIREDS3 prefix for all data and uploaded configurationstate_handlerREQUIREDengine,db_name,db_instance_class,allocated_storageextra_packagesOPTIONALList of local paths to any additional required packageslambda_layersOPTIONALsep,timeout,memoryemr_layersOPTIONALmaster_instance_type,slave_instance_type,instance_count,step_concurrency,format,landing_formatemr_serverless_layersOPTIONALinitial_driver_worker_count,initial_driver_cpu,initial_driver_memory,initial_executor_worker_count,initial_executor_cpu,initial_executor_memory,max_cpu,max_memory,idle_timeout_minutesredshift_layersOPTIONALdb_name,node_type,number_of_nodesathena_layerOPTIONALlanding_formatFor example :implementer_class:ADF.components.implementers.AWSImplementerextra_packages:[.]mode:managed# ADF will handle infrastructure deploymentname:implementer-namelog_folder:local/path/to/logsbucket:YOUR-BUCKET-NAME-HEREs3_prefix:YOUR_S3_PREFIX/state_handler:engine:postgres# only postgres is currently supporteddb_name:ADF_STATE_HANDLERdb_instance_class:db.t3.microallocated_storage:20lambda_layers:lambda-layer-name:sep:","# separator for CSVstimeout:60memory:1024emr_layers:heavy:master_instance_type:m5.xlargeslave_instance_type:m5.xlargeinstance_count:1step_concurrency:5format:parquet# the format in which to store datalanding_format:csv# the format in which to expect data in landing stepsemr_serverless_layers:serverless:initial_driver_worker_count:1initial_driver_cpu:"1vCPU"initial_driver_memory:"8GB"initial_executor_worker_count:1initial_executor_cpu:"1vCPU"initial_executor_memory:"8GB"max_cpu:"32vCPU",max_memory:"256GB",idle_timeout_minutes:15,redshift_layers:expose:db_name:exposenumber_of_nodes:1node_type:ds2.xlargeathena_layers:dump:landing_format:csv# the format in which to expect data in landing stepsPrebuilt infrastructure AWS ImplementerIf you wish to connect your AWS implementer to pre-existing infrastructure, you can do this by changing the implementer mode toprebuilt. Once the implementer setup is run, it is possible to output aprebuiltconfiguration and use it moving forward. This is the recommended usage, asprebuiltmode requires fewer permissions to run, as well as fewer API calls to determine the current state of the infrastructure. Unlike inmanagedmode, no sizing information is provided. 
Instead, we pass endpoints and various configurations that define the data layer.ParameterObligationDescriptionimplementer_classREQUIREDModule path followed by implementer class namemodeREQUIREDSet tomanagedto tell your implementer to handle infrastructure deploymentnameREQUIREDAn identifier for the implementerlog_folderREQUIREDA local folder in which to store subcommand logsbucketREQUIREDThe S3 bucket used for data storages3_prefixREQUIREDS3 prefix for all data and uploaded configurationstate_handler_urlREQUIREDURL to state handler DBextra_packagesOPTIONALList of local paths to any additional required packageslambda_layersOPTIONALlambda_arn,lambda_name,s3_fcp_template,s3_icp,sep,sqs_arn,sqs_name,sqs_urlemr_layersOPTIONALbucket,s3_prefix,cluster_id,cluster_arn,name,public_dns,log_uri,format,landing_formatemr_serverless_layersOPTIONALapplication_id,bucket,environ,format,landing_format,role_arn,s3_fcp_template,s3_icp,s3_launcher_key,s3_prefix,venv_package_keyredshift_layersOPTIONALtable_prefix,endpoint,port,db_name,user,role_arnathena_layersOPTIONALbucket,db_name,landing_format,s3_prefix,table_prefixFor example :implementer_class:ADF.components.implementers.AWSImplementerextra_packages:[.]mode:prebuilt# ADF will plug into pre-existing infrastructurename:implementer-namelog_folder:local/path/to/logsbucket:YOUR-BUCKET-NAME-HEREs3_prefix:YOUR_S3_PREFIX/state_handler_url:postgresql://username:[email protected]:5432/DB_NAMElambda_layers:light:lambda_arn:LAMBDA_ARNlambda_name:LAMBDA_FUNCTION_NAMEs3_fcp_template:s3://TEMPLATE/TO/FCP/PATH/fcp.{collection_name}.yamls3_icp:s3://ICP/PATH/icp.yamlsep:','# separator for CSVssqs_arn:SQS_ARNsqs_name:SQS_QUEUE_NAMEsqs_url:https://url.to/sqs/queueemr_layers:heavy:bucket:YOUR-BUCKET-NAME-HEREs3_prefix:S3/PREFIX/# where to store data in the bucketcluster_id:EMR_CLUSTER_IDcluster_arn:EMR_CLUSTER_ARNname:EMR_CLUSTER_NAMEpublic_dns:https://url.to.emr.clusterlog_uri:s3://PATH/TO/LOGS/format:parquet# the format in which to store datalanding_format:csv# the format in which to expect data in landing stepsemr_serverless_layers:serverless:application_id:app-idbucket:YOUR-BUCKET-NAME-HEREenviron:AWS_DEFAULT_REGION:aws-regionRDS_PW:RDS_STATE_HANDLER_PASSWORDREDSHIFT_PW:REDSHIFT_PASSWORDformat:parquet# the format in which to store datalanding_format:csv# the format in which to expect data in landing stepsrole_arn:EXECUTION_ROLE_ARNs3_fcp_template:s3://TEMPLATE/TO/FCP/PATH/fcp.{collection_name}.yamls3_icp:s3://ICP/PATH/icp.yamls3_launcher_key:KEY/TO/ADF/LAUNCHER/adf-launcher.pys3_prefix:S3/PREFIX/# where to store data in the bucketvenv_package_key:S3/PREFIX/venv_package.tar.gzredshift_layers:expose:table_prefix:TABLE_PREFIXendpoint:https://url.to.dbport:PORT_NUMBERdb_name:DB_NAMEuser:DB_USERNAMErole_arn:EXECUTION_ROLE_ARNathena_layers:dump:bucket:YOUR-BUCKET-NAME-HEREdb_name:ATHENA_DB_NAMElanding_format:csv# the format in which to expect data in landing stepss3_prefix:S3/PREFIX/TO/DATA/# where to store data in the buckettable_prefix:'expose_'API documentationAbstractDataStructureColumn manipulation methodsData accessAggregation methodsAbstractDataColumnColumn operationsColumn aggregationsOperatorsAbstractDataInterfaceAbstractStateHandlerADFSequencer and ADFCombinationSequencerADFSequencerADFCombinationSequencerADFDataLoaderADFBatchDependencyHandlerAbstractDataStructureAnAbstract Data Structure(ADS) provides a dataframe like API for data manipulation. This is the native input and output format for your processing functions, barring concretization. 
The actual execution details of the below methods will depend on which type of ADS the underlying data layer has provided us with (Pandas based, Spark based, SQL based etc.).Column manipulation methodsdeflist_columns(self)->List[str]Lists columns currently in the ADS.defcol_exists(self,col_name:str)->boolCheck if columncol_nameexists.defprune_tech_cols(self)->"AbstractDataStructure"Removes technical columns from the ADS.defrename(self,names:Dict[str,str])->"AbstractDataStructure"Renames columns from the keys of thenamesdictionary to the values of thenamesdictionary.Data accessdef__getitem__(self,key:Union[str,AbstractDataColumn,List[str]])->Union["AbstractDataStructure",AbstractDataColumn]Ifkeyis a string, returns the corresponding column :ads["col_name"]Ifkeyis anAbstractDataColumn, return an ADS filtered based on the truth value of the column :ads[ads["col_name"] == "filter_value"]Ifkeyis a list of strings, returns an ADS containing only the subset of columns specified inkey:ads[["col_0", "col_1"]]def__setitem__(self,key:Union[str,Tuple[str,AbstractDataColumn]],value:Union[Any,AbstractDataColumn],)->NoneIfvalueis anAbstractDataColumn, use it to set the specified entries in the ADS :ads["col_name"] = ads["col_name"]*2Ifvalueis any other type, fill every specified entry with its value :ads["col_name"] = "col_value"Ifkeyis a string, set the values of the corresponding column. Creates the column if it does not already exist.Ifkeyis a(str, AbstractDataColumn)type tuple, set the values of the columnkey[0]only for rows filtered bykey[1]. Note thatkey[0]must necessarily already exist as a column. Can set using either a constant value or another column. For example:ads["col_0",ads["col_1"]=="FILTER_VAL"]="SOME_VAL"ads["col_0",ads["col_1"]=="FILTER_VAL"]=ads["col_2"]defto_list_of_dicts(self)->List[Dict]Returns a list of dictionaries, each of which corresponds to a single row in the ADS.Aggregation methodsdef__len__(self)->intReturns the number of rows in the ADS.def__bool__(self)->boolReturnFalseif the ADS has 0 rows,Trueotherwise.defjoin(self,other:"AbstractDataStructure",left_on:List[str],right_on:List[str],how:Literal["left","right","outer","inner","cross"]="inner",l_modifier:Callable[[str],str]=lambdax:x,r_modifier:Callable[[str],str]=lambdax:x,modify_on:bool=True,)->"AbstractDataStructure"Joins 2 ADS objects together.other: The right-hand ADS in the join.left_on: The left-hand columns on which to join.right_on: The right-hand columns on which to join.how: The join type.l_modifier: A function that modifies column names for the left-hand ADS.r_modifier: A function that modifies column names for the right-hand ADS.modify_on: Specify whether the column name modification functions should apply to the join columns.defgroup_by(self,keys:List[str],outputs:Dict[str,Tuple[Callable[["AbstractDataStructure"],Any],Type,],],)->"AbstractDataStructure"Performs a group by operation on a given ADS.keys: List of columns on which to group.outputs: Dictionary defining aggregations to perform. The dict key is the output column name. 
The dict value is a 2-tuple whose first entry is a callable defining the aggregation, and whose second entry is the output type.For example, to group on columnscol_0andcol_1, and compute the integer maximum and minimum values of columncol_2for each group, one would write :ads.group_by(keys=["col_0","col_1"],outputs={"min_col_2":(lambdaads:ads["col_2"].min(),int),"max_col_2":(lambdaads:ads["col_2"].max(),int),},)defunion(self,*others:"AbstractDataStructure",all:bool=True)->"AbstractDataStructure"Performs a union with all given input ADSs.others: A varargs list of ADSs.all: IfFalse, deduplicate results.defdistinct(self,keys:Optional[List[str]]=None)->"AbstractDataStructure"Deduplicate entries. Can optionally deduplicate only on a subset of columns by specifying thekeysarguments.defapply(self,output_column:str,func:Callable[[Dict],Any],cast:Type)->"AbstractDataStructure"Apply a User Defined Function (UDF) on the ADS.output_column: Name of the output column that will contain the result of the UDF.func: The UDF in question. Takes a dict as input that corresponds to a given row of the ADS.cast: The output data type.For example :ads.apply("output_col",lambdax:str(x["col_0"]).upper(),str)defsort(self,*cols:str,asc:Union[bool,List[bool]]=True)->"AbstractDataStructure"Sort the ADS along the given columns. Set if ascending or descending order usingasc.deflimit(self,n:int)->"AbstractDataStructure"Output a subset of the ADS based on the given number of rows.AbstractDataColumnAnAbstract Data Columnis a column of an ADS. Much like with an ADS, specific execution details vary based on the ADS it originated from (Pandas based, Spark based, SQL based etc.).Column operationsdefas_type(self,t:Type,**kwargs)->"AbstractDataColumn"Cast a column to the requested type. Acceptable types are :strintfloatcomplexbooldatetime.datetime: default conversion options may be overridden by specifying kwargsauto_convert,as_timestamp, anddatetime_format.datetime.datedatetime.timedeltadefisin(self,comp:List)->"AbstractDataColumn"Returns a boolean column where rows are set toTruewhen entries are in the givencomplist, andFalseotherwise.Column aggregationsdefmin(self)->AnyReturns minimum value of column.defmax(self)->AnyReturns maximum value of column.defsum(self)->AnyReturns sum of all column entries.defmean(self)->AnyReturns average value of column entries.defcount(self)->intdef__len__(self)->intReturns number of rows in column.def__bool__(self)->boolReturnFalseif the column has 0 rows,Trueotherwise.OperatorsAll binary and unary operators are supported. For example :ads["col_0"]*22-ads["col_0"]ads["col_0"]/ads["col_1"]~(ads["col_0"]==ads["col_1"])AbstractDataInterfaceAnAbstract Data Interfacehandles all matters related to persisting data. Abstract Data Interfaces correspond either directly to a given data layer, or to a transition between 2 data layers. Much like with an ADS, execution details depend on the underlying persistance details (file based, database based, cloud based etc.).defread_batch_data(self,step:ADFStep,batch_id:str)->AbstractDataStructureFor a given step and batch ID, return the corresponding ADS.defread_full_data(self,step:ADFStep)->AbstractDataStructureFor a given step return all available data.defread_batches_data(self,step:ADFStep,batch_ids:List[str])->Optional[AbstractDataStructure]For a given step and a list of batch IDs, return the corresponding ADS. 
ReturnsNoneif the input batch ID list is empty.defwrite_batch_data(self,ads:AbstractDataStructure,step:ADFStep,batch_id:str)->NoneGiven an ADS and a target step and batch ID, persist the ADS.defdelete_step(self,step:ADFStep)->NoneDelete all data in the given step.defdelete_batch(self,step:ADFStep,batch_id:str)->NoneDelete data corresponding to a specific batch ID for a given step.AbstractStateHandlerAnAbstract State Handlercontains all information related to the current processing state. In particular, it can list all batch IDs and their current state.defto_ads(self)->AbstractDataStructureReturns an ADS describing all batches. The output ADS will always have the following columns :collection_nameflow_namestep_nameversionlayerbatch_idstatusdatetimemsgThis method gives you complete read capabilities on the state handler, allowing you to extract any information you need from it. All following methods are merely shortcuts built on top of this one.defto_step_ads(self,step:ADFStep)->AbstractDataStructureReturns an ADS containing the processing state of a given ADF step.defget_entries(self,collection_name:Optional[str]=None,flow_name:Optional[str]=None,step_name:Optional[str]=None,version:Optional[str]=None,layer:Optional[str]=None,batch_id:Optional[str]=None,status:Optional[str]=None,)->List[Dict]Returns a list of batch IDs corresponding to the given filters.defget_step_submitted(self,step:ADFStep)->List[str]defget_step_running(self,step:ADFStep)->List[str]defget_step_deleting(self,step:ADFStep)->List[str]defget_step_failed(self,step:ADFStep)->List[str]defget_step_success(self,step:ADFStep)->List[str]For a given ADF step, return all batch IDs in the given state (submitted,running,deleting,failed, orsuccess).defget_step_all(self,step)->List[str]For a given ADF step, return all batch IDs.defget_batch_info(self,step:ADFStep,batch_id:str)->DictFor a given batch ID of a given ADF step, return a dictionary containing all batch information.defget_batch_status(self,step:ADFStep,batch_id:str)->strFor a given ADF step, returns the status of a given batch ID. Raises an error if the batch ID is unknown to the state handler.ADFSequencer and ADFCombinationSequencerAnADF Sequencerdefines the batches to be processed by a given step at any given time. To define your own ADF Sequencer, you must inherit from either theADFSequencerorADFCombinationSequencerbase class (the latter should only be used for combination steps). In both cases, there are 2 abstract methods that require defining.ADFSequencerdeffrom_config(cls,config:Dict)->"ADFSequencer"Given configuration parameters, return anADFSequencerinstance. If you define a custom__init__for this, make sure it callssuper().__init__().defget_to_process(self,state_handler:AbstractStateHandler,step_in:ADFStep,step_out:ADFStep,)->List[str]Input arguments :state_handler: Contains the current processing state.step_in: The input step.step_out: The output step.How the output shapes the flow of data :The return value is the list of batch IDs the output step is expected to create.By default, previously submitted batches are ignored, there is no need for your method to check for them.If you want your sequencer to resubmit such batches, you have to explicitly setredotoTruein your constructor by callingsuper().__init__(redo=True).ADFCombinationSequencerdeffrom_config(cls,config:Dict)->"ADFCombinationSequencer"Given configuration parameters, return anADFCombinationSequencerinstance. 
If you define a custom__init__for this, make sure it callssuper().__init__().defget_to_process(self,state_handler:AbstractStateHandler,combination_step:ADFCombinationStep,)->List[Tuple[List[str],str]]Input arguments :state_handler: Contains the current processing state.combination_step: The combination step in question.How the output shapes the flow of data :The return value is a list of 2-tuples :The first entry is a list of batch IDs corresponding to the input steps.The second entry is the corresponding batch ID output by the combination step.By default, previously submitted batches are ignored, there is no need for your method to check for them.If you want your sequencer to resubmit such batches, you have to explicitly setredotoTruein your constructor by callingsuper().__init__(redo=True).ADFDataLoaderAnADF Data Loaderdefines what data is passed to your processing function at each step. To define your own ADF Data Loader, you must inherit from theADFDataLoaderbase class. There are 2 abstract methods that then require defining.deffrom_config(cls,config:Dict)->"ADFDataLoader"Given configuration parameters, return anADFDataLoaderinstance. If you define a custom__init__for this, make sure it callssuper().__init__().defget_ads_args(self,data_interface:AbstractDataInterface,step_in:ADFStep,step_out:ADFStep,batch_id:str,state_handler:AbstractStateHandler,)->Tuple[List[AbstractDataStructure],Dict[str,AbstractDataStructure],]Input arguments :data_interface: The data persistance interface from which to load data.step_in: The input step.step_out: The output step.batch_id: The output batch ID.state_handler: Contains the current processing state.How the output is passed to your data processing functions :The first output is a list of ADSs. These are passed as positional arguments.The second output is a dict, for which each entry is passed as a keyword argument.Simply put, if yourget_ads_argsmethod returnsargs, kwargs, then these are passed to your processing functionfuncsimply as :func(*args,**kwargs)ADFBatchDependencyHandlerAnADF Batch Dependency Handlerdefines which batches are defined as beingdownstreamof batches in previous steps. This is mainly used to define which downstream batches are also deleted when deleting a particular batch of data. To define your own ADF Batch Dependency Handler, you must inherit from theADFBatchDependencyHandlerbase class. There are 2 abstract methods that then require defining.deffrom_config(cls,config:Dict)->"ADFBatchDependencyHandler"Given configuration parameters, return anADFBatchDependencyHandlerinstance. If you define a custom__init__for this, make sure it callssuper().__init__().defget_dependent_batches(self,state_handler:AbstractStateHandler,step_in:ADFStep,step_out:ADFStep,batch_id:str,)->List[str]Input arguments :state_handler: Contains the current processing state.step_in: The input step.step_out: The output step.batch_id: The input batch ID.The return value represents the list of batch IDs in the output step that are considered dependent on the given batch ID in the input step.Processing functionsInput signatureNon-starting stepCombination stepOutput signatureConcretizationThe processing function signature depends on the specific step configuration. 
In particular,data loaderswill modify the expected input arguments, and the presence of downstream reception steps will modify the expected output.Input signatureNon-starting stepFor non-starting steps, the input arguments will depend on the outputargsandkwargsof the step data loader, in addition to anyfunc_kwargsdefined in the step configuration. For example, consider the builtinKwargDataLoader, that merely passes the current batch of data as a keyword argument, meaningargsis an empty list andkwargsis a single entry dictionary whose key is user defined :name:collection-namemodules:-name:flow_configimport_path:ADF.components.flow_config-name:module-aliasimport_path:package.moduleflows:-name:flow-name-0steps:-start:landinglayer:layer-namename:landing-step-name-layer:layer-namename:processing-step-nameversion:version-namefunc:load_as:moduleparams:module:module-aliasname:foofunc_kwargs:extra_kwarg:kwarg_valdata_loader:module:flow_configclass_name:KwargDataLoaderparams:kwarg_name:custom_kwarg_nameIn this case, the expected function input signature is :deffoo(custom_kwarg_name:AbstractDataStructure,extra_kwarg:str)Thecustom_kwarg_nameargument will contain the ADS passed by our data loader, and theextra_kwargargument will contain the valuekwarg_valas defined in our flow configuration.Combination stepFor a combination step, theargsandkwargsare the combination of the outputs of the data loaders of all input steps, plus the data loader of the combination step itself. Again, let's use theKwargDataLoaderto illustrate this, as well as theFullDataLoaderwhich loads all available data for a given step.name:collection-namemodules:-name:flow_configimport_path:ADF.components.flow_config-name:module-aliasimport_path:package.moduleflows:-name:flow-name-0steps:-start:landinglayer:layer-namename:landing-step-name-name:flow-name-1steps:-start:landinglayer:layer-namename:landing-step-name-name:combination-flowsteps:-start:combinationlayer:layer-namename:combination-step-nameversion:version-nameinput_steps:-flow_name:flow-name-0step_name:landing-stepdata_loader:module:flow_configclass_name:KwargDataLoaderparams:kwarg_name:custom_kwarg_name_0-flow_name:flow-name-1step_name:landing-stepdata_loader:module:flow_configclass_name:KwargDataLoaderparams:kwarg_name:custom_kwarg_name_0func:load_as:moduleparams:module:module-aliasname:foofunc_kwargs:extra_kwarg:kwarg_valdata_loader:module:flow_configclass_name:FullDataLoaderparams:{}Each input step data loader will add an entry to the inputkwargs. The data loader of the combination step itself will load the full data of that same step as the sole entry of our outputargs. Finally, there is also the user definedextra_kwarg, defined directly in our flow configuration, that will enrich the inputkwargs. Putting all of this together, we get the following input signature :deffoo(full_data:AbstractDataStructure,custom_kwarg_name_0:AbstractDataStructure,custom_kwarg_name_1:AbstractDataStructure,extra_kwarg:str,)Output signatureBy default, a processing function must output a single ADS :deffoo(ads:AbstractDataStructure)->AbstractDataStructure:returnads[ads["some_col"]=="some_val"]However, hooking a processing step into a reception step changes the expected output signature of the processing function. Instead of returning a single ADS, it must now return a dictionary whose keys correspond to the reception step keys. 
Take the following flow configuration as an example :name:collection-namemodules:-name:module-aliasimport_path:package.moduleflows:-name:flow-namesteps:-start:landinglayer:layer-namename:landing-step-name-layer:layer-namename:processing-stepfunc:load_as:moduleparams:module:module-aliasname:multiple_outputs-name:reception-flow-0steps:-start:receptionlayer:layer-namename:reception-step-namekey:reception-key-0input_steps:-flow_name:flow-namestep_name:processing-step-name:reception-flow-1steps:-start:receptionlayer:layer-namename:reception-step-namekey:reception-key-1input_steps:-flow_name:flow-namestep_name:processing-stepA valid corresponding processing function would then be :defmultiple_outputs(ads:AbstractDataStructure,)->Dict[str,AbstractDataStructure]:return{"reception-key-0":ads[ads["col_0"]==0],"reception-key-1":ads[ads["col_0"]!=0],}ConcretizationIt is possible to define processing functions using familiar "concrete" APIs (such as Pandas, PySpark, raw SQL etc.) using a procedure known asconcretization. However, this comes at the cost that when the corresponding step is mapped to a layer, that layer must support the chosen concretization. For example, concretizing to a PySpark dataframe will fail for an SQL based layer. Our data flows remain abstract, but they become somewhat constrained in their eventual layer mapping.To define a processing function as concrete, you may use theconcretizedecorator, which takes as input a concretization type. The decorator will transform all input ADSs into the requested type. It will do so in a nested manner, also transforming ADSs within input lists, tuples, or dictionaries. It also expects that type as the function output type. :frompandasimportDataFramefromADF.utilsimportconcretize@concretize(DataFrame)deffoo(df:DataFrame)->DataFrame:returndf.drop_duplicates()There are 2 main use cases for concretization that may be worth the slight trade-off of layer constraint :Migrating a non ADF pipeline to ADF, allowing reuse of business logic code.Exploiting API specific optimizations, such asbroadcastfor a PySpark dataframe. |
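To complement the concretization example above with the other use case the ADF text mentions (API specific optimizations), here is a hedged sketch of a Spark-concretized processing function using a broadcast join; the join column "key" and the assumption that the step's data loaders supply exactly two ADSs are illustrative and not part of the original documentation:
from pyspark.sql import DataFrame
from pyspark.sql.functions import broadcast

from ADF.utils import concretize


# Hedged sketch: the decorator turns every incoming ADS into a Spark DataFrame
# and expects a Spark DataFrame back, as described above. "key" is an assumed,
# illustrative join column.
@concretize(DataFrame)
def enrich(facts: DataFrame, lookup: DataFrame) -> DataFrame:
    # Broadcasting the small lookup side is the kind of Spark-specific
    # optimization that concretization makes possible, at the cost of
    # constraining this step to Spark-capable layers.
    return facts.join(broadcast(lookup), on="key", how="left")
Such a step stays abstract in its flow configuration, but it could only be mapped onto a layer that provides Spark based ADSs, which is exactly the trade-off described above.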
adf2dms | adf2dms: Convert Amiga disk images from ADF to DMS (DiskMasher) format. Note: Experimental software. This code is experimental and currently only implements the uncompressed
("NOCOMP") and RLE ("SIMPLE") compression modes. Usage: $ python3 -m adf2dms --help
usage: adf2dms [-h] [-0] [-a FILE] [-b FILE] [-f] [-o file.dms] [-s TRKNUM] [-e TRKNUM | -n COUNT] file.adf
Convert an ADF file to DMS (DiskMasher) format
positional arguments:
file.adf ADF file to read
optional arguments:
-h, --help show this help message and exit
-0, --store store tracks uncompressed
-a FILE, --fileid FILE
attach FILE_ID.DIZ file
-b FILE, --banner FILE
attach banner file
-f, --force-overwrite
overwrite output file if it already exists
-o file.dms, --output file.dms
DMS file to create instead of stdout
-s TRKNUM, --low-track TRKNUM
first track, default: 0
-e TRKNUM, --high-track TRKNUM
last track, default: determined by file length
-n COUNT, --num-tracks COUNT
number of tracks to add, default: determined by file length
Input files ending in .adz or .gz will automatically be un-gzipped.
Building adf2dms:
git clone https://github.com/dlitz/adf2dms
python3 -m build adf2dms |
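A few hedged example invocations of adf2dms, pieced together only from the --help output above (file names are placeholders, not from the original README):
$ adf2dms -o game.dms game.adf
$ adf2dms --store -a FILE_ID.DIZ -o game.dms game.adf
$ adf2dms game.adz > game.dms
The first writes the DMS to a file, the second stores tracks uncompressed and attaches a FILE_ID.DIZ file, and the third relies on the automatic un-gzipping of .adz input and on stdout being the default output.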
adf2pdf | adf2pdf - a tool that turns a batch of paper pages into a PDF
with a text layer. By default, it detects empty pages (as they
may easily occur during duplex scanning) and excludes them from
the OCR and the resulting PDF. For that, it uses Sane's scanimage for the scanning, Tesseract for the optical character recognition (OCR), and
the Python packagesimg2pdf,Pillow (PIL)andPyPDF2for some image-processing tasks and PDF mangling.Example:$ adf2pdf contract-xyz.pdf2017, Georg [email protected] document feed (ADF) supportFast empty page detectionOverlaying of scanning, image processing, OCR and PDF creation
to minimize the total runtimeFast creation of small PDFs using the fineimg2pdfpackageOnly use of safe compression methods, i.e. no error-prone
symbol segmentation style compression likeJBIG2or JB2
that is used inXerox photocopiersand the DjVu format.Install InstructionsAdf2pdf can be directly installed withpip, e.g.$ pip3 install --user adf2pdfor$ pip3 install adf2pdfSee also thePyPI adf2pdf project page.Alternatively, the Python fileadf2pdf.pycan be directly
executed in a cloned repository, e.g.:$ ./adf2pdf.py report.pdfIn addition to that, one can install the development version from
a cloned work-tree like this:$ pip3 install --user .Hardware RequirementsA scanner with automatic document feed (ADF) that is supported by
Sane. For example, theFujitsu ScanSnap S1500works
well. That model supports duplex scanning, which is quite
convenient.Example continuedRunningadf2pdffor a 7 page example document takes 150 seconds
on an i7-6600U (Intel Skylake, 4 cores) CPU (using the ADF of the
Fujitsu ScanSnap S1500). With the defaults,adf2pdfcallsscanimagefor duplex scanning into 600 dpi lineart (black and
white) images. In this example, 6 pages are empty and thus
automatically excluded, i.e. the resulting PDF then just contains
8 pages.The resulting PDF contains a text layer from the OCR such that
one can search and copy'n'paste some text. It is 1.1 MiB big,
i.e. a page is stored in 132 KiB, on average.Software RequirementsThe script assumes Tesseract version 4, by default. Version 3 can
be used as well, but thenew neural network system in Tesseract
4just performs magnitudes better than the old OCR model.
Tesseract 4.0.0 was released in late 2018, thus, distributions
released in that time frame may still just include version 3 in
their repositories (e.g. Fedora 29 while Fedora 30 features version
4). Since version 4 is so much better at OCR I can't recommend it
enough over the stable version 3.Tesseract 4 notes (in case you need to build it from the sources):Build instructions- warning: if you miss theautoconf-archivedependency you'll get weird autoconf error
messagesData files- you need the training data for your
languages of choice and the OSD dataPython packages:img2pdf(Fedora package: python3-img2pdf)Pillow (PIL)(Fedora package: python3-pillow-devel)PyPDF2(Fedora package: python3-PyPDF2) |
adfluo | adfluo
adfluo, adfluis, adfluere, adfluxi, adfluxum
to flow on/to/towards/by; to glide/drift quietly
adfluo is a Python library for pipeline-oriented feature computation, mainly aimed at tricky
multimodal datasets that might require a wide range of different features to be computed from.
Adfluo makes your feature extraction code:
clean: it encourages you to outline clearly the steps needed to compute a feature as a pipeline of atomic steps
data scientist-friendly: adfluo's output has a predictable structure, ensuring that once you've run the feature extraction, you'll be able to focus a 100% on your data-science/statistics work.
efficient: if different features have computation steps in common, adfluo will do its best to compute only what is necessary, without any extra configuration.
reusable: By separating the input data from the feature computation logic, you'll easily be able to reuse an existing extraction pipeline on another dataset, or use another extraction pipeline on the same dataset.
sample-oriented: adfluo organizes its processing around samples of data.
Installation: Adfluo is available on Pypi, and has no special dependencies, a simple pip install adfluo will do.
Example:
import random

# Defining our dataset as a list of dicts
my_dataset = [
    {"numbers": [random.randint(1, 20) for j in range(50)], "idx": i}
    for i in range(20)
]

# TODO: examples
# - mean, std dev of numbers
# - "relative" mean using idx |
adfly-api | Adfly API
Unofficial Adfly API Python Wrapper
Installation: pip install adfly-api
Examples:
# Import
from adfly import AdflyApi

# Initialize
api = AdflyApi(
    user_id=12345678,
    public_key='xxxxxxxxxx',
    secret_key='xxxxxxxxxx',
)

# Url Groups examples.
api.get_groups()

# Expand examples.
api.expand(
    ['http://adf.ly/D', 'http://adf.ly/E', 'http://q.gs/4'],
    [3, '1A', '1C'])
api.expand(None, '1F')

# Shorten examples.
api.shorten(
    ['http://docs.python.org/library/json.html',
     'https://github.com/benosteen'],
)
api.shorten('http://docs.python.org/library/json.html')

# Urls examples.
api.get_urls()
api.get_urls(search_str='htmlbook')
api.update_url(136, advert_type='int', group_id=None)
api.update_url(136, title='一些中国', fb_description='fb о+писан и+е', fb_image='123')
api.delete_url(136)

Credits: Originally developed by Ben O'Steen |
adfmapping | adfmapping: ADF Dynamic mapping of Source Data/columns to Sink Columns. Pre-requisites: Both tech and non-tech audiences can use it with a lil bit of guidance. Python 3+ installed on your machine. Usage: Get the Csv file from your Azure Application Insights or Log Analytics workspace. Install the package: pip install adfmapping. Run the below command to generate the mapping JSON: adfmapping Mapping --csvfile C:\user\appIn.csv |
adfmt | adfmt: a simple format tool for apiDoc. |
adfotg | ADF On-The-Go
ADF On-The-Go (adfotg) converts your Raspberry Pi Zero into a
USB drive with a web interface. It organises your ADF images and
allows you to bundle one or more of them into virtual USB drives.
You can swap, download, upload and mount ADFs without ever disconnecting
the USB cable from your Gotek, all from any modern web browser. ADF On-The-Go (adfotg) is an HTTP service designed for use in a Raspberry Pi Zero. The RPi must be connected through its USB OTG
port to a Gotek Floppy Drive emulator in an Amiga computer.
ADF On-The-Go can prepare ADF images from files, split big files into
floppy-sized chunks, or just mount the ADF images directly. It allows to
store bundles of ADF files on their own virtual USB drives and swap
multiple virtual USB drives freely. All of this is controlled through a
website interface, by default hosted on a HTTP port41364.There also is a REST API, if you happen to not like the default UI, but
its current status isunstable.----------- Linux ------------ USB --------- IDE ---------
| ADF OTG |------>| RPi Zero |------>| Gotek |---->| Amiga |
----------- ------------ no+5V --------- ---------!!! HARDWARE DAMAGE RISK !!!CUT OR BLOCK THE +5V LINE IN THE USB CABLE!This line connects the voltage from the Raspberry Pi to Gotek and powers
up your Gotek and your Amiga. This isundesirable. When Amiga PSU is
OFF, the Amiga will be put in a strange half-state with LEDs lighting up,
but the computer remaining off. The RPi will also reboot. When Amiga PSU
is ON, the +5V USB line will prevent the Amiga's Power LED from dimming
when Amiga reboots.FOR SAFETY MEASURES, CUT OR BLOCK THE +5V LINE!SecurityThis is important!There's no network security provided by the app itself!It doesn't even put a basic HTTP authentication in place. When you host it
on your device, keep it in a private network without remote access.This software requires'root' privilegesto perform certain
operations. While the application will run as a normal user, it will abusesudoto obtain root privileges when needed. Ensure your RPi user cansudowithout password prompt.RequirementsAdfotg supports Raspberry Pi OS 12 (bookworm) since version 0.4.0.Software:Raspberry Pi OS 12 (bookworm) or newerPython 3, pipxmtoolsHardware:Raspberry Pi ZeroGotekAn AmigaOS:sudoprivileges will be requiredPreparing your Raspberry PiThis is mandatory.We need to make sure we are using the dwc2 USB driver,
and thatdwc2andg_mass_storagemodules are enabled.If you have a fresh Raspberry Pi OS, it's enough to do:echo dtoverlay=dwc2 | sudo tee -a /boot/config.txt
echo dwc2 | sudo tee -a /etc/modules
echo g_mass_storage | sudo tee -a /etc/modules`Then reboot your RPi.The above is based onhttps://gist.github.com/gbaman/50b6cca61dd1c3f88f41In case of trouble with connecting to Gotek, you may try to
diagnose the USB problems by connecting the RPi to an USB socket
in a PC. When an USB drive image is mounted, the PC should see
the RPi as an USB drive.InstallThis program is designed to be run on aRaspberry Pi Zerowith theRaspberry Pi OS. Installing the release package on anything else is not
recommended, although will succeed and should be harmless (no warranty).On your Raspberry Pi:sudo apt update && sudo apt install mtools pipx
sudo PIPX_HOME=/opt/adfotg PIPX_BIN_DIR=/usr/local/bin pipx install adfotgThe first time installation may be lengthy (it's only aZero, after all).Integrating with Raspberry Pi OSAfterpipx install adfotgis done, run:sudo adfotg --installThis will:Add 'adfotg' system user to Raspberry Pi OS and allow this user a
password-less sudo privilege.Create adfotg's default config file in/etc/adfotg.conf.Create adfotg's base directory at/var/lib/adfotg.Addadfotg.serviceto systemd; adfotg will start with the system.UpdateMake sure you do this logged in as the same OS user that you
used during the installation.sudo PIPX_HOME=/opt/adfotg PIPX_BIN_DIR=/usr/local/bin pipx upgrade adfotgAdfotg needs to be restarted now. If you integrated it with yourRaspberry Pi OS(see the section below), then it's sufficient to
do this:sudo systemctl restart adfotgUninstallAdfotg needs to be uninstalled manually and this is even more involved
than installing. Depending on how far you've went with installation and
what you wish to keep, you may be skipping some steps.First of all, stop the service:sudo systemctl stop adfotgIf you wish to uninstall, and then install adfotg again, do:sudo PIPX_HOME=/opt/adfotg PIPX_BIN_DIR=/usr/local/bin pipx uninstall adfotgThis will keep the internal Python setup, your config and your ADF/USB library.
Then proceed as if installing for the first time.To remove the internal Python setup, do:sudo rm -rf /opt/adfotgTo uninstall from systemd and to remove the adfotg OS user:sudo rm /usr/local/lib/systemd/system/adfotg.service
sudo systemctl daemon-reload
sudo deluser adfotgTo remove the config:sudo rm /etc/adfotg.confTo remove your ADF and USB images library:sudo rm -rf /var/lib/adfotgDevelopmentPlease seeCONTRIBUTING.md.TroubleshootingProblem:Gotek perpetually displays---when connected to RPi,
even though it works with my usual USB drive.Solution:---indicates that you have Cortex firmware installed on
your Gotek. See if you haveSELECTOR.ADFon your USB drive. If yes,
this ADF must also be placed on every mount image you create in adfotg.Problem:I upgraded to a new version, but there are oddities
happening or I don't see any changes.Solution:There may be two reasons for this. Your browser might've
cached the old version of the site or the adfotg service wasn't
restarted. See the "Update" section in README to learn how to restart
the service and clear your browser cache.Problem:This software ceases to work after the system upgrade.Solution:Sorry, both Raspberry Pi OS and the Python rules for
software distribution and installation tend to change. Reinstalling
adfotg from scratch may help. Other than that, contact me for more
help.BackgroundGotekis a hardware floppy-drive replacement for
legacy machines. Instead of using failure-prone floppy disks, it allows
to use a USB flash drive with floppy-disk images. Multiple images can
be stored on a single flash drive and Gotek allows by default to choose
between them through buttons located on the case. While Gotek is an
excellent device that eradicates the inconvenience of floppy disks,
it not only doesn't solve the inconvenience of disk swapping but makes
it worse by replacing labeled floppy-disks with incomprehensible
ordinal numbers (from 0 to 999).Raspberry Pi Zerois a cheap mini-computer
that can run Linux. It has two major features that are in use in this project:WiFiUSB On-The-GoWhile WiFi (or any Ethernet connection) is used here as the access layer
to the ADF On-The-Go software, USB On-The-Go is the real enabler. While
it has many applications, we are only interested in one. It allows to
make the RPi appear to be an USB flash drive - a flash drive which
contents we can fully control and change on-the-fly using Linux command
line tools and which we can program to serve the content we want.Guide for setting up OTG mode on Raspberry Pi can be found here:https://gist.github.com/gbaman/50b6cca61dd1c3f88f41amitoolscontains xdftool,
with which adfotg is capable of manipulating ADF image files to
some extent. Adfotg doesn't depend on amitools, but incorporates
a subset of its source code and installsadfotg-xdftoolas a separate
tool.REST APIREST API documentation is currently a Work-In-Progress.
adfotg is capable of providing the documentation for itself
in a plain-text format through the/helpendpoint. |
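For instance, one hedged way to fetch that plain-text REST documentation from a running adfotg instance (the hostname is an assumption; 41364 is the default port mentioned above):
$ curl http://raspberrypi.local:41364/help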
adfp | adfp: 'adfp' (Auto DiFferential for Python) is a framework for automatic differentiation.
This project imitates the implementation of the latest deep learning frameworks. |
adfpy | 🏭🍰 adfPy
adfPy aims to make developers' lives easier by wrapping the Azure Data Factory Python SDK with an intuitive, powerful, and easy to use API that hopefully will remind people of working with Apache Airflow ;-).
Install: pip install adfpy
Usage: Generally, using adfPy has 2 main components: Write your pipeline. Deploy your pipeline. adfPy has an opinionated syntax, which is heavily influenced by Airflow. For documentation on what the syntax looks like, please read the docs here.
Some examples are provided in the examples directory of this repository. Once you've written your pipelines, it's time to deploy them! For this, you can use adfPy's deployment script:
pip install adfpy
adfpy-deploy --path <your_path_here>
Note: This script will ensure all pipelines in the provided path are present in your target ADF. This script will also remove any ADF pipelines that are not in your path, but are in ADF.
Still to come: adfPy is still in development. As such, some ADF components are not yet supported: Datasets; Linked services; Triggers (support for Schedule Triggers is available, but not for Tumbling Window, Custom Event, or Storage Event).
Developer setup: adfPy is built with Poetry. To set up a development environment run:
poetry install |
ad-freiburg-qgram-utils | Utility functions for implementing a q-gram index. Install with pip install ad-freiburg-qgram-utils. Works on Linux, Windows (64bit) and MacOS. |
adfs-aws-login | Log in to AWS using ADFS. The aim for this is to create a general purpose CLI ADFS login with a limited set of trusted dependencies. Installation: It's available on PyPI. Install by running pip install adfs-aws-login. Run: The executable is called adfs-aws-login. Log in with the default profile by simply running adfs-aws-login or specify a profile with adfs-aws-login --profile [profile]. See adfs-aws-login -h for more options. If the environment variable ADFS_DEFAULT_PASSWORD is defined, that will be used as the password. Configure: Configure the profiles in $HOME/.aws/config. Following is an example with all supported configuration keys (and a few aws default ones):
[profile example]
region=us-east-1
output=json
adfs_login_url=https://login.example.com/adfs/ls/IdpInitiatedSignOn.aspx?loginToRp=urn:amazon:webservices
[email protected]
adfs_role_arn=arn:aws:iam::1234567890:role/DeployRole
adfs_session_duration=8 |
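A hedged example session for adfs-aws-login, combining the documented ADFS_DEFAULT_PASSWORD variable and --profile flag with the sample profile above (the password value is a placeholder):
$ export ADFS_DEFAULT_PASSWORD='not-a-real-password'
$ adfs-aws-login --profile example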
adfsmail | The AdfsMail library
Overview: AdfsMail is a Python library for interacting with e-mail services, providing a simple way to access and manage e-mail from inside a VPN, or from outside it using multi-factor authentication (MFA). The library supports basic e-mail operations such as fetching messages, accessing message metadata and working with attachments.
Installation: Before installing AdfsMail, make sure Python is installed on your system. You can install AdfsMail with pip: pip install adfsmail
Usage. Inside a VPN: To use AdfsMail inside a VPN, initialize the AdfsMail class with your e-mail address, password and domain.
from adfsmail import AdfsMail
mail = AdfsMail(
'[email protected]',
'password', # Replace with your real password
'domain.ru'
)
Outside a VPN with multi-factor authentication: When accessing the mail service from outside the VPN, use AdfsMailMFA to handle multi-factor authentication.
from adfsmail import AdfsMailMFA
mail = AdfsMailMFA(
'[email protected]',
'password', # Replace with your real password
'domain.ru'
)
Fetching mail. Fetch the first 25 messages: m = mail.get_mail()
To fetch the next 30 messages, pass the offset parameter (skip the first 25) and max_return (fetch 30 messages): m2 = mail.get_mail(offset=25, max_return=30)
Operations on messages: You can perform various operations on the fetched messages, such as getting the sender, the recipients, the subject, the message body, the date and time, and working with attachments.
Get the sender:
# m[0] is the first message in the list
sender = m[0].get_from()
Get the recipients: recipients = m[0].get_to()
Get the message subject: subject = m[0].get_subject()
Get the message body: body = m[0].get_body() # Usually html
Get the date and time the message was received: datetime_received = m[0].get_datetime()
Get the attachments: attachments = m[0].get_attachments()
Get an attachment's file name:
# m[0].get_attachments()[0] is the message's first attachment
attachment_filename = m[0].get_attachments()[0].get_filename()
Save an attachment to a given path: m[0].get_attachments()[0].save('path/to/save/file.xl')
Get the number of attachments: attachments_count = m[0].get_attachments_count()
API reference: The following methods are available in the AdfsMail and AdfsMailMFA classes:
get_mail(offset=0, max_return=25): Fetches a list of messages, starting at the given offset and returning up to max_return messages.
get_from(): Returns the sender of the message.
get_to(): Returns the list of recipients of the message.
get_subject(): Returns the subject of the message.
get_body(): Returns the body of the message.
get_datetime(): Returns the date and time the message was received.
get_attachments(): Returns the list of attachments of the message.
get_attachments_count(): Returns the number of attachments of the message.
Each attachment object has the following methods:
get_filename(): Returns the attachment's file name.
save(save_path): Saves the attachment to the given path. |
adftestpy | No description available on PyPI. |
adftotxt | ADF is basically a JSON document. To use it in places like Google Sheets, it needs to be parsed and converted to simple text. This is what is being done here. |
adg | ADG is a tool generating diagrams and producing their expressions for given many-body formalisms. Diagrammatic rules from the formalism are combined with graph theory objects to produce diagrams and expressions in a fast, simple and error-safe way. The only input consists of the theory and order of interest, and the N-body character of the operators of interest. The main output is a LaTeX file containing the diagrams, their associated expressions and additional information that can be compiled by ADG if needed. Other computer-readable files may be produced as well. |
ad-geo-backend | No description available on PyPI. |
adgmaker | # ADGMaker
Automatically create and install hundreds of awesome Free (as in
Freedom!) Ableton Live Instruments from the super high-quality Philharmonia
Orchestra samples.
Installation: ADGMaker requires that you have Python, OSX, and Ableton Live installed
already. Then, simply: $ pip install adgmaker
Automatic Usage: The simplest way to use ADGMaker is with the '--all' and '--install'
arguments, which will fetch the instrument archives from the internet,
create ADGs, and install them into your Ableton installation
automatically:$ adgmaker --all --installThen go to File -> Manage Files -> Manage User Library and use your new
instruments! You’ll see them under the “Drums” tab. Tada!Manual UsageDownload an instrument fromthe Philharmonia Orchestra
websiteand unzip it.Then, (from a virtualenv), run:python adgmaker.py double_bass/Then copy all of the .adg files to:~/Music/Ableton/User\ Library/Presets/Instruments/Drum\ Rack/(or wherever your Ableton is installed).Then copy all of the mp3s to:~/Music/Ableton/User\ Library/Samples/ImportedThen go to File -> Manage Files -> Manage User Library and use your new
instruments!CaveatsSome of the ADGs only have a few items in them, but most instruments
have at least a few ADG files that have a couple of complete scales. All
of the percussion instruments have been combined into a single
percussion ADG.Some of the samples have a slight delay, so you might have to manually
set the sample start time to your liking. I also like to add a little
bit of fade out, reverb, and put them all into the same choke group,
depending on the sound I want.TODOSupport other sound archives?Support making multi-instrument racks?Tests / CI!Enjoy! |
adgodfhkdfh | QuickSample: A sample python package deployment utility for SQLShack Demo. |
adguard | adguard: API Wrapper client for AdGuard. Install: python3 -m pip install adguard. Look at the file example.py for a usage example. |
adguardhome | Python: AdGuard Home API Client
Asynchronous Python client for the AdGuard Home API.
About: This package allows you to control and monitor an AdGuard Home instance
programmatically. It is mainly created to allow third-party programs to automate
the behavior of AdGuard. An excellent example of this might be Home Assistant, which allows you to write
automations, to turn on parental controls when the kids get home.
Installation: pip install adguardhome
Usage:
from adguardhome import AdGuardHome
import asyncio


async def main():
    """Show example how to get status of your AdGuard Home instance."""
    async with AdGuardHome("192.168.1.2") as adguard:
        version = await adguard.version()
        print("AdGuard version:", version)

        active = await adguard.protection_enabled()
        print("Protection enabled?", "Yes" if active else "No")

        if not active:
            print("AdGuard Home protection disabled. Enabling...")
            await adguard.enable_protection()


if __name__ == "__main__":
    asyncio.run(main())

Changelog & Releases: This repository keeps a change log using GitHub's releases functionality. The format of the log is based on Keep a Changelog. Releases are based on Semantic Versioning, and use the format
of MAJOR.MINOR.PATCH. In a nutshell, the version will be incremented
based on the following:
MAJOR: Incompatible or major changes.
MINOR: Backwards-compatible new features and enhancements.
PATCH: Backwards-compatible bugfixes and package updates.
Contributing: This is an active open-source project. We are always open to people who want to
use the code or contribute to it. We've set up a separate document for our contribution guidelines. Thank you for being involved! :heart_eyes:
Setting up development environment: This Python project is fully managed using the Poetry dependency
manager, but it also relies on the use of NodeJS for certain checks during
development. You need at least: Python 3.11+, Poetry, NodeJS 20+ (including NPM).
To install all packages, including all development requirements:
npm install
poetry install
As this repository uses the pre-commit framework, all changes
are linted and tested with each commit. You can run all checks and tests
manually, using the following command: poetry run pre-commit run --all-files
To run just the Python tests: poetry run pytest
Authors & contributors: The original setup of this repository is by Franck Nijhof. For a full list of all authors and contributors,
check the contributor's page.
License: MIT License
Copyright (c) 2019-2023 Franck Nijhof
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. |