ads2bibdesk
ADS to BibDesk, API edition (ads2bibdesk)

ads2bibdesk helps you add astrophysics articles listed on NASA/ADS to your BibDesk database using the new ADS Developer API. The program is loosely based on the original ads_bibdesk from J. Sick et al. However, the query is handled with a Python client for the ADS API (ads, maintained by A. Casey). Obsolete code is replaced in favor of newer built-in Python modules, with a simplified code structure. The macOS workflow building process has been updated, and the project packaging now follows the new PyPA guideline.

Because it uses the API, ads2bibdesk requires the user to specify a personal API key, per the new NASA/ADS policy. Instructions on how to obtain a key can be found on the official GitHub repo adsabs-dev-api. In short, to obtain access to the ADS Developer API, one must do two things:

- Create an account and log in to the latest version of ADS.
- Push the "Generate a new key" button under Customize Settings -> Account Settings -> API Token.

The API key can be written into your ads2bibdesk preference file ~/.ads/ads2bibdesk.cfg (see the template). Following the Python ads package's instructions, one can also save the key to ~/.ads/dev_key or as an environment variable named ADS_DEV_KEY.

Repo: https://github.com/r-xue/ads2bibdesk
PyPI: https://pypi.python.org/pypi/ads2bibdesk

Credit to the contributors of the original ads_bibdesk (@jonathansick, @RuiPereira, @keflavich) for their initial implementation.

Quickstart

Installation

The command line script can be installed via:

pip install --user git+https://github.com/r-xue/ads2bibdesk.git  # from GitHub
pip install --user ads2bibdesk                                   # from PyPI (likely behind the GitHub version)
pip install --user .                                             # from a local copy

To build the macOS app and service workflow, you need to further run:

pip install --user -U --no-deps --force-reinstall --install-option="--service" ads2bibdesk  # from PyPI

The "--service" option will create two files, Add to BibDesk.workflow and Add to BibDesk.app, in ~/Downloads/. To install the service, click Add to BibDesk.workflow and it will be moved to ~/Library/Services/. For the app, just drag and drop it to any preferred location.

Note:

- Only Python >= 3.7 is supported (see below).
- With the "--user" option, you must add the user-level bin directory (e.g., ~/Library/Python/3.X/bin) to your PATH environment variable in order to launch ads2bibdesk.
- Both the macOS service and app are based on an Automator workflow. They simply wrap around the command line program and serve as its shortcuts.
- The service shortcut will not work within some applications (e.g., Safari) on macOS >= 10.14 due to new privacy and security features built into macOS (see this issue).

Usage

From the command line

Add or update an article from ADS:

ads2bibdesk "1807.04291"
ads2bibdesk "2018ApJ...864L..11X"
ads2bibdesk "2013ARA&A..51..105C"
ads2bibdesk "10.3847/2041-8213/aaf872"

ads2bibdesk accepts three kinds of article identifier at this moment:

- ADS bibcode (e.g. 1998ApJ...500..525S, 2019arXiv190404507R)
- arXiv id (e.g. 0911.4956)
- doi (e.g. 10.3847/1538-4357/aafd37)

A full summary of ads2bibdesk commands is available via:

ads2bibdesk --help

From the macOS app

1. Copy the article identifier to the clipboard, in any application.
2. Launch Add to BibDesk.app.

From the macOS service

1. Highlight and right-click on the article identifier.
2. Choose 'Services > Add to BibDesk' from the right-click menu.

Compatibility and dependencies

I've only tested the program on the following macOS setup:

- macOS (>= 10.14)
- Python (>= 3.7.3)
- BibDesk (>= 1.7.1)

While the program likely works on slightly older software versions, I don't focus on backward compatibility. On my working machine (Catalina), I set Python 3.8 from MacPorts as the default:

sudo port install python38 py38-pip py38-ipython
sudo port select python python38
sudo port select ipython py38-ipython
sudo port select pip pip38

Status

The following functions have already been implemented in the package:

- query the article metadata (title, abstract, BibTeX, etc.) with the new API by article identifiers (no more in-house ADS/arXiv HTML parsing functions)
- download article PDFs using the ADS gateway links
- use an authorized on-campus ssh proxy machine (with your public key) to download PDFs behind the journal paywall
- add/update the BibDesk database and attach downloaded PDFs (largely borrowing the AppleScript method from the original ads_bibdesk)

Other changes from the original ads_bibdesk include:

- cleaned-up dependency requirements
- obsolete Python syntax/functions/modules replaced with newer ones, e.g. optparse -> argparse, f-string formatting, and configparser
- the macOS Automator workflow runs the installed console script rather than an embedded Python program

Some less-used features from the original ads_bibdesk are gone: notably, the "ingest" and "preprint-update" modes. But I plan to at least add back the "preprint-update" option, by scanning/updating the article_bibcode associated with arXiv. My improvement proposal can be found here.
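The three identifier types accepted above can be told apart with simple patterns. Below is a minimal sketch of such a classifier; the function name and regexes are my own illustration, not ads2bibdesk's internals:

```python
import re

def classify_identifier(token: str) -> str:
    """Guess whether an article token is a DOI, an arXiv id, or an ADS bibcode."""
    # DOIs start with a "10." registrant prefix followed by a slash.
    if re.match(r"^10\.\d{4,}/", token):
        return "doi"
    # New-style arXiv ids look like YYMM.NNNNN (4-5 digits after the dot).
    if re.match(r"^\d{4}\.\d{4,5}$", token):
        return "arxiv"
    # ADS bibcodes are exactly 19 characters, starting with a 4-digit year.
    if len(token) == 19 and re.match(r"^\d{4}", token):
        return "bibcode"
    return "unknown"

print(classify_identifier("10.3847/1538-4357/aafd37"))  # doi
print(classify_identifier("0911.4956"))                 # arxiv
print(classify_identifier("1998ApJ...500..525S"))       # bibcode
```

The 19-character length check exploits the fixed-width layout of ADS bibcodes (YYYYJJJJJVVVVMPPPPA).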
ads2gephi
ads2gephi is a command line tool for querying and modeling citation networks from the Astrophysical Data System (ADS) in a format compatible with Gephi, a popular network visualization tool. ads2gephi was developed at the history of science department of TU Berlin as part of a research project on the history of extragalactic astronomy financed by the German Research Foundation DFG (PI Karin Pelte).

You can install ads2gephi from PyPI:

pip install ads2gephi

Usage

When using the tool for the first time to model a network, you will be prompted to enter your ADS API key. Your key will then be stored in a configuration file under ~/.ads2gephi.

In order to sample an initial citation network, you need to provide ads2gephi with a plain text file of bibcodes (ADS unique identifiers), one per line, as input. The queried network will be output as a SQLite database stored in the current directory:

ads2gephi -c bibcodes_example.txt -d my_fancy_netzwerk.db

Afterwards, you can extend the queried network by providing the existing database file and using the additional sampling options. For example, you can extend the network by querying all the items cited in every publication previously queried:

ads2gephi -s ref -d my_fancy_netzwerk.db

Finally, you might also want to generate the edges of the network. There are several options for generating edges. For example, you can use a semantic similarity measure like bibliographic coupling or co-citation:

ads2gephi -e bibcp -d my_fancy_netzwerk.db

You can also do everything at once:

ads2gephi -c bibcodes_example.txt -s ref -e bibcp -d my_fancy_netzwerk.db

All other querying and modeling options are described in the help page:

ads2gephi --help

Once you've finished querying and modeling, the database file can be directly imported into Gephi for network visualization and analysis.

Special thanks to Edwin Henneken.
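To illustrate what a co-citation edge measure does (this is a generic sketch of the concept, not ads2gephi's actual implementation): two publications are linked whenever a third publication cites both, and the edge weight counts how many co-citing publications there are.

```python
from itertools import combinations
from collections import Counter

def cocitation_edges(reference_lists):
    """Count, for each pair of cited items, how many papers cite both.

    reference_lists: mapping of citing bibcode -> list of cited bibcodes.
    Returns a dict mapping sorted (bibcode_a, bibcode_b) pairs to weights.
    """
    weights = Counter()
    for cited in reference_lists.values():
        for a, b in combinations(sorted(set(cited)), 2):
            weights[(a, b)] += 1
    return dict(weights)

# Hypothetical toy input: two papers co-cite the same pair of classics.
refs = {
    "paperA": ["1998ApJ...500..525S", "1929PNAS...15..168H"],
    "paperB": ["1998ApJ...500..525S", "1929PNAS...15..168H"],
    "paperC": ["1929PNAS...15..168H"],
}
edges = cocitation_edges(refs)
# The pair ("1929PNAS...15..168H", "1998ApJ...500..525S") gets weight 2.
```

Bibliographic coupling is the mirror image: link two citing papers by the number of references they share.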
ads2inspire
Replace ADS citations with the appropriate INSPIRE ones in LaTeX and BibTeX.

Why? Because ADS citation keys are not stable: they start out as something like 2019arXiv191207609S, and after the paper is accepted to a journal they turn into something like 2020PhRvD.101f4007S. This means you have to rewrite your LaTeX, or you might even end up citing both entries!

Installation

From PyPI. In your Python environment, run:

python -m pip install ads2inspire

From conda-forge. In your conda environment, run:

conda install -c conda-forge ads2inspire

From this repository. In your Python environment, from the top level of this repository, run:

python -m pip install .

From GitHub. In your Python environment, run:

python -m pip install "git+https://github.com/duetosymmetry/ads2inspire.git#egg=ads2inspire"

Usage

First latex/bibtex/latex your file, then run:

ads2inspire [--backup] [--filter-type [ads|all]] [--fill-missing] auxfile.aux [texfile1.tex [texfile2.tex [...]]]

If your main tex file is named wonderful.tex, then your auxfile will be named wonderful.aux. ads2inspire will read the aux file, query INSPIRE, then rewrite all the texfiles named on the command line, and append to the first bibtex file named in the auxfile. The --backup option makes the program write backups of the tex and bib files before rewriting them. The --filter-type option controls which keys to search for on INSPIRE: the default "ads" will only search for keys that look like ADS keys, while "all" will try all keys (aside from those that look like INSPIRE keys). The --fill-missing flag will query for INSPIRE-like keys that were referenced in the LaTeX source but missing from the .bib file, and fill them into the .bib file if found.

Contributing

Note that I have done very little testing! Want to pitch in and help make this code better? Please fork and send me PRs!

TODO:
- More testing
- More filter types
- more?

Citation

The preferred BibTeX entry for citation of ads2inspire is:

@software{ads2inspire,
  author  = "{Stein, Leo C. and Feickert, Matthew}",
  title   = "{ads2inspire: v0.3.1}",
  version = {v0.3.1},
  doi     = {10.5281/zenodo.3903987},
  url     = {https://github.com/duetosymmetry/ads2inspire},
}
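The core rewriting step can be pictured as a substitution over the tex source: find ADS-looking keys and swap in their INSPIRE equivalents. A schematic sketch follows; the key map, the INSPIRE key "Stein:2019mop", and the function are illustrative, not the package's actual code:

```python
import re

def replace_keys(tex_source: str, key_map: dict) -> str:
    """Replace ADS-style citation keys found anywhere in the tex source."""
    def sub_key(match):
        key = match.group(0)
        return key_map.get(key, key)  # leave unknown keys untouched
    # ADS keys: a 4-digit year followed by 15 more bibcode characters (19 total).
    pattern = re.compile(r"\b\d{4}[0-9A-Za-z.&]{15}\b")
    return pattern.sub(sub_key, tex_source)

key_map = {"2019arXiv191207609S": "Stein:2019mop"}  # hypothetical INSPIRE key
tex = r"as shown in~\cite{2019arXiv191207609S}."
print(replace_keys(tex, key_map))
# as shown in~\cite{Stein:2019mop}.
```

The real tool also edits the .bib file so the renamed keys still resolve; this sketch only covers the tex side.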
ads4MO
ads4MO - Added Download Services

I developed this software motivated by the wish to improve my efficiency and productivity. It is just an attempt/idea/prototype and is not fully optimised or considered stable. This project also gave me ideas for other tools: astool4NC, JupLab4NetCDF and MerOC. To learn more about them, just visit the project web pages hyperlinked above.

I also created a chat community powered by Gitter where it is possible to exchange ideas, functionalities, bugs and much more. Just click to access the chat room.

Many thanks for visiting this page and trying this Python software.

Carmelo Sammarco

Introduction

It is possible to download data by MONTH, DEPTH or DAY, with a maximum of three selected variables (I plan to increase this number, though). It brings, in a very intuitive scripting way, what was already proposed with MerOC. Be aware that the tool is in development, so it is possible to find bugs, errors and imprecisions. Please report them if you find any.

Dependencies

The required dependencies which are not installed by default are listed below:

- motuclient >= 1.8.1
- ftputil >= 3.4

Installation and module usage

It is possible to install and then use the module on both UNIX and Windows operating systems by following the steps below:

pip install ads4MO

We can import the module as:

from ads4MO import download

Once the module is imported, we can call the interactive download process by typing:

download()

At this point the system is going to ask for:

- Username and password.
- Type of download, which can be set by typing one of the following:
  - MONTH: the entire selected period will be downloaded by months
  - DEPTH: the entire selected period will be downloaded by depth levels
  - DAY: the entire selected period will be downloaded as daily files
  - MONTH&DEPTH: the entire selected period will be downloaded by months and depth levels
  - YEAR: the entire selected period will be downloaded by years. Very useful when you want to extract a single grid point (with --longitude-min = --longitude-max and --latitude-min = --latitude-max).
- Starting/ending time: if no HH:MM:SS values are typed, "12:00:00" is used as the default.
- The motu client script generated by the CMEMS web portal. Please copy and paste just from "--motu" until the end. You can leave "--out-dir <OUTPUT_DIR> --out-name <OUTPUT_FILENAME> --user <USERNAME> --pwd <PASSWORD>" untouched because those values were already set previously.

The following is an example of the full script generated by the web portal:

python -m motuclient --motu http://..... --service-id GLOBAL_ANALYSIS_FORECAST_PHY_001_024-TDS --product-id global-analysis-forecast-phy-001-024 --longitude-min -180 --longitude-max 179.9166717529297 --latitude-min -80 --latitude-max 90 --date-min "2019-04-19 12:00:00" --date-max "2019-04-19 12:00:00" --depth-min 0.493 --depth-max 0.4942 --variable thetao --variable bottomT --out-dir <OUTPUT_DIR> --out-name <OUTPUT_FILENAME> --user <USERNAME> --pwd <PASSWORD>

What you need to use as the module's input:

--motu http://nrt.cmems-du.eu/motu-web/Motu --service-id GLOBAL_ANALYSIS_FORECAST_PHY_001_024-TDS --product-id global-analysis-forecast-phy-001-024 --longitude-min -180 --longitude-max 179.9166717529297 --latitude-min -80 --latitude-max 90 --date-min "2019-04-19 12:00:00" --date-max "2019-04-19 12:00:00" --depth-min 0.493 --depth-max 0.4942 --variable thetao --variable bottomT --out-dir <OUTPUT_DIR> --out-name <OUTPUT_FILENAME> --user <USERNAME> --pwd <PASSWORD>

The results are downloaded to the file path in which the terminal/command prompt was located at the moment of the data request.

Stand-alone applications (no Python installation required)

The stand-alone app for Windows can be downloaded from HERE. The app was developed, compiled and tested in a Windows 10 environment; as soon as I have time I will try to test it in other Windows environments. It will generate a folder in which all the downloaded data will be stored. This folder will be created in the same system path where the executable "ads4mo-win.exe" is located.

The stand-alone app for macOS can be downloaded from HERE. If the app doesn't start because of an "unidentified developer" message, you need to give the system permission to run it. In more detail, please go to Security & Privacy (tab "General") and then click on the button which allows the execution of the tool. It will create a Desktop folder which will store all the downloaded data.
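The MONTH download mode described above amounts to splitting the requested date range into per-month chunks and issuing one motuclient request per chunk. A rough sketch of just that splitting step (my own illustration, not ads4MO's actual code):

```python
from datetime import date, timedelta

def month_chunks(start: date, end: date):
    """Split [start, end] into (first_day, last_day) pairs, one per month."""
    chunks = []
    current = date(start.year, start.month, 1)
    while current <= end:
        # Compute the first day of the next month.
        if current.month == 12:
            nxt = date(current.year + 1, 1, 1)
        else:
            nxt = date(current.year, current.month + 1, 1)
        # Clip the chunk to the requested range.
        chunk_start = max(current, start)
        chunk_end = min(nxt - timedelta(days=1), end)
        chunks.append((chunk_start, chunk_end))
        current = nxt
    return chunks

for first, last in month_chunks(date(2019, 1, 15), date(2019, 3, 10)):
    # Each pair would become one --date-min/--date-max request to motuclient.
    print(first, last)
```

Each resulting pair maps onto one `--date-min "... 12:00:00" --date-max "... 12:00:00"` request.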
ads-api
No description available on PyPI.
ads-async
Asyncio (or sans-I/O) TwinCAT AMS/ADS testing server in pure Python... and maybe a prototype client, too.

Requirements

- Python 3.9+
- (Optional) pytmc (for loading .tmc files in the server)

Server functionality

- Reference asyncio implementation.
- Loads .tmc files for symbol information (basic types only).
- Supports read, write, read/write of symbols (by handle or name).
- Supports 'sum up' bulk reads (by way of read_write).
- Pretends to create/delete notifications (not yet working).

Client functionality

- Preliminary symbol, handle, and notification support.
- LOGGER port message decoding.
- Shortcuts for common information (project/application/task names, task count).
- Ability to easily prune unknown notification handles.
- Automatic reconnection.
- Log system configuration.

Installation

$ git clone git@github.com:pcdshub/ads-async
$ cd ads-async
$ pip install .

Running the tests

$ pip install pytest
$ pytest -vv ads_async/tests
adsb
ADS-B tools for Python

The adsb package currently provides tools for working with ADS-B messages produced by software with BaseStation-like output, such as dump1090. This project is very early in development.

Quickstart

adsb is available on PyPI and can be installed with pip:

$ pip install adsb

After installing adsb you can use it like any other Python module. Here is a simple example:

import adsb
# Fill this section in with the common use-case.

The API Reference provides API-level documentation.

Change log

0.0.1 - Project created.
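Since the package's own example is still a stub, here is a hedged illustration of what parsing BaseStation-style output might look like. The field positions follow the commonly documented SBS-1 (port 30003) layout, and the helper function is mine, not part of the adsb package:

```python
def parse_sbs_line(line: str) -> dict:
    """Parse one BaseStation (SBS-1) message line into a dict.

    Field positions per the commonly documented SBS-1 layout; empty
    fields are mapped to None.
    """
    fields = line.strip().split(",")
    return {
        "message_type": fields[0],        # e.g. "MSG"
        "transmission_type": fields[1],   # 1-8 for MSG messages
        "hex_ident": fields[4],           # ICAO 24-bit address
        "callsign": fields[10].strip() or None,
        "altitude": int(fields[11]) if fields[11] else None,
        "latitude": float(fields[14]) if fields[14] else None,
        "longitude": float(fields[15]) if fields[15] else None,
    }

# Hypothetical MSG,3 (airborne position) line with 22 comma-separated fields.
line = ("MSG,3,1,1,A1B2C3,1,2019/04/19,12:00:00.000,2019/04/19,12:00:00.000,"
        "ABC123 ,35000,,,51.5074,-0.1278,,,,,,")
msg = parse_sbs_line(line)
```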
ads-b
ads-b: A Python 3 library for encoding/decoding ADS-B messages

Installation

The latest version of ads-b can be installed from PyPI using pip:

python3 -m pip install ads-b

Basic usage

import adsb
# TODO: Fill this in
adsbcot
Display Aircraft in TAK

ADSBCOT is software for monitoring and analyzing aviation surveillance data via the Team Awareness Kit (TAK) ecosystem of products. ADSBCOT captures & reports real-time ADS-B data received from aircraft (or other airborne vehicles and drones) into TAK products using native TAK protocols, including Cursor on Target (CoT). ADSBCOT has been evaluated with WinTAK, iTAK, ATAK & TAK Server, and is in active use today in a variety of missions.

Documentation is available here.

Use ADS-B aggregators? Check out my sister software, ADSBXCOT. Want a turn-key ADS-B to TAK gateway? Check out AirTAK.

License

Copyright Sensors & Signals LLC https://www.snstac.com

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
ads_bd
No description available on PyPI.
adsbibdesk
This is the command line edition of ADS to BibDesk, a tool for retrieving the bibtex, abstract and PDF of an astronomical journal article published on ADS or arXiv.org and adding it to your BibDesk database. ADS to BibDesk comes in two flavours: an Automator Service that you can use to grab papers in any app (e.g., in Safari, or Mail), or a command line app.

Developers: please read the CONTRIBUTING document for details on how to build the ADS to BibDesk CLI/Service from source, make changes, and submit pull requests.

Command Line Quickstart

ADS to BibDesk can also be run directly from the command line. The command line script can be installed via:

python setup.py install

You may need to run the last command with sudo. Once adsbibdesk is installed, you can call it with the same types of article tokens you can launch the Service with, e.g.:

adsbibdesk 1998ApJ...500..525S

A full summary of adsbibdesk commands is available via:

adsbibdesk --help

Summary of article tokens

- The URL of an ADS or arXiv article page,
- The ADS bibcode of an article (e.g. 1998ApJ...500..525S),
- The arXiv identifier of an article (e.g. 0911.4956), or
- An article DOI.

Other Modes

Besides the primary mode (adding a single paper to BibDesk), ADS to BibDesk has three other modes: previewing papers, updating preprints, and ingesting PDF archives into BibDesk.

Previewing Papers

Use the -o switch to simply download and view the PDF of an article without adding it to BibDesk. E.g.:

adsbibdesk -o 1998ApJ...500..525S

Updating Preprints

Run ADS to BibDesk with the -u switch to find and update all astro-ph preprints in your BibDesk bibliography:

adsbibdesk -u

To restrict this update to a date range, you can use the --from_date (-f) and --to_date (-t) flags with dates in MM/YY format. For example, to update preprints published in 2012, run:

adsbibdesk -u --from_date=01/12 --to_date=12/12

Note that this operation can take some time since we throttle requests to ADS to be a nicer robot.

PDF Ingest Mode

With the command-line ADS to BibDesk, you can ingest a folder of PDFs that originated from ADS into BibDesk. This is great for users who have amassed a literature folder, but are just starting to use BibDesk. It will get you started quickly.

You need the program pdf2json to use this script. The easiest way to get pdf2json and its dependencies is through Homebrew, the Mac package manager. Once Homebrew is set up, simply run brew install pdf2json.

To run this workflow:

adsbibdesk -p my_pdf_dir/

where my_pdf_dir/ is a directory containing PDFs that you want to ingest. Note that this workflow relies on a DOI existing in the PDF. As such, it will not identify astro-ph preprints, or published papers older than a few years. Typically the DOI is printed on the first page of modern papers. This method was inspired by a script by Dr Lucy Lim.

License

Copyright 2014 Jonathan Sick, Rui Pereira and Dan Foreman-Mackey

ADS to BibDesk is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. ADS to BibDesk is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with ADS to BibDesk. If not, see <http://www.gnu.org/licenses/>.
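The PDF ingest mode hinges on finding a DOI in the extracted PDF text. As a hedged illustration of that lookup step (the regex is a common DOI-matching heuristic and the helper is mine, not the tool's actual code):

```python
import re

# Heuristic pattern for modern Crossref-style DOIs.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def find_doi(text: str):
    """Return the first DOI-like string found in extracted PDF text, or None."""
    match = DOI_PATTERN.search(text)
    return match.group(0) if match else None

page_text = "The Astrophysical Journal, 500:525. doi:10.1086/305772"
print(find_doi(page_text))  # 10.1086/305772
```

Once a DOI is in hand, the bibliographic record can be fetched and the PDF attached, which is why DOI-less preprints cannot be ingested this way.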
ads-bidding-utils
No description available on PyPI.
ads-bluebook
No description available on PyPI.
adsb-tools
adsb_tools

Scripts and stuff to help you parse through data from Dump1090 messages. Dump1090 is a Mode S decoder specifically designed for RTL-SDR devices. It is a simple ADS-B (Automatic Dependent Surveillance - Broadcast) receiver, decoder and web server. It requires an RTL-SDR USB stick and Osmocom's librtlsdr. FlightAware maintains the current version of Dump1090.

Aircraft class

from adsb_tools.aircraft import Aircraft

"""
You must provide the base URL for your ADS-B feeder,
e.g., localhost:8080 or 192.x.x.x:8080.

Optional: provide base_latitude and base_longitude, which can be the
coordinates of your receiver. By providing base coordinates, each aircraft
message will be augmented with additional info, including distance from the
base coordinates and direction. The nearest aircraft will be identified, and
additional external info will be retrieved, e.g. registration and an image.
"""
aircraft = Aircraft(base_adsb_url, base_latitude, base_longitude)

# get nearest aircraft
nearest_aircraft = aircraft.nearest_aircraft

This class has methods to augment and provide additional information about aircraft based on ADS-B messages. The class takes ADS-B messages from the Dump1090 aircraft JSON and a base station's coordinates (latitude and longitude) as input. It then calculates the distance and direction of each aircraft from the base station and adds this information to the ADS-B messages.

The class has the following methods:

- __init__(self, base_adsb_url, base_latitude=None, base_longitude=None): the class constructor. It initializes the base_adsb_url, earth_radius_km, aircraft_list, and nearest_aircraft properties. If the base_latitude and base_longitude arguments are provided, the augment_aircraft_list() and set_nearest_aircraft() methods are called.
- augment_aircraft_list(self): adds additional options to the aircraft list based on aircraft properties. The new options include the aircraft's distance from the base station, the direction of the aircraft from the base station, and the aircraft's ICAO 24-bit address.
- set_nearest_aircraft(self): finds the nearest aircraft to the base station and adds links to various databases, such as the HEXDB and ADSB databases.
- retrieve_external_aircraft_options(self): retrieves external flight information and aircraft images for the nearest aircraft. It updates the nearest_aircraft dictionary with the aircraft image URL and the merged flight information.
- map_static_aircraft_options(self, stored_aircraft): maps static aircraft information from a stored aircraft dictionary to the nearest aircraft dictionary.
- get_aircraft_list(self): retrieves the aircraft messages and returns them as a list.
- add_aircraft_options(self): adds additional data to aircraft based on aircraft properties. It calculates the distance and direction of each aircraft from the base station and adds this information to the aircraft message. It also adds the aircraft's ICAO 24-bit address to the message.
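The distance-and-direction calculation described above is typically done with the haversine formula and an initial-bearing formula. A self-contained sketch under that assumption (this mirrors the idea, not the package's exact code; the earth_radius_km property suggests a spherical-earth model like this one):

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in [0, 360) degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    return (math.degrees(math.atan2(y, x)) + 360) % 360
```

Applying these to each aircraft's reported position versus the base coordinates yields the per-message distance and direction fields.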
adsb-track
ADS-B Track

Python logging utility for ADS-B receivers.

Basic Installation

pip install adsb-track

Developer Installation

1. Clone the repository: git clone https://github.com/xander-hirsch/adsb-track
2. Install pipenv to the user pip installation with python3 -m pip install pipenv --user --upgrade
3. Install the project pipenv developer environment. Navigate to the project directory root. Run the command python3 -m pipenv install --dev.
4. Install the adsb-track package to the pipenv environment with the command python3 -m pipenv run pip install -e .

Usage

The database recording functionality is provided in adsb_track/record.py:

$ python -m adsb_track.record --help
usage: record.py [-h] [--host HOST] [--port PORT] [--rawtype {raw,beast,skysense}] --lat LAT --lon LON DATABASE

Capture and record ADS-B data.

positional arguments:
  DATABASE              The SQLite database file to record data.

optional arguments:
  -h, --help            show this help message and exit
  --host HOST           The host of the ADS-B source
  --port PORT           The port of the ADS-B source
  --rawtype {raw,beast,skysense}
                        The ADS-B output data format
  --lat LAT             Receiver latitude
  --lon LON             Receiver longitude

Documentation

https://adsb-track.xanderhirsch.us
adsbxcot
Display Aircraft in TAK

ADSBXCOT is a PyTAK gateway for displaying aircraft tracks from ADS-B aggregators in TAK products, including ATAK, WinTAK & iTAK. ADSBXCOT converts ADS-B messages from aircraft into Cursor on Target (CoT), as spoken by TAK. ADS-B messages are read from global ADS-B data aggregators. ADS-B data sent to TAK retains much of the aircraft's track, course & speed parameters, and includes other metadata about the aircraft: flight, tail, category, et al. ADSBXCOT runs in any Python 3.6+ environment, on both Windows & Linux.

N.B.: Almost all ADS-B aggregators require pre-authorization before allowing access to ADS-B data. This pre-authorization is often in the form of an API key. Many of these services provide API keys for free, provided you feed data from your local ADS-B receiver into their cloud service; otherwise, they charge a fee for access to ADS-B data. Your organization should reach out to these ADS-B data aggregator services directly to negotiate terms for your use.

Documentation is available here.

Have your own ADS-B receiver? Check out ADSBCOT. Want a turn-key ADS-B to TAK gateway? Check out AirTAK.

Supported ADS-B aggregators:

- ADS-B Exchange: https://www.adsbexchange.com/
- adsb.fi: https://adsb.fi/
- ADS-B Hub: https://www.adsbhub.org/
- Airplanes.Live: https://airplanes.live/

License

Copyright Sensors & Signals LLC https://www.snstac.com

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
adscli
OverviewMicroservices are great, but running them on your dev box is annoying. Each one has its own commands to start and stop, check status - and how are you supposed to know where the logs are?ads fixes this by requiring each participating service to expose a simple uniform interface for the most common commands: start, stop, status, and log locations.To use ads, drop a file calledads.ymlin each service’s directory:start_cmd: gradle run > obscure/logs/dir/out & # ads can be used with any build system - cmds are just bash stop_cmd: pgrep -f ninja-service | xargs kill -9 # Still the most reliable way to kill a process status_cmd: pgrep -f ninja-service # Exit status indicates whether any process matched log_paths: - obscure/logs/dir/* - even/more/secret/logs/dir/**/ninja.log # Note the glob support description: Web service that turns your ordinary app into badass rockstart tech. # Optional but a good ideaThere are a few more fields, but this will get you started.Create one more file calledadsroot.yml, in the root of your codebase:# Actually, you don't need to put anything in it yet. 
# The existence of the file is sufficient.Now you can run ads from anywhere in the codebase and get at any of the services.$ cd /anywhere/in/codebase $ ads list ninja: Web service that turns your ordinary app into badass rockstart tech.Once you’ve “adsified” a bunch of services, ads makes it really convenient to operate on several at once.$ ads status --- db: not running --- ninja: not running --- pirate: not running $ ads up ninja pirate --- Starting [ninja, pirate] --- Starting ninja --- Starting pirateGetting startedDependenciesads has been tested with python 2.7.8 on Mac OS Yosemite-El Capitanpythonpip: install witheasy_install pipshell stuff available on any Unixy OS (find,bash,tail,cat)Installingpip install adscliA brief tourIf you want to follow along, install ads (see “Installing”) and clone this repo to get the sample project (git clonehttps://github.com/adamcath/ads).$ cd ads/doc/samples/introWhat do we got here?$ ads list All services in current project (intro): ninja: Slices and chops, mostly db: (No description) pirate: Walks the plank and shivers timbers ... # We'll come back to the rest of this stuffLet’s start a service:$ ads up -v ninja --- Starting [ninja] --- Checking if ninja is already running cd /intro/./ninja pgrep -f ninja.sh --- Starting ninja cd /intro/./ninja mkdir logs bash ninja.sh >logs/ninja.out 2>logs/ninja.err & --- Started ninja-v makes ads show you what it’s doing. 
You can usually omit it.Up is idempotent, so you don’t have to remember what state it was in:$ ads up -v ninja --- Starting [ninja] --- Checking if ninja is already running cd /intro/./ninja pgrep -f ninja.sh 4743 --- ninja is already runningToo much chopping; let’s stop ninja:$ ads down -v ninja --- Checking if ninja is running cd /intro/./ninja pgrep -f ninja.sh 4863 --- Stopping ninja cd /intro/./ninja pgrep -f ninja.sh | xargs kill -9 --- Stopped ninjaI forget, is ninja up?$ ads status -v ninja cd /intro/./ninja pgrep -f ninja.sh --- ninja: not runningAny command can take a list of services:$ ads up -v ninja pirate --- Starting [ninja, pirate] ...If you don’t say which service, ads does ‘em all (you can override this by settingdefaultin adsroot.yml or ~/.ads_profile.yml):$ ads status --- db: not running --- ninja: ok --- pirate: okLet’s tail the logs:$ ads logs cd /Users/arc/Projects/ads/doc/samples/intro tail -F ninja/logs/ninja.err \ ninja/logs/ninja.out \ pirate/logs/treasure-chest/pirate.err \ pirate/logs/treasure-chest/pirate.log ==> ninja/logs/ninja.err <== ==> ninja/logs/ninja.out <== Chop! Chop! ==> pirate/logs/treasure-chest/pirate.err <== ==> pirate/logs/treasure-chest/pirate.log <== Arrrrr! Arrrrr!tail -F works pretty well with multiple log files, but if you want to focus on one, just specify the service.The logs command has some cool variants:$ ads help logs usage: logs [-h] [--tail | --list | --cat] [--general | --errors] [service [service ...]] ... --tail (Default) Follow the logs with tail -f --list List the paths of all log files which exist (useful for pipelining) --cat Dump the contents of all log files to stdoutFAQMy service needs some one-time setup before it runs. How do I tell ads this?This is a common scenario; for example, you may need to set up the DB schema before you can start anything. ads doesn’t have a solution for this yet. 
Your service should probably try to detect the missing precondition, refuse to start, and direct the user to the relevant wiki page.Does ads let me define dependencies?No. This is one area where ads is opinionated: in production, any service could go down, and the other services would have to be able to deal with that. The dependant service might go unhealthy, but it shouldn’t crash. Therefore, starting in an arbitrary order is a special case of the general problem, which you cannot avoid, of some services being up and others being down.In other words, if a service can’t even start without another running, they’re actually one service.What youcando is specify groups of services and easily start them all (see “Groups” below).Can I specify a “build” step separate from “run”?No. If running requires building, it should just do it. If that’s slow, then improve your project’s build avoidance to reduce rebuilds.Can I use ads to run my services in production?You could, but it’s probably not very useful. The main benefit of ads is the ergonomics of running several services from source. In my experience, this is not a big problem in production.Why isn’t this just…part of the build system?Building is a very general problem, and build systems are quite flexible. This flexibility comes at a cost: even in a well-factored build system, you always have to figure out which targets you’re supposed to run. ads is a “run” system, not a build system, so it can be restricted to a fixed set of commands - the ones you need to run services.Big projects often involve multiple languages and build systems. I wanted a uniform way to run them all.It’s fairly annoying to implement things likeads logsin most build systems.an init.d script (or similar)?ads is inspired by OS service managers, but:I don’t want to “install” each service on my dev box. That would raise awkward questions about what happens when I change the code. 
I want to run things straight from source.

init.d scripts are pretty fugly. Maybe other service managers are better; if so, I'd be curious to learn about them. I suspect that if this were a good solution, people would be doing it.

…some project-specific helper scripts?

In my experience, code bases frequently evolve a set of helper scripts that make it tolerable to deal with multiple projects. They work well when there's one command to rule them all, but then somebody wants a way to *just restart my stuff*. Now you add some commands to just do that. It becomes very hard to prevent spaghetti unless you end up designing something like ads, which lets you freely compose commands with services. But then…you could have just used ads!

…docker/vagrant/virtualization tech x?

Virtualization solves a very different (and much deeper) set of problems than ads. If you have multiple services running in VMs, you still need something to wrangle them. If everything you use is managed by docker-compose, you might not need ads. But if some stuff is in docker, some is native, some is in VMs, etc., ads is still useful.

Development

Installing from source:

```
git clone https://github.com/adamcath/ads.git
pip install -e .
```

Running the automated tests (from a source checkout):

```
./unit_tests.sh && ./functional_tests.sh
```

Advanced features

Groups

You can define groups of services in adsroot.yml:

```
groups:
  north-america:
    - usa
    - canada
```

and then operate on the whole group at once:

```
$ ads up north-america
```

Groups can contain other groups (but not cycles! Nice try!).
adsctl
Google Ads API CLI and Prompt

Google Ads interface for humans.

This is a work in progress; please open an issue if you find any bugs or have any suggestions.

Features:

- A command line tool for executing GAQL queries against the Google Ads API. Like `psql` for the Google Ads API.
- A command line tool for managing Google Ads resources. Like `kubectl` for the Google Ads API.
- Centralized configuration with multiple account management
- Automatically updates the refresh token
- Python API with Pandas integration

Installation

```
pip install adsctl
```

Getting started

Requirements: all the requirements to use the Google Ads API, including a Developer Token and OAuth2 credentials. See the Google Ads API Quickstart for more details.

Configuration

This project manages its own configuration files. To create the configuration file, run:

```
adsctl config

# Open the location of the config files
adsctl config explore
```

Open the new default config file and fill it with your credentials: Dev Token, Client ID, Client Secret and Customer ID.

To log in and get a refresh token:

```
adsctl auth <path-to-secret.json>
```

The token is saved automatically in the config file. You can see it by running:

```
# View config
adsctl config view
```

Multiple Accounts

You can manage multiple accounts in the config file by adding TOML sections:

```toml
current_account = "default"

[... default account ...]

[accounts.another]
developer_token = ""
customer_id = ""
login_customer_id = ""

[accounts.another.oauth]
client_id = ""
client_secret = ""
```

Set the current account:

```
$ adsctl config set-account another
$ adsctl config get-account
another
```

GAQL Prompt

An interactive shell for executing GAQL queries against the Google Ads API.

```
$ gaql
>>> SELECT campaign.id, campaign.name FROM campaign ORDER BY campaign.id
+----+-----------------------------+-------+-------------+
|    | resourceName                | name  | id          |
|----+-----------------------------+-------+-------------|
|  0 | customers/XXX/campaigns/YYY | name1 | 10000000000 |
|  1 | customers/XXX/campaigns/YYY | name2 | 10000000000 |
|  2 | customers/XXX/campaigns/YYY | name3 | 10000000000 |
+----+-----------------------------+-------+-------------+
```

By default it prints results in `table` format, but you can control the output format with the `-o` option:

```
# Print the plain protobuf response
$ gaql -o plain

# Print the contents of a CSV file
$ gaql -o csv
```

You can also run a single inline command and redirect the output to a file:

```
gaql -c 'SELECT campaign.id, campaign.name FROM campaign ORDER BY campaign.id' -o csv > my-query.csv
```

This assumes only one table is returned, but in more complex queries that include other resources, or when using metrics or segments, multiple tables are created.
In those cases, use the `-o csv-files` flag to save each table to a different file based on the table name:

```
$ gaql -c 'SELECT campaign.id, campaign.name FROM campaign ORDER BY campaign.id' -o csv-files
$ ls
campaign.csv
```

Query variables

You can specify one or more variables using the Jinja syntax; those variables will be replaced with the values specified in the `-v` options.

```
gaql -c 'SELECT campaign.id, campaign.name FROM campaign WHERE campaign.id = {{ id }} ORDER BY campaign.id' -v id=123456789
```

You can also pass `-v` without a command and use the variables in the prompt queries:

```
$ gaql -v id=123456789 -v field=name
>>> SELECT campaign.id, campaign.{{ field }} FROM campaign WHERE campaign.id = {{ id }} ORDER BY campaign.id
```

Other options

You can overwrite the account and customer ID using the `-a` and `-i` options. See `gaql --help` for more details.

CLI

Campaign Management

Get campaigns:

```
adsctl get campaign
Name                                          Status    Id
--------------------------------------------  --------  -----------
Interplanetary Cruise Campaign #168961427368  Paused    20370195066
Interplanetary Cruise Campaign #168961215970  Paused    20379497161
```

Status management:

```
adsctl edit campaign -i <campaign-id> status enabled
adsctl edit campaign -i 20370195066 status enabled

adsctl get campaign
Name                                          Status    Id
--------------------------------------------  --------  -----------
Interplanetary Cruise Campaign #168961427368  Enabled   20370195066
Interplanetary Cruise Campaign #168961215970  Paused    20379497161
```

Update budget:

```
adsctl edit campaign -i <campaign-id> budget <amount>
```

Python API

You can also use the Python API to easily execute GAQL queries and get the results as a Python dict or pandas DataFrame.

```python
import adsctl as ads

# Read config file and create the Google Ads client
google_ads = ads.GoogleAds()

# Execute GAQL query
get_campaigns_query = """
SELECT
  campaign.name,
  campaign_budget.amount_micros,
  campaign.status,
  campaign.optimization_score,
  campaign.advertising_channel_type,
  metrics.clicks,
  metrics.impressions,
  metrics.ctr,
  metrics.average_cpc,
  metrics.cost_micros,
  campaign.bidding_strategy_type
FROM campaign
WHERE segments.date DURING LAST_7_DAYS
  AND campaign.status != '{{ status }}'
"""

tables = google_ads.query(get_campaigns_query, params={"status": "REMOVED"})

# Print pandas DataFrames
for table_name, table in tables.items():
    print(table_name)
    print(table, "\n")
```

```
campaign
                                 resourceName   status  ...         name  optimizationScore
0  customers/XXXXXXXXXX/campaigns/YYYYYYYYYYY  ENABLED  ...  my-campaign           0.839904

[1 rows x 6 columns]

metrics
   clicks  costMicros       ctr    averageCpc  impressions
0     210     6730050  0.011457  32047.857143        18330

campaignBudget
                                       resourceName  amountMicros
0  customers/XXXXXXXXXX/campaignBudgets/ZZZZZZZZZZZ       1000000
```

Disclaimer

This is not an official Google product.

This repository is maintained by a Googler but is not a supported Google product. Code and issues here are answered by maintainers and other community members on GitHub on a best-effort basis.
ads-deploy
ads-deploy docker image + tools

ads-deploy bridges the gap between your PLC project in TwinCAT XAE + Visual Studio and the Python/EPICS tools we use for development and deployment (pytmc, ads-ioc) by providing a full EPICS and Python environment in a containerized Docker image.

Features

- pytmc pragma linting / verification
- Build and run ads-based EPICS IOCs directly from Windows
- Generate batch files to run the IOC outside of Visual Studio
- Auto-generate and run simple Typhon screens directly from Windows
- No need to transfer your project and files to a Linux machine just to generate the IOC

Installation

Note: this is partly outdated - Docker is no longer required and conda may be used in place of it.

Step-by-step notes are available here: https://confluence.slac.stanford.edu/display/PCDS/Installing+ads-deploy+on+Windows

Using just the Docker container is simple on all platforms. Run the following to check it out:

Windows

```
C:\> docker run -it pcdshub/ads-deploy:latest /bin/bash
```

OSX / Linux

```
$ eval $(docker-machine env)
$ docker run -it pcdshub/ads-deploy:latest /bin/bash
```

Updating versions

Steps to update ads-deploy:

1. Update ads-ioc-docker (follow its README)
2. Tag and release pytmc (use v0.0.0 style as usual)
3. Update the `FROM pcdshub/ads-ioc` version
4. Update environment variables: `PYTMC_VERSION`, `ADS_IOC_VERSION`
5. Rebuild. Match the `ADS_DEPLOY_VERSION` with the pytmc version, as it tends to change the most:

```
$ export ADS_DEPLOY_VERSION={pytmc version}
$ docker build -t pcdshub/ads-deploy:${ADS_DEPLOY_VERSION} .
$ docker build -t pcdshub/ads-deploy:latest .
```

6. Push to DockerHub:

```
$ docker push pcdshub/ads-deploy:${ADS_DEPLOY_VERSION}
$ docker push pcdshub/ads-deploy:latest
```

7. Commit, tag, and push to GitHub:

```
$ git tag ${ADS_DEPLOY_VERSION}
$ git push
$ git push --tags
```

Links

Docker Hub
ad_sdl.wei
wei

The Workflow Execution Interface (WEI) for Autonomous Discovery/Self-Driving Laboratories (AD/SDLs).

For more details and specific examples of how to use wei, please see our documentation.

Table of Contents

- wei
- Table of Contents
- Usage
- Installation
- Development
- Contributing
- Acknowledgments
- License

Usage

To get started with WEI, we recommend:

- Using the WEI Template Workcell as a starting point to learn how to use WEI, and how to create your own Workcell.
- Consulting the Documentation

Installation

To install the ad_sdl.wei python package, you can run:

```
pip install ad_sdl.wei
```

To install from source, run:

```
git clone https://github.com/ad-sdl/wei.git
cd wei
pip install -e .
```

Development

- Install Docker for your platform of choice
- For managing the Python package, install PDM
- For automatic development checks/linting/formatting, install pre-commit
- Ensure you have GNU make installed (on apt-based systems like Debian and Ubuntu, `sudo apt install make`)
- Clone the repository
- Within the cloned repository, run `make build` to build the python package and docker image locally
- For a complete list of make commands available, run `make help`

Using PDM

- Managing Dependencies
- Build and Publish
- Running Using PDM

Using Pre-commit

To run pre-commit checks before committing, run `pre-commit run --all-files` or `make checks`.

NONE OF THE FOLLOWING SHOULD BE DONE REGULARLY, AND ALL CHECKS SHOULD BE PASSING BEFORE BRANCHES ARE MERGED

- To skip linting during commits, use `SKIP=ruff git commit ...`
- To skip formatting during commits, use `SKIP=ruff-format git commit ...`
- To skip all pre-commit hooks, use `git commit --no-verify ...`

See the pre-commit documentation for more.

Contributing

Please report bugs, enhancement requests, or questions through the Issue Tracker.

If you are looking to contribute, please see CONTRIBUTING.md.

Citing

```bibtex
@Article{D3DD00142C,
  author    = "Vescovi, Rafael and Ginsburg, Tobias and Hippe, Kyle and Ozgulbas, Doga and Stone, Casey and Stroka, Abraham and Butler, Rory and Blaiszik, Ben and Brettin, Tom and Chard, Kyle and Hereld, Mark and Ramanathan, Arvind and Stevens, Rick and Vriza, Aikaterini and Xu, Jie and Zhang, Qingteng and Foster, Ian",
  title     = "Towards a modular architecture for science factories",
  journal   = "Digital Discovery",
  year      = "2023",
  pages     = "-",
  publisher = "RSC",
  doi       = "10.1039/D3DD00142C",
  url       = "http://dx.doi.org/10.1039/D3DD00142C",
  abstract  = "Advances in robotic automation, high-performance computing (HPC), and artificial intelligence (AI) encourage us to conceive of science factories: large, general-purpose computation- and AI-enabled self-driving laboratories (SDLs) with the generality and scale needed both to tackle large discovery problems and to support thousands of scientists. Science factories require modular hardware and software that can be replicated for scale and (re)configured to support many applications. To this end, we propose a prototype modular science factory architecture in which reconfigurable modules encapsulating scientific instruments are linked with manipulators to form workcells, that can themselves be combined to form larger assemblages, and linked with distributed computing for simulation, AI model training and inference, and related tasks. Workflows that perform sets of actions on modules can be specified, and various applications, comprising workflows plus associated computational and data manipulation steps, can be run concurrently. We report on our experiences prototyping this architecture and applying it in experiments involving 15 different robotic apparatus, five applications (one in education, two in biology, two in materials), and a variety of workflows, across four laboratories. We describe the reuse of modules, workcells, and workflows in different applications, the migration of applications between workcells, and the use of digital twins, and suggest directions for future work aimed at yet more generality and scalability. Code and data are available at https://ad-sdl.github.io/wei2023 and in the ESI."
}
```

License

WEI is MIT licensed, as seen in the LICENSE file.
adsense.portlet
Introduction
============

Changelog
=========

1.0 (xxxx-xx-xx)
----------------

* Initial release
adsense_scraper
adsense_scraper is a simple module that uses Twill and html5lib to scrape Google AdSense earnings data from your account.For example, this is useful as a cron job or other sort of periodic task to store a copy of your earnings in your own database so that you don’t have to visit the AdSense site every day.
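As an illustration of the periodic-task use case described above, the sketch below stores one day's earnings figure in a local SQLite database. `get_earnings` is a hypothetical entry point named only for illustration (the module's actual API is not shown here); the SQLite bookkeeping uses only the standard library.

```python
import sqlite3

# Hypothetical entry point, for illustration only; adsense_scraper's
# real API may differ.
# from adsense_scraper import get_earnings

def store_earnings(day, amount, db_path="earnings.db"):
    """Insert (or update) one day's AdSense earnings in a local SQLite table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS earnings (day TEXT PRIMARY KEY, amount REAL)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO earnings (day, amount) VALUES (?, ?)",
        (day, amount),
    )
    conn.commit()
    conn.close()

# In a real cron job this would be: store_earnings(today, get_earnings())
store_earnings("2024-01-01", 12.34)

# Read the value back to confirm it was stored
row = sqlite3.connect("earnings.db").execute(
    "SELECT day, amount FROM earnings WHERE day = '2024-01-01'"
).fetchone()
print(row)  # ('2024-01-01', 12.34)
```

A script like this can then be scheduled with cron, so the earnings history accumulates in your own database without visiting the AdSense site.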
ads-evt
Anomaly Detection in Streams with Extreme Value Theory

This repository wraps the original implementation of SPOT published in KDD'17 as an installable package. We refactor the original one, removing duplicated code. To verify the faithfulness, several test cases are introduced in tests/test_faithfulness.py.

Usage

Install this package via `pip install ads-evt`.

`ads_evt` has almost the same interface as the original implementation.

```python
from typing import List

import matplotlib.pyplot as plt
import numpy as np

import ads_evt as spot

# physics.dat is a file in the original repository
with open("physics.dat", encoding="UTF-8") as obj:
    data = np.array(list(map(float, obj.read().split(","))))
init_data = 2000
proba = 1e-3
depth = 450

models: List[spot.SPOTBase] = [
    # spot.SPOT(q=proba),
    # spot.dSPOT(q=proba, depth=depth),
    # spot.biSPOT(q=proba),
    # The original implementation of bidSPOT uses n_points=8 for _grimshaw by default
    spot.bidSPOT(q=proba, depth=depth, n_points=8),
]
for alg in models:
    alg.fit(init_data=init_data, data=data)
    alg.initialize()
    results = alg.run()
    # Plot
    figs = alg.plot(results)
    plt.show()
```

For developers

Execute test cases with the following commands:

```
# Install dependencies for development
git submodule update --init
python -m pip install -r requirements-dev.txt

# Execute test cases
coverage run
coverage report
```

Licences

GNU GPLv3
adsexplore
No description available on PyPI.
adsg-core
Architecture Design Space Graph Core

GitHub Repository | Documentation

The Architecture Design Space Graph (ADSG) allows you to model design spaces using a directed graph that contains three types of architectural choices:

- Selection choices: selecting among mutually-exclusive options, used for selecting which nodes are part of an architecture instance
- Connection choices: connecting one or more source nodes to one or more target nodes, subject to connection constraints and optional node existence (due to selection choices)
- Additional design variables: continuous or discrete, subject to optional existence (due to selection choices)

Modeling a design space like this allows you to:

- Model hierarchical relationships between choices, for example only activating a choice when another choice has some option selected, or restricting the available options for choices based on higher-up choices
- Formulate the design space as an optimization problem that can be solved using numerical optimization algorithms
- Generate architecture instances for a given design vector, automatically correct incorrect design variables, and get information about which design variables were active
- Implement an evaluation function (architecture instance --> metrics) and run the optimization problem

Installation

First, create a conda environment (skip if you already have one):

```
conda create --name adsg python=3.10
conda activate adsg
```

Then install the package:

```
conda install numpy scipy~=1.9
pip install adsg-core
```

Optionally also install optimization algorithms (SBArchOpt):

```
pip install adsg-core[opt]
```

If you want to interact with the ADSG from a Jupyter notebook:

```
pip install adsg-core[nb]
jupyter notebook
```

Documentation

Refer to the documentation for more background on the ADSG and how to implement architecture optimization problems.

Citing

If you use the ADSG in your work, please cite it:

Bussemaker, J.H., Ciampa, P.D., & Nagel, B. (2020). System architecture design space exploration: An approach to modeling and optimization. In AIAA Aviation 2020 Forum (p. 3172). DOI: 10.2514/6.2020-3172

Contributing

The project is coordinated by: Jasper Bussemaker (jasper.bussemaker at dlr.de)

If you find a bug or have a feature request, please file an issue using the GitHub issue tracker. If you require support for using ADSG Core or want to collaborate, feel free to contact me.

Contributions are appreciated too:

- Fork the repository
- Add your contributions to the fork
- Update/add documentation
- Add tests and make sure they pass (tests are run using `pytest`)
- Read and sign the Contributor License Agreement (CLA), and send it to the project coordinator
- Issue a pull request

NOTE: Do NOT directly contribute to the `adsg_core.optimization.assign_enc` and `.sel_choice_enc` modules! Their development happens in separate repositories: AssignmentEncoding and SelectionChoiceEncoding. Use `update_enc_repos.py` to update the code in this repository.

Adding Documentation

```
pip install -r requirements-docs.txt
mkdocs serve
```

Refer to the mkdocs and mkdocstrings documentation for more information.
adshli
ADSHLI implements a Python client for the Beckhoff TwinCAT AMS/ADS protocol. The client is independent of any Beckhoff-supplied libraries; consequently, no TwinCAT installation is needed.

It provides two APIs:

- A low-level API allowing you to directly send ADS commands to the PLC
- A high-level API providing convenience functions making access to the PLC as easy as possible

For documentation please see the readme file and sample_code.py available at: https://github.com/simonwaid/adshli
adsk
No description available on PyPI.
adskalman
No description available on PyPI.
adskeeper
Failed to fetch description. HTTP Status Code: 404
adsk-flaskoidc
FlaskOIDC

This package relies purely on the Authlib package.

A wrapper of Flask with pre-configured OIDC support. Ideal for a microservices architecture: each request will be authenticated using Flask's `before_request` middleware. Necessary endpoints can be whitelisted using the environment variable `FLASK_OIDC_WHITELISTED_ENDPOINTS`.

Installation:

```
pip3 install flaskoidc
```

Usage:

After simply installing flaskoidc you can use it like below:

```python
from flaskoidc import FlaskOIDC

app = FlaskOIDC(__name__)
```

Configurations:

Please make sure to extend your configurations from `BaseConfig` (only if you are sure what you are doing; the recommended way is to use environment variables for the configuration).

```python
from flaskoidc import FlaskOIDC
from flaskoidc.config import BaseConfig

# Custom configuration class, a subclass of BaseConfig
class CustomConfig(BaseConfig):
    DEBUG = True

app = FlaskOIDC(__name__)
app.config.from_object(CustomConfig)
```

The following ENVIRONMENT VARIABLES MUST be set to get OIDC working.

- `FLASK_OIDC_PROVIDER_NAME` (default: 'google'): The name of the OIDC provider, like google, okta, keycloak etc. I have verified this package only for Google, Okta and Keycloak. Please make sure to open a new issue if any other OIDC provider is not working.
- `FLASK_OIDC_SCOPES` (default: 'openid email profile'): Scopes required to make your client work with the OIDC provider, separated by a space. Okta: make sure to add `offline_access` to your scopes in order to get the refresh_token.
- `FLASK_OIDC_USER_ID_FIELD` (default: 'email'): Different OIDC providers have different id fields for their users. Make sure to adjust this according to what your provider returns in the user profile, i.e., the id_token.
- `FLASK_OIDC_CLIENT_ID` (default: ''): Client ID that you get once you create a new application on your OIDC provider.
- `FLASK_OIDC_CLIENT_SECRET` (default: ''): Client secret that you get once you create a new application on your OIDC provider.
- `FLASK_OIDC_REDIRECT_URI` (default: '/auth'): This is the endpoint that your OIDC provider hits to authenticate against your request. This is what you set as one of your redirect URIs in the OIDC provider client's settings.
- `FLASK_OIDC_CONFIG_URL` (default: ''): To simplify OIDC implementations and increase flexibility, OpenID Connect allows the use of a "Discovery document," a JSON document found at a well-known location containing key-value pairs which provide details about the OpenID Connect provider's configuration, including the URIs of the authorization, token, revocation, userinfo, and public-keys endpoints.

Discovery documents may be retrieved from:

- Google: https://accounts.google.com/.well-known/openid-configuration
- Okta: https://[YOUR_OKTA_DOMAIN]/.well-known/openid-configuration or https://[YOUR_OKTA_DOMAIN]/oauth2/[AUTH_SERVER_ID]/.well-known/openid-configuration
- Auth0: https://[YOUR_DOMAIN]/.well-known/openid-configuration
- Keycloak: http://[KEYCLOAK_HOST]:[KEYCLOAK_PORT]/auth/realms/[REALM]/.well-known/openid-configuration

A few other environment variables, along with their default values:

```
# Flask `SECRET_KEY` config value
FLASK_OIDC_SECRET_KEY: '!-flask-oidc-secret-key'

# Comma separated string of URLs which should be exposed without
# authentication, else all requests will be authenticated.
FLASK_OIDC_WHITELISTED_ENDPOINTS: "status,healthcheck,health"
```

You can also set the config variables specific to Flask-SQLAlchemy using the same keys as the environment variables:

```
# Details about this below in the "Session Management" section.
SQLALCHEMY_DATABASE_URI: 'sqlite:///sessions.db'
```

Known Issues:

- Need to make sure it still works with the client_secrets.json file or via env variables for each endpoint of a custom OIDC provider.
- `refresh_token` is not yet working. I am still trying to figure out how to do this using Authlib.
- You may run into problems when installing cryptography; check its official documentation.
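Putting the required variables above together, a minimal environment setup for a Google-backed deployment might look like the following sketch (the client ID and secret values are placeholders you obtain from your provider):

```shell
export FLASK_OIDC_PROVIDER_NAME="google"
export FLASK_OIDC_CLIENT_ID="my-client-id"            # placeholder
export FLASK_OIDC_CLIENT_SECRET="my-client-secret"    # placeholder
export FLASK_OIDC_CONFIG_URL="https://accounts.google.com/.well-known/openid-configuration"
# Endpoints reachable without authentication
export FLASK_OIDC_WHITELISTED_ENDPOINTS="status,healthcheck,health"
```

With these exported, the `FlaskOIDC(__name__)` app picks up the provider configuration at startup.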
adsl2
adsl2

This is a fun program; you can find many interesting things here. Life is more than the daily grind in front of you; there is also poetry and faraway places. A pretty face is a dime a dozen; an interesting soul is one in ten thousand. All this rambling is just to tell you: this program simply performs automatic dial-up under Windows. Don't be surprised!!
adslab
This package contains the code for ads lab experiments. Meant to be used for legal purposes.
adslconnector
No description available on PyPI.
ads-libraries
# adslibraries

__CAUTION__: this is a simple functional project, although it should work as generally expected

It saves your personal ADS libraries (https://ui.adsabs.harvard.edu/user/libraries/) into a BibTeX file that can be used in LaTeX. It saves BibTeX citation keys as "FirstAuthorYear" (e.g. Ferrigno2017). If a key is not unique, it adds a letter to the other occurrences (e.g. Ferrigno2017, Ferrigno2017a).

It is possible to choose a set of personal libraries or "all".

You need to store your ADS token into the file $HOME/.ads/dev_key; see https://ui.adsabs.harvard.edu/user/settings/token

## Source of methods

https://github.com/adsabs/adsabs-dev-api/blob/master/Libraries_API.ipynb

## Installation

```bash
$ pip install ads-libraries
$ python setup.py install
```

## Help

```bash
$ save_ads_libraries --help
```

## Example

```bash
$ save_ads_libraries my_ads.bib all
```
ads-linguistics
No description available on PyPI.
ads-log-daemon
Daemon for translating TwinCAT ADS Logger messages to JSON for interpretation by [pcds-]logstash.

Documentation

Requirements

- Python 3.7+
- ads-async

Installation

```
$ python -m pip install .
```

Running the Tests

```
$ pytest -vv
```
adslproxy
Dial-up host setup

First configure your proxy, e.g. Squid running on port 3128, with a username and password set. Once the proxy is configured, you can use this tool to redial periodically and send the proxy address to Redis.

Install ADSLProxy:

```
pip3 install -U adslproxy
```

Set environment variables:

```
# Redis database address and password
export REDIS_HOST=
export REDIS_PASSWORD=
# Proxy port configured on this machine
export PROXY_PORT=
# Proxy username configured on this machine; leave empty if no authentication
export PROXY_USERNAME=
# Proxy password configured on this machine; leave empty if no authentication
export PROXY_PASSWORD=
```

Run:

```
adslproxy send
```

Sample output:

```
2020-04-13 01:39:30.811 | INFO | adslproxy.sender.sender:loop:90 - Starting dial...
2020-04-13 01:39:30.812 | INFO | adslproxy.sender.sender:run:99 - Dial Started, Remove Proxy
2020-04-13 01:39:30.812 | INFO | adslproxy.sender.sender:remove_proxy:62 - Removing adsl1...
2020-04-13 01:39:30.893 | INFO | adslproxy.sender.sender:remove_proxy:69 - Removed adsl1 successfully
2020-04-13 01:39:37.034 | INFO | adslproxy.sender.sender:run:108 - Get New IP 113.128.9.239
2020-04-13 01:39:42.221 | INFO | adslproxy.sender.sender:run:117 - Valid proxy 113.128.9.239:3389
2020-04-13 01:39:42.458 | INFO | adslproxy.sender.sender:set_proxy:82 - Successfully Set Proxy
2020-04-13 01:43:02.560 | INFO | adslproxy.sender.sender:loop:90 - Starting dial...
2020-04-13 01:43:02.561 | INFO | adslproxy.sender.sender:run:99 - Dial Started, Remove Proxy
2020-04-13 01:43:02.561 | INFO | adslproxy.sender.sender:remove_proxy:62 - Removing adsl1...
2020-04-13 01:43:02.630 | INFO | adslproxy.sender.sender:remove_proxy:69 - Removed adsl1 successfully
2020-04-13 01:43:08.770 | INFO | adslproxy.sender.sender:run:108 - Get New IP 113.128.31.230
2020-04-13 01:43:13.955 | INFO | adslproxy.sender.sender:run:117 - Valid proxy 113.128.31.230:3389
2020-04-13 01:43:14.037 | INFO | adslproxy.sender.sender:set_proxy:82 - Successfully Set Proxy
2020-04-13 01:46:34.216 | INFO | adslproxy.sender.sender:loop:90 - Starting dial...
2020-04-13 01:46:34.217 | INFO | adslproxy.sender.sender:run:99 - Dial Started, Remove Proxy
2020-04-13 01:46:34.217 | INFO | adslproxy.sender.sender:remove_proxy:62 - Removing adsl1...
2020-04-13 01:46:34.298 | INFO | adslproxy.sender.sender:remove_proxy:69 - Removed adsl1 successfully
```

At this point, the valid proxy has been sent to Redis.
adslproxy-enhance
Dial-up host setup

First configure your proxy, e.g. Squid running on port 3128, with a username and password set. Once the proxy is configured, you can use this tool to redial periodically and send the proxy address to Redis.

Install ADSLProxy:

```
pip3 install -U adslproxy
```

Set environment variables:

```
# Redis database address, port and password
export REDIS_HOST=
export REDIS_PORT=
export REDIS_PASSWORD=
# Proxy port configured on the dial-up cloud host
export PROXY_PORT=
# Proxy username on the dial-up cloud host; leave empty if no authentication
export PROXY_USERNAME=
# Proxy password configured on the dial-up cloud host; leave empty if no authentication
export PROXY_PASSWORD=
```

Run:

```
adslproxy send
```

Sample output:

```
2020-04-13 01:39:30.811 | INFO | adslproxy.sender.sender:loop:90 - Starting dial...
2020-04-13 01:39:30.812 | INFO | adslproxy.sender.sender:run:99 - Dial Started, Remove Proxy
2020-04-13 01:39:30.812 | INFO | adslproxy.sender.sender:remove_proxy:62 - Removing adsl1...
2020-04-13 01:39:30.893 | INFO | adslproxy.sender.sender:remove_proxy:69 - Removed adsl1 successfully
2020-04-13 01:39:37.034 | INFO | adslproxy.sender.sender:run:108 - Get New IP 113.128.9.239
2020-04-13 01:39:42.221 | INFO | adslproxy.sender.sender:run:117 - Valid proxy 113.128.9.239:3389
2020-04-13 01:39:42.458 | INFO | adslproxy.sender.sender:set_proxy:82 - Successfully Set Proxy
2020-04-13 01:43:02.560 | INFO | adslproxy.sender.sender:loop:90 - Starting dial...
2020-04-13 01:43:02.561 | INFO | adslproxy.sender.sender:run:99 - Dial Started, Remove Proxy
2020-04-13 01:43:02.561 | INFO | adslproxy.sender.sender:remove_proxy:62 - Removing adsl1...
2020-04-13 01:43:02.630 | INFO | adslproxy.sender.sender:remove_proxy:69 - Removed adsl1 successfully
2020-04-13 01:43:08.770 | INFO | adslproxy.sender.sender:run:108 - Get New IP 113.128.31.230
2020-04-13 01:43:13.955 | INFO | adslproxy.sender.sender:run:117 - Valid proxy 113.128.31.230:3389
2020-04-13 01:43:14.037 | INFO | adslproxy.sender.sender:set_proxy:82 - Successfully Set Proxy
2020-04-13 01:46:34.216 | INFO | adslproxy.sender.sender:loop:90 - Starting dial...
2020-04-13 01:46:34.217 | INFO | adslproxy.sender.sender:run:99 - Dial Started, Remove Proxy
2020-04-13 01:46:34.217 | INFO | adslproxy.sender.sender:remove_proxy:62 - Removing adsl1...
2020-04-13 01:46:34.298 | INFO | adslproxy.sender.sender:remove_proxy:69 - Removed adsl1 successfully
```

At this point, the valid proxy has been sent to Redis.
adslproxy-ota
Dial-up host setup

First configure your proxy, e.g. Squid running on port 3128, with a username and password set. Once the proxy is configured, you can use this tool to redial periodically and send the proxy address to Redis.

Install ADSLProxy:

```
pip3 install -U adslproxy-ota
```

Set environment variables:

```
# Redis database address, port and password
export REDIS_HOST=
export REDIS_PORT=
export REDIS_PASSWORD=
# Proxy port configured on the dial-up cloud host
export PROXY_PORT=
# Proxy username on the dial-up cloud host; leave empty if no authentication
export PROXY_USERNAME=
# Proxy password configured on the dial-up cloud host; leave empty if no authentication
export PROXY_PASSWORD=
```

Run:

```
adslproxy send
```

Sample output:

```
2020-04-13 01:39:30.811 | INFO | adslproxy.sender.sender:loop:90 - Starting dial...
2020-04-13 01:39:30.812 | INFO | adslproxy.sender.sender:run:99 - Dial Started, Remove Proxy
2020-04-13 01:39:30.812 | INFO | adslproxy.sender.sender:remove_proxy:62 - Removing adsl1...
2020-04-13 01:39:30.893 | INFO | adslproxy.sender.sender:remove_proxy:69 - Removed adsl1 successfully
2020-04-13 01:39:37.034 | INFO | adslproxy.sender.sender:run:108 - Get New IP 113.128.9.239
2020-04-13 01:39:42.221 | INFO | adslproxy.sender.sender:run:117 - Valid proxy 113.128.9.239:3389
2020-04-13 01:39:42.458 | INFO | adslproxy.sender.sender:set_proxy:82 - Successfully Set Proxy
2020-04-13 01:43:02.560 | INFO | adslproxy.sender.sender:loop:90 - Starting dial...
2020-04-13 01:43:02.561 | INFO | adslproxy.sender.sender:run:99 - Dial Started, Remove Proxy
2020-04-13 01:43:02.561 | INFO | adslproxy.sender.sender:remove_proxy:62 - Removing adsl1...
2020-04-13 01:43:02.630 | INFO | adslproxy.sender.sender:remove_proxy:69 - Removed adsl1 successfully
2020-04-13 01:43:08.770 | INFO | adslproxy.sender.sender:run:108 - Get New IP 113.128.31.230
2020-04-13 01:43:13.955 | INFO | adslproxy.sender.sender:run:117 - Valid proxy 113.128.31.230:3389
2020-04-13 01:43:14.037 | INFO | adslproxy.sender.sender:set_proxy:82 - Successfully Set Proxy
2020-04-13 01:46:34.216 | INFO | adslproxy.sender.sender:loop:90 - Starting dial...
2020-04-13 01:46:34.217 | INFO | adslproxy.sender.sender:run:99 - Dial Started, Remove Proxy
2020-04-13 01:46:34.217 | INFO | adslproxy.sender.sender:remove_proxy:62 - Removing adsl1...
2020-04-13 01:46:34.298 | INFO | adslproxy.sender.sender:remove_proxy:69 - Removed adsl1 successfully
```

At this point, the valid proxy has been sent to Redis.
adsmiff
No description available on PyPI.
adsml
AdsML is a suite of business-to-business electronic commerce standards intended to support the exchange of advertising business messages and content delivery using XML. Typical users include newspapers, advertising agencies, broadcasters and others who buy or sell advertising.

This module is a part-implementation of the protocol in Python. Currently it implements part of the AdsML-Bookings component of the standard.

Currently built for Python 3 only - please let me know if you require Python 2 support.

Installation

Installing from PyPI:

```
pip install adsml
```

Usage

Example:

```python
import adsml

parser = adsml.AdsMLParser("adsml-file.txt")

# process the header
header = parser.get_header()

order = parser.get_order()
print("Order: {} (buyers ref {})".format(order, order.buyers_reference))
print("Booking party: {}".format(order.booking_party))
print("Selling party: {}".format(order.selling_party))
print("Campaign: {}".format(order.campaign))
print("Payer information: {}".format(order.payer_information))
print("Placement: {}".format(order.placement))
print("Notes: {}".format(order.notes))
```

Release notes

0.1 - First working release, pinned to Python 3 only (use pip >9.0 to ensure the pip Python version requirement works properly)
ads_modules
UNKNOWN
adsms
adsms

Send SMS aircraft alerts based on ADS-B data.

Usage

Copy the configuration file, make any necessary changes, and run:

```
./adsms.py <configuration_file>
```

Configuration file

- `textbelt_key`: your Textbelt API key
- `data`: a URL to a readsb/tar1090 `aircraft.json` endpoint
- `tracker`: a URL to a tar1090 tracker (e.g. https://globe.theairtraffic.com/)
- `database`: an SQLite file in which to store subscriptions
- `pid_file`: path to which to write the PID (set to empty string to not write a PID file)
- `max_age`: maximum age of aircraft pings in seconds; pings older than this will be ignored
- `min_disappearance`: the minimum time in seconds for which an aircraft must go "off the radar" before disappearing, for new pings to trigger notifications again
- `delay`: time to wait after processing all rules before running the loop again

Database Schema

adsms uses a SQLite database to store subscriptions and information about tracked aircraft. Currently, the only table is `subscriptions`.

subscriptions

The `subscriptions` table has the following columns:

| Column Name | Data Type | Description |
|-------------|-----------|-------------|
| rowid | INTEGER | Unique identifier for the subscription. |
| phone | TEXT | Phone number to receive notifications for this subscription. |
| icao | TEXT | ICAO address of the aircraft to track. |
| description | TEXT | Description of the aircraft being tracked. |
| last_seen | INTEGER | Timestamp of the last time this aircraft was seen by the system. |

This table stores information about each subscription, including the phone number to send notifications to, the ICAO address of the aircraft to track, a description of the aircraft, and the last time it was seen by the system.

Use of ChatGPT

Portions of both this README and the adsms code have been partially written with ChatGPT.
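Putting the configuration keys above together, a minimal configuration file might look like the following sketch, assuming a YAML-style file; all values are illustrative placeholders, not defaults shipped with adsms:

```yaml
textbelt_key: "textbelt-api-key-here"                 # your Textbelt API key
data: "http://localhost/tar1090/data/aircraft.json"   # readsb/tar1090 aircraft.json endpoint
tracker: "https://globe.theairtraffic.com/"           # tar1090 tracker to link to
database: "adsms.sqlite"                              # SQLite subscription storage
pid_file: ""                                          # empty string: do not write a PID file
max_age: 60               # ignore pings older than 60 seconds
min_disappearance: 3600   # aircraft must be gone an hour before re-alerting
delay: 30                 # seconds to wait between processing loops
```

A file like this would then be passed to `./adsms.py <configuration_file>` as shown above.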
adsmsg
ADSPipelineMsg: interpipeline communication messages.
adsmt
Simple Active Directory console-based management toolkit (CLI, command line interface) designed to perform basic administration of MS AD: creating, modifying, listing, deleting and searching users, groups and computers; printing and managing the domain tree structure; and so on. The point of this toolkit is to let system administrators easily manage Samba4-based AD domains when there is no way to use the GUI MS management console and the samba utilities (samba-tool, ldbsearch and so on) are not comfortable to use.
adsmutils
No description available on PyPI.
adso
A topic modelling library built on top of scipy/numpy and nltk.

install

To install:

```
pip install adso
```

config

adso needs to write some files to disk. By default adso uses the `~/.adso` folder, but this can be changed by setting the environment variable ADSODIR.

documentation

Documentation with examples is hosted on GitHub Pages. Some examples of how to use adso are also available in the tests and examples folders.
adsocket
ADSocket

WebSocket server based on aiohttp.

Install

Using the python package:

```
pip install adsocket
```

How it works

To start a very basic server, this command will do:

```
adsocket
```

assuming that you have a redis server started on localhost listening on port 6379. Now you should be able to connect to the server on ws://localhost:5000.

Basic usage

Adsocket should work out of the box; however, that is probably not what you would expect. To customize adsocket you can create custom channels, authentication or commands.

Example of a settings file:

```python
from adsocket.conf.default_settings import *  # NOQA

CHANNELS = {
    'global': {
        'driver': 'my_package.channels.GlobalChannel',
        'create_on_startup': True,
    },
    'user': {
        'driver': 'my_package.channels.MyUserChannel',
        'create_on_startup': False,
    }
}

AUTHENTICATION_CLASSES = (
    'sraps_socket.auth.SrapsAuth',
)

DISCONNECT_UNAUTHENTICATED = False
```

To apply the changes, export an environment variable with the path to your settings:

```
export ADSOCKET_SETTINGS=my_package.settings
```

Sending messages from your application

See adsocket-transport.

Documentation

@Todo

Goals

Our motivation behind it is as follows:

- High scalability
- High performance
- Easy customization
- Easy extendability

Channels

All communication between server and client is through channels. Any client (i.e. a websocket connection) can be a member of n channels. There is no automatic subscription to channels, so in order to receive messages from the server a client has to subscribe to the channels he or she wants to receive messages from or publish messages to.

Custom channels

@Todo

Commands

@Todo

Custom command

@Todo
adsocket-transport
Adsocket transport

Install

```
pip install adsocket-transport
```

Usage

Transport initialization and sending a message is very simple:

```python
from adsocket_transport import ADSocketTransport

adsocket = ADSocketTransport(
    driver='redis',
    host='redis://localhost:6379',
    db=1,
)

adsocket.send_data(
    data={'obj': 'user', 'obj_id': 4},
    channels={'name': 'global', 'id': 'global'},
)
```

In case of the async transport:

```python
from adsocket_transport import ADSocketAsyncTransport

adsocket = ADSocketAsyncTransport(
    driver='redis',
    host='redis://localhost:6379',
    db=1,
)

await adsocket.send_data(
    data={'obj': 'user', 'obj_id': 4},
    channels={'name': 'global', 'id': 'global'},
)
```

Alternatively you can create the message manually:

```python
from adsocket_transport import Message, ADSocketAsyncTransport

adsocket = ADSocketAsyncTransport(
    driver='redis',
    host='redis://localhost:6379',
    db=1,
)

message = Message(
    type='publish',
    data={'obj': 'user', 'obj_id': 4},
    channel='global',
    channel_id='global',
)

await adsocket.send(message)
```

For more see adsocket-transport.
adsocket-transport-django
Adsocket transport

Install

```
pip install adsocket-transport-django
```

Usage

Using django.db.signals is very easy...

```python
from django.apps import AppConfig
from adsocket_transport_django.apps import ADSocketConfig

class VideosConfig(ADSocketConfig, AppConfig):
    """Basic application config"""
    name = "myapp"
    verbose_name = "My App"
    adsocket_signals = ['myapp.ws_message_creator.VideoMessageCreator']
```

```python
from adsocket_transport_django.creator import ADSocketCreator
from adsocket_transport_django import CREATE, UPDATE, DELETE, Message
from myapp import models

class VideoMessageCreator(ADSocketCreator):
    class Meta:
        model = models.Todo

    def create(self, model):
        return Message(type='publish',
                       channel=f'todos:{model.pk}',
                       data={'obj': model.pk, 'action': 'create'})

    def update(self, model):
        return Message(type='publish',
                       channel=f'todos:{model.pk}',
                       data={'obj': model.pk, 'action': 'update'})

    def delete(self, model):
        return Message(type='publish',
                       channel=f'todos:{model.pk}',
                       data={'obj': model.pk, 'action': 'delete'})
```
adsorb
Lightweight loose coupling library for Python. Add listeners to a named event and call them from anywhere to collect their responses. Event listeners are called in an undefined sequence and all responses are collected before execution resumes with the caller. Future versions may add support to call listeners in parallel threads.

Installation

Run `easy_install adsorb` or `pip install adsorb` to get it from PyPI (not yet implemented). To install from source, run `python setup.py install`. To run the tests, install nose and coverage and run `python setup.py nosetests`.

Usage

See http://jace.github.com/adsorb/

0.1: Initial version.
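adsorb's actual API is not shown above, but the pattern it describes (register listeners under a named event, fire the event from anywhere, collect every response) can be sketched in plain Python. All names here (`on`, `fire`, `listeners`) are illustrative, not adsorb's API:

```python
# Minimal illustration of the listener/collect pattern described above.
# The names used here are hypothetical, not adsorb's actual API.
listeners = {}

def on(event):
    """Decorator registering a function as a listener for a named event."""
    def register(fn):
        listeners.setdefault(event, []).append(fn)
        return fn
    return register

def fire(event, *args):
    """Call every listener for `event` and collect their responses.

    Unlike adsorb, which calls listeners in an undefined sequence,
    this sketch calls them in registration order.
    """
    return [fn(*args) for fn in listeners.get(event, [])]

@on("greet")
def hello(name):
    return f"hello, {name}"

@on("greet")
def hi(name):
    return f"hi, {name}"

responses = fire("greet", "world")
```

Since listener order is undefined in the library, callers should treat the collected responses as an unordered set.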
adsorption-file-parser
Adsorption File Parser

A pure python parser for sorption files from various instrumentation manufacturers. It comes with minimal dependencies and maximum flexibility.

Currently supports files from:

- Micromeritics ('.xls' reports)
- Surface Measurement Systems DVS ('.xlsx' reports)
- 3P instruments ('.xlsx' reports)
- Quantachrome ('.txt' raw isotherm data)
- MicrotracBEL ('.dat', '.xls' and '.csv' files)

Installation

Install using pip:

```
pip install adsorption-file-parser
```

Documentation

The main read function returns two dictionaries: a meta dictionary, which contains various metadata that is present in the report (date, user, units), and the data dictionary, containing lists of individual isotherm data.

```python
from adsorption_file_parser import read

meta, data = read(
    path="path/to/file",
    manufacturer="manufacturer",
    fmt="supported format",
)
```

Bugs or questions?

For any bugs found or feature requests, please open an issue or, even better, submit a pull request.
adspawner
Fetch usergroups from AD.
adspower
adspower

The package for interacting with the API of the anti-detect browser AdsPower.

- Telegram channel
- Bug reports

adspower's source code is made available under the MIT License.

Restrictions

- While using the package, AdsPower must be open.
- The local API is available only in paid AdsPower subscriptions.
- AdsPower has frequency control for all APIs; max. access frequency: 1 request/second.

Quick start

For example, you can use the driver in the following way:

```python
from adspower import AdsPower

ads_power = AdsPower('ja54rwh')

with ads_power as driver:
    driver.get('https://github.com/blnkoff/adspower')
```

Here is an example of creating profiles from proxies:

```python
from adspower import AdsPower, CreateProfileParams, UserProxyConfig

def create_profiles(config: dict[str], proxies) -> list[AdsPower]:
    group_name = config['ads_power']['group_name']
    group_id = AdsPower.query_group(group_name=group_name)[0]['group_id']

    data_set = []
    for proxy in proxies:
        proxy_config = UserProxyConfig(
            proxy_soft='other',
            proxy_type=proxy.type,
            proxy_host=proxy.host,
            proxy_port=proxy.port,
            proxy_user=proxy.login,
            proxy_password=proxy.password,
        )
        data_set.append(CreateProfileParams(group_id=group_id, user_proxy_config=proxy_config))

    profiles = []
    for data in data_set:
        profiles.append(AdsPower.create_profile(data))

    return profiles
```

Here is an example of retrieving profiles from AdsPower:

```python
from adspower import AdsPower

def retrieve_profiles(config: dict[str]):
    group_name = config['ads_power']['group_name']
    group_id = AdsPower.query_group(group_name=group_name)['group_id']
    response = AdsPower.query_profiles(group_id=group_id)
    profiles = [AdsPower(profile['user_id']) for profile in response]
    return profiles
```

Installing adspower

To install the package from GitHub you can use:

```
pip install git+https://github.com/blnkoff/adspower.git
```

To install the package from PyPI you can use:

```
pip install adspower
```
adsputils
No description available on PyPI.
adspy
pyads is a simple way of interacting with a bibtex library built primarily from the ADS (http://adsabs.harvard.edu/abstract_service.html). Your mileage may vary, but I've found it useful for quickly looking up cite info. There are a lot of improvements that could be made. Here's a short script to start showing the functionality. I'll try to get some more docs up here soon.

```python
lib = ADSLibrary('test')

for t in ["2011ApJ...735...96S", "2008ApJ...689.1063S"]:
    lib.add_entry(t)

lib.write_bib()
lib.save()

lib.find('Skillman')
lib.find('Dinosaur')
lib.find('2008ApJ...689.1063S')

b = lib.find('Skillman2011')
b.download()  # Not fully tested
b.open()      # Also not fully tested.

# You would import an existing bib library with:
# lib.import_bib('library.bib')
# lib.save()
```
adspygoogle
Notice: This library is now sunset

As of 1/5/15, this library is sunset and will no longer receive updates for new versions of supported APIs or bug fixes. Once all supported APIs are no longer compatible with this library, it will be removed from GitHub and PyPI. A newer library named googleads is available that supports Python 2.7 and Python 3.3+. You can read more about it here: the release announcement; the googleads github page; migrating from adspygoogle to googleads.

The Google Ads APIs Python Client Libraries

The Google Ads APIs Python Client Libraries support the following products:

- AdWords API Python Client Library v15.17.1
- DoubleClick for Advertisers API Python Client Library v2.4.2
- DFP API Python Client Library v9.12.0

You can find more information about the Google Ads Python Client Libraries here.

Installation

You have two options for installing the Ads Python Client Libraries:

Install with a tool such as pip:

```
$ sudo pip install adspygoogle \
    --allow-external PyXML --allow-unverified PyXML \
    --allow-external ElementTree --allow-unverified ElementTree \
    --allow-external cElementTree --allow-unverified cElementTree
```

Install manually after downloading and extracting the tarball:

```
$ sudo python setup.py install
```

Examples and Configuration Scripts

This package only provides the core components necessary to use the client libraries. If you would like to obtain example code for any of the included client libraries, you can find it on our downloads page.

Known Issues

Due to changes to PyPI's installation process, using 'pip install' to install the library currently requires a number of 'allow-external' and 'allow-unverified' flags for the external dependencies PyXML, ElementTree, and cElementTree. If you're using a version of Python greater than 2.5 and would prefer not to install PyXML or cElementTree from external sources, then follow the steps below to install via the tarball without those dependencies (cElementTree is already installed as a default library in Python > 2.5).
Alternatively, if you would prefer to download these libraries to install yourself, they can be downloaded at these locations: ElementTree, cElementTree, PyXML.

The installation of PyXML and cElementTree will fail on Ubuntu 13.04. If you are trying to install adspygoogle on Ubuntu 13.04, you should avoid installing these dependencies. If you need to use either of these dependencies, there is a work-around that can be found in this bug. Another alternative is to install manually and exclude these dependencies. To do so, first download and extract the tarball below. In the root directory, run the following command:

```
$ sudo python setup.py install --no_PyXML --no_cElementTree
```

Contact Us

Do you have an issue using the Ads Python Client Libraries? Or perhaps some feedback for how we can improve them? Feel free to let us know on our issue tracker.
adspygoogle.adwords
For additional information, please see http://code.google.com/p/google-api-ads-python
adspygoogle.dfp
WARNING: PyPI release is unofficial. For official information, please see http://code.google.com/p/google-api-ads-python
ads-pytest-parser
Pytest Parser to Azure DevOps Test Case

Associate DevOps test cases with pytest nunit XML results.

Usage

The parser should be run on an Azure DevOps pytest-azurepipelines generated XML file, because it searches for the following attributes: "test-case", "methodname", "result".

The Python script requires the following arguments:

- org: Azure DevOps organization name
- project: Azure DevOps project name
- plan: Azure DevOps test plan name
- suite: Azure DevOps test suite name
- auth: Azure DevOps personal access token
- xml: pytest-azurepipelines generated XML file

The test cases should be named in the following format: testcase_<DevOpsID>, i.e. `def testcase_123456():`

For example:

```
ads-pytest-parser org project planID suiteID authToken test.xml
```

The script will then associate the test cases with the test results; for the test cases, the outcome will be set.
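The attributes the parser searches for can be extracted with the standard library. A sketch using an illustrative XML fragment (a real pytest-azurepipelines nunit file has more structure; only the attribute names come from the description above):

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only; the parser looks for "test-case" elements
# and their "methodname" and "result" attributes.
sample = """
<test-run>
  <test-case methodname="testcase_123456" result="Passed"/>
  <test-case methodname="testcase_654321" result="Failed"/>
</test-run>
"""

root = ET.fromstring(sample)
outcomes = {}
for case in root.iter("test-case"):
    method = case.get("methodname")
    # Pull the DevOps test-case ID out of the testcase_<DevOpsID> name.
    devops_id = method.split("_", 1)[1]
    outcomes[devops_id] = case.get("result")
```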
adsquery
# adsquery

This tool lets you query the ADS using a simple CLI.

## Installation

You'll need an API key from NASA ADS labs. Sign up for the newest version of ADS search at https://ui.adsabs.harvard.edu, visit account settings and generate a new API token. The official documentation is available at https://github.com/adsabs/adsabs-dev-api.

You can then install the tool through pip:

```
$ pip install adsquery
```

After that, you should be able to use it under the name `adsquery`:

```
$ adsquery --help
$ adsquery query --help
```

## Querying

To make a query to the ADS, try:

```
$ adsquery query [my-query]
```

The query can either be an ADS compatible query or a bashist one. For example:

```
$ adsquery query --first-author Einstein 1915
$ adsquery query ^Einstein 1915
```

will result in the same results (papers by Einstein in 1915).

## Examples

Imagine we're looking at the original 1915 Einstein paper that explains for the first time the advance of the perihelion of Mercury. Here's an example of how to use adsquery:

```
$ adsquery query --first-author Einstein --year 1915
[0]: 1915, Annalen der Physik — Einstein, A., Antwort auf eine Abhandlung M. v. Laues Ein Satz der Wahrscheinlichkeitsrechnung und seine Anwendung auf die Strahlungstheorie
[1]: 1915, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften (Berlin — Einstein, Albert, Die Feldgleichungen der Gravitation
[2]: 1915, Naturwissenschaften — Einstein, A., Experimenteller Nachweis der Ampèreschen Molekularströme
[3]: 1915, Sitzungsber. preuss.Akad. Wiss — Einstein, A., Erklarung der Perihelionbewegung der Merkur aus der allgemeinen Relativitatstheorie
[4]: 1915, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften (Berlin — Einstein, Albert, Erklärung der Perihelbewegung des Merkur aus der allgemeinen Relativitätstheorie
[5]: 1915, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften (Berlin — Einstein, Albert, Zur allgemeinen Relativitätstheorie
[6]: 1915, Koninklijke Nederlandse Akademie van Wetenschappen Proceedings Series B Physical Sciences — Einstein, A., de Haas, W. J., Experimental proof of the existence of Ampère's molecular currents
[7]: 1915, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften (Berlin — Einstein, Albert, Zur allgemeinen Relativitätstheorie (Nachtrag)
[8]: 1915, Deutsche Physikalische Gesellschaft — Einstein, Albert, de Haas, Wander Johannes, Notiz zu unserer Arbeit "Experimenteller Nachweis der Ampèreschen Molekularströme"
[9]: 1915, Deutsche Physikalische Gesellschaft — Einstein, Albert, Berichtigung zu meiner gemeinsam mit Herrn J. W. de Haas veröffentlichten Arbeit "Experimenteller Nachweis der Ampèreschen Molekularströme"
```

Here we see we got a few papers from the period; let's keep only the ones matching the word `Merkur` (Mercury in German):

```
Comma separated articles to download [e.g. 1-3, 4], [m] for more [q] to quit or add more parameters to request [e.g. year:2016]: Merkur
[0]: 1915, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften (Berlin — Einstein, Albert, Erklärung der Perihelbewegung des Merkur aus der allgemeinen Relativitätstheorie

# Now let's download it
Comma separated articles to download [e.g. 1-3, 4], [m] for more [q] to quit or add more parameters to request [e.g. year:2016]: 0
Download [d], bibtex[b], quit[q]? d
```

The file is now located at `~/ADS/1915SPAW.......831E_Einstein.pdf`. If you wanted the bibtex entry, you should replace the last `d` by `b`.

# Features

- [x] query the ADS
- [x] interactively prompt the user
- [x] show bibtex reference
- [x] download pdf file

## Bugs and suggestions

Feel free to file an issue if you have any problem or suggestion.

## Thanks

Special thanks to andycasey for providing https://github.com/andycasey/ads and the ADS!
adsrxn
Python package for adsorption-reaction processes. This is a simple package to run adsorption-reaction process simulations.
adss
No description available on PyPI.
adstex
No description available on PyPI.
adstxt
Adstxt

Validate ads.txt files against the official ads.txt specs.

- Free software: MIT license
- Documentation: https://adstxt.readthedocs.io

History

0.1.0 (2018-09-15): First release on PyPI.
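For context, a data record in an ads.txt file (per the IAB spec the package validates against) is a comma-separated line: exchange domain, publisher account ID, account relationship (DIRECT or RESELLER), and an optional certification authority ID, with `#` starting a comment. A minimal parsing sketch; this is not adstxt's API, just an illustration of the record format:

```python
# Sketch of parsing one ads.txt data record. Not adstxt's implementation;
# it only illustrates the field layout defined by the ads.txt spec.
def parse_record(line):
    # Strip a trailing comment and surrounding whitespace.
    line = line.split("#", 1)[0].strip()
    fields = [f.strip() for f in line.split(",")]
    record = {
        "domain": fields[0],
        "publisher_id": fields[1],
        "relationship": fields[2].upper(),  # DIRECT or RESELLER
    }
    if len(fields) > 3:
        record["certification_authority_id"] = fields[3]
    return record

# Hypothetical example record, in the format the spec describes.
rec = parse_record("greenadexchange.com, XF7342, DIRECT, 5jyxf8k54  # banner")
```

A real validator would additionally check the domain syntax and that the relationship field is one of the two allowed values.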
adsutils
[![Build Status](https://travis-ci.org/adsabs/adsutils.svg?branch=master)](https://travis-ci.org/adsabs/adsutils)
[![Coverage Status](https://coveralls.io/repos/github/adsabs/adsutils/badge.svg?branch=master)](https://coveralls.io/github/adsabs/adsutils?branch=master)

ADSutils
========

This is a module with various ADS specific utilities.

## Installing

If you just want to work with these utilities, you can install `adsutils` with pip. It is still advisable to use it in a virtual environment. In your virtual environment just do

```
pip install adsutils
```

and you should be all set to go. This has been tested under MacOS X, CentOS and Ubuntu with Python 2.7.

In case you want to work with the code: clone the repo to a local directory

```
git clone https://github.com/adsabs/adsutils adsutils
```

Go into the newly created directory and create a virtual environment

```
virtualenv --no-site-packages -ppython2.7 venv
```

and start it

```
source venv/bin/activate
```

Update `pip` like

```
pip install -U pip
```

and then install the required software

```
pip install -r requirements.txt
```

Test if things are working:

```
python adsutils/test/nosetests.py
```

## Utility to create bibcodes

Import the relevant module:

```
from adsutils import make_bibcode
```

and provide the necessary metadata:

```
data = {"year": "2006",
        "bibstem": 'PhRvL',
        "volume": "96",
        "page": "295701",
        "author": 'Gr&uuml;nwald, Michael',
        }
```

and then call

```
bibcode = make_bibcode(data)
```

and a bibcode will get generated. You will have to determine the correct journal abbreviation (bibstem). The journal abbreviations are available here: http://adsabs.harvard.edu/abs_doc/journals2.html

## Utility to resolve reference strings

Import the relevant module:

```
from adsutils import resolve_references
```

You can provide reference data in various formats:

* A single reference string
* A newline-separated set of reference strings
* A (Python) list of reference strings

Examples:

A case with just one reference string:

```
refdata = 'Hermsen, W., et. al. 1992, IAU Circ. No. 5541'
result = resolve_references(refdata)
```

in which case the result (always a list of dictionaries) will look like

```
[{'refstring': u'Hermsen, W., et. al. 1992, IAU Circ. No. 5541',
  'confidence': 'Success',
  'bibcode': u'1992IAUC.5541....1H'}]
```

Multiple reference strings work as follows:

```
refdata = ['J. B. Gupta, and J. H. Hamilton, Phys. Rev. C 16, 427 (1977)',
           'Pollock, J. T. 1982, Ph. D. Thesis, University of Florida']
result = resolve_references(refdata)
```

in which case the result (always a list of dictionaries) will look like

```
[{'refstring': u'J. B. Gupta, and J. H. Hamilton, Phys. Rev. C 16, 427 (1977)',
  'confidence': 'Success',
  'bibcode': u'1977PhRvC..16..427G'},
 {'refstring': u'Pollock, J. T. 1982, Ph. D. Thesis, University of Florida',
  'confidence': 'Success',
  'bibcode': u'1982PhDT.........1P'}]
```

# Possible outcome

The resolver can return three classes of 'confidence' levels:

* Success
* Failed
* Not verified

The only class that needs some explanation is the last one; it is quite possible that the metadata contains enough information to guess a bibcode. The year could be off by 1 (which can also apply to the page or volume number) or a journal was abbreviated in a non-standard way. It is also possible that all the metadata is correct, but the record is not in the ADS database. Even though a bibcode is returned, you cannot assume it is correct. These <em>Not verified</em> cases need further inspection.

## Utility to get ADS journal abbreviation from publication name

An essential part of the ADS publication identifier (<em>bibcode</em>) is the publication abbreviation (<em>bibstem</em>). This utility takes a string representing the publication name and attempts to match it to an ADS abbreviation. It returns a list of candidates and associated scores.

Import the relevant module:

```
from adsutils import get_pub_abbreviation
```

The bibstem candidates are then found as follows:

```
pubstring = 'American Astronautical Society Meeting'
result = get_pub_abbreviation(pubstring)
```

which returns a list of tuples with candidates and their associated scores (sorted by score, descending):

```
[(1.0, 'aans.meet'), (0.98545706272125244, 'AAS......'), (0.95637118816375732, 'aans.symp'), (0.93698060512542725, 'AAS......'), (0.91897505521774292, 'acs..meet')]
```

You can specify that you are only interested in exact matches in the following way:

```
pubstring = 'Astrophysical Journal'
result = get_pub_abbreviation(pubstring, exact=True)
```

which would result in

```
[(1, 'ApJ......')]
```

while

```
pubstring = 'Astrophysical Journ'
result = get_pub_abbreviation(pubstring, exact=True)
```

would result in

```
[]
```
adsystem
No description available on PyPI.
adsystemaiogram
No description available on PyPI.
adt
No description available on PyPI.
adt-cache
# Abstract Cache

memory cache | redis cache

Stores rcp_no values on a per-day basis to distinguish rcp_no values that have already been processed. Collection compare for distributed environments.
adt-clients
No description available on PyPI.
adtcls
No description available on PyPI.
adt-decorators
Overview

This package provides a class decorator for defining Algebraic Data Types (ADTs) as known from Haskell (data), OCaml (type), and Rust (enum).

Features

Simplicity. This package exports only a single definition: the adt class decorator:

```python
from adt import adt
```

Concision. Constructors are specified via class annotations, allowing for syntax comparable to Rust's enums:

```python
@adt
class Event:
    MouseClick: [int, int]
    KeyPress: {'key': str, 'modifiers': list[str]}
```

Pattern Matching via match is fully supported (Python >= 3.10):

```python
event = Event.KeyPress(key='a', modifiers=['shift'])

match event:
    case Event.MouseClick(x, y):
        print(f"Clicked at ({x}, {y}).")
    case Event.KeyPress(key, mods):
        print(f"Pressed key {key}.")
```

Named and unnamed constructor fields are supported:

- Constructors with named fields, like KeyPress, are specified as a dict[str, type];
- Constructors with unnamed fields, like MouseClick, are specified as a list[type];
- Constructors with a single unnamed field can also be specified as a type;
- Constructors with no fields are specified as the empty list.

Getters, Setters, and Instance-Checking methods are derived as an alternative to pattern matching, e.g.

```python
if event.is_mouse_click():
    print(f"Clicked at ({event._1}, {event._2}).")
elif event.is_key_press():
    print(f"Pressed key {event.key}.")
```

Constructors are customizable dataclasses. The dataclass decorator derives many useful method implementations, e.g. structural equality and string-conversion. Additional keyword arguments to adt are forwarded as keyword arguments to the dataclass annotations of all constructors:

```python
@adt(frozen=True)  # <-- Use @dataclass(frozen=True) for all constructors.
class Event:
    MouseClick: [int, int]
    KeyPress: {'key': str, 'modifiers': list[str]}

event = Event.MouseClick(5, 10)
event._0 = 42  # Error! Constructor dataclass is frozen.
```

Constructors inherit from the decorated type. Making the constructors inherit from the decorated class allows defining methods with pattern matching directly in the decorated class and calling them on objects of the constructor classes:

```python
@adt
class Event:
    MouseClick: [int, int]
    KeyPress: {'key': str, 'modifiers': list[str]}

    def print(self):
        match self:
            case Event.MouseClick(x, y):
                print(f"Clicked at ({x}, {y}).")
            case Event.KeyPress(key, mods):
                print(f"Pressed key {key}.")

Event.MouseClick(5, 10).print()
```

Constructors can be exported into the global namespace:

```python
@adt(export=True)  # <-- Makes `Event.` prefixes optional for constructors.
class Event:
    MouseClick: [int, int]
    KeyPress: {'key': str, 'modifiers': list[str]}

    def print(self):
        match self:
            case MouseClick(x, y):
                ...  # <-- As promised: no `Event.MouseClick`!
            case KeyPress(key, mods):
                ...  # <-- As promised: no `Event.KeyPress`!
```

Reflection. The decorated class has a static field constructors: dict[str, type] which maps the constructor names to their classes, e.g.

```python
key_event = Event.constructors['KeyPress'](key='a', modifiers=['shift'])
```

Translation

The code generated in the above example by the adt decorator for the Event ADT behaves equivalently to the following code, with the exception that the constructor classes are constructed anonymously, so the global namespace is not even temporarily polluted unless @adt(export=True) is used.

```python
from dataclasses import dataclass

class Event:
    def __init__(self, *args, **kwargs):
        raise TypeError("Tried to construct an ADT instead of one of it's constructors.")

    def is_mouse_click(self) -> bool:
        return isinstance(self, Event.MouseClick)

    def is_key_press(self) -> bool:
        return isinstance(self, Event.KeyPress)

@dataclass
class MouseClick(Event):
    _1: int
    _2: int

@dataclass
class KeyPress(Event):
    key: str
    modifiers: list[str]

Event.MouseClick = MouseClick
Event.KeyPress = KeyPress

if not export:
    del MouseClick
    del KeyPress

Event.constructors = {
    'MouseClick': Event.MouseClick,
    'KeyPress': Event.KeyPress,
}
```

Related packages

The following compares this package to packages
which aim to provide similar functionality:

- algebraic-data-types also describes ADTs via class decorators and annotations, but does not support pattern matching via match, as it is aimed at older python versions. Also, the package does not support named constructor parameters.
- algebraic-data-type and UxADT do not support a concise definition via decorators and do not support pattern matching via match.
- choicetypes implements similar functionality, but instead of having subclasses for the constructors, the __init__-method of the main ADT class takes a named argument for each constructor variant, which is more verbose and error-prone, and does not have a straightforward way to support named constructor arguments.
- match-variant supports pattern matching via match and realizes ADTs by inheriting from a base class called Variant that seems to process the annotations. It does not seem to support named constructor parameters or methods that check if the ADT is a certain constructor.
- py-foldadt comes without any documentation and has unclear functionality. It defines various algebraic structures like semirings with unclear connection to ADTs.
adt-extension
adt-extension

Python abstract data structure (ADT) extension.

Install:

```
pip install adt-extension
```

Import:

```python
from adt_extension import Set, SwitchDict
```

Extensions

Currently the package has the following ADT extensions:

| Class | Extension of | Description |
| ----- | ------------ | ----------- |
| Set | set | Set with element type and validation rule. |
| SwitchDict | dict | Dictionary with the possibility of behaving like a switch case. |

Set

Set with element type and validation rule for the elements that will be inserted into the set. Example:

```python
from adt_extension import Set

# A set with only even integers
set_int_even = Set(element_type=int, rule=lambda x: (x % 2 == 0))

# Elements that satisfy the element type and validation rule
set_int_even.update({4, 6})

# Elements that satisfy the element type, but not the validation rule
set_int_even.update({5})

# Elements that do not satisfy the element type
set_int_even.update({"qwe", True})

print(set_int_even)
# Output:
# Set({4, 6})

# Remove element type
set_int_even.element_type = None

# Remove validation rule
set_int_even.rule = None
```

SwitchDict

Dictionary with the possibility to perform a function or return a value when trying to access a nonexistent key in the dictionary. Example:

```python
from adt_extension import SwitchDict

# Same behavior as a normal dictionary
switch_dict = SwitchDict({
    'Apartament': 125,
    'House': 250,
    'Condominium': 300,
})

# Add default case
switch_dict.default_case = 999

# List example
properties_list = [
    'Apartament',
    'House',
    'Condominium',
    'Treehouse',
    'Hotel',
]

# Get values
properties_values = [switch_dict[x] for x in properties_list]

print(properties_values)
# Output:
# [ 125, 250, 300, 999, 999 ]

# Remove default case; becomes a normal dictionary
switch_dict.default_case = None
```
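The default-case behavior described for SwitchDict resembles what a `dict` subclass can achieve with `__missing__`, which plain `dict.__getitem__` calls for absent keys. A sketch of that idea (this is an illustration, not the package's actual implementation):

```python
# Sketch of the switch-with-default idea using dict.__missing__.
# Illustrative only; adt_extension's SwitchDict may be built differently.
class DefaultCaseDict(dict):
    def __init__(self, *args, default_case=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.default_case = default_case

    def __missing__(self, key):
        # Called by dict.__getitem__ when the key is absent.
        if self.default_case is None:
            raise KeyError(key)
        return self.default_case

prices = DefaultCaseDict({"House": 250, "Apartament": 125}, default_case=999)
values = [prices[k] for k in ("House", "Treehouse")]
```

Setting `default_case` back to `None` restores ordinary `KeyError` behavior, mirroring the "becomes a normal dictionary" step above.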
adtk
Anomaly Detection Toolkit (ADTK)

Anomaly Detection Toolkit (ADTK) is a Python package for unsupervised / rule-based time series anomaly detection.

As the nature of anomaly varies over different cases, a model may not work universally for all anomaly detection problems. Choosing and combining detection algorithms (detectors), feature engineering methods (transformers), and ensemble methods (aggregators) properly is the key to building an effective anomaly detection model.

This package offers a set of common detectors, transformers and aggregators with unified APIs, as well as pipe classes that connect them together into models. It also provides some functions to process and visualize time series and anomaly events.

See https://adtk.readthedocs.io for complete documentation.

Installation

Prerequisites: Python 3.5 or later.

It is recommended to install the most recent stable release of ADTK from PyPI:

```
pip install adtk
```

Alternatively, you could install from source code. This will give you the latest, but unstable, version of ADTK:

```
git clone https://github.com/arundo/adtk.git
cd adtk/
git checkout develop
pip install ./
```

Examples

Please see Quick Start for a simple example. For more detailed examples of each module of ADTK, please refer to the Examples section in the documentation or an interactive demo notebook.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update unit tests as appropriate. Please see Contributing for more details.

License

ADTK is licensed under the Mozilla Public License 2.0 (MPL 2.0). See the LICENSE file for details.
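The detector-plus-aggregator idea described above can be illustrated with plain Python booleans. This is a conceptual sketch only, not ADTK's API (ADTK itself works on pandas time series with unified detector/aggregator classes); the detector and aggregator functions here are hypothetical stand-ins:

```python
# Conceptual sketch of combining two detectors with an OR aggregator.
# These functions are illustrative, not part of adtk.
def threshold_detector(series, high):
    """Flag points that exceed a fixed threshold."""
    return [x > high for x in series]

def spike_detector(series, jump):
    """Flag points that jump sharply from their predecessor."""
    return [False] + [abs(b - a) > jump for a, b in zip(series, series[1:])]

def or_aggregator(*flag_lists):
    """A point is anomalous if any detector flags it."""
    return [any(flags) for flags in zip(*flag_lists)]

series = [1, 1, 2, 9, 2, 1]
combined = or_aggregator(
    threshold_detector(series, high=5),
    spike_detector(series, jump=4),
)
```

An AND aggregator (flag only points every detector agrees on) would trade recall for precision, which is the kind of ensemble choice the paragraph above refers to.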
adtools
adtools-py: tools for working with Active Directory from Python.
adtpulsepy
adtpulsepy is an open-source unofficial API for the ADT Pulse alarm system
adt.py
No description available on PyPI.
adtrees
adtrees implements some methods for qualitative and quantitative evaluation of security using attack trees and attack-defense trees. The package is intended to be used together with the ADTool, but this is not obligatory.

Prerequisites

Optimization problems on attack-defense trees are solved using lp_solve. For the installation of lp_solve, see "Using lpsolve from Python". General information on lp_solve can be found on its website. No special prerequisites for the remaining functionalities of adtrees.

Installation

```
pip install adtrees
```

Example

```python
import adtrees as adt

# initialize attack(-defense) tree from an output file 'tree.xml' produced by the ADTool
T = adt.ADTree('tree.xml')

# create a basic assignment of cost for the basic actions of the defender in T
ba = adt.BasicAssignment()
for b in T.basic_actions('d'):
    ba[b] = 10

# create an instance of the 'maximal coverage' optimization problem
problem = adt.ADTilp(T, costassignment=ba, budget=100, problem='coverage')

# solve the problem
problem.solve()
# the optimal set of countermeasures and some additional information is displayed
```

For other functionalities and more details, refer to the walk-through examples in the examples folder.
adtree-viz
adtree-vizIntroAn Attack-Defense Tree modelling lib that allows user to model attack-defense scenarios using an internal DSL.Project inspired byhttps://github.com/hyakuhei/attackTreesandhttps://github.com/tahti/ADTool2.The main goals are:add support for AND nodesbe able to break down a large tree into multiple subtrees.keep it simple, only Attack and Defense nodesUsageRequirements:GraphvizPython 3.9Install the librarypipinstalladtree-vizQuick startfromadtree.modelsimportAttack,Defence,AndGate,ADTreefromadtree.rendererimportRendererfromadtree.themesimportRedBlueFillThemetree=ADTree("REFS.01",Attack("the goal",[Attack("path1",[Defence("defend path1",[Attack("path1 defence defeated")])]),Attack("path2",[Attack("path2.1"),AndGate([Attack("path3.1"),Attack("path3.2"),]),]),]))theme=RedBlueFillTheme()renderer=Renderer(theme=theme,output_format="png",view=True)renderer.render(tree=tree,filename="my-adtree")The above should produce an attack-defence tree like this:Composing treesTrees can be composed of multiple subtrees. 
Which of the subtrees get expanded is decided at render time based on the subtrees_to_expand variable.

from adtree.models import Attack, ADTree, ExternalADTree
from adtree.renderer import Renderer
from adtree.themes import NoFormatTheme

some_external_ref = ExternalADTree("EXT.01", "External resource covered by other docs")
some_internal_ref1 = ADTree("INT.01", root_node=Attack("internal path1", [
    Attack("path 1.1", [
        ADTree("INT.01.A", Attack("nested path 1.1A"))
    ])
]))
some_internal_ref2 = ADTree("INT.02", root_node=Attack("internal path2", [
    Attack("path 2.1")
]))
tree = ADTree("REFS.01", Attack("node1", [
    some_external_ref,
    some_internal_ref1,
    some_internal_ref2,
]))

theme = NoFormatTheme()
renderer = Renderer(theme=theme, output_format="png", view=False)

# Default is to not expand
renderer.render(tree=tree, filename="default")

# Optionally expand some nodes
renderer.render(tree=tree, subtrees_to_expand=[some_internal_ref1], filename="partially_expanded")

The above will render two files: one with all the subtrees collapsed (the default), and another with one subtree expanded.

Analysing trees

Currently, there is only one analyser available, the IsDefendedAnalyser.
It traverses the tree and marks each node as either defended or undefended. A node is considered defended if:
- it is a Defence node and has no children
- it is an Attack node and has a direct defended Defence node as a child
- it is an Attack or Defence node and all child nodes are defended
- it is an AndGate and at least one child node is defended

Example with custom rendering of the defended nodes:

from adtree.models import NodeType, Node, Attack, ADTree, Defence, AndGate
from adtree.analysers import IsDefendedAnalyser
from adtree.renderer import Renderer
from adtree.themes import NoFormatTheme

class CustomIsDefendedTheme(NoFormatTheme):
    def get_node_attrs_for(self, node: Node):
        metadata_attrs = {"style": "filled"}
        if node.get_node_type() == NodeType.DEFENCE:
            metadata_attrs |= {"shape": "box"}
        if node.get_node_type() == NodeType.AND_GATE:
            metadata_attrs |= {"shape": "triangle"}
        if node.has_metadata(IsDefendedAnalyser.METADATA_KEY):
            fillcolor = "#C8FFCB" if node.get_metadata(IsDefendedAnalyser.METADATA_KEY) else "#FFD3D6"
            metadata_attrs |= {"fillcolor": fillcolor}
        return metadata_attrs

tree = ADTree("REFS.01", Attack("the goal", [
    Attack("path1", [
        Defence("defend path1", [
            Attack("path1 defence defeated")
        ])
    ]),
    Attack("path2", [
        Attack("path2.1", [
            Defence("def2.1"),
            Attack("path2.1.1")
        ]),
        AndGate([
            Attack("path3.1"),
            Attack("path3.2", [
                Defence("defended")
            ]),
        ]),
    ]),
]))

analyser = IsDefendedAnalyser()
analyser.analyse_tree(tree)

theme = CustomIsDefendedTheme()
renderer = Renderer(theme=theme, output_format="png", view=False)
renderer.render(tree=tree, filename="default")

The above should produce an attack-defence tree image with defended nodes highlighted.

Development

Create a venv:
python3.9 -m venv venv

Activate it:
. venv/bin/activate

Install deps:
pip install -r requirements.txt

Run tests:
PYTHONPATH=src python -m pytest

Run an individual test file:
PYTHONPATH=src python -m pytest ./test/adtree/test_theme.py

Run individual test methods:
PYTHONPATH=src python -m pytest --capture=no ./test/adtree/test_theme.py -k "metadata"

Release to GitHub and PyPI

Create a tag and push:
./release.sh

Manually build and release

Run the below to generate a distributable archive:

python3 -m build

The adtree-viz-x.xx.x.tar.gz archive can be found in the dist folder.

Deploy to PyPI:

python3 -m twine upload -r pypi dist/*
# Use __token__ as username
# Use the PyPI API token as password
adt-sdk
No description available on PyPI.
adttools
adttools

System Requirements
- Python 3.6 or above
- pandas

Table of contents
- ModelHelper: list_models, find_model_components_list, picker, get_component_dict
- RelationshipHelper: list_relationships, add_relationship, delete_relationship, find_relationships_with_target, find_and_delete_relationships
- PropertyHelper: get_twin_detail, prepare_property, prepare_component, prepare_relationship, submit, update_property, add_property, remove_property
- TwinHelper: add_twin, delete_twin
- QueryHelper: query_twins, query_relationships, run_query
- DeployHelper: csv_deploy, clear

adttools.helper

All helpers require 3 parameters to initialize. Either token_path or token should be given.

host_name: str
  Host name of the Azure Digital Twins instance. You can get it from the Azure portal.
token_path: str
  The path of a text file storing the bearer token obtained with this Azure CLI command:
  az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0
token: str
  A string containing the token itself, obtained with the same Azure CLI command.

adttools.helper.ModelHelper

class adttools.helper.ModelHelper(host_name, token_path=None, token=None)

This class can help you deal with the searching requirements of models.

list_models(model=None)
  List all models if model is not specified. If model is specified, list all related models, including extending models and components.
  Parameters: model: str -- model ID.
  Return: dict -- a dictionary of each model.

find_model_components_list(model)
  Get a list of the names of the components of model. This method populates a private variable which has a getter method, get_component_dict().
  Parameters: model: str -- model ID.
  Return: None

get_component_dict()
  Get the currently found components of models.
  Return: dict of list of str -- the key is a model, the value is a list of components of this model.

picker(model_folder, model_list, output_folder='picked')
  Instead of uploading all models in a folder, you can use this method to pick only the necessary models. This method will copy the models of model_list, including the models they depend on, from model_folder to the folder output_folder. You can use Azure Digital Twins Explorer or the Azure CLI command to upload the folder of picked models.
  Parameters:
    model_folder: str -- folder path storing lots of models; this method will pick the models from here.
    model_list: list of str -- list of model IDs.
    output_folder: str -- the picked models will be copied here.
  Return: None

adttools.helper.RelationshipHelper

class adttools.helper.RelationshipHelper(host_name, token_path=None, token=None)

This class can help you deal with the CRUD requirements of relationships between digital twins.

list_relationships(source, rname=None)
  List all relationships whose source is source. If rname is specified, the response will only contain the relationships with this name.
  Parameters:
    source: str -- digital twin ID.
    rname: str (default: None) -- name of the relationship; if not specified, all relationships are listed.
  Return: Response (from the requests library). To get the content (a JSON format string) of the response, use .text. To get the status code of this HTTP request, use .status_code.

add_relationship(source, target, rname, init_property={})
  Add a relationship from source to target with name rname.
The properties of the relationship can be set with init_property.
  Parameters:
    source: str -- source digital twin ID.
    target: str -- target digital twin ID.
    rname: str -- name of the relationship.
    init_property: dict -- initial values given to the properties; should look like {"p_1": 123, "p_2": {"sub_p_1": "some value"}}.
  Return: Response (from the requests library). To get the status code of this HTTP request, use .status_code.

delete_relationship(source, rid)
  Delete the relationship with ID rid from source.
  Parameters:
    source: str -- source digital twin ID.
    rid: str -- ID of the relationship.
  Return: Response (from the requests library). To get the status code of this HTTP request, use .status_code.

find_relationships_with_target(source, target, rname=None)
  List all relationships whose source is source and whose target is target. If rname is specified, the response will only contain the relationships with this name.
  Parameters:
    source: str -- source digital twin ID.
    target: str -- target digital twin ID.
    rname: str (default: None) -- name of the relationship; if not specified, all relationships are listed.
  Return: list of dict. Each dict contains 4 keys: relationshipId, relationshipName, sourceId, targetId.

find_and_delete_relationships(source, rname=None, target=None)
  Delete all relationships whose source is source. If rname is specified, only the relationships with this name are deleted. If target is specified, only the relationships whose target is this ID are deleted.
  Parameters:
    source: str -- source digital twin ID.
    target: str (default: None) -- target digital twin ID.
    rname: str (default: None) -- name of the relationship; if not specified, all matched relationships are deleted.
  Return: None

adttools.helper.PropertyHelper

class adttools.helper.PropertyHelper(host_name, token_path=None, token=None)

This class can help you deal with the CRUD requirements of properties of digital twins, including the properties of a component. Except for the method get_twin_detail, the other methods follow a builder pattern in order to update multiple properties of a twin in one API call, e.g.:

from adttools.helper import PropertyHelper

ph = PropertyHelper(token_path="...", host_name="...")
ph.prepare_property(dtid="Room1") \
  .update_property(key="temperature", value=60) \
  .update_property(key="humidity", value=55) \
  .add_property(key="name", value="sensor") \
  .remove_property(key="remove") \
  .submit()

get_twin_detail(dtid)
  Get the details of a digital twin (twin ID: dtid), including its properties.
  Parameters: dtid: str -- digital twin ID.
  Return: Response (from the requests library). To get the content (a JSON format string) of the response, use .text. To get the status code of this HTTP request, use .status_code.

prepare_property(dtid)
  Start a process for updating properties. You can use the methods update_property, add_property, remove_property after calling this method.
  Parameters: dtid: str -- digital twin ID.
  Return: self

prepare_component(dtid, component_path)
  Start a process for updating a component. You can use the methods update_property, add_property, remove_property after calling this method.
  Parameters: dtid: str -- digital twin ID.
  Return: self

prepare_relationship(source, rid)
  Start a process for updating the properties of a relationship.
You can use the methods update_property, add_property, remove_property after calling this method.
  Parameters:
    source: str -- source digital twin ID.
    rid: str -- ID of the relationship.
  Return: self

submit()
  Submit the process.
  Return: None

update_property(key, value)
  Add an "update" step to the current updating process.
  Parameters:
    key: str -- key of the property.
    value: str, int or float -- value of the property.
  Return: self

add_property(key, value)
  Add an "add" step to the current updating process.
  Parameters:
    key: str -- key of the property.
    value: str, int or float -- value of the property.
  Return: self

remove_property(key)
  Add a "remove" step to the current updating process.
  Parameters: key: str -- key of the property.
  Return: self

adttools.helper.TwinHelper

class adttools.helper.TwinHelper(host_name, token_path=None, token=None)

This class can help you deal with the basic requirements of digital twins.

add_twin(dtid, model, init_property={}, init_component={})
  Add a digital twin with the specified model ID; the initial values of properties and components can be set using dictionaries.
  Parameters:
    dtid: str -- digital twin ID.
    model: str -- dtmi (digital twins model ID).
    init_property: dict -- initial values given to the properties; should look like {"p_1": 123, "p_2": {"sub_p_1": "some value"}}.
    init_component: dict -- initial values given to the components; should look like {"c_1": {"c_1_property": "some value"}}.
  Return: Response (from the requests library). To get the status code of this HTTP request, use .status_code.

delete_twin(dtid)
  Delete a digital twin by digital twin ID.
  Parameters: dtid: str -- digital twin ID.
  Return: Response (from the requests library). To get the status code of this HTTP request, use .status_code.

adttools.helper.QueryHelper

class adttools.helper.QueryHelper(host_name, token_path=None, token=None)

This class can help you deal with the requirements of querying digital twins and relationships.

query_twins(dtid=None, condition=None)
  Query twins.
  Parameters:
    dtid: str (default: None) -- source digital twin ID.
    condition: str (default: None) -- any other condition can be placed here.
  Return: list of dict -- the matched twins.

query_relationships(source=None, target=None, rname=None)
  Query relationships.
  Parameters:
    source: str (default: None) -- source digital twin ID.
    target: str (default: None) -- target digital twin ID.
    rname: str (default: None) -- relationship name.
  Return: list of dict -- the matched relationships.

run_query(query)
  Run a query string.
  Parameters: query: str -- query string.
  Return: list of dict -- the matched objects.

adttools.helper.DeployHelper

class adttools.helper.DeployHelper(host_name, token_path=None, token=None)

This class can help you deal with the requirements of batch deployment of digital twins.

csv_deploy(path, atomic=True)
  Deploy digital twins with a CSV file. The columns of this CSV file should be modelid, dtid, init_property, init_component, rname, rtarget, init_rproperty. init_property, init_component and init_rproperty are optional columns.
  - modelid: model ID
  - dtid: twin ID
  - init_property: (JSON format) can be empty; the initial values of the properties.
  - init_component: (JSON format) can be empty; the initial values of the components.
  - rname: relationship name; if rname is specified, rtarget is required. If multiple relationships are required, just add a new line without modelid, using an existing dtid.
  - rtarget: target twin ID if a relationship (rname) is specified.
  - init_rproperty: initial values of the properties of the relationship if a relationship (rname) is specified.
  Parameters:
    path: str -- CSV file path.
    atomic: bool -- if set to True, any step failing during this deployment will start a deletion process to delete the twins and relationships just created. If set to False, any step failing during this deployment will be stored in a CSV file (file name: <file name>_failed.csv) containing the failed twins and relationships; you can fix it and re-deploy.
  Return: None

clear()
  Clean all twins and relationships.
  Return: None
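As a sketch of the csv_deploy input format described above, a deployment file can be generated with Python's standard csv module. The column names come from this README; the helper function build_deploy_csv, the model ID dtmi:example:Room;1 and the twin/relationship IDs are hypothetical illustrations, not part of adttools.

```python
import csv
import io

# Columns documented for csv_deploy; the init_* columns are optional.
FIELDNAMES = ["modelid", "dtid", "init_property", "init_component",
              "rname", "rtarget", "init_rproperty"]

def build_deploy_csv(rows, path=None):
    """Serialize rows (dicts keyed by FIELDNAMES; missing keys become empty cells)
    into the CSV layout expected by DeployHelper.csv_deploy."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDNAMES)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    content = buf.getvalue()
    if path is not None:
        with open(path, "w", newline="") as f:
            f.write(content)
    return content

rows = [
    # A twin plus one relationship declared on the same line
    {"modelid": "dtmi:example:Room;1", "dtid": "Room1",
     "init_property": '{"temperature": 21}', "rname": "contains", "rtarget": "Sensor1"},
    # Extra relationship for an existing twin: modelid left empty, dtid reused
    {"dtid": "Room1", "rname": "contains", "rtarget": "Sensor2"},
]
print(build_deploy_csv(rows).splitlines()[0])  # prints the header row
```

The resulting file path would then be passed to DeployHelper's csv_deploy(path).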
adtypingdecorators
AdTypingDecorators

Python decorators allowing to check and/or enforce types in functions' arguments based on typing hints.

Installation

pip install adtypingdecorators

Usage

import numpy as np
import pandas as pd

from adtypingdecorators import typing_raise, typing_convert, typing_warn, typing_custom

def to_array(a: int):
    return np.array([a, 2 * a])

def to_array_2(a: int):
    return np.array([a, 3 * a])

@typing_raise
def f_raise(a: int):
    return a + 1

@typing_convert
def f_convert(a: int):
    return a + 1

@typing_warn
def f_warn(a: int):
    return a + 1

@typing_custom(convertors={int: to_array, "b": to_array_2}, exclude=["c", pd.DataFrame])
def f_custom(a: np.ndarray, b: np.ndarray, c: np.ndarray, d: np.ndarray):
    return a + 1, b + 1, c + 1, d + 1

f_raise(1)  # Returns 2, as expected
f_raise(1.5)  # Raises TypeError

f_convert(1.5)  # Returns 2 (converted 1.5 into 1)
f_convert("foo")  # Raises ValueError (while trying to convert 'foo' to integer)

f_warn(1.5)  # Returns 2.5, and warns

a_, b_, c_, d_ = f_custom(1, 2, 3, pd.DataFrame([4]))
# a_ is np.array([2, 3])
# b_ is np.array([3, 7])
# c_ is 4
# d_ is pd.DataFrame([5])
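As an illustration of the idea behind these decorators (this is not the package's actual implementation -- the names typing_raise_sketch and f are hypothetical), a minimal typing_raise-style check can be built from typing.get_type_hints and inspect.signature:

```python
import functools
import inspect
from typing import get_type_hints

def typing_raise_sketch(func):
    """Sketch of a @typing_raise-style decorator: raise TypeError when an
    argument does not match its annotation (plain classes only; typing
    constructs like Optional[int] would need extra machinery)."""
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(
                    f"{name} must be {expected.__name__}, got {type(value).__name__}"
                )
        return func(*args, **kwargs)
    return wrapper

@typing_raise_sketch
def f(a: int):
    return a + 1

print(f(1))    # 2
# f(1.5) would raise TypeError
```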
adu
Description not available (the PyPI page returned HTTP 404).
adua
Adua

Adua is a Python package that provides various functionalities, including web search, face recognition, audio processing, and more. It is designed to assist users with tasks and interactions using speech and text.

Features
- Web search: Perform searches on popular search engines.
- Face recognition: Capture faces from video and perform facial recognition.
- Audio processing: Convert text to speech and speech to text.
- Wolfram Alpha integration: Get answers to queries using Wolfram Alpha.
- Image generation: Generate images based on queries.
- OpenAI GPT integration: Get answers to questions using OpenAI GPT.

Installation

You can install Adua using pip:

pip install adua

Usage

Here's a quick example of how to use Adua in your Python code:

from adua import Adua

# Initialize Adua
adua = Adua()

# Perform a web search
adua.web_search('how to create Python package')

# Capture faces from video
faces, face_num, face_locations, face_encodings, frame = adua.Capture_face_vid()

# Convert text to speech
adua.speak('Hello, how are you?')

# Convert speech to text
query = adua.listen()

License

This project is licensed under the MIT License.
aduana
No description available on PyPI.
aduck
Description not available (the PyPI page returned HTTP 404).
ad-udacitydistributions
No description available on PyPI.
adul
Adul

Adul: a Python package for Super AI Engineer. https://pypi.org/project/adul/

Install

pip3 install -r requirements.txt
pip3 install adul

Usage

from adul import super_ai

cd demo

Homework 1
- Count how many alphabetic characters there are in the name list:
  python3 count_alphabet.py

Homework 2
- Write a program that finds the maximum value in a list:
  python3 maximum_list.py
- Write an 8-puzzle solver:
  python3 eight_puzzle.py
- Write depth-first search with an animation:
  python3 depth_first_search.py
- Pick one dataset from Kaggle and visualize the data with various kinds of graphs:
  python3 visualize_kaggle.py
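As an illustration of what the first homework script might do (the function count_alphabet and the example name list are hypothetical; the actual count_alphabet.py lives in the demo folder), counting alphabetic characters in a name list can be sketched as:

```python
def count_alphabet(names):
    """Count the alphabetic characters across a list of names,
    ignoring spaces, digits and punctuation."""
    return sum(1 for name in names for ch in name if ch.isalpha())

names = ["Adul", "Super AI", "Engineer 2"]
print(count_alphabet(names))  # 19 (4 + 7 + 8 alphabetic characters)
```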
adult-dataset
adult-dataset

A PyTorch dataset wrapper for the Adult (Census Income) dataset. Adult is a popular dataset in machine learning fairness research.

This package provides the adult.Adult class: a torch.utils.data.Dataset loading and, optionally, downloading the Adult dataset. It can be used like the MNIST dataset in torchvision.

Beyond adult.Adult, this package also provides adult.AdultRaw, which works just as adult.Adult, but does not standardize the features in the dataset and does not apply one-hot encoding.

Installation

pip install adult-dataset

Basic Usage

from adult import Adult

# load (if necessary, download) the Adult training dataset
train_set = Adult(root="datasets", download=True)

# load the test set
test_set = Adult(root="datasets", train=False, download=True)

inputs, target = train_set[0]  # retrieve the first sample of the training set

# iterate over the training set
for inputs, target in iter(train_set):
    ...  # Do something with a single sample

# use a PyTorch data loader
from torch.utils.data import DataLoader

loader = DataLoader(test_set, batch_size=32, shuffle=True)
for epoch in range(100):
    for inputs, targets in iter(loader):
        ...  # Do something with a batch of samples

Advanced Usage

Turn off status messages while downloading the dataset:

Adult(root=..., output_fn=None)

Use the logging module for logging status messages while downloading the dataset, instead of placing the status messages on sys.stdout:

import logging
Adult(root=..., output_fn=logging.info)
adummypatt
No description available on PyPI.
adumo
No description available on PyPI.
aduneoclientfedid
aduneoclientfedid

Identity Federation Test Client by Aduneo

Quick view

aduneoclientfedid is used to test OpenID Connect, OAuth 2 and SAML configurations. It acts as a federation client mimicking an application.

After an initial configuration, various flows are tested. The application may obtain tokens and assertions that can be validated, then used for user info, introspection and exchange.

It is useful for:
- testing a newly installed identity provider
- learning how identity federation works
- understanding a specific feature
- debugging a faulty client configuration by replicating it
- learning how to code OpenID Connect, OAuth 2 or SAML 2

Supported protocols

aduneoclientfedid supports OpenID Connect, OAuth 2 and SAML.

OpenID Connect

The client is compatible with OpenID Connect Core 1.0 incorporating errata set 1 (https://openid.net/specs/openid-connect-core-1_0.html).

OAuth 2

The client is compatible with:
- RFC 6749 The OAuth 2.0 Authorization Framework (https://www.rfc-editor.org/rfc/rfc6749)
- RFC 7662 OAuth 2.0 Token Introspection (https://www.rfc-editor.org/rfc/rfc7662)
- RFC 8707 Resource Indicators for OAuth 2.0 (https://www.rfc-editor.org/rfc/rfc8707)
- RFC 8693 OAuth 2.0 Token Exchange (https://www.rfc-editor.org/rfc/rfc8693)

SAML

The client is compatible with the essential parts of the SAML V2.0 specifications (http://saml.xml.org/saml-specifications) and their use with OAuth 2:
- RFC 7522 Security Assertion Markup Language (SAML) 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants (https://www.rfc-editor.org/rfc/rfc7522)

Installation

aduneoclientfedid is a web server that is installed locally, most of the time on localhost, and accessed with a web browser.

Python must be installed on the system which will run the web server. The client is compatible with Python 3.6 and later. It has been tested on Windows and various Linux systems.
On Windows, it can be executed from a command line prompt or in a PowerShell window.

The simplest way to install it is to download it from PyPI. First, it is advisable to create a virtual environment in the directory where you want to install the software:

$ mkdir clientfedid
$ cd clientfedid
$ python -m venv my-env

(depending on your operating system, you might have to use python3 instead of python, or use a different command -- virtualenv -p python3 my-env for instance)

and activate it. Depending on the system:

$ source my-env/bin/activate

or

> my-env\Scripts\activate

then install it with pip:

$ pip install aduneoclientfedid

By default, the packages needed for SAML are not installed, because they are tricky on some systems. If you want to use SAML, install with the [saml] option:

$ pip install aduneoclientfedid[saml]

You may have to manually install some Linux packages. Please refer to the xmlsec documentation (https://pypi.org/project/xmlsec) for more information.

Running aduneoclientfedid

Once the packages are successfully installed, create a root directory where the configuration and logs will be created. This root directory can be located anywhere on the disk.
The natural option is the directory where the Python virtual environment (venv) has been created. If you want to create a new root directory:

mkdir clientfedid
cd clientfedid

Two directories will be created in this directory:
- conf, where a default configuration file is generated
- log

Make sure the current user is allowed to create these items.

There are several ways of launching the server:

clientfedid
aduneoclientfedid
python -m aduneoclientfedid

If successful, a line similar to the following is displayed:

Fri Jan 6 18:15:52 2023 Server UP - https://localhost:443

On Unix/Linux systems, non-administrative users are prevented by default from starting a server on ports below 1024. With HTTPS running on port 443, the server won't launch, with the following error:

PermissionError: [Errno 13] Permission denied

The easiest way out is to modify the port to a value larger than 1024, for instance 8443. To change the port, just add the -port argument. Launching the server on port 8443 becomes:

clientfedid -port 8443

When you use the previous command to launch the client for the first time (when the conf directory has not yet been created), the port is written to the configuration file (the file clientfedid.cnf in the conf directory). You then don't have to specify the port on the command line for subsequent executions.

You can also change the listening interface, with the -host argument. By default, the server only listens on the localhost interface (127.0.0.1), meaning you can only reach it from the same computer (with a web browser on https://localhost). If you want to access it from another computer, you have to change the listening network interface. To listen on any interface, run the server with an empty host:

clientfedid -host ""

Now you can point a browser to something like https://mycomputer.domain.com.

Once the server is running, stop it with Ctrl+C.

This server is only meant to be running while tests are conducted. It is not optimized to run for a long time. It is not optimized to run as a daemon.
It is definitely not secure enough. It is usually run on the tester's computer or on a computer controlled by the tester.

Running from sources

There are situations where it is not possible to install the server with pip. It is still possible to run it from the sources.

First, the following packages must be manually installed:
- certifi
- charset_normalizer (at the time of writing, urllib3 is only compatible with version 2, not the newer version 3)
- idna
- urllib3
- requests
- cffi
- pycparser
- cryptography
- pyopenssl
- deprecated
- wrapt
- jwcrypto

Additionally (for SAML):
- lxml
- xmlsec

Sources are downloaded from https://github.com/Aduneo/aduneoclientfedid, usually as a ZIP download through the Code button.

Create a root directory. Create a Python virtual environment, activate it and install all necessary packages, in the order given earlier. Unzip the sources, go to the directory containing the aduneoclientfedid folder and run:

python -m aduneoclientfedid

Testing OpenID Connect

aduneoclientfedid acts as an OpenID Connect Relying Party (RP). It triggers user authentications, receives ID tokens and retrieves user information through the userinfo endpoint.

Once an ID token is obtained, if the RP is compatible, the token can be exchanged for an access token (using OAuth 2.0 Token Exchange - RFC 8693). This simulates a web application that authenticates users (OpenID Connect) and then connects to web services (OAuth 2).

How it works

You will need:
- aduneoclientfedid installed and started, usually on the local machine
- access to the OpenID Provider (the identity server you want to test)
- a test user created on the OpenID Provider (along with its password or any other authentication method)
- both aduneoclientfedid and the OP configured (more on that later)

When all of this is done, connect with a web browser to the aduneoclientfedid main page.
Usually that is https://localhost (it is possible to install it on a different machine, to change the port, and to deactivate HTTPS for testing purposes).

The browser will probably display a warning, since loading a page from localhost is restricted when the connection is encrypted. Bypass the warning, or change the configuration to switch to unencrypted HTTP or to connect through a real IP address.

Once you have configured a flow with an OpenID Provider (as explained in the next part), you can click on the Login button next to the name of the configuration you wish to test.

A page is displayed with the parameters and options from the configuration. You have the liberty to change whatever you need to perform your test. The changes only apply to the current session and leave the configuration data as they are.

The authentication flow is started when you click on Send to IdP. The browser is redirected to the IdP where authentication occurs. Then the browser is redirected back to aduneoclientfedid with the result (success or error). A page is displayed with the ID token and its validation parameters (if authentication was successful).

You can then start a userinfo flow to retrieve the information in the ID token. The userinfo request is added to the page and once again you can change any value before hitting Send request. You can also restart an authentication flow, with the exact same parameters as the first one.

Configuration

A configuration represents a flow between an OP and a client. Once a configuration is defined, authentications can be started. You can define as many configurations as you want, with different OPs or with the same OP.

A new configuration is created with the Add OIDC Client button. A name is required. Choose any name that speaks to you, for it has no technical meaning.
It is obviously advised that the name includes references to the OP and to what you are about to test.

Some parameters of the OIDC flow are configured in the OP and others in aduneoclientfedid.

OpenID Provider configuration

The OP needs at minimum the following information:
- redirect URI: the aduneoclientfedid URL where the browser is directed after the user has been authenticated

This information is taken from the Redirect URI field on the aduneoclientfedid configuration page. The default URL is https://localhost/client/oidc/login/callback (it varies depending on configuration specifics). You can change it to suit your needs. Make sure any custom URL is not used in a configuration for a different protocol (OAuth 2 or SAML). To avoid that, it is better to keep an indication of the protocol (oidc) in the URL.

Beware that you must enter the same URL in the configurations of both the OP and aduneoclientfedid.

Some OP software automatically generates a client ID and a client secret. You need this information to configure aduneoclientfedid.
Other software requires that this information be manually entered. Depending on the OP, additional configuration may be required, for example the definition of the allowed scopes, or the authentication methods allowed for the various endpoints.

aduneoclientfedid configuration

aduneoclientfedid needs the following information:
- the OP endpoints: the URL where the browser is directed to authenticate the user, and the URLs for various OP web services (token retrieval, public keys, userinfo, etc.)
- the client ID, identifying the client in the OP
- the client secret (the password associated with the client ID)
- the method used to authenticate the client

While it is possible to detail every endpoint URL, the easiest way is to give the discovery URI, also known as the well-known configuration endpoint, which returns the configuration document with all necessary information. This discovery URL is the following construct: issuer URL + /.well-known/openid-configuration.

Here are some examples:
- Azure AD: https://login.microsoftonline.com/<tenant>/v2.0/.well-known/openid-configuration
- Okta: https://<domain>.okta.com/.well-known/openid-configuration
- ForgeRock AM: https://<server>/am/oauth2/<realm>/.well-known/openid-configuration
- Keycloak: https://<server>/realms/<realm>/.well-known/openid-configuration

The client ID and client secret are either generated by the OP or entered in the OP configuration.

The authentication method describes how these credentials are transmitted:
- POST: in the HTTP body (widely used)
- Basic: in the HTTP headers

Some OPs accept any authentication method while others must be precisely configured.

Default parameters

When configuring an OpenID Connect service, you also provide default values for flow parameters.

The scopes are keywords representing information that the OP should send along with the identity after a successful authentication. Multiple scopes are separated by spaces.
Multiple scopes are separated by spaces.The parameter MUST containopenidper OIDC’s flow configured in the client (it distinguishes an OpenID Connect flow and an OAuth 2 flow).The OpenID Connect Specifications define several default scopes and additional ones which can be configured in the OP.The most used scopes for testing purposes are "openid email profile" :openid indicates an OpenID Connect flowemail is obviously the email addressprofile returns basing information about the user: name, given name, gender, locale, birthdate, etc.aduneoclientfedidis only compatible with thecoderesponse type, the implicit flow being deprecated since 2018.OptionsOptions describeaduneoclientfedid's behavior out of the OpenID Connect specifications.The only option indicates is HTTPS certificates must be validated.When testing a production environment, it is advised to verify certificates, to replicate the exact flows.Other environments typically have specific certificates (self-signed or signed by an internal PKI). Since certificate verification will likely fail, it's best to disable it.OpenID Connect LogoutaduneoclientfedidimplementsOpenID Connect RP-Initiated Logout 1.0, but not yet either Front-Channel or Back-Channel.Logout is initiated from the home page.Testing OAuth 2aduneoclientfedidacts both as a OAuth 2 client (a web app) and a resource server (RS, ie a web service).In a first stepaduneoclientfedidsimulates a client, triggers a user authentication and receives an access token (AT). Then it takes the role of a resource server that would have been inkoved by the client. 
The RS would have received the access token and now has to validate it.The validation method depends on the nature of the access token:JWTs are validated by verifying the signature (not yet implemented byaduneoclientfedidfor ATs)opaque tokens must beintrospected(presented to the introspection endpoint for validation and user information retrieval)aduneoclientfedidperforms token exchanges (RFC 8693) to get other access tokens or ID tokens from an access token. At the time of writing very few AS have implemented this RFC.OAuth 2 flows (introspections and token exchanges) can also be initiated after a SAML authentication.How it worksYou will needaduneoclientfedidinstalled and started, usually on the local machineaccess to the Authorization Server (the identity server you want to test)a test user created on the Authorization server (along with its password or any other authentication method)bothaduneoclientfedidand the OP configured (more on that later)When all of this is done, connect with a web browser toaduneoclientfedidmain page. Usuallyhttps://localhost(it's possible to install it on a different machine, to change to port, and to deactivate HTTPS for testing purposes).The browser will probably display a warning since loading a page fromlocalhostis restricted when the connection is encrypted. Bypass the warning, or change the configuration to switch to unencrypted or to connect to a real IP address.Once you have configured a flow with anauthorization server(as explained in the next part), you can click on theLoginbutton next to the name of the configuration you wish to test.A page is displayed with the parameters and options from the configuration. You have the liberty to change whatever you need to perform your test. The changes only apply to the current session and leave configuration data as they are.The authentication flow is started when you click onSend to IdP.The browser is redirected to the AS where authentication occurs. 
Then the browser is redirected back to aduneoclientfedid with the result (success or error). A page is displayed with the Access Token. Then, you can start an introspection flow or a token exchange flow.

Configuration

A configuration represents a flow between an Authorization Server and a client. Once a configuration is defined, authorizations can be started. You can define as many configurations as you want, with different ASs or with the same AS.

A new configuration is created with the Add OAuth Client button. A name is required. Choose any name that speaks to you, for it has no technical meaning. It is advisable that the name include references to the AS and to what you are testing.

Some parameters of the OAuth 2 flow are configured in the AS and others in aduneoclientfedid.

Authorization Server configuration

The AS needs at minimum the following information:

- redirect URI: the aduneoclientfedid URL where the browser is directed after the user has been authenticated

This information is taken from the Redirect URI field on the aduneoclientfedid configuration page. The default URL is https://localhost/client/oidc/login/callback (varies depending on configuration specifics). You can change it to suit your needs. Make sure any custom URL is not used in a configuration for a different protocol (OIDC or SAML). To avoid that, it is better to add an indication of the protocol (oidc) in the URL. Beware that you must enter the same URL in the configurations of both the AS and aduneoclientfedid.

Some AS software automatically generates a client ID and a client secret. You need this information to configure aduneoclientfedid. Other software requires this information to be entered manually. Depending on the AS, additional configuration is required, for example the definition of the allowed scopes, or the authentication methods allowed for the various endpoints.

If introspection is used for validating the AT, you need to create a configuration for aduneoclientfedid acting as a resource server.
All that is needed is a login and a secret. Each authorization server software has its own way of configuring this:

- some have dedicated objects to represent an RS
- others treat the RS as a client with minimal configuration

Refer to the software documentation to determine how to proceed.

aduneoclientfedid configuration

The aduneoclientfedid configuration page is split into 2 sections:

- "Token request by the client" is the configuration when it acts as a client
- "Token validation by the API (resource server)" when it acts as a resource server

To obtain an Access Token, the following information is needed:

- the AS endpoints: the URL where the browser is directed to authenticate the user, and the URLs of the various AS web services (token retrieval, introspection, etc.)
- the client ID, identifying the client in the AS
- the client secret (the password associated with the client ID)
- the method used to authenticate the client

OAuth 2 does not have a discovery URI mechanism like OpenID Connect, where the client can retrieve all endpoints (and additional parameters). Normally, each individual endpoint must be provided. But some AS software publishes a discovery URI, which can be the same as the OpenID Connect one, or different. If it's different, make sure to enter the correct URI.
Otherwise you might get unpredictable behavior. This is the case with Okta: https://<domain>.okta.com/.well-known/oauth-authorization-server. ForgeRock AM has the same discovery URI for OpenID Connect and OAuth 2.

The client ID and client secret are either generated by the AS or entered in the AS configuration. The authentication method describes how these credentials are transmitted:

- POST: in the HTTP body (widely used)
- Basic: in the HTTP headers

Some Authorization Servers accept any authentication method, while others must be precisely configured.

If tokens are validated by introspection, you can configure how to perform it:

- introspection endpoint (if not retrieved through the discovery URI)
- resource server client ID: the login used by the web service that has received the Access Token
- resource server secret: the corresponding secret (aduneoclientfedid is only compatible with a password at the moment)

Default parameters

When configuring an OAuth 2 service, you also provide default values for flow parameters. The scopes are keywords representing the type of access that is requested. They are entirely dependent on your own installation. They usually represent access types (read, write, create, delete, etc.). The resource parameter is defined by RFC 8707 but not implemented by many ASs. Check compatibility before using it.

aduneoclientfedid is only compatible with the code response type, the implicit flow being deprecated since 2018.

Options

Options describe aduneoclientfedid's behavior outside of the OAuth RFCs. The only option indicates if HTTPS certificates must be validated. When testing a production environment, it is advised to verify certificates, to replicate the exact flows. Other environments typically have specific certificates (self-signed or signed by an internal PKI).
Since certificate verification will likely fail there, it's best to disable it.

Access Token Introspection

After an access token has been obtained, it can be introspected. After clicking on the "Introspect AT" button, a form is displayed in two parts:

- first the parameters defined by RFC 7662 (token and token type hint)
- then the request as it is going to be sent to the authorization server: endpoint, data, authentication parameters

Any change in the first part is reflected in the second (but not the other way around).

During tests, you'll probably have to enter the same information many times (credentials, for instance). To help you with that, you can use the internal clipboard. It keeps all inputs that are entered, so that you just have to select one when it's needed again. The clipboard is opened by clicking the icon on the right of each form field. By default, passwords are not stored in the clipboard, but a configuration parameter enables this feature.

Refreshing Access Tokens

If a refresh token (RT) was retrieved during the OAuth flow, it can be used to get a new access token. As with introspection, a two-part form is displayed:

- top form: parameters defined by RFC 6749 (section 6)
- bottom form: the request as it will be sent to the authorization server

Token Exchange

RFC 8693 defines a way to obtain a new token (ID or access) from an existing valid token (ID or access). Few authorization servers have implemented it, so check that it's available.

Testing SAML 2

aduneoclientfedid is a SAML 2 Service Provider (SP). It simulates an application authenticating to an Identity Provider (IdP). SAML authentication is only available when the xmlsec Python module is installed. Refer to this page for instructions on how to install it: https://pypi.org/project/xmlsec/.
Sometimes it's easy (Windows), sometimes it requires some skill (Ubuntu).

After a successful authentication, an OAuth 2 Access Token can be obtained when the IdP is compatible with RFC 7522 (Security Assertion Markup Language (SAML) 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants).

How it works

You will need:

- aduneoclientfedid installed and started, usually on the local machine
- access to the Identity Provider (the identity server you want to test)
- a test user created on the IdP (along with its password or any other authentication method)
- both aduneoclientfedid and the IdP correctly configured (more on that later)

When all of this is done, connect with a web browser to the aduneoclientfedid main page, usually https://localhost (it's possible to install it on a different machine, to change the port, and to deactivate HTTPS for testing purposes). The browser will probably display a warning, since loading a page from localhost is restricted when the connection is encrypted. Bypass the warning, or change the configuration to switch to unencrypted mode or to connect to a real IP address.

Once you have configured a flow in the client as explained in the next part, you can click on the Login button next to the name of the configuration you wish to test. A page is displayed with the default configuration and the default options. You are free to change whatever you need to perform your test.

The authentication flow is started when you click on Send to IdP. The browser is redirected to the IdP, where authentication occurs. Then the browser is redirected back to aduneoclientfedid. A page is displayed with the SAML assertion and its validation parameters. You can then retrieve an access token if needed (and if the IdP is RFC 7522 compliant).

Configuration

A configuration represents a flow between an Identity Provider and a client.
Once a configuration is defined, authentications can be started. You can define as many configurations as you want, with different IdPs or with the same IdP.

A new configuration is created with the Add SAML SP button. A name is required. Choose any name that speaks to you, for it has no technical meaning. It is advisable that the name include references to the IdP and to what you are testing.

A SAML configuration is an exchange of metadata files:

- the SP generates an XML file that is uploaded to the IdP
- the IdP generates an XML file that is uploaded to the SP

While this is the easy way to proceed, it is still possible to enter each parameter individually.

Having gathered information from the IdP, you configure aduneoclientfedid:

- either by uploading the metadata file, which results in the parameter fields being automatically populated
- or by entering it manually: entity ID, SSO URL and certificate (optionally the Single Logout URL). The certificate must be in PEM format, with or without a header and a footer.

aduneoclientfedid generates an XML metadata file based on the information provided in the form:

- SP Entity ID: references the SP. It must be a URI; it is recommended that it be a URL
- SP Assertion Consumer Service (ACS) URL: callback URL to aduneoclientfedid after authentication. Default is https://localhost/client/saml/login/acs, but you can change it (as long as it stays in the same domain).
- keys and certificate: this information is used to sign the requests. You can either use the default key or provide your own (in case you want to replicate an exact real-world behavior).
Communicate the certificate, but NOT the private key.

- NameID policy: expected user identifier field returned in the SAML assertion
- Authentication binding: method used to send an authentication request
- Logout binding (optional): method used to send a logout request

Those values are communicated to the IdP either manually or via a metadata file (downloaded through the Download SP metadata button). There obviously needs to be coherence between the configurations of the SP and the IdP.

Many problems arise because of incompatible NameID policies. NameID is the field carrying the user's identity. SAML defines different formats and different values. The easiest format to configure would be the email (urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress), but it is not always the best choice for an identifier (actually, it's a pretty terrible choice in most cases). A better option is a uid present in the identity repository of the organization, which has to be conveyed in the unspecified format (urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified). It often requires a specific configuration on the IdP side.

SAML Logout

aduneoclientfedid implements Single Logout, with the POST or Redirect bindings. Logout is initiated from the home page.

General configuration

Some configuration parameters affecting the server behaviour are modified in the configuration file, using your favorite editor. There is currently no web console for these parameters. The configuration file is named clientfedid.cnf and is located in the conf directory that has been created in the current folder (the one from which the python command was issued).

It's a JSON file, so be careful of trailing commas. As a reminder, the following syntax is not permitted by JSON:

{
    "param1": "value1",
    "param2": "this will result in an error",
}

(remove the last comma to make it JSON compliant)

There are 6 main sections in the configuration file:

- meta: information about the configuration file itself.
It only contains the name of the file containing the key used to encrypt passwords.

- server: HTTP server parameters (port, SSL, etc.)
- preferences
- oidc_clients, oauth_clients, saml_clients: these parts detail the various clients and configurations in the application

Any manual change in the configuration file requires the server to be restarted (Ctrl-C, then clientfedid, aduneoclientfedid or python -m aduneoclientfedid).

meta/key: encryption key file name

All parameters with a name ending with an exclamation point (!) are automatically encrypted (client secrets), using a symmetric key. A key is automatically generated at first launch and stored in a file named clientfedid.key. It is good practice to protect this file.

server/host

Identifies the network card used by the HTTP server. Using the default localhost makes sure no other machine is (easily...) able to access it. An empty value ("") opens it to anyone (depending on your local firewall settings). It can be a name or an IP address.

server/port

Listening port for the HTTP server. Default is 443. It might not work on Unix/Linux systems. The easiest fix is to choose a port number greater than 1024 (8443 is a good candidate).

server/ssl

Activates HTTPS. Possible values are on and off. Since most of the security of OpenID Connect/OAuth 2 relies on HTTPS, it is advisable to leave the default (on). But you may have to turn it off for testing purposes.

server/ssl_key_file and server/ssl_cert_file

When SSL is activated, these parameters contain the files with:

- the SSL private key (ssl_key_file), PEM format
- the associated certificate (ssl_cert_file), PEM format

If those files are not referenced in the configuration file (which is the default), aduneoclientfedid will automatically create a key and certificate.
Those items are deleted after the server is stopped. The certificate is self-signed, with server/host as the subject (the FQDN of the machine if server/host is empty).

preferences/logging/handler

List of logging handlers:

- console: displays logs in the window used to launch the server
- file: appends logs to a file in a directory (logs) created alongside the conf directory
- webconsole: displays logs in a browser window that can be opened with the "console" button on the upper right side of the page, or automatically when an authentication flow is started

By default, all handlers are activated.

preferences/open_webconsole

on if the browser window displaying logs is automatically opened every time an authentication flow is started (default).

preferences/clipboard/encrypt_clipboard

The clipboard stores all texts typed in application forms, so they can easily be used multiple times without having to enter them each time. Its content is stored in the conf directory. If encrypt_clipboard is on, the file is encrypted using clientfedid.key as the key. This is the default. Otherwise, its content is in plain text.

preferences/clipboard/remember_secrets

Indicates if secrets are stored in the clipboard (default is off).
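Because the configuration file is plain JSON, a trailing comma (as warned above) makes it unreadable and the server will fail to parse it. A small stdlib sketch for checking a clientfedid.cnf-style document before restarting the server; the helper itself is illustrative, not part of aduneoclientfedid:

```python
import json

def check_config(text: str):
    """Return (ok, message) for a clientfedid.cnf-style JSON document."""
    try:
        json.loads(text)
        return True, "valid JSON"
    except json.JSONDecodeError as exc:
        # lineno/colno point at the offending character, e.g. the '}' after a trailing comma.
        return False, f"line {exc.lineno}, column {exc.colno}: {exc.msg}"

good = '{"server": {"host": "localhost", "port": 8443, "ssl": "on"}}'
bad = '{"param1": "value1", "param2": "this will result in an error",}'  # trailing comma

print(check_config(good))
print(check_config(bad)[0])
```

Running the file contents through such a check (or any JSON linter) saves a restart cycle when editing the configuration by hand.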
adup
adup

All Auto Dupes Finder
adutils
adutils

Helper functions for AppDaemon apps. Currently there is just code, no documentation. Sorry ¯\_(ツ)_/¯

Install

pip install adutils

Apps using adutils

- AutoMoLi
- EnCh
- Notifreeze
- Healthcheck
adux
A sample project to test packaging using PyPI.

Reference: Python Packaging User Guide <https://packaging.python.org>
advai-core
advai-versus
Advai

This is a Python package that provides a basic FastAPI application, demonstrating a simple RESTful API structure. Its current version is intended to test deploying packages to PyPI.

Features

This is a namespace test application.

Installation

You can install advai directly from PyPI:

pip install advai

Quick Start

After installation, you can start using advai by creating a FastAPI application. Here's a quick example:

from advai import create_app

app = create_app()

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

To run the app, use the following command:

uvicorn your_script_name:app --reload

Running Tests

To run tests, navigate to your project directory and execute:

pytest

Docker and Kubernetes Support

This template does not currently contain Docker or Kubernetes configurations.

Contributing

Contributions are welcome! Please read our contributing guidelines for more information.

License

advai is released under the MIT License.

Authors

Advai Development Lead
adval
The code is Python 3.

What is Adversarial Validation?

The objective of any predictive modelling project is to create a model using the training data, and afterwards apply this model to the test data. However, for the best results it is essential that the training data is a representative sample of the data we intend to use it on (i.e. the test data), otherwise our model will, at best, under-perform, or at worst, be completely useless.

*Adversarial Validation* is a very clever and very simple way to let us know if our test data and our training data are similar: we combine our train and test data, labeling them with, say, a 0 for the training data and a 1 for the test data, mix them up, then see if we are able to correctly re-identify them using a binary classifier.

If we cannot correctly classify them, i.e. we obtain an area under the [receiver operating characteristic curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) (ROC) of 0.5, then they are indistinguishable and we are good to go. However, if we can classify them (ROC > 0.5), then we have a problem, either with the whole dataset or, more likely, with some features in particular, which are probably from different distributions in the test and train datasets.

If we have a problem, we can look at the feature that was most out of place. The problem may be that there were values that were only seen in, say, the training data, but not in the test data. If the contribution to the ROC is very high from one feature, it may well be a good idea to remove that feature from the model.

Adversarial Validation to reduce overfitting

The key to avoiding overfitting is to create a situation where the local cross-validation (CV) score is representative of the competition score.
When we have a ROC of 0.5, the local data is representative of the test data, and thus the local CV score should be representative of the Public LB score.

Procedure:

- drop the training data target column
- label the test and train data with 0 and 1 (it doesn't really matter which is which)
- combine the training and test data into one big dataset
- perform the binary classification, for example using XGBoost
- look at the AUC ROC score

Installation

Fast install:

pip install adval

Example on Mobile Price Classification Dataset

from adval.validation import adVal

# In this dataset:
# target = "price_range"
# 95 = ½ threshold similarity ratio you want
# Id Column = "id"

# run module
k = adVal(train, test, 95, "price_range", "id")

# get auc_score
k.auc_score()
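The procedure above can be sketched end to end without any ML library: label the pooled rows, score them with a (deliberately trivial) one-feature classifier, and compute the ROC AUC by the rank-sum formula. The synthetic data and the "classifier" are illustrative stand-ins for a real train/test split and XGBoost:

```python
import bisect
import random

random.seed(0)
# Illustrative single-feature data: "test" is shifted, so it should be detectable.
train = [random.gauss(0.0, 1.0) for _ in range(300)]
test = [random.gauss(1.5, 1.0) for _ in range(300)]

# Label train rows 0 and test rows 1, then pool them into one dataset.
pooled = [(x, 0) for x in train] + [(x, 1) for x in test]

def roc_auc(scored):
    """AUC via the Mann-Whitney U statistic: the fraction of (positive, negative)
    pairs where the positive example scores higher (ties count one half)."""
    pos = [s for s, y in scored if y == 1]
    neg = sorted(s for s, y in scored if y == 0)
    hits = sum(bisect.bisect_left(neg, p) for p in pos)  # negatives strictly below p
    ties = sum(bisect.bisect_right(neg, p) - bisect.bisect_left(neg, p) for p in pos)
    return (hits + 0.5 * ties) / (len(pos) * len(neg))

# Trivial "classifier": score each row by its feature value.
auc = roc_auc(pooled)
print(auc > 0.8)  # shifted distributions: train and test are easily distinguishable

# Sanity check: when both samples come from the same distribution,
# the AUC should hover around 0.5 (train and test are indistinguishable).
same = [(x, 0) for x in train] + [(random.gauss(0.0, 1.0), 1) for _ in range(300)]
print(abs(roc_auc(same) - 0.5) < 0.1)
```

An AUC far above 0.5 on the pooled data is exactly the warning signal described above; dropping or transforming the most distinguishing feature and re-running the check is the usual remedy.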
adva-module
Adva Module

Adva module for Python 3.x

Requirements

Python 3+

Installation

$ pip install adva_module

Quickstart

Import the adva_module library to use the functions:

from adva_module import Adva