anchora
UNKNOWN
anchorage
Anchorage

As the internet ages, link rot takes over larger and larger swathes of it, bringing with it the disappearance of interesting reads, courses, resources and much more that many of us treasure. Anchorage is an attempt to let you save your little corner for good :)

Anchorage is a Python library and CLI to bulk archive your bookmark collection easily and without friction. It allows you to retrieve your bookmark collection from your browser of choice, filter out duplicates, local files and entries matching string, substring and regex searches, and archive the whole thing: online in the Internet Archive, or locally using ArchiveBox.

Read on for the Anchorage user's manual. The full Python API documentation is available in the docs site.

Table of Contents
1. Introduction
2. Requirements & Install
3. Anchorage configuration
4. Anchorage CLI
5. Python API
   5.1 Anchorage configuration
   5.2 Bookmark retrieval
   5.3 Archiving

1. Introduction

What follows is the Anchorage user's manual. It deals first with the requirements and install of the library, then with its configuration, the Anchorage CLI and its Python API. Thorough documentation of each API method is available in the docs site.

2. Requirements & Install

A working Docker install is the only requirement, beyond Python and Anchorage's dependencies.

Without Docker: Docker is used to run ArchiveBox, via a provided docker-compose file. Without Docker, Anchorage will not be able to archive your collection locally, but it will still be able to save it online in the Internet Archive.

Anchorage can be installed using pip like any Python package. Its dependencies will be downloaded automatically.

pip install anchorage

3. Anchorage configuration

To access a browser's bookmarks file, Anchorage stores its location in its configuration file: ~/.anchorage/config.toml. There's an example config.toml in this repo for reference.

To add a new browser, simply add a new top-level key, followed by its bookmark file paths. Anchorage only needs the path in your operating system to work.

[<browser name>]
linux   = <path>
macos   = <path>
windows = <path>

Importantly:
- Linux and macOS paths are stored in full.
- Windows paths are stored from the AppData directory.

The default config.toml contains the bookmark file paths for Google Chrome, Mozilla Firefox, and Microsoft Edge and Edge Beta for Windows only. To use Anchorage in Linux or macOS, add the bookmark file path of your browser of choice to your config.toml.

Editing the Anchorage config file

The config file can be edited just like any other. New browsers will automatically be listed in the CLI. Importantly: set unknown bookmark file paths to "?". That way the CLI will recognize those as unknown and behave appropriately.

4. Anchorage CLI

The CLI will guide you through retrieving your bookmarks from your browser of choice, applying filters to your bookmark collection, and archiving your bookmarks in the Internet Archive or locally, using ArchiveBox. To start the CLI, open your shell and type

anchorage

You will be asked whether you're ready to proceed. On the ok, it will ensure all dependencies are present.

1. Config check: if a config file is found, you will be prompted to choose whether to keep the current config or overwrite it with the default one.

2. Browser choice: you will be prompted to choose which browser to retrieve your bookmark collection from. The browser choices are sourced from config.toml. Refer to section 3 for editing it to add a missing browser, or enter the path to the bookmarks file of your browser if it's missing (equal to "?").

3. Applying filters to the collection: filters can be applied to your bookmark collection before archiving. Any or all of four filters can be chosen, one specific to URLs:
   - Local files: remove local URLs (say, PDFs stored on your computer) from the collection.
   and three general ones:
   - Match string: remove bookmark URLs, names or bookmark directories matching a provided string or any string in a string list.
   - Match substring: remove bookmark URLs, names or bookmark directories containing a provided string or any string in a string list.
   - Regex: remove bookmark URLs, names or bookmark directories matching a provided regex.
   For each, you will be prompted to choose whether to apply it to any or all of the previous.

4. Archive choice: you will then be asked to choose whether to archive your collection online or locally.
   Online: by default, websites will not be archived if a previous snapshot exists in the Internet Archive. This is to save time: those sites have already been saved at some point. In case you want to save a current snapshot of the collection, you will be prompted whether to override this and archive all sites in the collection regardless. This may take significantly longer. Based on your choice, you will be given an estimate of the archive time.
   Local: to archive your collection locally, you will be prompted for an archive directory.

5. Run: after a last confirmation the process will begin. A progress bar will inform you of how far the process is from finishing and how many bookmarks have been saved, and provide a dynamic estimate of the time remaining before the process is finished.

5. Python API: user's guide

The full documentation of the Anchorage API is available in the docs site.

5.1 Anchorage configuration

Generate the Anchorage config file with the init command.

from anchorage import init

init()

5.2 Bookmark retrieval

Three methods are relevant:
- path(<browser>): obtain the path to your chosen browser's bookmarks file (in your OS) from config.toml.
- load(<path>): read your chosen browser's JSON or JSONLZ4 bookmarks file and return a Python dictionary.
- bookmarks(<dict>): create an instance of the bookmarks class.

The bookmarks class creates a second bookmarks dictionary more suitable for our intent, and contains methods to filter and loop through the collection. Filters can be applied as seen below.

from anchorage import path, load, bookmarks

collection = bookmarks(load(path(<browser name>)),
                       drop_local_files=<boolean>,
                       drop_dirs=<string or list of strings>,
                       drop_names=<string or list of strings>,
                       drop_urls=<string or list of strings>,
                       drop_dirs_subs=<string or list of strings>,
                       drop_names_subs=<string or list of strings>,
                       drop_urls_subs=<string or list of strings>,
                       drop_dirs_regex=<string>,
                       drop_names_regex=<string>,
                       drop_urls_regex=<string>)

5.3 Archiving

Input: a bookmarks instance or a bookmark dictionary returned by load.

Online:

from anchorage import anchor_online

anchor_online(bookmarks, overwrite=<bool>)

The overwrite parameter determines whether to save snapshots of sites already present in the Internet Archive.

Locally:

from anchorage import anchor_locally

anchor_locally(bookmarks, archive=<dir>)

The archive parameter specifies the directory in which to create the local archive.

Running the ArchiveBox default NGINX server can be done with the following command.

from anchorage import server

server()
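Putting the pieces of the Python API together, here is a minimal end-to-end sketch. It assumes a browser key named "chrome" exists in your config.toml; everything else uses the functions documented above.

```python
# A minimal end-to-end sketch of the documented API; "chrome" is assumed
# to be a browser key defined in ~/.anchorage/config.toml.
from anchorage import init, path, load, bookmarks, anchor_online

init()  # generate the config file (first run only)

collection = bookmarks(
    load(path("chrome")),
    drop_local_files=True,       # skip local (file://) bookmarks
    drop_urls_subs="localhost",  # skip local dev servers
)

anchor_online(collection, overwrite=False)  # skip sites already archived
```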
anchor-bio
![Anchor logo](https://raw.githubusercontent.com/YeoLab/anchor/master/logo/v1/logo.png)

[![](https://img.shields.io/travis/YeoLab/anchor.svg)](https://travis-ci.org/YeoLab/anchor) [![](https://img.shields.io/pypi/v/anchor.svg)](https://pypi.python.org/pypi/anchor) [![codecov](https://codecov.io/gh/YeoLab/anchor/branch/master/graph/badge.svg)](https://codecov.io/gh/YeoLab/anchor)

## What is `anchor`?

Anchor is a python package to find unimodal, bimodal, and multimodal features in any data that is normalized between 0 and 1, for example alternative splicing or other percent-based units.

* Free software: BSD license
* Documentation: https://YeoLab.github.io/anchor

## Installation

To install `anchor`, we recommend using the [Anaconda Python Distribution](http://anaconda.org/) and creating an environment, so the `anchor` code and dependencies don't interfere with anything else. Here is the command to create an environment:

```
conda create -n anchor-env pandas scipy numpy matplotlib seaborn
```

### Stable (recommended)

To install this code from the Python Package Index, you'll need to specify ``anchor-bio`` (``anchor`` was already taken - boo).

```
pip install anchor-bio
```

### Bleeding-edge (for the brave)

If you want the latest and greatest version, clone this github repository and use `pip` to install

```
git clone git@github.com:YeoLab/anchor
cd anchor
pip install .  # The "." means "install *this*, the folder where I am now"
```

## Usage

`anchor` was structured like `scikit-learn`, where if you want the "final answer" of your estimator, you use `fit_transform()`, but if you want to see the intermediates, you use `fit()`.

If you want the modality assignments for your data, first make sure that you have a `pandas.DataFrame`, here it is called `data`, in the format (samples, features). This uses a log2 Bayes factor cutoff of 5, and the default Beta distribution parameterizations.

```python
import anchor

bm = anchor.BayesianModalities()
modalities = bm.fit_transform(data)
```

If you want to see all the intermediate Bayes factors, then you can do:

```python
import anchor

bm = anchor.BayesianModalities()
bayes_factors = bm.fit(data)
```

## History

### 1.1.1 (2017-06-29)
- In `infotheory.binify`, round the decimal numbers before they are written as strings

### 1.0.1 (2017-06-28)
- Documentation and build fixes

### 1.0.0 (2017-06-28)
- Updated to Python 3.5, 3.6

### 0.1.0 (2015-07-08)
- First release on PyPI.
anchorconnector
Anchor Connector

This is a simple library for connecting to the unofficial Anchor API. It can be used to export data from your dashboard at https://anchor.fm/dashboard.

Supported Endpoints
- total_plays
- plays_by_age_range
- plays_by_app
- plays_by_device
- plays_by_episode
- plays_by_gender
- plays_by_geo
- plays_by_geo_city
- episodes

For each episode, the following endpoints are supported:
- episode_plays
- episode_performance
- episode_aggregated_performance
- episode_all_time_video_data

See __main__.py for all endpoints.

Credentials

Before you can use the library, you must extract your Anchor credentials from the dashboard; they are not exposed through your Anchor settings. You can use our web extension for that, or take a look at the code to see how to do it manually.

Installation

pip install anchorconnector

Usage as a library

from anchorconnector import AnchorConnector

connector = AnchorConnector(
    base_url=BASE_URL,
    webstation_id=WEBSTATION_ID,
    anchorpw_s=ANCHOR_PW_S,
)

end = datetime.now()
start = end - timedelta(days=30)

total_plays = connector.total_plays(True)
logger.info("Podcast Total Plays = {}", json.dumps(total_plays, indent=4))

plays_by_age_range = connector.plays_by_age_range(start, end)
logger.info(
    "Plays by Age Range = {}",
    json.dumps(plays_by_age_range, indent=4),
)

# plays_by_app = connector.plays_by_app(start, end)
# plays_by_device = connector.plays_by_device(start, end)
# plays_by_episode = connector.plays_by_episode(start, end)
# plays_by_gender = connector.plays_by_gender(start, end)
# plays_by_geo = connector.plays_by_geo()
# plays_by_geo_city = connector.plays_by_geo_city("Germany")
# ...

for episode in connector.episodes():
    logger.info("Episode = {}", json.dumps(episode, indent=4))
    web_episode_id = episode["webEpisodeId"]
    episode_meta = connector.episode_plays(web_episode_id)
    logger.info("Episode Metadata = {}", json.dumps(episode_meta, indent=4))
    # ...

See __main__.py for all endpoints.

Development

We use Pipenv for virtualenv and dev dependency management. With Pipenv installed:

1. Install your locally checked out code in development mode, including its dependencies, and all dev dependencies into a virtual environment:

pipenv sync --dev

2. Create an environment file and fill in the required values:

cp .env.sample .env

3. Run the script in the virtual environment, which will automatically load your .env:

pipenv run anchorconnector

To add a new dependency for use during the development of this library:

pipenv install --dev $package

To add a new dependency necessary for the correct operation of this library, add the package to the install_requires section of ./setup.py, then:

pipenv install

To publish the package:

python setup.py sdist bdist_wheel
twine upload dist/*

or

make publish
anchor-custom
Failed to fetch description. HTTP Status Code: 404
anchor-droplet-chip
⚓ anchor-droplet-chip

Measuring single-cell susceptibility to antibiotics within monoclonal fluorescent bacteria.

We image the entire chip using a 20x 0.7NA objective lens with automatic stitching in NIS. A 2D bright-field image and a 3D TRITC stack are acquired. The 3D stack is converted to 2D using maximum projection in NIS or Fiji. Both channels are then merged together and saved as a tif stack. After that, this package can be applied to detect the individual droplets and count the fluorescent cells.

As the chips are bonded to the coverslip manually, they contain a random tilt and shift, so detecting individual droplets proved to be unreliable. The current approach consists of preparing a well-labelled template bright-field image and a labelled mask, and matching the experimental bright-field image to the template.

Installation

pip install anchor-droplet-chip

Usage

Notebook:

jupyter lab example.ipynb

Napari plugin: see the menu Plugins / anchor-droplet-chip / ...

Command line:

python -m adc.align --help
python -m adc.count --help

Downloading the raw data

Head to the release page https://github.com/BaroudLab/anchor-droplet-chip/releases/tag/v0.0.1 and download the files one by one, or execute the notebook example.ipynb - the data will be fetched automatically.

Aligning the chips with the template and the mask

Day 1:

python -m adc.align day1/00ng_BF_TRITC_bin2.tif template_bin16_bf.tif labels_bin2.tif

This command will create the stack day1/00ng_BF_TRITC_bin2-aligned.tif, which can be viewed in Fiji.

Day 2:

python -m adc.align day2/00ng_BF_TRITC_bin2_24h.tif template_bin16_bf.tif labels_bin2.tif

Counting the cells, day 1 and day 2:

python -m adc.count day1/00ng_BF_TRITC_bin2-aligned.tif day1/counts.csv
python -m adc.count day2/00ng_BF_TRITC_bin2_24h-aligned.tif day2/counts.csv

Combining the tables from the 2 days:

python -m adc.merge day1/counts.csv day2/counts.csv table.csv

Plotting and fitting the probabilities

Sample data

Batch processing: first you'll need to clone the repo locally and install it to have the scripts at hand.

git clone https://github.com/BaroudLab/anchor-droplet-chip.git
cd anchor-droplet-chip
pip install .

Make a data folder:

mkdir data

Download the dataset from Zenodo (https://zenodo.org/record/6940212):

zenodo_get 6940212 -o data

Proceed with the Snakemake pipeline to get the table and plots. Be careful with the number of threads (-c), as a single thread can consume over 8 GB of RAM.

snakemake -c4 -d data table.csv

Napari plugin functionalities

nd2 reader: open a large nd2 file by drag-n-drop and select anchor-droplet-chip as a reader. The reader plugin will automatically detect the subchannels and split them into different layers. The reader will also extract the pixel size from the metadata and save it as Layer.metadata["pixel_size_um"]. The data itself is opened as a dask array using the nd2 python library.

Substack: some datasets are so big that it's hard even to open them, let alone process them. anchor-droplet-chip / Make a sub stack addresses this problem. Upon opening the plugin you'll see all dimensions of your dataset, and the axes will be named accordingly. Simply choose the subset of data you need and click "Crop it!". This will create a new layer with the subset of data. Note that no new files are created in the process; in the background the nd2 library lazily loads chunks of data from the original nd2 file.

Populate ROIs along the line: draw a line in a new shapes layer and call the widget. It will populate square ROIs along the line. Adjust the number of columns and rows. This way you can manually map the 2D wells on your chip.

Crop ROIs: use this widget to crop the previously mapped ROIs. The extracted crops can be saved as tifs.

Split along axis: allows splitting any dataset along a selected axis and saving the pieces as separate tifs (imagej format, so only TZCYX axes are supported). Select the axis name, click "Split it!" and check the table with the names, shapes and paths. To change the prefix, set the folder by clicking "Choose folder". Once the table looks right, click "Save tifs" and wait. The column "saved" will be updated along the way.
anchore
.. image:: https://anchore.io/service/badges/image/f017354b717234ebfe1cf1c5d538ddc8618f3ab0d8c67e290cf37f578093d121
   :target: https://anchore.io/image/dockerhub/anchore%2Fcli%3Alatest

Anchore
=======

Anchore is a set of tools that provides visibility, transparency, and control of your container environment. With anchore, users can analyze, inspect, perform security scans, and apply custom policies to container images within a CI/CD build system, or used/integrated directly into your container environment.

This repository contains the anchore analysis scanner tool (with a basic CLI interface), which can be appropriate for lower-level integrations. For new users and current users who have been looking to deploy Anchore as a centralized service with an API, an open source project called the Anchore Engine has been released (with its own light-weight client CLI) which extends the capabilities of anchore beyond what usage of this scanner tool alone can provide. The project page links are below, which include installation/quickstart instructions, API documents and usage guides.

`Anchore Engine <https://github.com/anchore/anchore-engine>`_

`Anchore Engine CLI <https://github.com/anchore/anchore-cli>`_

If you would like to deploy Anchore as an API accessible service within your environment, you should visit the `Anchore Engine <https://github.com/anchore/anchore-engine>`_ project page to get started. Note that the anchore-engine uses the anchore analysis scanner code from this repository as a dependency, so if you're using the anchore engine you will not need to install the software from this repository manually. If you are a current user of anchore and are not ready to try the anchore-engine yet, or you are interested in the core anchore container analysis scanner open source software itself, this is the code you're looking for.

Using Anchore Scanner via Docker
================================

Anchore is available as a `Docker image <https://hub.docker.com/r/anchore/cli/>`_.

1. ``docker pull anchore/cli``
2. ``docker run -d -v /var/run/docker.sock:/var/run/docker.sock --name anchore anchore/cli:latest``
3. ``docker exec anchore anchore feeds sync``
4. Use docker exec to run anchore commands in the container, such as: ``docker exec anchore anchore analyze --image <myimage> --dockerfile </path/to/Dockerfile>``

The general model is to run the container in detached mode to provide the environment and use 'docker exec' to execute anchore commands within the container. See the above link on how to use the container specifically and options that are container specific.

Using Anchore Scanner Installed Directly on Host
================================================

To get started on CentOS 7 as root:

1) install docker (see docker documentation for CentOS 7 install instructions): ``https://docs.docker.com/engine/installation/linux/centos/``

2) install some packages that full functionality of anchore will require (run as root or with sudo): ``yum install epel-release``, ``yum install python-pip rpm-python dpkg``

To get started on Ubuntu >= 15.10 as root:

1) install docker engine >= 1.10 (see docker documentation for Ubuntu >= 15.10 install instructions): ``https://docs.docker.com/engine/installation/linux/ubuntulinux/``

2) install some packages that full functionality of anchore will require (run as root or with sudo): ``apt-get install python-pip python-rpm yum``

Next, on either distro:

3) install Anchore to ~/.local/: ``cd <where you checked out anchore>``, ``pip install --upgrade --user .``, ``export PATH=~/.local/bin:$PATH``

4) run anchore! Here is a quick sequence of commands to help get going:

``anchore --help``
``docker pull nginx:latest``
``anchore feeds list``
``anchore feeds sync``
``anchore analyze --image nginx:latest --imagetype base``
``anchore audit --image nginx:latest report``
``anchore query --image nginx:latest has-package curl wget``
``anchore query --image nginx:latest list-files-detail all``
``anchore query --image nginx:latest cve-scan all``
``anchore toolbox --image nginx:latest show``

For more information, to learn about how to analyze your own application containers, and how to customize/extend Anchore, please visit our github page wiki at https://github.com/anchore

Jenkins
=======

If you are a Jenkins user, please visit our github wiki installation documentation at https://github.com/anchore/anchore/wiki/Anchore-and-Jenkins-Integration to learn more about using the Jenkins Anchore build-step plugin.

Vagrant
=======

* Install Vagrant and Virtualbox
* Download the Vagrantfile
* ``vagrant up``
* ``vagrant ssh``
* ``sudo -i``
* Continue with step 4)

Manual Pages
============

Man pages for most of the anchore commands are available in $anchore/doc/man, where $anchore is the install location of the python code for your distro (e.g. /usr/local/lib/python2.7/dist-packages/anchore for ubuntu). To install them, copy them to the appropriate location for your distro. The man pages are generated from the --help and --extended-help options to anchore commands, so similar content is available directly from the CLI as well.
anchorecli
Overview

The Anchore CLI provides a command line interface on top of the Anchore Engine REST API. Using the Anchore CLI, users can manage and inspect images, policies, subscriptions and registries for the following:

Supported Operating Systems
- Alpine
- Amazon Linux 2
- CentOS
- Debian
- Google Distroless
- Oracle Linux
- Red Hat Enterprise Linux
- Red Hat Universal Base Image (UBI)
- Ubuntu

Supported Packages
- GEM
- Java Archive (jar, war, ear)
- NPM
- Python (PIP)

Installing Anchore CLI from source

The Anchore CLI can be installed from source using the Python pip utility:

git clone https://github.com/anchore/anchore-cli
cd anchore-cli
pip install --user --upgrade .

Or it can be installed from the Python PyPI package repository.

Installing Anchore CLI on CentOS and Red Hat Enterprise Linux

yum install epel-release
yum install python-pip
pip install anchorecli

Installing Anchore CLI on Debian and Ubuntu

apt-get update
apt-get install python-pip
pip install anchorecli

Note: make sure ~/.local/bin is part of your PATH, or just export it directly:

export PATH="$HOME/.local/bin/:$PATH"

Installing Anchore CLI on Mac OS / OS X

Use Python's pip package manager:

sudo easy_install pip
pip install --user anchorecli
export PATH=${PATH}:${HOME}/Library/Python/2.7/bin

To ensure anchore-cli is readily available in subsequent terminal sessions, remember to add that last line to your shell profile (.bash_profile or equivalent).

To update anchore-cli later:

pip install --user --upgrade anchorecli

Configuring the Anchore CLI

By default the Anchore CLI will try to connect to the Anchore Engine at http://localhost/v1 with no authentication. The username, password and URL for the server can be passed to the Anchore CLI as command line arguments:

--u   TEXT   Username     eg. admin
--p   TEXT   Password     eg. foobar
--url TEXT   Service URL  eg. http://localhost:8228/v1

Rather than passing these parameters for every call to the CLI, they can be stored as environment variables:

ANCHORE_CLI_URL=http://myserver.example.com:8228/v1
ANCHORE_CLI_USER=admin
ANCHORE_CLI_PASS=foobar

Command line examples

Add an image to the Anchore Engine:

anchore-cli image add docker.io/library/debian:latest

Wait for an image to transition to analyzed:

anchore-cli image wait docker.io/library/debian:latest

List images analyzed by the Anchore Engine:

anchore-cli image list

Get summary information for a specified image:

anchore-cli image get docker.io/library/debian:latest

Perform a vulnerability scan on an image:

anchore-cli image vuln docker.io/library/debian:latest os

Perform a policy evaluation on an image:

anchore-cli evaluate check docker.io/library/debian:latest --detail

List operating system packages present in an image:

anchore-cli image content docker.io/library/debian:latest os

Subscribe to receive webhook notifications when new CVEs are added to an update:

anchore-cli subscription activate vuln_update docker.io/library/debian:latest

More Information

For further details on use of the Anchore CLI with the Anchore Engine, please refer to the Anchore Engine documentation.
anchorer
Anchorer

Plugin for virtualenvwrapper that extends mkvirtualenv behaviour to add code that is loaded by the python interpreter for every run. The loaded code resolves symlinks in discovered site-package directories, allowing symlinks to virtualenvs to be updated while scripts/services are running.

Example problem anchorer solves

# assuming you have the virtualenvwrapper python package installed, and have sourced virtualenvwrapper.sh
mkvirtualenv env-v1
mkvirtualenv env-v2

# create a pseudo-virtualenv which is a symlink to a particular version
ln -s "$WORKON_HOME/env-v1" "$WORKON_HOME/active-env"

# now use the linked environment to start something in env-v1
workon active-env

# start some imaginary python service which may import modules a long time after starting
python -m my_long_runner &

# update the active symlink, switching what version is live
ln -sT "$WORKON_HOME/env-v2" "$WORKON_HOME/active-env"

# imagine at this point that my_long_runner tries to import a module: it will be using
# un-resolved paths, which means the modules will be loaded from an environment that is
# not the one it started in

Architecture

virtualenvwrapper.anchorer.fix.main() resolves paths that are used at runtime:
- the current working directory
- paths used for determining where packages are found

virtualenvwrapper runs virtualenvwrapper.anchorer.plugin.pre_mkvirtualenv(...) during calls to mkvirtualenv to modify the virtualenv's site-packages directory:
- __anchorer.py is added; it is a copy of the fix module
- __anchorer.pth is added; it simply imports __anchorer, which causes the main method to run. See the site docs for more information on the mechanism.
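For illustration, here is a minimal sketch of the .pth trick described above. The file contents are assumptions modelled on the description, not anchorer's actual code; the underlying mechanism (the site module executing import lines found in .pth files) is standard Python.

```python
# __anchorer.py - a sketch of a "fix" module dropped into site-packages;
# anchorer's real fix module also resolves the working directory.
import os
import sys

def main():
    # Resolve symlinked sys.path entries so that later imports keep
    # pointing at the environment the process started in.
    sys.path[:] = [os.path.realpath(p) for p in sys.path]

main()
```

A one-line `__anchorer.pth` file containing just `import __anchorer`, placed next to it in site-packages, makes Python run this code at every interpreter start.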
anchore-syft
Syft Python Distributions

A project that packages Syft as a Python package, enabling syft to be installed from PyPI:

pip install anchore_syft

Afterwards, Syft can be run using either syft or anchore_syft.

PyPI package versions will follow the major.minor.patch version numbers of Syft releases. Binary wheels for Windows, macOS, and Linux for most CPU architectures supported on PyPI are provided.

- Syft PyPI Package Homepage
- Syft Source Code
- Syft License: Apache-2.0

Installing Syft

Syft can be installed by pip with:

pip install anchore_syft

or:

python -m pip install anchore_syft

Building from the source dist package requires internet access in order to download one of the pre-compiled release binaries from https://github.com/anchore/syft/releases. Platforms that Syft doesn't provide pre-compiled binaries for will not work at all, unless someone feels inclined to submit a PR that fetches an appropriate Go compiler to build Syft from source.

Using with pipx

Using pipx run anchore_syft <args> will run Syft without any install step, as long as the machine has pipx installed (which includes GitHub Actions runners).

Using with pyproject.toml

Syft can be added to the project.dependencies key in a pyproject.toml file for Python packages that require Syft.

[project]
dependencies = ["anchore_syft"]

License

The code for this project is covered by the Apache License, Version 2.0. Source distributions do not include a copy of the Syft source code or binaries. Binary wheels include a compiled Syft binary, which also falls under the Apache 2.0 license. Syft is distributed under the Apache License, Version 2.0. For more information about Syft, visit https://github.com/anchore/syft
anchor-exp
No description available on PyPI.
anchor-gpt
Find hallucination-prone prompts and use them to fine-tune / ground your LLM.

Why Anchor GPT?

Because you can't get ground-truth answers for every prompt, and fine-tuning / grounding with the right data gives much better results. We compared side by side fine-tuning with prompts sampled randomly and with CoreSet (the core algorithm of anchor-gpt), and the results speak for themselves 👇

Accuracy on a sample of the MMLU test dataset of a fine-tuned LLama with 1000 datapoints sampled from the Alpaca dataset using either Random sampling or CoreSet.

Installation

pip install anchor-gpt

Step by Step

Use the prompt logger to log your prompts and their grounding scores:

from anchor_gpt import PromptLogger, Prompt

# Your regular grounding process
prompt_embeddings = embedding_model.encode(prompt)
index_response = my_index_endpoint.find_neighbors(
    queries=prompt_embeddings,
    num_neighbors=10,
)
grounding_data = []
grounding_distances = []
for grounding_index, grounding_distance in index_response:
    grounding_data.append(my_index_endpoint.get(grounding_index))
    grounding_distances.append(grounding_distance)

grounded_prompt = build_prompt(prompt, grounding_data)

# Call your LLM
chat_response = my_llm.chat(grounded_prompt, temperature=0.1)

# Log the prompt
prompt_logger = PromptLogger()
my_prompt = prompt_logger.log(Prompt(
    text=prompt,
    response=chat_response,
    scores={'grounding_distances': grounding_distances},
    embeddings=prompt_embeddings,
))

Add additional scores like user feedback asynchronously:

my_prompt.update_scores({'user_feedback': 0.8})

Retrieve the worst-performing prompts to fine-tune your model or improve your grounding database:

# Define a custom prompt scoring method
def retriever(store, threshold):
    def prompt_average_score(prompt):
        return 0.2 * prompt.scores['grounding_distances'][0] + 0.8 * prompt.scores['user_feedback']
    return list(filter(lambda x: prompt_average_score(x) > threshold, store.select_prompts()))

# Retrieve the ones above a threshold
worst_prompts = prompt_logger.retrieve(retriever, 0.5)
# Remove near duplicates to only keep what matters
deduped_prompts = prompt_logger.deduplicate(worst_prompts, 100)

# Add the right answers to your grounding DB to better answer those prompts next time

Example in a chat service

from anchor_gpt import PromptLogger, Prompt

prompt_logger = PromptLogger()

# Your regular chat endpoint with logging enabled
@app.route("/chat", methods=["POST"])
def chat():
    # Do your grounding as normal:
    prompt_embeddings = model.encode(request.json["prompt"])
    vector_store_results = vector_store.query(prompt_embeddings, top_k=10)

    grounded_prompt = build_prompt(prompt, vector_store_results)
    chat_response = my_llm.chat(grounded_prompt, temperature=0.1)

    # Then log the prompt with the response, scores and embeddings.
    # Prompts are stored locally in a SQLite database.
    prompt_logger.log(Prompt(
        text=request.json["prompt"],
        response=chat_response,
        scores={'grounding_distances': [r.distance for r in vector_store_results]},
        embeddings=prompt_embeddings,
    ))
    return chat_response

# A new hallucination retrieval endpoint to get the worst prompts from your logs
@app.route("/hallucinations", methods=["GET"])
def hallucinations():
    def retriever(store, threshold):
        def prompt_average_score(prompt):
            return prompt.scores['grounding_distances'][0]
        return list(filter(lambda x: prompt_average_score(x) > threshold, store.select_prompts()))

    # Retrieve a list of the prompts with the greatest distance from your grounding data
    worst_prompts = prompt_logger.retrieve(retriever, 0.5)
    # Remove near duplicates and only keep 10 prompts
    deduped_prompts = prompt_logger.deduplicate(worst_prompts, 10)
    # Clean up the store
    prompt_logger.store.purge()

    return jsonify([{'text': p.text, 'response': p.response} for p in deduped_prompts])
anchorhub
AnchorHub

AnchorHub is a command-line tool that makes it easy and intuitive to utilize GitHub's auto-generated anchor tags in your Markdown documents, allowing you to create rich, user-friendly documentation in your GitHub repos without having to figure out what those auto-generated tags will be.

Features

- Easily use GitHub's automatically generated anchor tags
- Simple, customizable syntax that just works
- Works with single files, a single directory level, or an entire directory tree

Installation

You can install AnchorHub using pip:

$ pip install anchorhub

If you're having trouble with pip, you can also install from source:

$ git clone https://github.com/samjabrahams/anchorhub.git
$ cd anchorhub
$ python setup.py install

To-do List

- Verify cross-platform compatibility (currently only tested on OSX)
- Support for ReStructuredText
- Define API for using custom anchor generation
- More tests!

Known Issues

- Should not change text within in-line code blocks (those marked by ` backticks)

Quick Start Guide

1. Define your tags

Inside your Markdown files, define tags at the end of header lines. By default, the syntax for this is {#my-tag-here}:

# This is a header that I would like to make a tag for {#tag}

You can also use Setext (underlined) style headers {#setext}
------------------------------------------------------------

The default is similar to Pandoc's Markdown header identifiers.

2. Use the tags as you would regular HTML anchors

Elsewhere, you can use the previously defined tags in links to provide a direct path to the header:

[This links back to the header using the AnchorHub tag 'tag'](#tag)
[This one links to the Setext header](#setext)

3. Run AnchorHub on your Markdown files

anchorhub will parse your Markdown files. You've got a few options for running anchorhub: run it on a single file, run it on a single level of a directory, or run it on an entire directory tree.

Single file use:
$ anchorhub mytags.md

Directory use (single level):
$ anchorhub .

Directory use (provided directory level and all subdirectories):
$ anchorhub . -r

This will output your processed files in a new folder in your current directory, 'anchorhub-out/'.

4. Enjoy your (relatively) hassle-free GitHub anchor links

Assuming all of the above Markdown was in a file named 'mytags.md', here is what we'd find inside of 'anchorhub-out/mytags.md':

# This is a header that I would like to make a tag for

You can also use Setext (underlined) style headers
------------------------------------------------------------

...

[This links back to the header using the AnchorHub tag 'tag'](#this-is-a-header-that-i-would-like-to-make-a-tag-for)
[This one links to the Setext header](#you-can-also-use-setext-underlined-style-headers)

License

Copyright 2016, Sam Abrahams. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
anchor-kr
No description available on PyPI.
anchormake
No description available on PyPI.
anchorman
# Welcome to Anchorman

Turn your text into [hypertext](http://en.wikipedia.org/wiki/Hypertext) and enrich the content. Anchorman finds terms in text and replaces them with another representation.

The replacement is rule-based. Each term is checked against the rules and will be applied if valid.

# How many items will be marked at all in the text.
replaces_at_all: 5

# Input term has to be exact match in text.
case_sensitive: true

## Features

- replacement rules
- consider text units in the rules (e.g. paragraphs)
- replace only n items of the same item
- specify restricted_areas for linking by tag: a, img
- sort elements by value before applying them
- return applied elements

## Usage

>>> from anchorman import annotate
>>> text = 'The quick brown fox jumps over the lazy dog.'
>>> elements = [{'fox': {'value': '/wiki/fox', 'data-type': 'animal'}}]
>>> print annotate(text, elements)
'The quick brown <a href="/wiki/fox" data-type="animal">fox</a> jumps over the lazy dog.'

## Installation

To install Anchorman, simply:

pip install anchorman

## Credits and contributions

We published this at github and pypi to provide our solution to you. Pleased for feedback and contributions. Thanks [@tarnacious](https://github.com/tarnacious) for inspiration and first steps.

## Todo

- check if position exists in input and save extra processing
- html.parser vs lxml in bs4 - benchmarks and drawbacks

<img src="https://raw.githubusercontent.com/rebeling/anchorman/master/docs/anchorman.png" width="200">

Stay tuned.
anchor-pki
Anchor

Python client for Anchor PKI. See https://anchor.dev/ for details.

Configuration

The following environment variables are available to configure the default AutoCert::Manager.

- HTTPS_PORT - the TCP numerical port to bind SSL to.
- ACME_ALLOW_IDENTIFIERS - a comma-separated list of hostnames for provisioning certs.
- ACME_DIRECTORY_URL - the ACME provider's directory.
- ACME_KID - your External Account Binding (EAB) KID for authenticating with the ACME directory above.
- ACME_HMAC_KEY - your EAB HMAC_KEY for authenticating with the ACME directory above.
- ACME_RENEW_BEFORE_SECONDS - optional. Start a renewal this number of seconds before the cert expires. This defaults to 30 days (2592000 seconds).
- ACME_RENEW_BEFORE_FRACTION - optional. Start the renewal when this fraction of a certificate's valid window is left. This defaults to 0.5, which means a renewal is attempted when the cert is in the last 50% of its lifespan.
- AUTO_CERT_CHECK_EVERY - optional. The number of seconds to wait between checking if the certificate has expired. This defaults to 1 hour (3600 seconds).

If both ACME_RENEW_BEFORE_SECONDS and ACME_RENEW_BEFORE_FRACTION are set, the one that causes the renewal to take place earlier is used.

Example:

- Cert start (not_before) moment is: 2023-05-24 20:53:11 UTC
- Cert expiration (not_after) moment is: 2023-06-21 20:53:10 UTC
- ACME_RENEW_BEFORE_SECONDS is 1209600 (14 days)
- ACME_RENEW_BEFORE_FRACTION is 0.25 - which equates to a before-seconds value of 604799 (~7 days)

The possible moments to start renewing are:

- 14 days before the expiration moment - 2023-06-07 20:53:10 UTC
- when 25% of the valid time is left - 2023-06-14 20:53:11 UTC

Currently the AutoCert::Manager will use whichever is earlier.

Example configuration

HTTPS_PORT=44300
ACME_ALLOW_IDENTIFIERS=my.lcl.host,*.my.lcl.host
ACME_DIRECTORY_URL=https://acme-v02.api.letsencrypt.org/directory
ACME_KID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ACME_HMAC_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Notes

The HTTP User Agent for the anchor-autocert client is anchor-pki autocert python client v{VERSION}.

Development

Development and distribution is facilitated with poetry.

Lint the project - 2 steps:

poetry run black ./
poetry run pylint ./src/anchor_pki

Run tests:

poetry run pytest tests/

Run tests with coverage:

poetry run pytest --cov-report=term-missing --cov=./src/anchor_pki/ tests/

Build:

poetry build

Development assumes a .env file at the root of the python module. Currently the only required items in it are:

ACME_KID=...
ACME_HMAC_KEY=...
VCR_RECORD_MODE=none # set to have new tests record new cassettes

To re-record all cassettes

Make sure the ACME_KID and ACME_HMAC_KEY values in tests/anchor_pki/autocert/test_manager.py are kept in sync with the values in the .env file when re-recording the cassettes, as the values will need to be available during CI to match the cassette data.

Update the .env file with:

VCR_RECORD_MODE=all

Then update the value for vcr_recorded_at in tests/anchor_pki/autocert/test_manager.py to be sometime after the cassettes were recorded but before the certificates expire.

License

The python package is available as open source under the terms of the MIT License.
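As an illustration of the "whichever is earlier" rule, a small sketch in plain Python (not the library's implementation) reproduces the worked example above:

```python
# Sketch of the renewal-moment arithmetic described above; not library code.
from datetime import datetime, timedelta

not_before = datetime(2023, 5, 24, 20, 53, 11)
not_after = datetime(2023, 6, 21, 20, 53, 10)

renew_before_seconds = 1_209_600   # ACME_RENEW_BEFORE_SECONDS (14 days)
renew_before_fraction = 0.25       # ACME_RENEW_BEFORE_FRACTION

lifespan = (not_after - not_before).total_seconds()

by_seconds = not_after - timedelta(seconds=renew_before_seconds)
by_fraction = not_after - timedelta(seconds=lifespan * renew_before_fraction)

renewal_moment = min(by_seconds, by_fraction)  # the earlier moment wins
print(renewal_moment)  # 2023-06-07 20:53:10
```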
anchorpoint
A Python library for anchoring annotations with text substring selectors.

Anchorpoint supplies TextQuoteSelector and TextPositionSelector classes based on the Web Annotation Data Model, which is a W3C Recommendation. Anchorpoint includes helper methods for switching between selector types, and a pydantic schema for serialization. Anchorpoint is used by Legislice for referencing laws such as statutes, and by AuthoritySpoke for referencing judicial opinions.

API Documentation is available on readthedocs.

Anchorpoint relies on python-ranges to perform set operations on spans of text.

Related Packages

In Javascript, try the Text Quote Anchor and Text Position Anchor packages. In Python, try python-ranges, which is the basis for much of the TextPositionSelector class's behavior.
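For a sense of what the two selector types look like in use, here is a hedged sketch. The field names (exact/prefix/suffix and start/end) follow the W3C Web Annotation Data Model the classes are based on; anchorpoint's actual constructors may differ in detail.

```python
# Sketch following the W3C Web Annotation selector model; anchorpoint's
# actual API may differ in detail.
from anchorpoint import TextQuoteSelector, TextPositionSelector

text = "The quick brown fox jumps over the lazy dog."

# Select "fox" by quoting it with disambiguating context...
quote = TextQuoteSelector(exact="fox", prefix="brown ", suffix=" jumps")

# ...or select the same span by character positions.
position = TextPositionSelector(start=16, end=19)
```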
anchorpy
AnchorPy

AnchorPy is the gateway to interacting with Anchor programs in Python. It provides:

- A static client generator
- A dynamic client similar to anchor-ts
- A Pytest plugin
- A CLI with various utilities for Anchor Python development.

Read the Documentation.

Installation (requires Python >=3.9)

pip install anchorpy[cli,pytest]

Or, if you're not using the CLI or Pytest plugin features of AnchorPy, you can just run pip install anchorpy.

Development Setup

If you want to contribute to AnchorPy, follow these steps to get set up:

1. Install poetry
2. Install dev dependencies: poetry install
3. Activate the poetry shell: poetry shell
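To give a flavour of the dynamic client, here is a hedged sketch. It assumes a running local Solana validator with a funded default wallet and an Anchor program whose IDL is published on-chain; the program address is a placeholder.

```python
# Sketch of the dynamic client; the program address is a placeholder and
# Provider.local() assumes a validator on http://localhost:8899.
import asyncio
from anchorpy import Program, Provider

async def main():
    provider = Provider.local()
    program = await Program.at("<program address>", provider)  # fetches the on-chain IDL
    print(program.idl.name)
    await program.close()  # close the underlying HTTP client

asyncio.run(main())
```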
anchorpy-core
anchorpy-core

Python bindings for Anchor Rust code.
anchorpy-fork
AnchorPy

AnchorPy is the gateway to interacting with Anchor programs in Python. It provides:

- A static client generator
- A dynamic client similar to anchor-ts
- A Pytest plugin
- A CLI with various utilities for Anchor Python development.

Read the Documentation.

Installation (requires Python >=3.9)

pip install anchorpy[cli]

Or, if you're not using the CLI features of AnchorPy, you can just run pip install anchorpy.

Development Setup

If you want to contribute to AnchorPy, follow these steps to get set up:

1. Install poetry
2. Install dev dependencies: poetry install
3. Install nox-poetry (note: do not use Poetry to install this, see here)
4. Activate the poetry shell: poetry shell
anchors
anchors

Python package for calculating scores from anchor or modifier screens.

- Free software: MIT license
- Documentation: https://anchors.readthedocs.io.

Tutorial

To install:

$ pip install anchors

Basic Usage

import pandas as pd
from anchors import get_guide_residuals, get_gene_residuals

lfc_df = pd.read_csv('https://raw.githubusercontent.com/PeterDeWeirdt/anchor_screen_parp_lfcs/master/parp_example_lfcs.csv')
reference_condition_df = pd.read_csv('https://raw.githubusercontent.com/PeterDeWeirdt/anchor_screen_parp_lfcs/master/parp_example_mapping.csv')

guide_residuals, model_info, model_fit_plots = get_guide_residuals(lfc_df, reference_condition_df)

guide_mapping_df = pd.read_csv('https://raw.githubusercontent.com/PeterDeWeirdt/anchor_screen_parp_lfcs/master/brunello_guide_map.csv')
gene_residuals = get_gene_residuals(guide_residuals, guide_mapping_df)

Features

TODO

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.1.0 (2020-09-21)
- First release on PyPI.

0.3.0 (2020-09-21)
- Calculate guide and gene residuals.
anchor-topic
Copyright (c) 2018 Michelle Yuan

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# anchor-topic

Build a topic model from the anchoring algorithm. This code supports finding anchors for one corpus (Arora et al., 2013) or two corpora (Yuan et al., 2018). It also supports monoword anchoring (Arora et al., 2013) and multiword anchoring (Lund et al., 2017).
anchor-txt
anchor_txt: attributes in markdown

anchor_txt adds the ability to embed attributes in markdown files so that external tools can more easily link them to each other and to code, as well as perform other operations.

Use anchor_txt.Section.from_md_path to load a markdown file.

# Markdown Syntax

The syntax for anchor_txt attributes is simple.

Headers of the form `# header {#anchor}` will have the anchor tag extracted and available in Header.anchor.

A header creates a Section, which can have subsections. Sections have attributes, which are embedded yaml either inline or in fenced code blocks, shown below.

An inline attribute looks like one of these: `@{foo}` or `@{bar:2}`.

Fenced code block attributes look like below. They must include the identifier yaml (or json) and end with a @:

```yaml @
foo: null
bar: 2
```

Attribute blocks within a Section are combined through the same process as dict.update, except overlapping keys throw an error.

# Developer

Run `make init` to create the necessary virtualenv. Run `make test` for basic tests or `make check` for lints and formatting.

# License

The source code is Licensed under either of

- Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)

at your option.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
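A minimal usage sketch built on the entry point named above; Section.from_md_path is documented here, while the attribute names inspected below (.attributes, .sections, .header) are assumptions for illustration.

```python
# Section.from_md_path is the documented entry point; the .attributes,
# .sections and .header names below are assumed for illustration.
from anchor_txt import Section

section = Section.from_md_path("README.md")
print(section.attributes)      # attributes collected from inline/fenced blocks
for sub in section.sections:   # subsections created by headers
    print(sub.header)
```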
anchovy
Anchovy

Anchovy is a minimal, unopinionated file-processing framework equipped with a complete static website generation toolkit.

- Minimal: Anchovy's core is around a thousand lines of code and has no mandatory dependencies. Plus, Anchovy can be used for real projects with just a few pip-installable extras, even if you want to preprocess CSS.
- Unopinionated: Anchovy offers a set of components which can be easily configured to your site's exact requirements, without tediously ripping out or overriding entrenched behaviors. Anchovy does not assume you are building a blog or that you wish to design your templates in a specific way. You can even build things that aren't websites! Plus, Anchovy operates on files, so it's simple to integrate tools like imagemagick, dart-sass, or less.js if you need them.
- Complete: Anchovy comes with a dependency auditing system, allowing you to grab any component you want without installing anything but Anchovy and find out what you will need to run your build. Choose from a wealth of Steps, Anchovy's modular file processors, for everything from rendering Jinja templates and minifying CSS to unpacking archives and thumbnailing images. Plus, add a few extra parameters or lines of configuration to get automatic intelligent minimum builds based on input checksums, and get a reproducible run artifact to boot - even if you want to fetch HTTP resources or write your own Steps. Iterate quickly by launching a lightweight development-grade web server once the build is complete.

Installation

Anchovy has no essential prerequisites and can be installed with pip install anchovy to get just the framework and a few built-in components, but for typical usage pip install anchovy[base] is recommended. This will pull in support for Jinja2 templating, markdown, minification, and Anchovy's CSS preprocessor. A full list of available extras may be found in the pyproject.toml file.

Alternatively, Anchovy may be installed directly from source with pip install git+https://github.com/pydsigner/anchovy or the corresponding pip install git+https://github.com/pydsigner/anchovy#egg=anchovy[base].

Command Line Usage

Anchovy operates on config files written in Python, or even modules directly.

python -m anchovy -h
anchovy -m mypackage.anchovyconf -o ../release/
python -m anchovy mysite/anchovy_site.py -- -h

Show Me

Run anchovy examples/code_index.py -s -p 8080, then open a browser to localhost:8080 (or click the link in the console). This example offers the most extensive demonstration of Anchovy's functionality as of version 1.0.

What's the Baseline?

Here's a minimal example performing about what the staticjinja markdown example offers:

from pathlib import Path

from anchovy import (
    DirectCopyStep,
    InputBuildSettings,
    JinjaMarkdownStep,
    OutputDirPathCalc,
    REMatcher,
    Rule,
)

# Optional, and can be overridden with CLI arguments.
SETTINGS = InputBuildSettings(
    input_dir=Path('site'),
    working_dir=Path('working'),
    output_dir=Path('build'),
    custody_cache=Path('build-cache.json'),
)
RULES = [
    # Ignore dotfiles found in either the input_dir or the working dir.
    Rule(
        (
            REMatcher(r'(.*/)*\..*', parent_dir='input_dir')
            | REMatcher(r'(.*/)*\..*', parent_dir='working_dir')
        ),
        None
    ),
    # Render markdown files, then stop processing them.
    Rule(
        REMatcher(r'.*\.md'),
        [OutputDirPathCalc('.html'), None],
        JinjaMarkdownStep()
    ),
    # Copy everything else in static/ directories through.
    Rule(
        REMatcher(r'(.*/)*static/.*', parent_dir='input_dir'),
        OutputDirPathCalc(),
        DirectCopyStep()
    ),
]

This example is very simple, but it's legitimately enough to start with for a small website, and offers an advantage over other minimal frameworks by putting additional batteries within an arm's reach. If we stored the configuration in config.py and added a raw site like this:

site/
    static/
        styles.css
        toolbar.js
    base.jinja.html
    index.md
    about.md
    contact.md

python -m anchovy config.py would produce output like this:

output/
    static/
        styles.css
        toolbar.js
    index.html
    about.html
    contact.html

This example can be found in runnable form as examples/basic_site.py in the source distribution. Available command line arguments can be seen by passing -h: python -m anchovy examples/basic_site.py -- -h. The -- is required because anchovy itself also accepts the flag.

Programmatic Usage

Anchovy is very usable from the command line, but projects desiring to customize behavior, for example by running tasks before or after pipeline execution, may utilize anchovy.cli.run_from_rules():

import time
from pathlib import Path

from anchovy.cli import run_from_rules
from anchovy.core import Context

from my_site.config import SETTINGS, RULES


class MyContext(Context):
    def find_inputs(self, path: Path):
        # Only process files modified in the last hour.
        hour_ago = time.time() - 3600
        for candidate in super().find_inputs(path):
            if candidate.stat().st_mtime > hour_ago:
                yield candidate


def main():
    print('Pretending to run pre-pipeline tasks...')
    run_from_rules(SETTINGS, RULES, context_cls=MyContext)
    print('Pretending to run post-pipeline tasks...')


if __name__ == '__main__':
    main()
anchovy-css
Anchovy CSS

anchovy_css is a pure-Python CSS pre-processor. The key feature it currently offers is arbitrary selector and media query nesting. Future releases will add more feature parity with established competitors, adding inclusions, feature flags, and custom properties/functions.

Installation

anchovy_css includes wheels which can be installed using pip: pip install anchovy_css.

Alternatively, anchovy_css may be installed directly from source: pip install git+https://github.com/pydsigner/anchovy_css.
ancIBD
ancIBD

This Python software package screens ancient human DNA for long IBD blocks (Identity by Descent segments) shared between pairs of individuals. Please find the official documentation and installation instructions at the official ancIBD readthedocs page.
ancient
Ancient

Convert between integers and Roman numerals in Python.

Install

Install from PyPI:

$ pip install ancient

or clone the development version from GitHub:

$ git clone https://github.com/janjoswig/Ancient.git
$ cd Ancient
$ pip install .

Usage

Import:

from ancient import roman

Basic conversions

Convert integer values to Roman numerals:

for i in range(10):
    print(roman.roman(i))

N
I
II
III
IV
V
VI
VII
VIII
IX

By default, the conversion follows the standard scheme using a subtractive representation for the values 4, 9, 14, etc. (e.g. IV instead of IIII). An additive representation can be selected via the mapping keyword (see also Custom Mappings):

for i in range(10):
    print(roman.roman(i, mapping="ascii-additive"))

N
I
II
III
IIII
V
VI
VII
VIII
VIIII

Composition of large numbers (>4999) can be improved using an extended mapping:

for i in [5000, 10000, 50000, 100000]:
    print(roman.roman(i, mapping="unicode-extended"))

ↁ
ↂ
ↇ
ↈ

Interpretation of Roman numerals:

for i in ["I", "IV", "IIII", "XX", "XL", "C"]:
    print(roman.interpret_roman(i))

1
4
4
20
40
100

The Roman data type

The package provides the Roman data type to handle Roman numerals:

number = roman.Roman(5)
print(f"{number!r}")
print(f"{number!s}")

Roman(5, format='ascii-std')
V

The type behaves like an integer in arithmetic operations:

print(number + 2)
print(number - roman.Roman(1))
print(number * 2)
print(number / 2)  # Integer division!

VII
IV
X
II

Custom Mappings

A mapping of Roman symbols to integer values used for interconversions has the form:

mapping = {
    "M": 1000,
    "D": 500,
    "C": 100,
    "L": 50,
}

For the conversion of integers to Roman numerals, such a mapping should have a decreasing order in the integer values. To ensure this, mappings can inherit from roman.Symbols. Note that only one symbol is effectively used if the same value is mapped to more than one symbol.

custom_mapping = roman.Symbols()
custom_mapping.update({"ↆ": 50, "Ж": 100, "I": 1, "Ʌ": 5})
print(custom_mapping)

{'Ж': 100, 'ↆ': 50, 'Ʌ': 5, 'I': 1}

A custom mapping can be used in conversions instead of the default mappings:

roman.roman(156, mapping=custom_mapping)

'ЖↆɅI'

A set of mappings is provided as instances of roman.Symbols in roman.symbols:

print(roman.symbols.keys())

dict_keys(['ascii-additive', 'ascii-std', 'ascii-variant', 'unicode-additive', 'unicode-std', 'unicode-extended', 'unicode-extended-claudian'])

Mappings stored in this place can be used by their key in conversions. Instances of type Roman have an attribute format that controls the conversion and should be a valid mapping key.

number = roman.Roman(100)
print(number)

roman.symbols["custom"] = custom_mapping
number.format = "custom"
print(number)

C
Ж

Zero and negative numbers

The package can handle negative numbers:

number = roman.Roman(-10)
print(number)

-X

The symbol used to represent 0 is stored on the used mappings and can be changed:

print(roman.symbols["unicode-std"].nullum)

N
ancientfiles
The project web page is at https://github.com/turulomio/ancientfiles
ancientgram
No description available on PyPI.
ancient-helper-kit
Ancient Helper Kit

Helper functions to be used in a pipeline for virus discovery.
ancient_math
UNKNOWN
ancientMetagenomeDirCheck
AncientMetagenomeDirCheck

A python package to check AncientMetagenomeDir. AncientMetagenomeDirCheck will verify a dataset, in tabular format, against a json schema (and some other checks...).

Install

From PyPI using pip:

pip install ancientMetagenomeDirCheck

The latest development version, directly from GitHub:

pip install --upgrade --force-reinstall git+https://github.com/SPAAM-workshop/AncientMetagenomeDirCheck.git

Documentation

$ ancientMetagenomeDirCheck --help
Usage: ancientMetagenomeDirCheck [OPTIONS] DATASET SCHEMA

  ancientMetagenomeDirCheck: Performs validity check of ancientMetagenomeDir datasets
  Author: Maxime Borry
  Contact: <borry[at]shh.mpg.de>
  Homepage & Documentation: github.com/spaam-workshop/ancientMetagenomeDirCheck

  DATASET: path to tsv file of dataset to check
  SCHEMA: path to JSON schema file

Options:
  --version                       Show the version and exit.
  -v, --validity                  Turn on schema checking.
  -d, --duplicate                 Turn on duplicate line checking.
  -i, --doi                       Turn on DOI duplicate checking.
  -m, --markdown                  Output is in markdown format
  -dc, --duplicated_entries TEXT  Comma separated list of columns to check for duplicated entries
  --help                          Show this message and exit.
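For example, based on the options above, a full check of a dataset (the file names here are placeholders) could look like:

$ ancientMetagenomeDirCheck dataset.tsv schema.json -v -d -i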
ancientsolutions-crypttools
UNKNOWN
ancillamap
Ancilla Map
ancipher
Ancipher

"Alpha Numeric Cipher"

🪛 Installation

It is a python (precisely v3) package, uploaded on PyPI.

pip install ancipher

📑 Usage

First, import it:

from ancipher import anc

Now, use anc() (datatype: string):

anc("As simple as that!")

Output:

45 51mpl3 45 7h47!

🖱️ Requirements

Obviously Python 3

A Divinemonk creation!
ancli
Builds an argument parser from a function annotation. A simple utility inspired by Fire and docopt. An ad-hoc solution for someone who often writes scripts with a single entry point.

How?

The process of building a CLI with ancli is very simple:

1. Write a plain Python function with annotated parameters.
2. Wrap it with make_cli.
3. Run your script.

Examples

1. Function with annotated parameters

The function run has explicitly annotated parameters, and its signature is used to instantiate an argparse.ArgumentParser instance that accepts parameters with specific types and default (if any) values. If a default value is not provided, then the parameter is considered to be required.

from ancli import make_cli

def run(path: str, flag: bool = True, iterations: int = 1):
    print(f'run: path={path}, flag={flag}, iterations={iterations}')

if __name__ == '__main__':
    make_cli(run)

Now this snippet can be used as follows:

$ python script.py --path file.txt --flag 0
run: path=file.txt, flag=False, iterations=1

2. Function without annotations

Functions without type annotations try to infer the parameter types based on their default values.

from ancli import make_cli

def run(a, b=2, c=3.0):
    for param in (a, b, c):
        print(type(param))

if __name__ == '__main__':
    make_cli(run)

Parameters without default values are treated as strings:

$ python script.py --a 1 --b 2 --c 3.0
<type 'str'>
<type 'int'>
<type 'float'>

3. Running ancli as a module

Running the package as a module allows you to dynamically build a CLI from some function. You just need to specify a path to the module, and the function which should be treated as an entry point.

$ python -m ancli examples.functions:compute --a 2 --b 6
42
anclib
anclibUsing classes and methods in anclib.py it is possible to parse and analyze text files containing results related to ancestral reconstruction of DNA or protein sequences. (Rudimentary - beginning of project)AvailabilityThe anclib.py source code is available on GitHub:https://github.com/agormp/ancliband can be installed from PyPI:https://pypi.org/project/anclib/Installationpython3 -m pip install anclibUpgrading to latest version:python3 -m pip install --upgrade anclibDependenciesPythonThe anclib library depends on these other python modules, which are automatically included when using pip to install:phylotreelib librarysequencelib libraryNumPy packagepandas libraryrpy2 packageLevenshtein Python C extension moduleRThe anclib.py library requires R to be installed, along with the following R-packages:tidyverseapetidytreeHighlightsTo be written
ancora
Stellar Anchor implementation using FastAPI
ancpbids
AboutancpBIDS is a lightweight Python library to read/query/validate/write BIDS datasets. It can be used in workflows or analysis pipelines to handle IO specific aspects without bothering much about low level file system operations. Its implementation is based on the BIDS schema and allows it to evolve with the BIDS specification in a generic way. Using a plugin mechanism, contributors can extend its functionality in a controlled and clean manner.!!! ANNOUNCEMENT !!! As of version 0.22.0 the BIDSLayout has moved over toPyBIDSwhere it will be developed and maintained in future. ancpBIDS itself does not support this interface anymore but will act as a core package to PyBIDS and downstream projects needing a lightweight IO library to handle BIDS datasets. This documentation has not yet been updated to reflect this change.Read more onreadthedocs.io
ancv
ancvGetting your resume akaan CV(ANSI-v🤡) straight to your and anyone else's terminals:Be warned though, for this is kinda useless and just for fun:Getting startedCreate your resume according to theJSON Resume Schema(see also theschema specification) either:manually (seetheheyhosamplefor a possible starting point),exporting fromLinkedInusingJoshua Tzucker's LinkedIn exporter(repo)[^1], orexporting from one of the platforms advertised as offeringJSON resume integration.Create apublicgistnamedresume.jsonwith your resume contents.You're now the proud owner of an ancv. Time to try it out.The following examples work out-of-the-box.Replaceheyhowith your GitHub usernameonce you're all set up.curl:curl-Lancv.io/heyhowith-Lbeing shorthand for--location, allowing you to follow the redirect fromhttp://ancv.iothrough tohttps://ancv.io. It's shorter than its also perfectly viable alternative:curlhttps://ancv.io/heyhoLastly, you might want to page the output for easiest reading, top-to-bottom:curl-sLancv.io/heyho|lessIf that garbles the rendered output, tryless -raka--raw-control-chars.wget:wget-O---quietancv.io/heyhowhere-Ois short for--output-document, used here to redirect to stdout.PowerShell 7:(iwrancv.io/heyho).Contentwhereiwris an alias forInvoke-Webrequest, returning an object whoseContentwe access.PowerShell 5:(iwr-UseBasicParsingancv.io/heyho).Contentwhere-UseBasicParsingisonlyrequired if you haven't set up Internet Explorer yet (yes, really). If you have, then it works as PowerShell 7 (where that flag is deprecated and the default anyway).ConfigurationAll configuration is optional.The CV is constructed as follows:In summary:you control:thetemplate.Essentially the order of items, indentations, text alignment, position of dates and more. Templates are like layouts/skeletons.thetheme.This controls colors, italics, boldface, underlining, blinking (yes, really) and more. A couple themes exist but you can easily add your own one.thelanguageto use.Pre-set strings like section titles (Education, ...), names of months etc. are governed bytranslations, of which there are a couple available already. All other text is free-form.text content like emojis and newlines to control paragraph breaks.Emojis are user-controlled: if you want them, use them in yourresume.json; in the future, there might betemplateswith emojis baked in, but you'd have to actively opt into using one.date formatting, in a limited fashion through a specialdec31_as_yeartoggle. If that toggle istrue, dates in the formatYYYY-12-31will be displayed asYYYYonly.lastly, there's a toggle for ASCII-only output.It only concerns thetemplateand controls the drawing of boxes and such (e.g.,-versus─: only the latter will produce gapless rules). If you yourself use non-ASCII characters in your texts, use alanguagecontaining non-ASCII characters (Spanish, French, ...) or athemewith non-ASCII characters (e.g., a theme might use the•character to print bullet points), non-ASCII Unicode will still occur. As such, this toggle currently isn't very powerful, but with some care itdoesultimately allow you to be ASCII-only.If you come up with new templates, themes or translations, a PR would be highly appreciated.youdo notcontrol:anything about a viewer's terminal!Any recent terminal will support a baseline of features (e.g., colors), but large parts of the functionalities depend on thefontused: proper Unicode support is needed for pretty output (seeascii_only), and ideally emojis if you're into that (although it's easy to pick an emoji-free template). 
Many themes leverage Unicode characters as well.access to your CV: like the gist itself, it will be publicly available on GitHub.How to configureConfiguringancvrequires going beyond the vanilla JSON Resume schema. You will need to add an (entirely optional)$.meta.ancvfield to yourresume.json. Theprovided schemawill be of help here: an editor capable of providing auto-completion based on it, likeVisual Studio Code, will make filling out the additional configuration a breeze.The schema will further inform you of the default values (used for unspecified fields). Since everything is optional, avalid JSON resume(without anancvsection) is valid forancvuse as well.InstallationAs a libraryInstall the package as usual:pipinstallancvThis also allows you to import whatever you could want or need from the package, if anything. Note that it's pretty heavy on the dependencies.As a containerSee also the availablepackages aka images:dockerpullghcr.io/alexpovel/ancvVersioned tags (so you can pin a major) are available.Local usageOnce installed, you could for example check whether yourresume.jsonis valid at all (validate) or get a glimpse at the final product (render):# pip route:$ancvrenderresume.json# container route:$dockerrun-v$(pwd)/resume.json:/app/resume.jsonghcr.io/alexpovel/ancvrenderSelf-hostingSelf-hosting is a first-class citizen here.Context: Cloud HostingThehttps://ancv.iosite is hosted onGoogle Cloud Run(serverless) and deployed thereautomatically, such that the latest release you see here is also the code executing in that cloud environment. That's convenient to get started: simply create aresume.jsongist and you're good to go within minutes. It can also be used for debugging and playing around; it's a playground of sorts.You're invited to use this service for as much and as long as you'd like. However, obviously, as an individual I cannot guarantee its availability in perpetuity. You might also feel uncomfortable uploading your CV onto GitHub, since ithasto be public for this whole exercise to work. Lastly, you might also be suspicious of me inserting funny business into your CV before serving it out. If this is you, self-hosting is for you.SetupFor simplicity, using Docker Compose (with Docker's recentCompose CLI plugin):Clone this repository onto your server (or fork it, make your edits and clone that)cd self-hostingEditCaddy's config file(more info) to contain your own domain namePlace yourresume.jsoninto the directoryRundocker compose upCaddy (chosen here for simplicity) will handle HTTPS automatically for you, but will of course require domain names to be set up correctly to answer ACME challenges. Handling DNS is up to you; for dynamic DNS, I can recommendqmcgaw/ddns-updater.If you self-host in the cloud, the server infrastructure might be taken care of for you by your provider already (as is the case for Google Cloud Run). In these cases, a dedicated proxy is unnecessary and a singleDockerfilemight suffice (adjusted to your needs). Trueserverlessis also a possibility and an excellent fit here. For example, one could useDigital Ocean'sFunctions. If you go that route and succeed, please let me know! (I had given up with how depressingly hard dependency management was, as opposed to tried-and-tested container images.)[^1]: The exporter has a couple caveats. You will probably not be able to paste its result into a gist and have it work out of the box. It is recommended to paste the export into an editor capable of helping you find errors against the contained$schema, like VS Code. 
Alternatively, a localancv render your-file.jsonwill printpydanticvalidation errors, which might be helpful in debugging. For example, the exporter might leave$.basics.urlan empty string, which isn't a valid URI and therefore fails the schema and, by extension,ancv. Similarly,endDatekeys might get empty string values.Remove these entriesentirely to stay conformant to the JSON Resume Schema (to whichancvstays conformant).
ancypatch
No description available on PyPI.
ancypwn
No description available on PyPI.
ancypwn-backend-unix
No description available on PyPI.
ancypwn-backend-windows-remote
No description available on PyPI.
ancypwn-backend-wsl2
No description available on PyPI.
ancypwn-terminal-alacritty
No description available on PyPI.
ancypwn-terminal-iterm2
No description available on PyPI.
ancypwn-terminal-termite
No description available on PyPI.
and
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us at [email protected].
anda
anda

pip install anda

This is a Python package for collecting, manipulating and visualizing various ancient Mediterranean data. It focuses on their temporal, textual and spatial aspects. It is structured into several gradually evolving submodules, namely gr, imda, concs, and textnet.

anda.gr

from anda import gr

This module is dedicated to preprocessing of ancient Greek textual data. It contains functions for lemmatization, POS tagging and translation. It relies heavily on the Morpheus dictionary.

Lemmatization

A minimal usage is to lemmatize an individual word. You can either ask for only the first lemma (return_first_lemma()) or for all possibilities (return_all_unique_lemmata()). In most cases, the outcome is the same:

gr.return_first_lemma("ἐπιστήμην")
> 'ἐπιστήμη'

gr.return_all_unique_lemmata("ἐπιστήμην")
> 'ἐπιστήμη'

Above these are the functions lemmatize_string() and gr.get_lemmatized_sentences(). Both work with a string of any length. The first returns a list of lemmata. The second returns a list of lemmatized sentences.

string = "Πρότασις μὲν οὖν ἐστὶ λόγος καταφατικὸς ἢ ἀποφατικὸς τινὸς κατά τινος. Οὗτος δὲ ἢ καθόλου ἢ ἐν μέρει ἢ ἀδιόριστος. Λέγω δὲ καθόλου μὲν τὸ παντὶ ἢ μηδενὶ ὑπάρχειν, ἐν μέρει δὲ τὸ τινὶ ἢ μὴ τινὶ ἢ μὴ παντὶ ὑπάρχειν, ἀδιόριστον δὲ τὸ ὑπάρχειν ἢ μὴ ὑπάρχειν ἄνευ τοῦ καθόλου, ἢ κατὰ μέρος, οἷον τὸ τῶν ἐναντίων εἶναι τὴν αὐτὴν ἐπιστήμην ἢ τὸ τὴν ἡδονὴν μὴ εἶναι ἀγαθόν."

gr.lemmatize_string(string)
> ['πρότασις', 'λόγος', 'καταφατικός', 'ἀποφατικός', 'καθόλου', 'μέρος', 'ἀδιόριστος', 'λέγω', 'καθόλου', 'πᾶς', 'μηδείς', 'ὑπάρχω', 'μέρος', 'πᾶς', 'ὑπάρχω', 'ἀδιόριστον', 'ὑπάρχω', 'ὑπάρχω', 'ἄνευ', 'καθόλου', 'μέρος', 'οἷος', 'ἐναντίος', 'αὐτην', 'ἐπιστήμη', 'ἡδονην', 'ἀγαθός']

gr.get_lemmatized_sentences(string)
> [['πρότασις', 'λόγος', 'καταφατικός', 'ἀποφατικός'], ['καθόλου', 'μέρος', 'ἀδιόριστος'], ['λέγω', 'καθόλου', 'πᾶς', 'μηδείς', 'ὑπάρχω', 'μέρος', 'πᾶς', 'ὑπάρχω', 'ἀδιόριστον', 'ὑπάρχω', 'ὑπάρχω', 'ἄνευ', 'καθόλου', 'μέρος', 'οἷος', 'ἐναντίος', 'αὐτην', 'ἐπιστήμη', 'ἡδονην', 'ἀγαθός']]

All lemmatization functions can be further parametrized by several arguments:

all_lemmata=False
filter_by_postag=["n", "a", "v"]: returns only nouns ("n"), adjectives ("a") and verbs ("v")
involve_unknown=True; if False, it returns only words found in the dictionary

Thus, you can run:

lemmatized_sentences = gr.get_lemmatized_sentences(string, all_lemmata=False, filter_by_postag=["n", "a", "v"], involve_unknown=False)
print(lemmatized_sentences)
> [['λόγος'], ['μέρος'], ['πᾶς', 'μηδείς', 'ὑπάρχω', 'μέρος', 'πᾶς', 'ὑπάρχω', 'ὑπάρχω', 'ὑπάρχω', 'ἄνω/ἀνίημι', 'μέρος', 'οἷος', 'ἐναντίος', 'ἐπιστήμη', 'ἀγαθός']]

get_lemmatized_sentences(string, all_lemmata=False, filter_by_postag=None, involve_unknown=False) receives a raw Greek text of any kind and extent as its input. Such input is processed by a series of subsequent functions embedded within each other, which might also be used independently:

(1) get_sentences() splits the string into sentences by common sentence separators.
(2) lemmatize_string(sentence) first calls tokenize_string(), which does a basic cleaning and stopword filtering of the sentence, and returns a list of words. Subsequently, each word from the tokenized sentence is sent either to return_first_lemma() or to return_all_unique_lemmata(), on the basis of the value of the parameter all_lemmata= (set to False by default).
(3) return_all_unique_lemmata() goes through the morpheus_dict values and returns all unique lemmata.
(4) The parameter filter_by_postag= (default None) enables sub-selecting chosen word types from the tokens, on the basis of the first character in the tag "p". Thus, to choose only nouns, adjectives, and verbs, you can set filter_by_postag=["n", "a", "v"].

PREFERENCE: if verb, noun, and adjective variants are available, only the noun and adjective forms are returned; if both a noun and an adjective are available, only the noun is returned.

Translation

Next to lemmatization, there is also a series of functions for translations, like return_all_unique_translations(word, filter_by_postag=None, involve_unknown=False), useful for any wordform, and lemma_translator(word), for when we already have a lemma.

gr.return_all_unique_translations("ὑπάρχειν", filter_by_postag=None, involve_unknown=False)
> 'to begin, make a beginning'

gr.lemma_translator("λόγος")
> 'the word'

Morphological analysis

You can also do a morphological analysis of a string:

gr.morphological_analysis(string)[1:4]
> [{'i': '564347', 'f': 'μέν', 'b': 'μεν', 'l': 'μέν', 'e': 'μεν', 'p': 'g--------', 'd': '20753', 's': 'on the one hand, on the other hand', 'a': None}, {'i': '642363', 'f': 'οὖν', 'b': 'ουν', 'l': 'οὖν', 'e': 'ουν', 'p': 'g--------', 'd': '23870', 's': 'really, at all events', 'a': None}, {'i': '264221', 'f': 'ἐστί', 'b': 'εστι', 'l': 'εἰμί', 'e': 'ειμι', 'p': 'v3spia---', 'd': '9722', 's': 'I have', 'a': None}]

imda

This module will serve for importing various ancient Mediterranean resources. Most of them will be imported directly from open third-party online resources. However, some of them have been preprocessed as part of the SDAM project.

The ideal is that it will work like this:

imda.list_datasets()
>>> ['roman_provinces_117', 'EDH', 'roman_cities_hanson', 'orbis_network']

And:

rp = imda.import_dataset("roman_provinces_117", "gdf")
type(rp)
>>> geopandas.geodataframe

concs

This module contains functions for working

textnet

This module contains functions for generating, analyzing and visualizing word co-occurrence networks. It has been designed especially for working with textual data in ancient Greek.

Versions history

0.0.8 - bugs removed
0.0.7 - filter_by_postag with preference of nouns and adjectives by default
0.0.6 - Greek dictionaries included within the package
0.0.5 - experimenting with data inclusion
0.0.4 - docs
andak
PythonTemplateSimple Template
andaluh
Andaluh-pyTransliterate español (spanish) spelling to andaluz proposalsTable of ContentsDescriptionUsageInstallationRoadmapSupportContributingDescriptionTheAndalusian varieties of [Spanish](Spanish:andaluz; Andalusian) are spoken in Andalusia, Ceuta, Melilla, and Gibraltar. They include perhaps the most distinct of the southern variants of peninsular Spanish, differing in many respects from northern varieties, and also from Standard Spanish. Further info:https://en.wikipedia.org/wiki/Andalusian_Spanish.This package introduces transliteration functions to convertespañol(spanish) spelling to andaluz. As there's no official or standard andaluz spelling, andaluh-py is adopting theEPA proposal (Er Prinzipito Andaluh). Further info:https://andaluhepa.wordpress.com. Other andaluz spelling proposals are planned to be added as well.UsageUse from the command line with theandaluhtool:$ andaluh -h usage: andaluh [-h] [-e {s,z,h}] [-j] [-i FILE] [text] Transliterate español (spanish) spelling to Andalûh EPA. positional arguments: text Text to transliterate. Enclosed in quotes for multiple words. optional arguments: -h, --help show this help message and exit -e {s,z,h} Enforce seseo, zezeo or heheo instead of cedilla (standard). -j Keep /x/ sounds as J instead of /h/ -i FILE Transliterates the plain text input file to stdout $ andaluh "El veloz murciélago hindú comía feliz cardillo y kiwi. La cigüeña tocaba el saxofón detrás del palenque de paja." Er belôh murçiélago indú comía felîh cardiyo y kiwi. La çigueña tocaba er çâççofón detrâh der palenque de paha. $ andaluh -e z -j "El veloz murciélago hindú comía feliz cardillo y kiwi. La cigüeña tocaba el saxofón detrás del palenque de paja." Er belôh murziélago indú comía felîh cardiyo y kiwi. La zigueña tocaba er zâzzofón detrâh der palenque de paja.Import the python library for your own projects:importandaluh# Transliterate with andaluh EPA proposalprint(andaluh.epa("El veloz murciélago hindú comía feliz cardillo y kiwi. La cigüeña tocaba el saxofón detrás del palenque de paja."))>>>Erbelôhmurçiélagoindúcomíafelîhcardiyoykiwi.Laçigueñatocabaerçâççofóndetrâhderpalenquedepaha.# Enforce seseo instead of cedilla and 'j' for /x/ sounds. Show transliteration debug info.print(andaluh.epa("El veloz murciélago hindú comía feliz cardillo y kiwi. 
La cigüeña tocaba el saxofón detrás palenque de paja.",vaf='s',vvf='j',debug=True))h_rules=>Elvelozmurciélagoindúcomíafelizcardilloykiwi.Lacigüeñatocabaelsaxofóndetráspalenquedepaja.x_rules=>Elvelozmurciélagoindúcomíafelizcardilloykiwi.Lacigüeñatocabaelsâssofóndetráspalenquedepaja.ch_rules=>Elvelozmurciélagoindúcomíafelizcardilloykiwi.Lacigüeñatocabaelsâssofóndetráspalenquedepaja.gj_rules=>Elvelozmurciélagoindúcomíafelizcardilloykiwi.Lacigueñatocabaelsâssofóndetráspalenquedepaja.v_rules=>Elbelozmurciélagoindúcomíafelizcardilloykiwi.Lacigueñatocabaelsâssofóndetráspalenquedepaja.ll_rules=>Elbelozmurciélagoindúcomíafelizcardiyoykiwi.Lacigueñatocabaelsâssofóndetráspalenquedepaja.l_rules=>Elbelozmurciélagoindúcomíafelizcardiyoykiwi.Lacigueñatocabaelsâssofóndetráspalenquedepaja.psico_pseudo_rules=>Elbelozmurciélagoindúcomíafelizcardiyoykiwi.Lacigueñatocabaelsâssofóndetráspalenquedepaja.vaf_rules=>Elbelozmursiélagoindúcomíafelizcardiyoykiwi.Lasigueñatocabaelsâssofóndetráspalenquedepaja.word_ending_rules=>Elbelôhmursiélagoindúcomíafelîhcardiyoykiwi.Lasigueñatocabaelsâssofóndetrâhpalenquedepaja.digraph_rules=>Elbelôhmursiélagoindúcomíafelîhcardiyoykiwi.Lasigueñatocabaelsâssofóndetrâhpalenquedepaja.exception_rules=>Elbelôhmursiélagoindúcomíafelîhcardiyoykiwi.Lasigueñatocabaelsâssofóndetrâhpalenquedepaja.word_interaction_rules=>Erbelôhmursiélagoindúcomíafelîhcardiyoykiwi.Lasigueñatocabaersâssofóndetrâhderpalenquedepaja.Erbelôhmursiélagoindúcomíafelîhcardiyoykiwi.Lasigueñatocabaersâssofóndetrâhderpalenquedepaja.InstallationFrom PyPI repository$ sudo pip install andaluhFrom source code~/andaluh-py$ pip install .Remember use-eoption fordevelop mode.RoadmapAdding more andaluh spelling proposals.Contractions and inter-word interaction rules pending to be implemented.Silent /h/ sounds spelling rules pending to be implemented.Some spelling intervowel /d/ rules are still pending to be implemented.Transliteration rules for some consonant ending words still pending to be implemented.The andaluh EPA group is still deliberating about the 'k' letter.SupportPleaseopen an issuefor support.ContributingPlease contribute usingGithub Flow. Create a branch, add commits, and open a pull request.
andaluh-po
Andaluh-po

Transliterate español (Spanish) gettext po files to andaluz spelling proposals. Generates po and mo files.

Table of Contents: Description, Usage, Roadmap, Support, Contributing

Description

The Andalusian varieties of Spanish (Spanish: andaluz; Andalusian) are spoken in Andalusia, Ceuta, Melilla, and Gibraltar. They include perhaps the most distinct of the southern variants of peninsular Spanish, differing in many respects from northern varieties, and also from Standard Spanish. Further info: https://en.wikipedia.org/wiki/Andalusian_Spanish.

This application provides a basic transliteration tool for Spanish gettext po files, using the andaluh-py package. Further info: https://github.com/andalugeeks/andaluh-py

PLEASE NOTICE: andaluh-po performs an automatic transliteration from Spanish to andaluz, hence all English, acronym or non-Spanish words will be wrongly transliterated. We encourage you to manually review all transliterated po files.

Usage

Use from the command line with the andaluh-po tool:

$ andaluh-po -h
usage: andaluh-po [-h] [-d DEST] po

Transliterate español (spanish) gettext po files to andaluz spelling proposals. Generates po and mo files.

positional arguments:
  po                    the original po file to transliterate

optional arguments:
  -h, --help            show this help message and exit
  -d DEST, --dest DEST  destination path of the transliterated po and mo files, defaults to cwd

$ andaluh-po /tmp/language-pack-gnome-es-base-18.04+20180712/data/es/LC_MESSAGES/nautilus.po -d /tmp
$ head /tmp/nautilus.po -n 80 | tail -n 20
"Nautilus supports all the basic functions of a file manager and more. It can"
" search and manage your files and folders, both locally and on a network, "
"read and write data to and from removable media, run scripts, and launch "
"applications. It has three views: Icon Grid, Icon List, and Tree List. Its "
"functions can be extended with plugins and scripts."
msgstr ""
"Nautilû âmmite toâ lâ funçionê báçicâ de un hêttôh de arxibô y argunâ mâh. "
"Puede bûccâh y hêttionâh çû arxibô y carpetâ, tanto localê como remotâ, leêh"
" y êccribîh datô de y en dîppoçitibô êttraíblê, ehecutâh çecuençiâ de órdenê"
" y abrîh aplicaçionê. Tiene trêh bîttâ: rehiya de iconô, lîtta de iconô y "
"árbô. Çe puede ampliâh çu funçionalidá con çecuençiâ de órdenê y "
"complementô."

# #-#-#-#-# nautilus-es.po (Nautilus) #-#-#-#-#
# components/music/nautilus-music-view.c:198
#: data/org.gnome.Nautilus.desktop.in:3 src/nautilus-mime-actions.c:107
#: src/nautilus-properties-window.c:4602 src/nautilus-window.c:3084
msgid "Files"
msgstr "Arxibô"

Roadmap

Migrating to python3

Support

Please open an issue for support.

Contributing

Please contribute using Github Flow. Create a branch, add commits, and open a pull request.
andbug
UNKNOWN
and-cli
No description available on PyPI.
ande
Efficient Method for Optimizing Anomaly Detection with Clustering Algorithms, Unified in a Package

The goal is to create a common platform for the anomaly detection process with some popular clustering algorithms, giving data analysts an easy way to verify their data against several clustering algorithms.

Table of Contents: About The Project, Built With, Getting Started, Prerequisites, Installation, Usage, Roadmap, Contributing, License, Contact, Acknowledgments

About The Project

The world of data is growing very fast, and it is a new challenge for data analysis to develop new methods to handle this massive amount of data. Large amounts of data contain many hidden factors that need to be identified and used by different algorithms. Clustering is one of the significant parts of data mining; the term comes from the idea of classifying unsupervised data. Nowadays a lot of algorithms are implemented, yet all of them have some limitations, creating an opportunity to innovate new clustering algorithms. The clustering process can be separated into partitioning, hierarchical, density-based, grid-based, and constraint-based models. The aim of the package is to implement various types of clustering algorithms and help determine which one is more accurate at detecting impure data in a large data set. Some popular algorithms for anomaly detection are implemented and converged into a package (AnDe): K-means, DBSCAN, HDBSCAN, Isolation Forest, Local Outlier Factor and Agglomerative Hierarchical Clustering. The package reduces time consumption by removing the implementation hurdles of each algorithm. It also makes the anomaly detection procedure more robust by visualizing results more precisely, along with visualizing the comparison in performance (accuracy, runtime and memory consumption) of the implemented algorithms.

Built With

To use this package, some popular packages need to be configured in the working environment: Numpy, Pandas, Matplotlib, Time, os, Sklearn, Hdbscan, Tracemalloc

Getting Started

This is an example of how you set up this package and use it in your script.

Prerequisites

First, install the dependencies in your working environment (time, os and tracemalloc ship with Python itself):

pip install numpy
pip install pandas
pip install matplotlib
pip install sklearn
pip install hdbscan

Installation

1. Download the package from https://github.com/cbiswascse/AUnifiedPackageForAnomalyDetection
2. Install the package in your environment: pip install cb-cluster
3. Import the package in your script: from EMOADCAUP import Cluster

Usage

1. Call the cluster function:

from ande import ande
ande.ClusterView()

2. Input the location of the CSV file when prompted:
Please, Input the Location of CSV:

3. Select yes (y) if you have categorical data in your dataset:
Do you want to include Catagorical data [y/n]:

4. Select yes (y) if you want to scale your dataset with MinMaxScaler:
Scaling data with MinMaxScaler [y/n]:

5. Choose one of the available clustering algorithms (Kmeans, Dbscan, Isolation Forest, Local Factor Outlier, Hdbscan, Agglomerative):
Choose your Algorithm:

6. Kmeans clustering: number of clusters
How many clusters you want?:

7. Select one of the average methods for performance metrics: weighted, micro, macro, binary

8. Dbscan: epsilon value
epsilon in Decimal:

9. Dbscan: min samples value
Min Samples In Integer:

10. Select one of the average methods for performance metrics: weighted, micro, macro, binary

11. Hdbscan: minimum size of cluster
Minimun size of clusters you want?:

12. Select one of the average methods for performance metrics: weighted, micro, macro, binary

13. Isolation Forest: contamination value
Contamination value between [0,0.5]:

14. Select one of the average methods for performance metrics: weighted, micro, macro, binary

15. Local Outlier Factor: contamination value
Contamination value between [0,0.5]:

16. Select one of the average methods for performance metrics: weighted, micro, macro, binary

17. Agglomerative: number of clusters
How many clusters you want?:

18. Select one of the average methods for performance metrics: weighted, micro, macro, binary

License

Distributed under the MIT License. See LICENSE.txt for more information.

Contact

Chandrima Biswas - [email protected]
Link: https://github.com/cbiswascse/AUnifiedPackageForAnomalyDetection

Acknowledgments

I would like to convey my heartfelt appreciation to my supervisor Prof. Dr. Doina Logofatu for all her feedback, guidance, and evaluations during the work. Without her unique ideas, as well as her unwavering support and encouragement, I would never have been able to complete this project. In spite of her hectic schedule, she listened to my problem and gave the appropriate advice. Furthermore, I express my very profound gratitude to Prof. Dr. Peter Nauth for being the second supervisor of this work.
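As a point of reference for what the package automates, here is a minimal sketch of one of the listed algorithms (Isolation Forest) applied directly with scikit-learn; the dataset path, column selection and contamination value are assumptions for illustration:

# Minimal reference sketch: anomaly detection with one of the listed
# algorithms (Isolation Forest) via scikit-learn; not AnDe's own code.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("data.csv")                 # hypothetical dataset path
X = MinMaxScaler().fit_transform(df.select_dtypes("number"))

model = IsolationForest(contamination=0.05, random_state=0)
labels = model.fit_predict(X)                # -1 marks anomalies, 1 inliers

print(f"{(labels == -1).sum()} anomalies out of {len(labels)} rows")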
andebox
andeboxAnsible Developer's (tool)Box,andebox, is a script to assist Ansible developers by encapsulating some boilerplate tasks. One of the core features is the ability to runansible-teston a local copy of a collection repository without having to worry about setting environment variables nor having theexpecteddirectory structureabovethe collection directory.It also allows some basic stats gathering from thetests/sanity/ignore-X.Y.txtfiles.InstallationInstall it as usual:pip install andeboxRequirementsansible-core for actionstestandtox-testpyyaml for reading galaxy.ymldistutils for comparingLooseVersionobjects for actionignorevagrant for actionvagrantandeboxand any other dependency must be installed inside the VM, but that setup is the user responsibilitySetup-less ansible-testNo need to clone in specific locations or keep track of env variables. Simply clone whichever collection you want and run theansible-testcommand as:# Run sanity test(s) $ andebox test -- sanity --docker default --test validate-modules plugins/modules/mymodule.py # Run sanity test(s) excluding the modules listed in the CLI from the sanity 'ignore-X.Y.txt' files $ andebox test -ei -- sanity --docker default --test validate-modules plugins/modules/mymodule.py # Run unit test(s) $ andebox test -- unit --docker default test/units/plugins/modules/mymodule.py # Run integration test $ andebox test -- integration --docker default mymodule # Run tests in multiple Ansible versions using tox $ andebox tox-test -- sanity --docker default --test validate-modules plugins/modules/mymodule.py $ andebox tox-test -- unit --docker default test/units/plugins/modules/mymodule.py $ andebox tox-test -- integration --docker default mymodule # Run tests in multiple specific Ansible versions using tox $ andebox tox-test -e ac211,ac212 -- unit --docker default test/units/plugins/modules/mymodule.py # ansible-core 2.11 & 2.12 only $ andebox tox-test -e a4,dev -- integration --docker default mymodule # ansible 4 & development branchBy default,andeboxwill discover the full name of the collection by parsing thegalaxy.ymlfile found in the local directory. If the file is not present or if it fails for any reason, use the option--collectionto specify it, as in:$ andebox test --collection community.general -- sanity --docker default -v --test validate-modulesPlease notice thatandeboxuses whicheveransible-testis available inPATHfor executionStats on ignore filesGathering stats from the ignore files can be quite annoying, especially if they are long. 
One can run:

$ andebox ignores -v2.10 -d4 -fc '.*:parameter-list-no-elements'
  24 plugins/modules/ovirt validate-modules:parameter-list-no-elements
   8 plugins/modules/centurylink validate-modules:parameter-list-no-elements
   6 plugins/modules/redfish validate-modules:parameter-list-no-elements
   5 plugins/modules/oneandone validate-modules:parameter-list-no-elements
   4 plugins/modules/rackspace validate-modules:parameter-list-no-elements
   4 plugins/modules/oneview validate-modules:parameter-list-no-elements
   3 plugins/modules/opennebula validate-modules:parameter-list-no-elements
   3 plugins/modules/univention validate-modules:parameter-list-no-elements
   3 plugins/modules/consul validate-modules:parameter-list-no-elements
   3 plugins/modules/sensu validate-modules:parameter-list-no-elements

Runtime config

Quickly peek at the runtime.yml status for a specific module:

$ andebox runtime scaleway_ip_facts
D modules scaleway_ip_facts: deprecation in 3.0.0 (current=2.4.0): Use community.general.scaleway_ip_info instead.

Or using a regular expression:

$ andebox runtime -r 'gc[pe]'
R lookup gcp_storage_file: redirected to community.google.gcp_storage_file
T modules gce: terminated in 2.0.0: Use google.cloud.gcp_compute_instance instead.
R modules gce_eip: redirected to community.google.gce_eip
R modules gce_img: redirected to community.google.gce_img
R modules gce_instance_template: redirected to community.google.gce_instance_template
R modules gce_labels: redirected to community.google.gce_labels
R modules gce_lb: redirected to community.google.gce_lb
R modules gce_mig: redirected to community.google.gce_mig
R modules gce_net: redirected to community.google.gce_net
R modules gce_pd: redirected to community.google.gce_pd
R modules gce_snapshot: redirected to community.google.gce_snapshot
R modules gce_tag: redirected to community.google.gce_tag
T modules gcp_backend_service: terminated in 2.0.0: Use google.cloud.gcp_compute_backend_service instead.
T modules gcp_forwarding_rule: terminated in 2.0.0: Use google.cloud.gcp_compute_forwarding_rule or google.cloud.gcp_compute_global_forwarding_rule instead.
T modules gcp_healthcheck: terminated in 2.0.0: Use google.cloud.gcp_compute_health_check, google.cloud.gcp_compute_http_health_check or google.cloud.gcp_compute_https_health_check instead.
T modules gcp_target_proxy: terminated in 2.0.0: Use google.cloud.gcp_compute_target_http_proxy instead.
T modules gcp_url_map: terminated in 2.0.0: Use google.cloud.gcp_compute_url_map instead.
R modules gcpubsub: redirected to community.google.gcpubsub
R modules gcpubsub_info: redirected to community.google.gcpubsub_info
R modules gcpubsub_facts: redirected to community.google.gcpubsub_info
R doc_fragments _gcp: redirected to community.google._gcp
R module_utils gce: redirected to community.google.gce
R module_utils gcp: redirected to community.google.gcp

where D=Deprecated, T=Tombstone, R=Redirect.

Run Integration Tests in Vagrant VMs

To run the test inside a VM managed by vagrant:

# Run test in VM named "fedora37" using sudo
$ andebox vagrant -n fedora37 -s -- --python 3.9 xfs_quota --color yes

Also beware that andebox does not create nor manage the Vagrantfile. The user is responsible for creating and setting up the VM definition. It must have andebox and ansible-core (or ansible-base or ansible) installed in a virtual environment. By default, the venv is expected to be at /venv, but the location can be specified using the --venv parameter.
and-eggs
AndEggsAndEggs is a Python library that allows you to create an email bot using Gmail's API.Simple DocumentationClient ClassTheClientclass will be the main entry point of your program.Create a ClientFirst, create a new client by calling theClientconstructor:>>>fromandeggsimportClient>>>client=Client('CommandPrefix')Then, you can use theClientobject to interact with Gmail's API.Creating a CommandTo create a command, you should create an asynchronous function that will be called when the command is triggered. It should take in aMailContextobject as its only parameter. This object contains all the information about the email that triggered the command. To turn it into a command, use theClient.commanddecorator:>>>@client.command('CommandName')>>>asyncdefcommand(context:MailContext):>>># Do something>>>passSending an EmailTo send an email, you should use theClient.sendmethod:>>>client.send(>>>to='RecipientEmail',>>>subject='Subject',>>>body='Body',>>>is_html=False,>>>)Activating the ClientTo activate the client, you should call theClient.runmethod:>>>client.run('youremail','yourpassword')EventsEvents are triggered when certain actions happen. Below is a list of the events that can be triggered and their parameters:on_ready: Triggered when the client is started.on_message: Triggered when a new email is received. The parameter is aMailContextobject.on_send: Triggered when an email is sent. The parameter is aMailContextobject.on_command: Triggered when a command is triggered. The parameter is aMailContextobject.on_stop: Triggered when the client is stopped.TasksA task is a function that runs in the background. You can use theClient.taskdecorator to create a task:>>>@client.task(interval=60)>>>asyncdeftask():>>># Do something>>>passMailContextOn all of your commands, you will receive aMailContextobject. This object contains all the information about the email that triggered the command.Attributessender: ASenderobject.subject: The subject of the email.body: The body of the email.Sending an EmailYou can also send an email using theMailContext.sendmethod:>>>context.send(>>>'Body',>>>)If you want to format your text with html, you should use theMailContext.sendfmethod:>>>context.sendf(>>>'<p>{}</p>',>>>'Body',>>>)
andeplane-ai
cognite-aiA set of AI tools for working with CDF in Python.MemoryVectorStoreStore and query vector embeddings created from CDF. This can enable a bunch of use cases where the number of vectors aren't that big.Install the package%pip install cognite-aiThen you can create vectors from text (both multiple lines or a list of strings) like thisfrom cognite.ai import MemoryVectorStore vector_store = MemoryVectorStore(client) vector_store.store_text("Hi, I am a software engineer working for Cognite.") vector_store.store_text("The moon is orbiting the earth, which is orbiting the sun.") vector_store.store_text("Coffee can be a great way to stay awake.") vector_store.query_text("I am tired, what can I do?")Smart data framesChat with your data using LLMs. Built on top ofPandasAIversion 1.5.8. If you have loaded data into a Pandas dataframe, you can runInstall the package%pip install cognite-aiChat with your datafrom cognite.ai import load_pandasai SmartDataframe, SmartDatalake = await load_pandasai() workorders_df = client.raw.rows.retrieve_dataframe("tutorial_apm", "workorders", limit=-1) workitems_df = client.raw.rows.retrieve_dataframe("tutorial_apm", "workitems", limit=-1) workorder2items_df = client.raw.rows.retrieve_dataframe("tutorial_apm", "workorder2items", limit=-1) workorder2assets_df = client.raw.rows.retrieve_dataframe("tutorial_apm", "workorder2assets", limit=-1) assets_df = client.raw.rows.retrieve_dataframe("tutorial_apm", "assets", limit=-1) from cognite.client import CogniteClient client = CogniteClient() smart_lake_df = SmartDatalake([workorders_df, workitems_df, assets_df, workorder2items_df, workorder2assets_df], cognite_client=client) smart_lake_df.chat("Which workorders are the longest, and what work items do they have?") s_workorders_df = SmartDataframe(workorders_df, cognite_client=client) s_workorders_df.chat('Which 5 work orders are the longest?')Configure LLM parametersparams = { "model": "gpt-35-turbo", "temperature": 0.5 } s_workorders_df = SmartDataframe(workorders_df, cognite_client=client, params=params)
andeplane-pyodide-kernel
jupyterlite-pyodide-kernel

A Python kernel for JupyterLite powered by Pyodide.

Requirements

python >=3.8
jupyterlite >=0.1.0b19

Install

To install the Pyodide kernel labextension and the CLI addons for jupyter lite, run:

pip install jupyterlite-pyodide-kernel

Then build your JupyterLite site:

jupyter lite build

⚠️ The documentation for advanced configuration is available from the main JupyterLite documentation site: configuring, command line interface

Uninstall

To remove the extension, run:

pip uninstall jupyterlite-pyodide-kernel

Development Install

Below is a short overview of getting up and running quickly. Please see the contributing guide for full details.

Development Requirements

Recommended: a Python virtual environment provided by a tool of choice, e.g. virtualenv, mamba, conda

Ensure the local development environment has: git, nodejs 18, python >=3.8

Development Quick Start

git clone https://github.com/jupyterlite/pyodide-kernel
cd pyodide-kernel
npm run quickstart

Then, serve the built demo site, documentation, and test reports with Python's built-in http server:

jlpm serve
ander
ander: A CLI tool to identify elements that are contained in both files

Basic Usage

$ cat file1.txt
duplicated
notduplicated
$ cat file2.txt
duplicated
notduplicated
$ ander file1.txt file2.txt
duplicated

Installation

$ pip install ander

Requirements

Python >= 3.6
Some Python libraries (see pyproject.toml)

License

This software is released under the MIT License, see LICENSE.
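The operation itself is a line-set intersection; below is a rough sketch of the idea in plain Python, not ander's actual source:

# Rough sketch of the idea behind ander: print lines present in both files.
import sys

def common_lines(path_a, path_b):
    with open(path_a) as fa, open(path_b) as fb:
        lines_a = {line.rstrip("\n") for line in fa}
        lines_b = {line.rstrip("\n") for line in fb}
    return sorted(lines_a & lines_b)

if __name__ == "__main__":
    for line in common_lines(sys.argv[1], sys.argv[2]):
        print(line)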
andersen
Andersen: a wrapper around Python's logging library

Usage

Installation

pip install andersen

Code usage

Direct use

import andersen

andersen.debug('debug')
andersen.info('info')
andersen.warn('warn')
andersen.error('error')

The system ships with a built-in handler that prints to the console at the info level, so the code above prints the info, warn and error messages.

Custom configuration

You can specify your own configuration file in two ways: via an environment variable, or via a function call.

Environment variable:

import os
os.environ.setdefault('ANDERSEN_CONFIG', 'log_config.toml')
# The environment variable must be set before the import
import andersen

andersen.debug('debug')
andersen.info('info')
andersen.warn('warn')
andersen.error('error')

Function loading:

import andersen

andersen.init('log_config.toml')
andersen.debug('debug')
andersen.info('info')
andersen.warn('warn')
andersen.error('error')

Currently only TOML configuration files are supported. A sample file:

[log.common]
# optional
name = "common"
# required
level = "info"
# optional; handlers without a format configured fall back to this one. If neither is found, an exception is raised
format = "[%(asctime)s] [%(levelname)s] %(message)s"

# console output configuration
[[log.common.handlers]]
type = "std"
# optional
format = "%(asctime)s %(levelname)s %(message)s"
level = "info"

# file output configuration
[[log.common.handlers]]
type = "file"
# optional
format = "%(asctime)s %(levelname)s %(message)s"
level = "info"
# required
log_file = "log/common.log"

# size-rotated file configuration
[[log.common.handlers]]
type = "rotate"
# optional
format = "%(asctime)s %(levelname)s %(message)s"
level = "info"
# required
log_file = "log/common1.log"
max_bytes = 1024

# time-rotated file configuration
[[log.common.handlers]]
type = "time_rotate"
# optional
format = "%(asctime)s %(levelname)s %(message)s"
level = "info"
# required
log_file = "log/common2.log"
when = 'h'

# To configure another logger, change the key after "log"; copy one of the handler blocks above and adjust it as needed
[log.sample]
name = "sample"
level = "debug"
format = "[%(asctime)s] [%(levelname)s] %(message)s"

# console output configuration
[[log.sample.handlers]]
type = "std"
# optional
format = "%(asctime)s %(levelname)s %(message)s"
level = "debug"

Functions

Logging configuration

init(conf=None, default_logger='common')
Initialization function. It can be called repeatedly; only the first call actually takes effect.
- conf: if None, the config file path is read from the environment variable; if that is also missing, the default configuration is used, i.e. a logger that prints to the console
- default_logger: the logger used by default after initialization

generate_sample_config(filename)
Writes the sample configuration shown above into a TOML file.
- filename: target file

get_logger(key='common')
Returns a logger object. The key must be defined in the configuration. The return value is a Python logging.Logger object.
- key: the logger key, i.e. the part after "log" in the config; defaults to common

Log output

debug(*args, **kwargs)
Prints a debug message. Optional kwargs:
- logger: which logger to use for the output; the default logger if unspecified
- sep: default separator; if given, it becomes the default value of the three parameters below
- list_sep: list separator, used to join the args
- para_sep: parameter separator, used to join the strings built from args and kwargs
- dict_sep: dict separator, used to join the kwargs

info(*args, **kwargs), warn(*args, **kwargs) and error(*args, **kwargs) print messages at the corresponding levels and take the same optional kwargs as debug.
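A small usage sketch based on the parameters documented above; the exact output formatting depends on the configured handler, and the joined-argument behavior is inferred from the parameter descriptions rather than taken from the library's source:

# Usage sketch based on the documented andersen parameters; output
# formatting depends on the handler configuration.
import andersen

andersen.init('log_config.toml')      # or rely on the built-in console logger
log = andersen.get_logger('sample')   # a plain logging.Logger for direct use

# args are joined with list_sep, kwargs with dict_sep, and the two parts
# with para_sep, as described above.
andersen.info('user', 'login', ip='10.0.0.1', list_sep=', ', para_sep=' | ')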
andersen-ev
Andersen-EV

Python package to enable control of the Andersen A2 EV charger. The library routes commands to the charger via Andersen's cloud API. So whilst the A2 cannot be controlled directly, this library could be used to replicate, or even replace, the Konnect+ app.

Installation

pip install andersen-ev

Alternatively, install directly from this Github repo:

pip install git+https://github.com/strobejb/andersen-ev

Authentication

Register your mobile phone with the Andersen Konnect+ app as normal. The email address and password used to register with Andersen are also needed by the python client to authenticate with the cloud API. User credentials should be protected and never hard-coded into scripts or source control:

from andersen_ev import AndersenA2

a2 = AndersenA2()
a2.authenticate(email=EMAIL, password=PASSWORD)

Device confirmation is not implemented yet, but will be soon. When this feature arrives, it will be possible to authenticate with an access token, meaning the password does not need to be persisted.

Basic Usage

Now that the python client is authenticated, the Andersen APIs can be accessed. Andersen's API is based on GraphQL and returns JSON structures for all queries. This python library acts as a simple wrapper that performs the necessary GraphQL queries and converts all return values into python dictionaries.

Retrieve device ID

This is the first step needed after authentication. Most functions exposed by this library require the 'device ID' of your Andersen charger. This ID can be found using the get_current_user_devices function:

devices = a2.get_current_user_devices()
deviceId = devices[0]['id']

The example above retrieves the ID of the first device (charger) registered with your account. If you have more than one EV charger, you will need to search by the name or ID of the device, or just use the device_id_from_name helper function:

deviceId = a2.device_id_from_name('Charger Name Here')

Enable scheduled charging

Scheduled charging can be resumed by enabling a specific schedule. The 'slot number' (an integer in the range 0-4) identifies the schedule as it appears in the Konnect app:

a2.enable_schedule(deviceId, 0)

If the charger is locked, you might also want to unlock it at the same time to allow the schedule to take effect.

Disable scheduled charging

The charger will most likely be running off an overnight schedule. The Konnect+ app lets you cancel the schedules, allowing any connected vehicle to start charging:

a2.set_all_schedules_disabled(deviceId)

The command above disables all schedules and puts the charger into the 'ready' (unlocked) state.

Define a new schedule

A new schedule can be created by providing the schedule data (start & end time, and the days it applies to). The slot number (0-4) needs to be specified separately as the 2nd parameter to the function:

schedule = {
    'startHour': 0,
    'startMinute': 30,
    'endHour': 4,
    'endMinute': 30,
    'enabled': True,
    "dayMap": {
        "monday": True,
        "tuesday": True,
        "wednesday": True,
        "thursday": True,
        "friday": True,
        "saturday": True,
        "sunday": True
    }
}

a2.create_schedule(deviceId, 0, schedule)

Lock the charger

Andersen chargers can be 'user locked' so that connected vehicles will not charge, and scheduled charge events will likewise not charge the vehicle.

a2.user_lock(deviceId)

Unlock the charger

The charger can also be unlocked, which will put it in the 'ready' state.
Charging will commence if a vehicle is connected.

a2.user_unlock(deviceId)

Receive device status updates

It is possible to subscribe to device status updates sent by the cloud service, providing near-realtime information about what the charger is doing (what state it is in), and how much power is being used for charging connected vehicles.

import json

for result in a2.subscribe_device_updates(deviceId):
    j = json.dumps(result, indent=2)
    print(j)

The results of these notifications contain slightly more information than just querying (polling) the API directly. Specifically, the result includes the current charging status (power level, etc.) and can be used to replicate what the Konnect+ app displays. There are lots of values available: just run the examples/konnect-status.py sample to see it in action.

Useful fields seem to be:

Field           Description
sysSchEnabled   True when a schedule is enabled
sysSchLocked    True when the device is locked due to a schedule
sysUserLock     True when the device is user-locked (False when unlocked)
chargePower     The current charge level
evseState       device status / locked / charging

Values for evseState are defined below. These appear to be the same values as defined by the OpenEVSE specification.

EVSE State   Description
1            Ready (disconnected)
2            Connected
3            Charging
4            Error
254          Sleeping
255          Disabled (locked by user, or schedule)

There doesn't seem to be a reliable way to determine if a charger is physically connected but not drawing power for another reason. For example, if the charger is disabled because of a timed schedule, or locked by the user, the EVSE state always appears as 255 (disabled) even when a vehicle is connected. Only when the device is unlocked and there is no schedule enabled will evseState reflect the connected/charging status.

I've also never observed the Andersen charger reporting the EVSE state as 254 (sleeping), which could be inferred as 'disabled due to a schedule'. These limitations are potentially a bug which could be rectified by a future firmware update from Andersen.

Example device status

{
  "deviceStatusUpdated": {
    "id": "....",
    "evseState": 255,            # 1=ready, 2=connected, 3=charging, 255=locked
    "online": true,              # Connected to cloud
    "sysRssi": -69,              # Wifi signal strength
    "sysSSID": "SSID HERE",      # SSID
    "sysSchEnabled": True,       # True when a schedule is active
    "sysUserLock": False,        # Is device Locked
    "sysScheduleLock": True,     # True when schedule is active
    "sysSolarPower": null,
    "sysGridPower": null,
    "solarMaxGridChargePercent": 100,
    "solarChargeAlways": true,
    "solarOverride": false,
    "cfgCTConfig": 1,
    "chargeStatus": {
      "start": "2023-01-05T00:30:00Z",
      "chargeEnergyTotal": 9.128312,
      "chargePower": 0,          # current charge level
      "duration": 8472
    },
    "scheduleSlotsArray": [
      # array of schedule slots
    ],
    "sysSchDSORandom": null
  }
}

Examples

There are two examples that demonstrate some of the functionality of the API:

examples/konnect-query.py demonstrates how to lock & unlock, and enable charging schedules.
examples/konnect-status.py is a basic example that demonstrates how to subscribe to device status events.

Both examples need your credentials to run. These can be provided by creating a file called examples/config.cfg, and specifying your email and password as follows:

[KONNECT]
email=...
password=...
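Since the EVSE state arrives as a bare integer, a small lookup based on the table above makes subscription handlers more readable. This is a convenience sketch, not part of the andersen-ev library; the helper name and the field access are assumptions based on the example payload above:

# Convenience sketch: name the integer evseState values from the table
# above; not part of the andersen-ev library itself.
EVSE_STATES = {
    1: "ready (disconnected)",
    2: "connected",
    3: "charging",
    4: "error",
    254: "sleeping",
    255: "disabled (locked by user, or schedule)",
}

def describe_update(update: dict) -> str:
    # Field names follow the example device status payload shown above
    status = update["deviceStatusUpdated"]
    state = EVSE_STATES.get(status["evseState"], "unknown")
    power = (status.get("chargeStatus") or {}).get("chargePower", 0)
    return f"state={state}, chargePower={power}"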
anderson.paginator
UNKNOWN
anderson.picasso
UNKNOWN
anderson-pro-video-ferramentas
Failed to fetch description. HTTP Status Code: 404
anders-sdk-jupyterlite
Cognite Python SDK

This is the Cognite Python SDK for developers and data scientists working with Cognite Data Fusion (CDF). The package is tightly integrated with pandas, and helps you work easily and efficiently with data in Cognite Data Fusion (CDF).

Documentation

SDK Documentation
API Documentation
Cognite Developer Documentation

Prerequisites

In order to start using the Python SDK, you need:

Python 3 (>= 3.5) and pip
An API key. Never include the API key directly in the code or upload the key to github. Instead, set the API key as an environment variable. See the usage example for how to authenticate with the API key.

This is how you set the API key as an environment variable on Mac OS and Linux:

$ export COGNITE_API_KEY=<your API key>

On Windows, you can follow these instructions to set the API key as an environment variable.

Installation

To install this package:

$ pip install cognite-sdk

To install this package without the pandas and NumPy support:

$ pip install cognite-sdk-core

Examples

For a collection of scripts and Jupyter Notebooks that explain how to perform various tasks in Cognite Data Fusion (CDF) using Python, see the GitHub repository here.

Changelog

Wondering about upcoming or previous changes to the SDK? Take a look at the CHANGELOG.

Contributing

Want to contribute? Check out CONTRIBUTING.
anderssontree
AnderssonTree PackageAbstractThis package provides Andersson Tree implementation written in pure Python.Sources of Algorithmshttp://en.wikipedia.org/wiki/Andersson_treehttp://user.it.uu.se/~arnea/abs/simp.htmlhttp://eternallyconfuzzled.com/tuts/datastructures/jsw_tut_andersson.aspxSome concepts are inspired by bintrees package athttp://bitbucket.org/mozman/bintrees, although this implementation does not support dict, heap, set compatibility.ConstructorAnderssonTree() -> new empty tree;AnderssonTree(mapping) -> new tree initialized from a mapping (requires only an items() method)AnderssonTree(seq) -> new tree initialized from seq [(k1, v1), (k2, v2), … (kn, vn)]Methods__contains__(k) -> True if T has a key k, else False__delitem__(y) <==> del T[y]__getitem__(y) <==> T[y]__iter__() <==> iter(T) <==> keys()__len__() <==> len(T)__repr__() <==> repr(T)__reversed__() <==> reversed(T), reversed keys__setitem__(k, v) <==> T[k] = v__copy__() <==> copy()clear() -> None, remove all items from Tcopy() -> a shallow copy of T, tree structure, i.e. key insertion order is preserveddump([order]) -> None, dumps tree according to orderget(k) -> T[k] if k in T, else Noneinsert(k, v) -> None, insert node with key k and value v, replace value if key existsis_empty() -> True if len(T) == 0iter_items([, reverse]) -> generator for (k, v) items of Tkeys([reverse]) -> generator for keys of Tremove(key) -> None, remove item by keyremove_items(keys) -> None, remove items by keysroot() -> root nodetraverse(f, [order]) -> visit all nodes of tree according to order and call f(node) for each nodeupdate(E) -> None. Update T from dict/iterable Evalues([reverse]) -> generator for values of TOrder valuesORDER_INFIX_LEFT_RIGHT - infix order, left child first, then rightORDER_INFIX_RIGHT_LEFT - infix order, right child first, then leftORDER_PREFIX_LEFT_RIGHT - prefix order, left child first, then rightORDER_PREFIX_RIGHT_LEFT - prefix order, right child first, then leftORDER_POSTFIX_LEFT_RIGHT - postfix order, left child first, then rightORDER_POSTFIX_RIGHT_LEFT - postfix order, right child first, then leftInstallationfrom source:python setup.py installor from PyPI:pip install anderssontreeDocumentationthis README.rst, code itself, docstringsanderssontree can be found on github.com at:https://github.com/darko-poljak/andersontreeTested WithPython2.7.5, Python3.3.2
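A quick usage sketch based on the constructor and methods listed above; this is illustrative only, and the sorted iteration order is an assumption based on the tree being a binary search tree:

# Quick usage sketch based on the documented AnderssonTree API.
from anderssontree import AnderssonTree

tree = AnderssonTree([(2, "two"), (1, "one"), (3, "three")])
tree.insert(4, "four")          # replaces the value if the key exists

print(len(tree))                # 4
print(2 in tree)                # True
print(list(tree.keys()))        # in-order keys, assumed sorted: [1, 2, 3, 4]
print(tree.get(3))              # 'three'

tree.remove(1)
for k, v in tree.iter_items():
    print(k, v)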
andes
LTB ANDES

Python software for symbolic power system modeling and numerical analysis, serving as the core simulation engine for the CURENT Largescale Testbed (LTB).

Why ANDES

This software could be of interest to you if you are working on DAE modeling, simulation, and control for power systems. It has features that may be useful if you are applying deep (reinforcement) learning to such systems.

ANDES is by far easier to use for developing differential-algebraic equation (DAE) based models for power system dynamic simulation than other tools such as PSAT, Dome and PST, while maintaining high numerical efficiency.

ANDES comes with a rich set of commercial-grade dynamic models with all details implemented, including limiters, saturation, and zeroing out time constants.

ANDES produces credible simulation results:

For the Northeast Power Coordinating Council (NPCC) 140-bus system (with GENROU, GENCLS, TGOV1 and IEEEX1), ANDES results match perfectly with those from TSAT.

For the Western Electricity Coordinating Council (WECC) 179-bus system (with GENROU, IEEEG1, EXST1, ESST3A, ESDC2A, IEEEST and ST2CUT), ANDES results match closely with those from TSAT and PSS/E. Note that TSAT and PSS/E results are not identical, either.

(Comparison plots: NPCC case study; WECC case study.)

ANDES provides a descriptive modeling framework in a scripting environment. Modeling DAE-based devices is as simple as describing the mathematical equations; numerical code is automatically generated for fast simulation.

(Side-by-side illustration: a controller block diagram written into DAEs, and the corresponding ANDES code.)

In ANDES, what you simulate is what you document. ANDES automatically generates model documentation, and the docs always stay up to date. (Screenshot: the generated documentation for the implemented IEEEG1 model.)

In addition, ANDES features:

a rich library of transfer functions and discontinuous components (including limiters, deadbands, and saturation functions) available for model prototyping and system analysis.

industry-grade second-generation renewable models (solar PV, type 3 and type 4 wind), distributed PV and energy storage models.

routines including the Newton method for power flow calculation, the implicit trapezoidal method for time-domain simulation, and full eigenvalue analysis.

development with performance in mind: while written in Python, ANDES can finish a 20-second transient simulation of a 2000-bus system in a few seconds on a typical desktop computer.

out-of-the-box PSS/E raw and dyr data support for available models. Once a model is developed, inputs from a dyr file can be immediately supported.

ANDES is currently under active development. Use the following resources to get involved:

Start from the documentation for installation and tutorial.
Check out examples in the examples folder.
Read the model verification results in the examples/verification folder.
Try in Jupyter Notebook on Binder.
Ask a question in the GitHub Discussions.
Report bugs or issues by submitting a GitHub issue.
Submit contributions using pull requests.
Read release notes highlighted here.
Check out and cite our paper.

Citing ANDES

If you use ANDES for research or consulting, please cite the following paper in your publication that uses ANDES:

H. Cui, F. Li and K. Tomsovic, "Hybrid Symbolic-Numeric Framework for Power System Modeling and Analysis," in IEEE Transactions on Power Systems, vol. 36, no. 2, pp. 1373-1384, March 2021, doi: 10.1109/TPWRS.2020.3017019.

Who is Using ANDES?

Please let us know if you are using ANDES for research or projects. We kindly request you to cite our paper if you find ANDES useful.

Sponsors and Contributors

This work was supported in part by the Engineering Research Center Program of the National Science Foundation and the Department of Energy under NSF Award Number EEC-1041877 and the CURENT Industry Partnership Program.

This work was supported in part by the Advanced Grid Research and Development Program in the Office of Electricity at the U.S. Department of Energy.

See GitHub contributors for the contributor list.

License

ANDES is licensed under the GPL v3 License.
andesite.py
andesite.pyA Python client library forAndesite. andesite.py tries to be as flexible as possible while still providing the same, consistent API. The library comes with built-in support fordiscord.py, but it can be used with any library of your choice.The goodiesPythonic, fully typed API including all Andesite "entities"Client pools with balancing and even state migration. If one node goes down its players are seamlessly migrated to another one.Custom state handlers. andesite.py doesn't force you to use its state manager, not even for the client pools. It provides you with a solid in-memory one, but you can swap it out however you want.Future-proof design so that if the library becomes outdated it still remains usable.InstallationYou can install the library from PyPI using pip:pipinstallandesite.pyLook & FeelThe following is a small example of how to use andesite.py. For more in-depth examples and information, please refer to the documentation.Please keep in mind that the following example is incomplete. It only serves to demonstrate some andesite.py code.importasyncioimportandesiteclient=andesite.create_client("http://localhost:5000",# REST endpoint"ws://localhost:5000/websocket",# WebSocket endpointNone,# Andesite password549905730099216384,# Bot's user id)asyncdefmain()->None:result=awaitclient.search_tracks("your favourite song")track_info=result.get_selected_track()# notice that we haven't called any sort of connect method. You can# of course manually connect the client, but if you don't, that's no# biggie because andesite.py will do it for you.awaitclient.play(track_info.track)asyncio.run(main())DocumentationYou can find the documentation on the project's website.Click hereto open the documentation.You can also take a look at theexamplesdirectory for a reference.AlternativesIf andesite.py isn't what you're looking for, first of all, please leave some feedback, but secondly here are some alternative Python client libraries which you can use:granitepy
andi
andi makes it easy to implement custom dependency injection mechanisms where dependencies are expressed using type annotations. andi is useful as a building block for frameworks, or as a library which helps to implement dependency injection (thus the name: ANnotation-based Dependency Injection). License is BSD 3-clause.

Installation

```
pip install andi
```

andi requires Python >= 3.8.1.

Goal

See the following classes that represent parts of a car (and the car itself):

```python
class Valves:
    pass

class Engine:
    def __init__(self, valves):
        self.valves = valves

class Wheels:
    pass

class Car:
    def __init__(self, engine, wheels):
        self.engine = engine
        self.wheels = wheels
```

The following would be the usual way of building a Car instance:

```python
valves = Valves()
engine = Engine(valves)
wheels = Wheels()
car = Car(engine, wheels)
```

There are some dependencies between the classes: a car requires an engine and wheels to be built, and the engine in turn requires valves. These are the car's dependencies and sub-dependencies.

The question is: could we have an automatic way of building instances? For example, could we have a build function that, given the Car class or any other class, would return an instance even if the class itself has some other dependencies?

```python
car = build(Car)  # Andi helps creating this generic build function
```

andi inspects the dependency tree and creates a plan that makes it easy to write such a build function. This is how the plan for the Car class would look:

1. Invoke Valves with empty arguments
2. Invoke Engine using the instance created in 1 as the valves argument
3. Invoke Wheels with empty arguments
4. Invoke Car with the instance created in 2 as the engine argument and the instance created in 3 as the wheels argument

Type annotations

But there is a missing piece in the Car example above: how can andi know that the class Valves is required to build the valves argument? A first idea would be to use the argument name as a hint for the class name (as pinject does), but andi opts to rely on the arguments' type annotations instead. The classes for Car should then be rewritten as:

```python
class Valves:
    pass

class Engine:
    def __init__(self, valves: Valves):
        self.valves = valves

class Wheels:
    pass

class Car:
    def __init__(self, engine: Engine, wheels: Wheels):
        self.engine = engine
        self.wheels = wheels
```

Note how there is now an explicit annotation stating that the valves argument is of type Valves (same for engine and wheels). The andi.plan function can now create a plan to build the Car class (ignore the is_injectable parameter for now):

```python
plan = andi.plan(Car, is_injectable={Engine, Wheels, Valves})
```

This is what the plan variable contains:

```python
[(Valves, {}),
 (Engine, {'valves': Valves}),
 (Wheels, {}),
 (Car, {'engine': Engine, 'wheels': Wheels})]
```

Note how this plan corresponds exactly to the four-step plan described in the previous section.

Building from the plan

Creating a generic function to build the instances from a plan generated by andi is then very easy:

```python
def build(plan):
    instances = {}
    for fn_or_cls, kwargs_spec in plan:
        instances[fn_or_cls] = fn_or_cls(**kwargs_spec.kwargs(instances))
    return instances
```

So let's put all the pieces together. The following code creates an instance of Car using andi:

```python
plan = andi.plan(Car, is_injectable={Engine, Wheels, Valves})
instances = build(plan)
car = instances[Car]
```

is_injectable

It is not always desired for andi to manage every single annotation found. Instead, it is usually better to explicitly declare which types can be handled by andi; the is_injectable argument allows customizing this. andi will raise an error in the presence of a dependency that cannot be resolved because it is not injectable. It is usually desirable to declare injectability by creating a base class to inherit from.
For example, we could create a base class Injectable for the car components:

```python
class Injectable(ABC):
    pass

class Valves(Injectable):
    pass

class Engine(Injectable):
    def __init__(self, valves: Valves):
        self.valves = valves

class Wheels(Injectable):
    pass
```

The call to andi.plan would then be:

```python
is_injectable = lambda cls: issubclass(cls, Injectable)
plan = andi.plan(Car, is_injectable=is_injectable)
```

Functions and methods

Dependency injection is also very useful when applied to functions. Imagine that you have a function drive that drives the Car through the Road:

```python
class Road(Injectable):
    ...

def drive(car: Car, road: Road, speed):
    ...  # Drive the car through the road
```

The dependencies have to be resolved before invoking the drive function:

```python
plan = andi.plan(drive, is_injectable=is_injectable)
instances = build(plan.dependencies)
```

Now the drive function can be invoked:

```python
drive(instances[Car], instances[Road], 100)
```

Note that the speed argument was not annotated. The resulting plan just won't include it, because the andi.plan full_final_kwargs parameter is False by default. Otherwise, an exception would have been raised (see the full_final_kwargs argument documentation for more information). An alternative and more generic way to invoke the drive function would be:

```python
drive(speed=100, **plan.final_kwargs(instances))
```

dataclasses and attrs

andi supports classes defined using attrs and also dataclasses. For example, the Car class could have been defined as:

```python
# attrs class example
@attr.s(auto_attribs=True)
class Car:
    engine: Engine
    wheels: Wheels

# dataclass example
@dataclass
class Car(Injectable):
    engine: Engine
    wheels: Wheels
```

Using attrs or dataclass is handy because they avoid some boilerplate.

Externally provided dependencies

Retaining control over object instantiation could be desired in some cases. For example, creating a database connection could require accessing some credentials registry, or getting the connection from a pool, so you might want to control building such instances outside of the regular dependency injection mechanism. andi.plan allows specifying which types will be externally provided. Let's see an example:

```python
class DBConnection(ABC):
    @abstractmethod
    def getConn():
        pass

@dataclass
class UsersDAO:
    conn: DBConnection

    def getUsers():
        return self.conn.query("SELECT * FROM USERS")
```

UsersDAO requires a database connection to run queries. But the connection will be provided externally from a pool, so we then call andi.plan using the externally_provided parameter as well:

```python
plan = andi.plan(UsersDAO, is_injectable=is_injectable,
                 externally_provided={DBConnection})
```

The build method should then be modified slightly to be able to inject externally provided instances:

```python
def build(plan, instances_stock=None):
    instances_stock = instances_stock or {}
    instances = {}
    for fn_or_cls, kwargs_spec in plan:
        if fn_or_cls in instances_stock:
            instances[fn_or_cls] = instances_stock[fn_or_cls]
        else:
            instances[fn_or_cls] = fn_or_cls(**kwargs_spec.kwargs(instances))
    return instances
```

Now we are ready to create UsersDAO instances with andi:

```python
plan = andi.plan(UsersDAO, is_injectable=is_injectable,
                 externally_provided={DBConnection})
dbconnection = DBPool.get_connection()
instances = build(plan.dependencies, {DBConnection: dbconnection})
users_dao = instances[UsersDAO]
users = users_dao.getUsers()
```

Note that being injectable is not required for externally provided dependencies.

Optional

Optional type annotations can be used for dependencies that can be absent.
For example:

```python
@dataclass
class Dashboard:
    conn: Optional[DBConnection]

    def showPage():
        if self.conn:
            self.conn.query("INSERT INTO VISITS ...")
        ...  # renders a HTML page
```

In this example, the Dashboard class generates an HTML page to be served, and also stores the number of visits in a database. The database could be absent in some environments, but you might want the dashboard to work even if it cannot log the visits. When a database connection is possible, the plan call would be:

```python
plan = andi.plan(Dashboard, is_injectable=is_injectable,
                 externally_provided={DBConnection})
```

And the following when the connection is absent:

```python
plan = andi.plan(Dashboard, is_injectable=is_injectable,
                 externally_provided={})
```

It is also required to register the type of None as injectable; otherwise andi.plan will raise an exception saying that "NoneType is not injectable":

```python
Injectable.register(type(None))
```

Union

Union can also be used to express alternatives. For example:

```python
@dataclass
class UsersDAO:
    conn: Union[ProductionDBConnection, DevelopmentDBConnection]
```

DevelopmentDBConnection will be injected in the absence of ProductionDBConnection.

Annotated

On Python 3.9+, Annotated type annotations can be used to attach arbitrary metadata that will be preserved in the plan. Occurrences of the same type annotated with different metadata will not be considered duplicates. For example:

```python
@dataclass
class Dashboard:
    conn_main: Annotated[DBConnection, "main DB"]
    conn_stats: Annotated[DBConnection, "stats DB"]
```

The plan will contain both dependencies.

Custom builders

Sometimes a dependency can't be created directly but needs some additional code to be built, and that code can also have its own dependencies:

```python
class Wheels:
    pass

def wheel_factory(wheel_builder: WheelBuilder) -> Wheels:
    return wheel_builder.get_wheels()
```

Since by default andi can't know how to create a Wheels instance, or that the plan needs to create a WheelBuilder instance first, it needs to be told this with a custom_builder_fn argument:

```python
custom_builders = {
    Wheels: wheel_factory,
}
plan = andi.plan(
    Car,
    is_injectable={Engine, Wheels, Valves},
    custom_builder_fn=custom_builders.get,
)
```

custom_builder_fn should be a function that takes a type and returns a factory for that type. The build code also needs to know how to build Wheels instances. A plan step for an object built with a custom builder uses an instance of the andi.CustomBuilder wrapper, which contains the type to be built in the result_class_or_fn attribute and the callable for building it in the factory attribute:

```python
from andi import CustomBuilder

def build(plan):
    instances = {}
    for fn_or_cls, kwargs_spec in plan:
        if isinstance(fn_or_cls, CustomBuilder):
            instances[fn_or_cls.result_class_or_fn] = fn_or_cls.factory(**kwargs_spec.kwargs(instances))
        else:
            instances[fn_or_cls] = fn_or_cls(**kwargs_spec.kwargs(instances))
    return instances
```

Full final kwargs mode

By default andi.plan won't fail if it is not able to provide some of the direct dependencies for the given input (see the speed argument in one of the examples above). This behaviour is desired when inspecting functions for which it is already known that some arguments won't be injectable but will be provided by other means (like the drive function above). But in other cases it is better to be sure that all dependencies are fulfilled, and fail otherwise. Such is the case for classes, so it is recommended to set full_final_kwargs=True when invoking andi.plan for classes.

Overrides

Let's go back to the Car example. Imagine you want to build a car again, but this time you want to replace the Engine, because this is going to be an electric car! And of course, an electric engine contains a battery and has no valves at all.
This could be the new Engine:

```python
class Battery:
    pass

class ElectricEngine(Engine):
    def __init__(self, battery: Battery):
        self.battery = battery
```

Andi offers the possibility of replacing dependencies when planning, and this is what is required to build the electric car: we need to replace any dependency on Engine with a dependency on ElectricEngine. This is exactly what overrides offer. Let's see how plan should be invoked in this case:

```python
plan = andi.plan(Car, is_injectable=is_injectable,
                 overrides={Engine: ElectricEngine}.get)
```

Note that Andi will unroll the new dependencies properly. That is, Valves and Engine won't be in the resulting plan, but ElectricEngine and Battery will. In summary, overrides offer a way to override the default dependencies anywhere in the tree, replacing them with alternatives. By default overrides are not recursive: overrides aren't applied over the children of an already overridden dependency. There is a flag to turn recursion on if that is what is desired; check the andi.plan documentation for more information.

Why type annotations?

andi uses type annotations to declare dependencies (inputs). This has several advantages, and some limitations as well.

Advantages:

- Built-in language feature.
- You're not lying when specifying a type: these annotations still work as usual type annotations.
- In many projects you'd annotate arguments anyway, so andi support is "for free".

Limitations:

- A callable can't have two arguments of the same type.
- This feature could possibly conflict with regular type annotation usages.

If your callable has two arguments of the same type, consider making them different types. For example, a callable may receive the url and html of a web page:

```python
def parse(html: str, url: str):
    # ...
```

To make it play well with andi, you may define separate types for url and for html:

```python
class HTML(str):
    pass

class URL(str):
    pass

def parse(html: HTML, url: URL):
    # ...
```

This is more boilerplate though.

Why doesn't andi handle creation of objects?

Currently andi just inspects a callable and chooses the best concrete types a framework needs to create and pass to the callable, without prescribing how to create them. This makes andi useful in various contexts, e.g.:

- creation of some objects may require asynchronous functions, and it may depend on the libraries used (asyncio, twisted, etc.)
- in streaming architectures (e.g. based on Kafka) inspection may happen on one machine, while creation of objects may happen on different nodes in a distributed system, and actually running the callable may happen on yet another machine.

It is hard to design an API with enough flexibility for all such use cases. That said, andi may provide more helpers in the future, once patterns emerge, even if they're useful only in certain contexts.

Examples: callback based frameworks

Spider example

Nothing is better than an example to understand how andi can be useful. Let's imagine you want to implement a callback based framework for writing spiders to crawl web pages. The basic idea is that there is a framework in which the user can write spiders. Each spider is a collection of callbacks that can process data from a page, emit extracted data or request new pages.
Then there is an engine that takes care of downloading the web pages and invoking the user-defined callbacks, chaining each request with its corresponding callback. Let's see an example of a spider that downloads recipes from a cooking page:

```python
class MySpider(Spider):
    start_url = "http://a_page_with_a_list_of_recipes"

    def parse(self, response):
        for url in recipes_urls_from_page(response):
            yield Request(url, callback=parse_recipe)

    def parse_recipe(self, response):
        yield extract_recipe(response)
```

It would be handy if the user could define some requirements just by annotating parameters in the callbacks, and andi makes it possible. For example, a particular callback could require access to the cookies:

```python
def parse(self, response: Response, cookies: CookieJar):
    # ... Do something with the response and the cookies
```

In this case, the engine can use andi to inspect the parse method, detect that Response and CookieJar are required, and then build them and invoke the callback. This functionality serves to inject components into the user's callbacks only when they are required. It could also serve to better encapsulate the user code. For example, we could decouple the recipe extraction into its own class:

```python
@dataclass
class RecipeExtractor:
    response: Response

    def to_item():
        return extract_recipe(self.response)
```

The callback could then be defined as:

```python
def parse_recipe(extractor: RecipeExtractor):
    yield extractor.to_item()
```

Note how handy it is that with andi the engine can create an instance of RecipeExtractor, feeding it with the declared Response dependency. In short, using andi in such a framework can provide great flexibility to the user and reduce boilerplate.

Web server example

andi can also be useful for implementing a new web framework. Let's imagine a framework where you can declare your server in a class like the following:

```python
class MyWeb(Server):
    @route("/products")
    def productspage(self, request: Request):
        ...  # return the composed page

    @route("/sales")
    def salespage(self, request: Request):
        ...  # return the composed page
```

This server is composed of two endpoints: one serving a page with a summary of sales, and a second one serving the products list. A connection to the database can be required to serve these pages. This logic could be encapsulated in some classes:

```python
@dataclass
class Products:
    conn: DBConnection

    def get_products():
        return self.conn.query("SELECT ...")

@dataclass
class Sales:
    conn: DBConnection

    def get_sales():
        return self.conn.query("SELECT ...")
```

Now the productspage and salespage methods can just declare that they require these objects:

```python
class MyWeb(Server):
    @route("/products")
    def productspage(self, request: Request, products: Products):
        ...  # return the composed page

    @route("/sales")
    def salespage(self, request: Request, sales: Sales):
        ...  # return the composed page
```

And the framework can then be responsible for fulfilling these dependencies. The flexibility offered would be a great advantage.
As an example, it would be very easy to create a page that requires both sales and products:

```python
@route("/overview")
def productspage(self, request: Request, products: Products, sales: Sales):
    ...  # return the composed overview page
```

Contributing

- Source code: https://github.com/scrapinghub/andi
- Issue tracker: https://github.com/scrapinghub/andi/issues

Use tox to run tests with different Python versions:

```
tox
```

The command above also runs type checks; we use mypy.

Changes

0.6.0 (2023-12-26)
- Drop support for Python 3.5-3.7.
- Add support for dependencies that need to be built using custom callables.

0.5.0 (2023-12-12)
- Add support for dependency metadata via typing.Annotated (requires Python 3.9+).
- Add docs for overrides.
- Add support for Python 3.10-3.12.
- CI improvements.

0.4.1 (2021-02-11)
- Overrides support in andi.plan.

0.4.0 (2020-04-23)
- andi.inspect can handle classes now (their __init__ method is inspected).
- andi.plan and andi.inspect can handle objects which are callable via the __call__ method.

0.3.0 (2020-04-03)
- andi.plan function replacing andi.to_provide.
- Rewrite README explaining the new approach based on the plan method.
- andi.inspect returns non-annotated arguments also.

0.2.0 (2020-02-14)
- Better attrs support (workaround issue with string type annotations).
- Declare Python 3.8 support.
- More tests; ensure dataclasses support.

0.1 (2019-08-28)
- Initial release.
andi-datasets
The anomalous diffusion library

Generate, manage and analyze anomalous diffusion trajectories.

Get started | Documentation | Tutorials | Cite us

This library has been created in the framework of the Anomalous Diffusion (AnDi) Challenge and allows you to create trajectories and datasets from various anomalous diffusion models. You can install the package using:

```
pip install andi-datasets
```

You can then import the package in a Python 3 environment using:

```python
import andi_datasets
```

Library organization

The andi_datasets package allows you to generate, transform, analyse, save and load diffusion trajectories generated with a plethora of diffusion models. The library is structured in two main blocks, containing either theoretical or phenomenological models.

Theoretical models

The library allows you to generate trajectories from various anomalous diffusion models: continuous-time random walk (CTRW), fractional Brownian motion (FBM), Lévy walks (LW), annealed transit time model (ATTM) and scaled Brownian motion (SBM). You can generate trajectories with the desired anomalous exponent in either one, two or three dimensions. Examples of their use and properties can be found in the tutorials.

Phenomenological models

We have also included models specifically developed to simulate realistic physical systems, in which random events alter the diffusion behaviour of the particle. The sources of these changes can be very broad: from the presence of heterogeneities either in space or time, to the possibility of creating dimers and condensates, or the presence of immobile traps in the environment. Examples of their use and properties can be found in the tutorials.

The AnDi Challenges

1st AnDi Challenge (2020): the first AnDi challenge was held between March and November 2020 and focused on the characterization of trajectories arising from different theoretical diffusion models under various experimental conditions. The results of the challenge are published in this article: Muñoz-Gil et al., Nat Commun 12, 6253 (2021). If you want to reproduce the datasets used during the challenge, please check the corresponding tutorial. You can then test your predictions and compare them with those of the challenge participants in an online interactive tool.

2nd AnDi Challenge (2023): the second AnDi challenge is LIVE. Follow the link above to keep updated on all news. If you want to learn more about the data it will use, you can check the corresponding tutorial.

Version control

Details on each release are presented in the release notes.

Contributing

The AnDi challenge is a community effort, hence any contribution to this library is more than welcome. If you think we should include a new model in the library, you can contact us at this mail: [email protected]. You can also open pull requests and issues with any feedback or comments you may have.

Cite us

If you found this package useful and used it in your projects, you can use the following to directly cite the package:

Muñoz-Gil, G., Requena, B., Volpe, G., Garcia-March, M.A. and Manzo, C. AnDiChallenge/ANDI_datasets: Challenge 2020 release (v.1.0). Zenodo (2021). https://doi.org/10.5281/zenodo.4775311

Or you can cite the papers this package was developed for:

- AnDi Challenge 1: G. Muñoz-Gil, G. Volpe ... C. Manzo. Objective comparison of methods to decode anomalous diffusion. Nat Commun 12, 6253 (2021). https://doi.org/10.1038/s41467-021-26320-w
- AnDi Challenge 2: G. Muñoz-Gil, H. Bachimanchi ... C. Manzo. In-principle accepted at Nature Communications (Registered Report Phase 1). arXiv:2311.18100. https://doi.org/10.48550/arXiv.2311.18100
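As a rough sketch of generating a theoretical dataset (the datasets_theory interface and model indices here follow the package tutorials; treat the exact names and arguments as assumptions to verify against the current docs):

```python
from andi_datasets.datasets_theory import datasets_theory

# Model indices are alphabetical: 0 ATTM, 1 CTRW, 2 FBM, 3 LW, 4 SBM.
AD = datasets_theory()
dataset = AD.create_dataset(
    T=200,                 # trajectory length
    N_models=100,          # trajectories per model/exponent combination
    exponents=[0.7, 1.5],  # anomalous exponents to sample
    models=[2, 4],         # FBM and SBM
    dimension=1,
)
# Each row holds the model label and exponent followed by the trajectory.
print(dataset.shape)
```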
andiDB
No description available on PyPI.
andi-pip
nbdev template

Use this template to more easily create your nbdev project. If you are using an older version of this template and want to upgrade to the theme-based version, see this helper script (more explanation of what this means is contained in the link to the script).

Troubleshooting Tips

- Make sure you are using the latest version of nbdev with `pip install -U nbdev`.
- If you are using an older version of this template, see the instructions above on how to upgrade your template.
- It is important for you to spell the name of your user and repo correctly in settings.ini, or the website will not have the correct address from which to source assets like CSS for your site. When in doubt, you can open your browser's developer console and see if there are any errors related to fetching assets for your website due to an incorrect URL generated by misspelled values from settings.ini.
- If you change the name of your repo, you have to make the appropriate changes in settings.ini.
- After you make changes to settings.ini, run `nbdev_build_lib && nbdev_clean_nbs && nbdev_build_docs` to make sure all changes are propagated appropriately, as shown in the snippet below.

Previewing Documents Locally

It is often desirable to preview nbdev generated documentation locally before having it built and rendered by GitHub Pages. This requires you to run Jekyll locally, which requires installing Ruby and Jekyll. Instructions on how to install Jekyll are provided on Jekyll's site. You can run the command `make docs_serve` from the root of your repo to serve the documentation locally after calling `nbdev_build_docs` to generate the docs.

In order to allow you to run Jekyll locally this project contains manifest files, called Gem files, that specify all Ruby dependencies for Jekyll & nbdev. If you do not plan to preview documentation locally, you can choose to delete docs/Gemfile and docs/Gemfile.lock from your nbdev project (for example, after creating a new repo from this template).
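Condensed, the build-and-preview loop described above is just the following two steps (these are the same commands named in the prose, collected for convenience):

```sh
# after editing settings.ini or notebooks:
nbdev_build_lib && nbdev_clean_nbs && nbdev_build_docs

# preview the docs locally (requires Ruby + Jekyll, see above):
make docs_serve
```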
andi-py
No description available on PyPI.
andisdk
ANDi SDK is a package that exposes the powerful ANDi scripting API to the Python environment, providing a powerful Ethernet and automotive testing development kit.

Calling andisdk from Python

ANDi SDK allows the creation and handling of Ethernet-based messages or channels. This can be done with or without an ANDi test project:

```python
# creating a message using a project
from andisdk import load_project
api = load_project(path_to_atp)
eth_msg = api.message_builder.create_ethernet_message()

# creating a message without a project
from andisdk import message_builder
msg = message_builder.create_ethernet_message()
```

Requirements to run ANDi SDK

ANDi SDK is portable; it can be used on both Windows and Linux machines. Before running ANDi SDK, the following requirements need to be met:

- .NET 6 runtime: responsible for running ANDi library files (dlls).
- CodeMeter: responsible for license handling.
- Npcap or WinPcap (Windows): responsible for hardware interfaces.
- libpcap (Linux): responsible for hardware interfaces.

Examples

```python
# this example will create and send a UDP message
from andisdk import message_builder, andi
import sys

adapters = andi.get_adapters()
if len(adapters) <= 0:
    print("No adapters found, stopping script")
    sys.exit()

adapter = adapters[0]
print("using adapter " + adapter.id + " to send udp message")
channel = andi.create_channel("Ethernet")
message = message_builder.create_udp_message(channel, channel)
message.payload = tuple([0x01, 0x02, 0x03, 0x04])
message.udp_header.port_source = 1234
print("sending udp message with payload " + str([x for x in message.payload]))
message.send()
```

Using python-can

```python
# this example will use python-can to send and receive a CAN message
import can

bus = can.interface.Bus(interface='andisdk', channel='1', driver='tecmp',
                        link='can', dev_port=1, dev_id=64)
payload = b'\x02\x08\x08\xFF\x03\x11\x04\x02'
# create can message
msg = can.Message(arbitration_id=0x80000749, data=payload, is_fd=False)
# sending
bus.send(msg)
# receiving with timeout 5 seconds
msg_received = bus.recv(5)
print(msg_received)
```

Copyrights and licensing

This product is the property of Technica Engineering GmbH. © Copyright 2022-2024 Technica Engineering GmbH. This product will not function without a proper license. A proper license can be acquired by contacting Technica Engineering GmbH. For license related inquiries, contact this email, available from Technica Engineering: [email protected].
andle
andle is a command line tool that helps you sync dependency, SDK, and build-tool versions in Gradle-based Android projects.
andluo_pytest
UNKNOWN
andmath
AndMath Package

This is a simple mathematics and statistics package. https://github.com/andregerbaulet/andmath

Currently implemented:
- K-Means Clustering
- Gaussian Process Regression
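The package's own API is not documented here, so as a concept-only illustration, here is the textbook K-Means (Lloyd's algorithm) that such a package implements; this is plain numpy, not andmath's actual interface:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Textbook K-Means (Lloyd's algorithm); illustrative only, not andmath's API."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(n_iter):
        # assign each point to its nearest center
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        # move each center to the mean of its points (keep old center if cluster is empty)
        new = np.array([X[labels == j].mean(0) if (labels == j).any() else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, labels = kmeans(X, k=2)
```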
andomolecules
Package containing useful functions for molecular simulations
andon
Failed to fetch description. HTTP Status Code: 404
andonapp
Python client library for reporting data to Andon.

Install

```
pip install andonapp
```

Usage

In order to programmatically connect to Andon's APIs you must first generate an API token. This is done by logging into your Andon account, navigating to the API settings page, and generating a new token. Make sure to record the token, and keep it secret. Reference Andon's getting started guide and API guide for complete details on these prerequisites.

Setting up the Client

Now that you have a token, create a client as follows:

```python
from andonapp import AndonAppClient

client = AndonAppClient(org_name, api_token)
```

Reporting Data

Here's an example of using the client to report a success:

```python
client.report_data(
    line_name='line 1',
    station_name='station 1',
    pass_result='PASS',
    process_time_seconds=100)
```

And a failure:

```python
client.report_data(
    line_name='line 1',
    station_name='station 1',
    pass_result='FAIL',
    process_time_seconds=100,
    fail_reason='Test Failure',
    fail_notes='notes')
```

Updating a Station Status

Here's an example of flipping a station to Red:

```python
client.update_station_status(
    line_name='line 1',
    station_name='station 1',
    status_color='RED',
    status_reason='Missing parts',
    status_notes='notes')
```

And back to Green:

```python
client.update_station_status(
    line_name='line 1',
    station_name='station 1',
    status_color='GREEN')
```

License

Licensed under the MIT license.
andor
Object-oriented, high-level interface for Andor cameras (SDK2), written in Cython.

Note: this is not a stand-alone driver. Andor's proprietary drivers must be installed. The setup script expects to find libandor.so in /usr/local/lib/ (the driver's default installation directory). Andor provides a low-level, ctypes wrapper on their SDK, called atcmd. If available, it will be imported as Andor._sdk. This documentation should be read along Andor's Software Development Kit manual.

To build the extension:

```
$ python2.7 setup_extension.py build_ext --inplace
```

Warning: this module is not thread-safe. If AcqMode.wait is blocking a background thread, and another function call is made from the main thread, the main thread will block too.

Usage

The camera is controlled via the top-level class Andor:

```python
>>> from andor2 import Andor
>>> cam = Andor()
```

The Andor instance is just a container for other objects that control various aspects of the camera:

- Info: camera information and available features
- Temperature: cooler control
- Shutter: shutter control
- EM: electron-multiplying gain control
- Detector: CCD control, including:
  - VSS: vertical shift speed
  - HSS: horizontal shift speed
  - ADC: analog-to-digital converter
  - OutputAmp: the output amplifier
- PreAmp: pre-amplifier control
- ReadMode: select the CCD read-out mode (full frame, vertical binning, tracks, etc.)
- Acquire <AcqMode>: control the acquisition mode (single shot, video, accumulate, kinetic)

Examples

```python
>>> from andor2 import Andor
>>> cam = Andor()
>>> cam.Temperature.setpoint = -74  # start cooling
>>> cam.Temperature.cooler = True
>>> cam.Detector.OutputAmp(1)         # use conventional CCD amplifier instead of electron multiplying
>>> cam.PreAmp(2)                     # set pre-amplifier gain to 4.9
>>> cam.exposure = 10                 # set exposure time to 10 ms
>>> cam.ReadMode.SingleTrack(590, 5)  # set readout mode: single track, 5 pixels wide, centered at 590 pixels

>>> cam.Acquire.Video()               # set acquisition mode to video (continuous)
>>> data = cam.Acquire.Newest(10)     # collect latest 10 images as numpy array
>>> cam.Acquire.stop()

>>> cam.Acquire.Kinetic(10, 0.1, 5, 0.01)  # set up kinetic sequence of 10 images every 100 ms,
...                                        # each image an accumulation of 5 images taken 10 ms apart
>>> cam.Acquire.start()               # start acquiring
>>> cam.Acquire.wait()                # block until acquisition terminates
>>> data = cam.Acquire.GetAcquiredData()  # collect all data
```
andor3
This is an interface to Andor camera devices which communicate using version 3 of the Andor Software Development Kit (Andor SDK3). It depends on the Andor SDK3 being installed on the system (libatcore.so.3 etc. on Linux, atcore.dll etc. on Windows). These are proprietary, non-free, and closed-source, and thus unable to be distributed with this package. If you feel that those licensing terms restrict the functionality of the very expensive camera you own, and hinder the progress of your project or research, please write to Andor/Oxford Instruments to complain about that.

While this package should be usable without knowing the details of the SDK3, consultation of the SDK3 documentation is recommended for details about camera features etc. Unfortunately, the documentation is also not freely available. Tested using an Andor Zyla (both USB 3.0 and CameraLink interfaces); however, any camera compatible with the SDK3 should work, such as the Neo, Apogee, or i-Star sCMOS.

Support

- Documentation can be found online at https://andor3.readthedocs.io/en/latest/.
- Source code available at https://gitlab.com/ptapping/andor3.
- Bug reports, feature requests and suggestions can be submitted to the issue tracker.

License

All original work is free and open source, licensed under the GNU Public License. See the LICENSE for details.
and-or-not
Python And-Or-Not Concept Helper for beginners

and-or-not is a Python package that teaches you the basic concepts of the most basic logical operators (and, or and not). The package will keep updating as Python itself updates to higher versions.

Contains the following logical operators:
- And
- Or
- Not

Installation

If you haven't already, install pip. Repl.it users: just go over to the "shell" and follow the instructions below. Install the package with pip or pip3:

```
pip install and-or-not
```

Usage

See more examples at My Docs. SO yeah, I don't want to write anything here, cos I'm lazy :)... Just follow that link if interested :)
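Since the docs link is all the README offers, here is a plain-Python refresher on the three operators the package teaches (standard Python, not the package's own API):

```python
a, b = True, False

print(a and b)  # False: 'and' is True only when both sides are True
print(a or b)   # True:  'or' is True when at least one side is True
print(not a)    # False: 'not' flips the value

# 'and'/'or' short-circuit and return an operand, not necessarily a bool:
print(0 or "fallback")  # 'fallback'
print("x" and "y")      # 'y'
```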
andor-sif
Andor SIF

This package has been merged with fujiisoup/sif_parser and will no longer be maintained.

Parse .sif files from an Andor spectrometer. Install with `python -m pip install andor-sif`.

Parser

The `andor_sif.parse(<file>)` method is used to parse a .sif file. This is the main function of this package.

CLI

Installs a command line interface (CLI) named `andor_sif` that can be used to convert .sif files to .csv.

Example

Library:

```python
import pandas as pd
import andor_sif as sif

# parse the 'my_pl.sif' file
(data, info) = sif.parse('my_pl.sif')

# place data into a pandas Series
df = pd.Series(data[:, 1], index=data[:, 0])
```

CLI:

```
# Convert all .sif files in the current directory to .csv.
andor_sif

# Convert all .sif files ending in 'pl' in the current directory into a single .csv.
andor_sif --join *pl.sif
```

Parsing .sif files

This package uses @fujiisoup/sif_reader to parse .sif files.
andotp-decrypt
andOTP-decrypt

A backup decryptor for the andOTP Android app. The tools in this package support the password-based backup files of andOTP in both the current (0.6.3) and old (0.6.2 and before) formats.

Tools:

- andotp_decrypt.py: a decryption tool for password-secured backups of the andOTP two-factor Android app. Output is written to stdout.
- generate_qr_codes.py: a tool to generate new, scannable QR code images for every entry of a dump. Images are saved to the current working directory.
- generate_code.py: a tool to generate a TOTP token for an account in the backup.

Installation

```
pip install andotp-decrypt
```

The tools will be installed as:

- andotp_decrypt
- andotp_gencode
- andotp_qrcode

Development Setup

Poetry install (recommended):

- Install poetry: `pip install poetry` (or use the recommended way from the website)
- Install everything else: `poetry install`
- Launch the virtualenv: `poetry shell`

Pip install:

- `sudo pip3 install -r requirements.txt`
- On Debian/Ubuntu this should work: `sudo apt-get install python3-pycryptodome python3-pyotp python3-pyqrcode python3-pillow python3-docopt`

Usage

Dump JSON to the console:

```
./andotp_decrypt.py /path/to/otp_accounts.json.aes
```

Generate new QR codes:

```
./generate_qr_codes.py /path/to/otp_accounts.json.aes
```

Generate a TOTP code for your google account:

```
./generate_code.py /path/to/otp_accounts.json.aes google
```

Thanks

Thank you for contributing! @alkuzad, @ant9000, @anthonycicc, @erik-h, @romed, @rubenvdham, @wornt, @naums, @marcopaganini
and-otp-uri
and-otp-uri

Parses backups from the andOTP Android app and turns the entries into otpauth URIs understood by other authenticator apps. With the `--generate-pass-entries` option you can directly create pass entries from the backup.

Installation

```
pip install and-otp-uri
```

Usage

To print out URIs for the entries in the backup run:

```
and-otp-uri /path/to/backup.json
```

See `and-otp-uri --help` for all available options.
andoya-core
No description available on PyPI.
andperf
AndPerf

A set of performance tuning tools for Android.

Install

```
pip3 install andperf
```

Usage

```
andperf dev-screen
andperf stat-thread
andperf top-activity
andperf fps
andperf gfx-hist
andperf meminfo-pie
andperf meminfo-trend
```

Full command list

- andperf config: set user-defined configuration
- andperf cpuinfo: show CPU info
- andperf dev-mem: show device memory info
- andperf dev-screen: show device screen info
- andperf dump-config: show the current user-defined configuration
- andperf dump-layout: dump the layout of the Activity currently on top of the stack and open it in the browser
- andperf fps: compute FPS and draw a chart of FPS over time at the end
- andperf gfx-hist: show a histogram of the per-frame gfx render time distribution
- andperf gfx-reset: reset the app's gfxinfo and restart the statistics
- andperf gfxinfo: show the app's gfxinfo
- andperf meminfo: show the app's meminfo
- andperf meminfo-pie: show the current app's memory usage breakdown as a pie chart
- andperf meminfo-trend: show how each part of the app's memory usage changes over time
- andperf screencap: take a screenshot and open it in the browser
- andperf stat-thread: measure, over a period of time, the share of time slices each thread in the app process receives
- andperf systrace: invoke the Android systrace command and open the result in Chrome
- andperf top-activity: show the Activity currently on top of the stack
- andperf top-app: show the App currently on top of the stack

config

Setting the app package name saves a lot of typing when running other commands:

```
andperf config --app=com.meelive.ingkee
```

LICENSE

MIT
andreani
andreani

This is a Python module that provides a class called SDK for interacting with the Andreani API. The SDK class has several methods that allow users to log in, estimate shipping fees, retrieve shipping labels, and submit shipments to Andreani.

SDK class:

- `login(username: str, password: str) -> typing.Optional[LoginResponse]`: logs in to the Andreani API using the provided username and password. Returns a LoginResponse object if the login was successful, or raises an AndreaniException if there was an error.
- `estimate_price(postalcode: str, contract: str, client: str, office: str, order: Order) -> typing.Optional[FeesResponse]`: estimates the shipping fees for a given postal code, contract, client, office, and order. Returns a FeesResponse object if the request was successful, or raises an AndreaniException if there was an error.
- `submit_shipment(shipment: Shipment) -> typing.Optional[SubmitShipmentResponse]`: submits a shipment to Andreani. Returns a SubmitShipmentResponse object if the request was successful, or raises an AndreaniException if there was an error.
- `get_shipment_status(shipment_number: str) -> typing.Optional[SubmitShipmentResponse]`
- `get_label(url: str, save: bool = False, filename: str = None)`: fetches the label for a shipment in PDF format. If the `save` parameter is True, the method writes the content of the response to a file with the specified `filename`. If `save` is not provided or is False, the method returns the content of the response as bytes.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

License

This software is distributed under the MIT licence. See LICENCE for details. Copyright (c) 2021-2022 Juan Pablo [email protected]
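A minimal usage sketch built from the signatures listed above; the import path, the no-argument constructors and all argument values are assumptions or placeholders, not the package's documented defaults:

```python
from andreani import SDK, Order  # assumption: both exported at top level

sdk = SDK()  # assumption: no-argument constructor
sdk.login("my_user", "my_password")

order = Order()  # assumption: Order's fields are filled in per the package docs

# estimate_price(postalcode, contract, client, office, order), as documented above
fees = sdk.estimate_price("1406", "my_contract", "my_client", "my_office", order)

# get_label(url, save, filename), as documented above; the URL is a placeholder
sdk.get_label("https://example.invalid/label", save=True, filename="label.pdf")
```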
andreani-aa-ml
Andreani Advanced Analytics tools

Install using pip:

```
pip install andreani-aa-ml
```

Import:

```python
import aa_ml
```

Usage example

AML Pipeline: pipeline for running experiments on Azure Machine Learning.

```python
from aa_tools import aml_pipeline, logger

if __name__ == "__main__":
    log = logger("test.py", "main")  # Not part of pipeline

    aml_pipeline.create_train_template("test_file.py")
    log.log_console("Template file created as test_file.py", "INFO")

    tags = {
        "scale": "false",
        "balanced": "false",
        "outliers": "false",
        "target": "target"
    }
    log.log_console("Tags defined", "INFO")

    try:
        Pipeline = aml_pipeline.pipeline("aa_tools_test", "linear_regression", "regression", tags)
    except Exception as e:
        log.log_console(f"Exception initializing pipeline: {e}")
        log.close()
        raise e

    try:
        Pipeline.run("aml/azure.pkl", "aml/environment.yml", "aml", "train_template.py", log)
    except Exception as e:
        log.log_console(f"Exception running pipeline: {e}")
        log.close()
        raise e

    log.close()
```
andreani-aa-testing
Andreani Advanced Analytics testing

Install using pip:

```
pip install andreani-aa-testing
```

Import:

```python
import aa_testing
```

Usage example

```python
from aa_testing import Testing

if __name__ == "__main__":
    test = Testing(function1, function2, dataset)
    test.compare_functions()
    response = test.create_response()
```

List of included features:
- Testing: latency comparison between two functions.

Features to be added:
- Apply metrics function.
andreani-aa-tools
Andreani Advanced Analytics tools

Install using pip:

```
pip install andreani-aa-tools
```

Import:

```python
import aa_tools
```

Usage example

Haversine:

```python
from aa_tools import logger, haversine

if __name__ == "__main__":
    log = logger("test.py", "main")
    result = haversine(-58.490160, -34.566116, -58.485096, -34.572123)
    log.log_console(f"Haversine distance: {result}", "INFO")
    log.close()
```

List of included functions:
- Haversine: Euclidean distance between two points.
- Logger: handles logging according to Andreani's guidelines.
- Datalake: connection interface to the data lake for downloading and uploading csv, parquet and/or json files.

Functions to be added:
- Route distance between two points.
- Model training.
- Connection to Elastic Search.
andreasbasiccalculator
Failed to fetch description. HTTP Status Code: 404
andreas-test-distributions
No description available on PyPI.
andrei-lib
No description available on PyPI.
andrej2pdf
Yo, this is the home page of my project, ya fool.
andrejpdf
This is the home page of our project.