alpypeopt
# ALPypeOpt

ALPypeOpt, or *AnyLogic Python Pipe for Optimization*, is an open source library for connecting AnyLogic simulation models with python-based sequential optimization packages such as scikit-optimize, optuna, hyperopt and bayesian optimization.

With ALPypeOpt you will be able to:

- Connect your AnyLogic model to a sequential optimization package of your choice (e.g. scikit-optimize `skopt`).
- (IN PROGRESS) Scale your optimization loop by launching many AnyLogic models simultaneously (requires an exported model).
- Debug your AnyLogic models during the optimization loop (this special feature improves the user experience during model debugging remarkably).
- Leverage the rich AnyLogic visualization as the optimization runs (which ties to the previous bullet point).

There is a more comprehensive documentation available that includes numerous examples to help you understand the basic functionalities in greater detail.

NOTE: ALPypeOpt has been engineered as a framework that is independent of any specific sequential optimization package. This design facilitates its compatibility with a wide range of state-of-the-art optimization packages.

## Environments

ALPypeOpt includes 2 environments that make the connection between AnyLogic and your python script possible:

- `ALPypeOptConnector` - The AnyLogic connector ('agent') library to be dropped into your simulation model.
- `alpypeopt` - The library that you will use after configuring your optimization solver in your python script to connect to the AnyLogic model.

## Installation

To install the base ALPypeOpt library in python, use `pip install alpypeopt`.

To use ALPypeOptConnector in AnyLogic, you can add the library to your Palette. That will allow you to drag and drop the connector into your model. Note that further instructions are required to be followed in order for the connector to work.

## Requirements

ALPypeOpt requires you to have the AnyLogic software (or a valid exported model). AnyLogic is a licensed software for building simulations that includes an ample variety of libraries for modelling many industry challenges. At the moment, AnyLogic provides a free license under the name PLE (Personal Learning Edition). There are other options available. For more information, you can visit the AnyLogic website.

Note: This is not a package that is currently backed by the AnyLogic support team.

The python package `alpypeopt` doesn't require any additional dependencies other than the sequential optimization package of your choice, e.g. scikit-optimize.

## API basics

### Optimization loop

To be able to solve an optimization problem, you must have the following:

- An **AnyLogic model** that requires certain parameters to be set at the beginning of the run, which will impact the overall goal. Using the *Gas Processing Plant* example, a decision must be taken on the plant setup in order for the total revenue to be maximized. For that, the AnyLogic model will be consuming any setup passed to the ALPypeOptConnector and returning the newly calculated revenue.
- A **python script** that contains the optimization algorithm. Here is where the optimal solution will be computed.

For that, you will need to create an instance of the AnyLogic model (to handle the connection) and encapsulate it under a function that can be consumed by the optimization package that you have chosen to use:

```python
from alpypeopt import AnyLogicModel
from skopt import gp_minimize

gpp_model = AnyLogicModel(
    env_config={
        'run_exported_model': False,
        'exported_model_loc': './resources/exported_models/gas_processing_plant',
        'show_terminals': False,
        'verbose': False
    }
)

# Initialize model setup
gpp_setup = gpp_model.get_jvm().gasprocessingplant.GPPSetup()
# Start setting up gas product prices
gpp_setup.setProductPrices(30.0, 10.0)

# Create input variable ratios as (min, max)
# GPU 1&2 plant load ratios
plant_load_ratio = (0.01, 0.99)
# GPU 1&2 distillation column operating temperature
gpp_opp_temp_ratio = (20.0, 100.0)
bounds = [
    plant_load_ratio,    # dec1 flow allocation
    gpp_opp_temp_ratio,  # dec1 temperature
    gpp_opp_temp_ratio   # dec2 temperature
]

# Encapsulate simulation model as a python function
def simulation(x, reset=True):
    # Setup selected plant loads and temperatures
    gpp_setup.setFlowAllocRateToDec1(x[0])
    gpp_setup.setDecTemperatures(x[1], x[2])
    # Pass input setup and run model until end
    gpp_model.setup_and_run(gpp_setup)
    # Extract model output or simulation result
    model_output = gpp_model.get_model_output()
    if reset:
        # Reset simulation model to be ready for next iteration
        gpp_model.reset()
    # Return simulation value. 'skopt' only allows minimization problems,
    # so the value must be negated
    return -model_output.getTotalRevenue()

# Setup and execute sequential optimization model
res = gp_minimize(
    simulation,         # the function to minimize
    bounds,             # the bounds on each dimension of x
    acq_func="EI",      # the acquisition function
    n_calls=10,         # the number of evaluations of simulation
    n_random_starts=5,  # the number of random initialization points
    random_state=1234   # the random seed
)

# Print optimal solution
print(f"Solution is {res.x} for a value of {-res.fun}")
# Run simulation with optimal result to use the UI to explore results in AnyLogic
simulation(res.x, reset=False)
# Close model
gpp_model.close()
```

## Bugs and/or development roadmap

At the moment, ALPypeOpt is at its earliest stage. You can join the alpypeopt project and raise bugs, feature requests or submit code enhancements via pull request.

## Support ALPypeOpt's development

If you are financially able to do so and would like to support the development of ALPypeOpt, please reach out to [email protected].

## License

The ALPypeOpt software suite is licensed under the terms of the Apache License 2.0. See LICENSE for more information.
alpyperl
# ALPypeRL

ALPypeRL, or *AnyLogic Python Pipe for Reinforcement Learning*, is an open source library for connecting AnyLogic simulation models with reinforcement learning frameworks that are compatible with the OpenAI Gymnasium interface (single agent).

With ALPypeRL you will be able to:

- Connect your AnyLogic model to a reinforcement learning framework of your choice (e.g. ray rllib).
- Scale your training by launching many AnyLogic models simultaneously (requires an exported model).
- Deploy and evaluate your trained policy from AnyLogic.
- Debug your AnyLogic models during training (this is a special feature unique to ALPypeRL that improves the user experience during model debugging remarkably).
- Identify and replicate failed runs by having control over the seed used for each run.
- Leverage the rich AnyLogic visualization while training or evaluating.

There is a more comprehensive documentation available that includes numerous examples to help you understand the basic functionalities in greater detail. No license is required for single instance experiments. AnyLogic PLE is free!

NOTE: ALPypeRL has been developed using ray rllib as the base RL framework. Ray rllib is an industry leading open source package for Reinforcement Learning. Because of that, ALPypeRL has certain dependencies on it (e.g. trained policy deployment and evaluation).

## Environments

ALPypeRL includes 2 environments that make the connection between AnyLogic and your python script possible:

- `ALPypeRLConnector` - The AnyLogic connector ('agent') library to be dropped into your simulation model.
- `alpyperl` - The library that you will use after configuring your policy in your python script to connect to the AnyLogic model (includes functionalities to train and evaluate).

## Installation

To install the base ALPypeRL library in python, use `pip install alpyperl`.

To use ALPypeRLConnector in AnyLogic, you can add the library to your Palette. That will allow you to drag and drop the connector into your model. Note that further instructions are required to be followed in order for the connector to work.

## Requirements

ALPypeRL requires you to have the AnyLogic software (or a valid exported model). AnyLogic is a licensed software for building simulations that includes an ample variety of libraries for modelling many industry challenges. At the moment, AnyLogic provides a free license under the name PLE (Personal Learning Edition). There are other options available. For more information, you can visit the AnyLogic website.

Note: This is not a package that is currently backed by the AnyLogic support team.

The python package `alpyperl` requires (among others) 4 packages that are relatively heavy (and might take longer to install depending on the host machine specs):

- ray
- ray[rllib]
- tensorflow
- torch

## API basics

### Training

To be able to train your policy, you must have the following:

- An **AnyLogic model** that requires decisions to be taken as the simulation runs. Using the CartPole-v0 example, a decision must be taken on the direction of the force to be applied so the pole can be kept straight for as long as possible. For that, the AnyLogic model will be making requests to the ALPypeRLConnector and consuming the returned/suggested action.
- A **python script** that contains the RL framework. Here is where the policy is going to be trained. For that, you will need to create your custom environment taking into consideration what your AnyLogic model expects to return and receive. By default, you must define the action and observation spaces.

Please visit the CartPole-v0 example for a more detailed explanation.

```python
from alpyperl import AnyLogicEnv
from ray.rllib.algorithms.ppo import PPOConfig

# Set checkpoint directory.
checkpoint_dir = "./resources/trained_policies/cartpole_v0"

# Initialize and configure policy using `rllib`.
policy = (
    PPOConfig()
    .rollouts(
        num_rollout_workers=2,
        num_envs_per_worker=2
    )
    .fault_tolerance(
        recreate_failed_workers=True,
        num_consecutive_worker_failures_tolerance=3
    )
    .environment(
        AnyLogicEnv,
        env_config={
            'run_exported_model': True,
            'exported_model_loc': './resources/exported_models/cartpole_v0',
            'show_terminals': False,
            'verbose': False,
            'checkpoint_dir': checkpoint_dir,
            'env_params': {
                'cartMass': 1.0,
                'poleMass': 0.1,
                'poleLength': 0.5,
            }
        }
    )
    .build()
)

# Perform training.
for _ in range(10):
    result = policy.train()

# Save policy checkpoint.
policy.save(checkpoint_dir)
print(f"Checkpoint saved in directory '{checkpoint_dir}'")

# Close all environments.
# NOTE: This is required to be called for correct checkpoint saving by ALPypeRL.
policy.stop()
```

### Evaluation

The evaluation of your trained policy is made simple in alpyperl. See the example:

```python
from alpyperl.serve.rllib import launch_policy_server
from alpyperl import AnyLogicEnv
from ray.rllib.algorithms.ppo import PPOConfig

# Load policy and launch server.
launch_policy_server(
    policy_config=PPOConfig(),
    env=AnyLogicEnv,
    trained_policy_loc='./resources/trained_policies/cartpole_v0',
    port=3000
)
```

Once the server is on, you can run your AnyLogic model and test your trained policy. You are expected to select mode EVALUATE and specify the server url.

## Bugs and/or development roadmap

At the moment, ALPypeRL is at its earliest stage. You can join the alpyperl project and raise bugs, feature requests or submit code enhancements via pull request.

## Support ALPypeRL's development

If you are financially able to do so and would like to support the development of ALPypeRL, please reach out to [email protected].

## License

The ALPypeRL software suite is licensed under the terms of the Apache License 2.0. See LICENSE for more information.
alpyro
# ALternative PYthon ROs

An alternative implementation of a ROS client library in python.

## Example usage

### Publisher

```python
from alpyro.node import Node
from alpyro.msgs.std_msgs import StdString

def test():
    msg = StdString()
    msg.value = "Hello there"
    return msg

with Node("/pub") as n:
    n.announce("/test", StdString)
    n.schedule_publish("/test", 10, test)
    n.run_forever()
```

### Subscriber

```python
from alpyro.node import Node
from alpyro.msgs.std_msgs import StdString

def callback(msg: StdString):
    print(msg.value)

with Node("/sub") as n:
    n.subscribe("/test", callback)
    n.run_forever()
```
alpyro-msgs
# alpyro message definitions

This repository contains the message definitions for the alpyro library. The message files are generated by the gen_msgs.py script, which is run inside a docker container where ROS is installed.

Currently contains all messages from the following packages: actionlib, actionlib_msgs, actionlib_tutorials, bond, control_msgs, diagnostic_msgs, dynamic_reconfigure, geometry_msgs, map_msgs, nav_msgs, roscpp, rosgraph_msgs, rospy_tutorials, sensor_msgs, shape_msgs, smach_msgs, std_msgs, stereo_msgs, tf, tf2_msgs, trajectory_msgs, turtle_actionlib, turtlesim, visualization_msgs.
alpyvantage
# alpyvantage

An alternative python backend to the Alpha Vantage API. alpyvantage provides a python backend to the Alpha Vantage API. Alpha Vantage provides access to a wide range of financial data and time series. Details can be found in the official API documentation, where you can also get a free API key.

## Installation

Installing the repository version is as simple as typing

```
pip install alpyvantage
```

in your terminal or Anaconda Prompt.

## Documentation

API calls are straightforward. Either use the built-in functions such as `time_series_intraday`, `time_series_weekly`, etc.:

```python
import alpyvantage as av

api = av.API(<your_api_key>)

data, meta_data = api.time_series_intraday('DAX', interval='1min', month='2015-01')

print(data)  # it's a pandas.DataFrame
```

Or use the `function` keyword from the official API documentation and provide the parameters as keyword arguments:

```python
data, meta_data = api('TIME_SERIES_INTRADAY', symbol='DAX', interval='1min', month='2015-01')
```

A detailed documentation of the individual functions can be found online.

## Issues and contributions

Please use the issues for questions or if you think anything doesn't do what it is supposed to do. Pull requests are welcome; please include some documentation.
alpyvision
# AlPyVision

AlPyVision (Albion Python Vision) is a package that provides API-like functionality and acts as a middleman between the Python OpenCV library and the Albion Online game client.

## Disclaimer

Cheating or botting is not allowed in Albion Online. This package is for learning purposes only, such as getting into the field of computer vision or artificial intelligence. Do not use this package to violate the rules.

## Props

The underlying idea and much of the basic architecture is heavily inspired by the OpenCV Object Detection in Games YouTube series made by Learn Code By Gaming. Ben produces tutorials of exceptionally high quality, covering the whole stretch from the basic 'What is a Python class?' to Canny Edge Detection, with stunningly precise explanations. You should definitely check out his content if you want to learn more about Object Detection and Python. Thank you, Ben!
alqrrelu-package
No description available on PyPI.
alqtendpy
Miscellaneous extras that I like to use when working with PyQt5.
alquimista
# Alquimista

Tools for interpreting code.
alquitable
# Alquitable

Alquitable is a Python package that provides a Keras-based set of tools to enhance Alquimodelia. It provides loss functions and callbacks to apply to keras models.

## Usage

To use Alquitable, follow these steps:

```
pip install alquitable
```

Since Alquitable is based on keras-core, you can choose which backend to use; otherwise it will default to tensorflow. To change the backend, change the `KERAS_BACKEND` environment variable, following the keras-core instructions.

To get an architecture you only need a simple configuration and a call to the module:

```python
# Previous code and imports...
from alquitable import losses, callbacks

# Based on forecat StackedCNN
loss_function = losses.weighted_loss
callback = callbacks.StopOnNanLoss

StackedCNN.compile(loss=loss_function)
StackedCNN.fit(..., callbacks=callback)
```

## Contribution

Contributions to Alquitable are welcome! If you find any issues or have suggestions for improvement, please feel free to contribute. Make sure to update tests as appropriate and follow the contribution guidelines.

## License

Alquitable is licensed under the MIT License, which allows you to use, modify, and distribute the package according to the terms of the license. For more details, please refer to the LICENSE file.
alquran-id
# Al-Quran ID

The Quran, with Indonesian translation and Tafsir Jalalayn.

## Installation

```
$ pip install alquran-id
```

## Using Al-Quran ID

```python
from alquran_id import AlQuran as Quran

def alquran(id_surat, id_ayat):
    quran = Quran()
    nama_surat = quran.Surat(id_surat)
    ar_ayt = quran.ArNumber(id_ayat)
    ayat = quran.Ayat(id_surat, id_ayat)
    jml_ayat = quran.JumlahAyat(id_surat)
    terjemahan = quran.Terjemahan(id_surat, id_ayat)
    tafsir = quran.Tafsir(id_surat, id_ayat)
    return nama_surat[0], nama_surat[1], ayat, jml_ayat, terjemahan, tafsir, ar_ayt

quran = alquran(1, 1)
print(f"""
Nama Surat = {quran[0]} / {quran[1]}
Jumlah Ayat = {quran[3]}
Ayat [{quran[6]}] {quran[2]}
Terjemahan {quran[4]}
Tafsir {quran[5]}
""")
```

## Output

```
Nama Surat = الفاتحة / Al-Fatihah (Pembukaan)
Jumlah Ayat = 7
Ayat [۱] بسم ٱلله ٱلرحمـن ٱلرحيم
Terjemahan Dengan menyebut nama Allah Yang Maha Pemurah lagi Maha Penyayang.
Tafsir (Dengan nama Allah Yang Maha Pemurah lagi Maha Penyayang)
```

## Source

The XML source file is taken from Tanzil.
alright
# alright

Python wrapper for WhatsApp web made with selenium, inspired by PyWhatsApp.

## Why alright?

I was looking for a way to control and automate WhatsApp web with Python. I came across some very nice libraries and wrapper implementations, including:

- pywhatkit
- pywhatsapp
- PyWhatsapp
- WebWhatsapp-Wrapper

and many others.

So I tried pywhatkit, a really cool one well crafted to be used by others, but its implementation requires you to open a new browser tab and scan a QR code every time you send a message, no matter if it's to the same person, which was a deal breaker for using it.

Then I tried pywhatsapp, which is based on yowsup and thus requires you to do some registration with yowsup before using it, of which after a bit of googling I got scared of having my number blocked, so I went for the next option.

I then went for WebWhatsapp-Wrapper. It has some good documentation and recent commits, so I had hopes that it would work. But unfortunately it didn't for me, and after having a couple of errors I abandoned it to look for the next alternative.

Which is PyWhatsapp by shauryauppal. This was more of a CLI tool than a wrapper, which surprisingly worked well, and its approach allows you to dynamically send WhatsApp messages to unsaved contacts without rescanning the QR code every time.

So what I did is more of a refactoring of the implementation of that tool to be more of a wrapper, to easily allow people to run different scripts on top of it instead of just using it as a tool. I then thought of sharing the codebase with people who might have struggled to do this as I did.

## Getting started

You need to do a little bit of work to get alright running, but don't worry, I've got you; everything will work well if you just carefully follow through the documentation.

### Installation

We need to have alright installed on our machine to start using it, which can either be done directly from GitHub or using pip.

#### Installing directly

You first need to clone or download the repo to your local directory, then move into the project directory as shown in the example and run the command below:

```
git clone https://github.com/Kalebu/alright
cd alright
alright> python setup.py install
```

#### Installing from pip

```
pip install alright --upgrade
```

### Setting up Selenium

Underneath alright is Selenium, which is what does all the automation work by directly controlling the browser, so you need to have a selenium driver on your machine for alright to work. But luckily alright uses webdriver-manager, which does this automatically. You just need to install a browser. By default alright uses Google Chrome.

## What you can do with alright

- Send Messages
- Send Messages1
- Send Images
- Send Videos
- Send Documents
- Get first chat
- Search chat by name
- Logout

When you're running your program made with alright, you can only have one controlled browser window at a time. Running a new window while another window is live will raise an error. So make sure to close the controlled window before running another one.

### Unsaved contacts vs saved contacts

Alright allows you to send messages and media to both saved and unsaved contacts, as explained earlier. But there is a tiny distinction in how you do that; you will observe this clearly as you use the package. The first step before sending anything to the user is to locate the user, and then you can start sending the information; that's where the main difference lies between saved and unsaved contacts.

#### Saved contacts

To locate a saved contact, use the method `find_by_username()`. You can also use the same method to locate WhatsApp groups. The parameter can either be:

- saved username
- mobile number
- group name

Here is an example of how to do that:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.find_by_username('saved-name or number or group')
```

#### Unsaved contacts

To send a message to unsaved WhatsApp contacts, use the `find_user()` method to locate the user; the parameter can only be the user's number with country code, with the (+) omitted, as shown below:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.find_user('255-74848xxxx')
```

Now let's dive in on how we can get started on sending messages and media.

### Sending Messages

Use this if you don't have WhatsApp desktop installed.

To send a message with alright, you first need to target a specific user by using the `find_user()` method (include the country code in your number without the '+' symbol), and then you can start sending messages to the target user using the `send_message()` method, as shown in the example below:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.find_user('2557xxxxxz')
>>> messages = ['Morning my love', 'I wish you a good night!']
>>> for message in messages:
...     messenger.send_message(message)
```

### Send Direct Message [NEW] (Recommended)

This is a newly added method that makes it a bit simpler to send a direct message without having to do `find_user` or `find_by_username`. It works well whether or not you have WhatsApp installed on your machine. It assumes the number is a saved contact by default.

```python
>>> messenger.send_direct_message(mobile, message, saved=True)
```

It receives the following parameters:

- `mobile` [str] - The mobile number of the user you want to send the message to
- `message` [str] - The message you want to send
- `saved` [bool] - Whether you want to send to a saved contact or not; default is True

Here is an example of how to use it:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.send_direct_message('25573652xxx', 'Hello')
2022-08-14 17:27:57,264 - root -- [INFO] >> Message sent successfuly to
2022-08-14 17:27:57,264 - root -- [INFO] >> send_message() finished running!
>>> messenger.send_direct_message('25573652xxx', 'Who is This ?', False)
2022-08-14 17:28:30,953 - root -- [INFO] >> Message sent successfuly to 255736524388
2022-08-14 17:28:30,953 - root -- [INFO] >> send_message() finished running!
```

### Sending Messages1

This Send Message does NOT find the user first like the above Send Message, AND it works even if you have the Desktop WhatsApp app installed. Include the country code in your number without the '+' symbol, as shown in the example below:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messages = ['Morning my love', 'I wish you a good night!']
>>> mobNum = 27792346512
>>> for message in messages:
...     messenger.send_message1(mobNum, message)
```

### Multiple numbers

Here is how to send a message to multiple users. Let's say we want to wish a merry Christmas to all our contacts; our code is going to look like this:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> numbers = ['2557xxxxxx', '2557xxxxxx', '....']
>>> for number in numbers:
...     messenger.find_user(number)
...     messenger.send_message("I wish you a Merry X-mas and Happy new year")
```

You have to include the country code in your number for this library to work, but don't include the (+) symbol.

If you're sending media, whether picture, file, or video, each of them has an optional parameter, which is usually the caption to accompany the media.

### Sending Images

Sending images is nothing new; you just have to include a path to your image and the message to accompany the image instead of just the raw string characters, and also you have to use `send_picture()`. Here is an example:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.find_user('mobile')
>>> messenger.send_picture('path-to-image-without-caption')
>>> messenger.send_picture('path-to-image', "Text to accompany image")
```

### Sending Videos

Similarly, to send videos just use the `send_video()` method:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.find_user('mobile')
>>> messenger.send_video('path-to-video')
```

### Sending Documents

To send documents such as docx, pdf, audio, etc., you can use the `send_file()` method:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.find_user('mobile')
>>> messenger.send_file('path-to-file')
```

### Check if a chat has unread messages or not

This method checks whether a chat, whose name is passed as a `query` parameter, has unread messages or not.

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.check_if_given_chat_has_unread_messages(query="Chat 123")
```

### Get first chat

This method fetches the first chat in the list on the left of the web app; since they are not ordered in an expected way, a fair workaround is applied. One can also ignore (or not ignore) pinned chats (placed at the top of the list) by passing the parameter `ignore_pinned`; the default value is `ignore_pinned=True`.

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.get_first_chat()
```

### Search chat by name

This method searches the opened chats by a partial name provided as a `query` parameter, returning the first match. Case sensitivity is handled and does not impact the search.

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.search_chat_by_name(query="Friend")
```

### Get last message received in a given chat

This method searches for the last message received in a given chat, passed as a `query` parameter, returning the sender, text and time. Groups, numbers and contacts cases are handled, as well as possible non-received messages, video/images/stickers and others.

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.get_last_message_received(query="Friend")
```

### Retrieve all chat names with unread messages

This method searches for all chats with unread messages, optionally receiving parameters to `limit` the search to a `top` number of chats, returning a list of chat names.

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.fetch_all_unread_chats(limit=True, top=30)
```

DISCLAIMER: Apparently, `fetch_all_unread_chats` functionality works on most updated browser versions (for example, Chrome Version 102.0.5005.115 (Official Build) (x86_64)). If it fails for you, please consider updating your browser while we work on alternatives for non-updated browsers.

### Logout from WhatsApp

You can sign out of an account that is currently saved:

```python
>>> from alright import WhatsApp
>>> messenger = WhatsApp()
>>> messenger.logout()
```

Well, that's all for now from the package; to request a new feature, make an issue.

## Contributions

alright is an open-source package under the MIT license, so contributions are warmly welcome, whether that be code, docs or a typo fix; just fork it. When contributing to code, please make an issue for that before making your changes so that we can have a discussion before implementation.

## Issues

If you're facing any issue or difficulty with the usage of the package, just raise one so that we can fix it as soon as possible. Please be as comprehensive as possible! Use as many screenshots and detailed descriptions as possible; this will save us some time that we'd otherwise dedicate to asking you for "a more detailed description", and it'll make your request be solved faster.

## Give it a star

Was this useful to you? Then give it a star so that more people can make use of this.

## Credits

All the credits to:

- kalebu
- Eurico Nicacio
- Victor Daniel
- Cornelius Mostert
- shauryauppal

and all the contributors.
alr-transformer
ALR Transformer
als
No description available on PyPI.
alsa-ctl
# alsa-ctl

Control audio using the ALSA API easily.

## Dependencies

alsa-ctl depends on the following programs in your PATH:

- amixer

## Installation

From pypi (recommended):

```
pip install alsa-ctl
```

From the git repo (for the dev version):

```
pip install git+https://github.com/DCsunset/alsa-ctl
```

Or clone and install locally (for dev):

```
git clone https://github.com/DCsunset/alsa-ctl
cd alsa-ctl
pip install .
```

## Usage

### CLI

Use the command `alsa-ctl` directly:

```
alsa-ctl --help
alsa-ctl get_card
alsa-ctl get_volume
```

See the help messages for more usage.

### Library

```python
from alsa_ctl.lib import list_cards

print(list_cards())
```

## LICENSE

AGPL-3.0. Copyright notice:

```
alsa-ctl
Copyright (C) 2022-2023 DCsunset

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
```
alsa-grpc-client
# Alsa GRPC Client

Small library that allows real-time volume control of ALSA devices.

## Example Usage

Note: the server must be running at `<host>`.

```python
from alsa_grpc_client import AlsaClient
from time import sleep

def on_connect(ctrl):
    print('Connected to', ctrl.card, ctrl.name, ctrl.volume)
    ctrl.subscribe(lambda: print('Received volume update for', ctrl.name, ctrl.volume))

client = AlsaClient('<host>', 50051, on_connect)
client.connect()

sleep(1)

for name, ctrl in client.controls.items():
    ctrl.set_volume(.5)

sleep(1)
client.disconnect()
```
alsaidi
This is a very simple calculator that takes two numbers and either adds, subtracts, multiplies or divides them.

Change Log

0.0.1 (19/04/2020)

- First Release
alsa-midi
This project provides a Python interface to the ALSA sequencer API.

Features:

- Pythonic API to most of the ALSA sequencer functionality
- Access to ALSA sequencer features not available when using other ('portable') Python MIDI libraries:
  - Precise timestamping of messages sent and received
  - Port connection management, including connection between ports on different clients
  - Access to non-MIDI events, like announcements about new clients, ports and connections
- Python 3.7 – 3.11 compatibility
- Both synchronous (blocking) and asynchronous (asyncio) I/O
- Only Python code, no need to compile a binary module. Requires cffi, though.
- MIDO backend provided

Installation:

This package requires the ALSA library to be installed (libasound.so.2 – the 'libasound2' package on Debian-like systems). On a typical Linux system it is probably already installed for some other audio or MIDI software.

The python-alsa-midi package may be installed with pip:

```
python3 -m pip install alsa-midi
```

This should normally install a binary wheel compiled on a compatible system, or a pure-python wheel working without compilation.

If no compatible wheel is found, a build from the source package will be triggered, which will also require ALSA library development files (libasound2-dev).

To force installing from source (and compiling the binary extension) use:

```
python3 -m pip install --no-binary=alsa-midi alsa-midi
```

To force installing from source without compiling the extension:

```
PY_ALSA_MIDI_NO_COMPILE=1 python3 -m pip install --no-binary=alsa-midi alsa-midi
```

Alternatively one can just add the source directory (as checked out from https://github.com/Jajcus/python-alsa-midi.git) to $PYTHONPATH and use the packages directly, with no compilation.

Usage:

Detailed documentation is available at https://python-alsa-midi.readthedocs.io/

Simple example:

```python
import time
from alsa_midi import SequencerClient, READ_PORT, NoteOnEvent, NoteOffEvent

client = SequencerClient("my client")
port = client.create_port("output", caps=READ_PORT)
dest_port = client.list_ports(output=True)[0]
port.connect_to(dest_port)

event1 = NoteOnEvent(note=60, velocity=64, channel=0)
client.event_output(event1)
client.drain_output()

time.sleep(1)

event2 = NoteOffEvent(note=60, channel=0)
client.event_output(event2)
client.drain_output()
```

Using with MIDO:

python-alsa-midi can be used as a MIDO back-end too:

```
export MIDO_BACKEND=alsa_midi.MIDO_BACKEND
mido/examples/midifiles/play_midi_file.py file.mid
```
alsangue
# alsangue

alsangue is a naive static website builder written in python.

## Installation

alsangue is available through the Python Package Index (PyPI). Pip is pre-installed if python >= 3.4 has been downloaded from python.org; if you're using a GNU/Linux distribution, you can find how to install it on this page.

After setting up pip, you can install alsangue by simply typing in your terminal:

```
# pip3 install alsangue
```

## Usage

alsangue installs a command line utility with the same name, `alsangue`. You can invoke command line help with:

```
alsangue --help
```

and get command options with:

```
alsangue <command> --help
```

You can find an example `content` directory in this repo, which you can edit to build your own website. To build it, just type:

```
cd alsangue/example
alsangue content build
```

## About

This program is licensed under the GNU General Public License v3 or later by Pellegrino Prevete. If you find this program useful, consider offering me a beer, a new computer or a part-time remote job to help me pay the bills.
alsapyer
This is a simple and lightweight python binding for libalsaplayer. alsapyer is written in C, and does not require any third party library except libalsaplayer itself. It maps 1:1 all the alsaplayer exposed functions, so if you have already used them you're ready to shine.
alsaseq
alsaseq is a Python 3 and Python 2 module that allows you to interact with ALSA sequencer clients. It can create an ALSA client, connect to other clients, and send and receive ALSA events immediately or at a scheduled time using a sequencer queue. It provides a subset of the ALSA sequencer capabilities in a simplified model. It is implemented in the C language and licensed under the GNU GPL license version 2 or later.
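Since the description names the core operations (create a client, connect, send events) without showing them, here is a minimal sketch. The `client()`/`connectto()`/`output()` calls and the tuple-based event layout follow the module's documented style, but the destination port `(128, 0)` and the exact note-data tuple are assumptions that should be checked against `aconnect -l` and the alsaseq documentation:

```python
import alsaseq

# Create an ALSA sequencer client named 'demo' with 0 input ports,
# 1 output port, and no queue (events are sent immediately).
alsaseq.client('demo', 0, 1, False)

# Connect our output port 0 to client 128, port 0 (often the first
# software synth; adjust to whatever `aconnect -l` shows on your system).
alsaseq.connectto(0, 128, 0)

# Events are plain tuples:
# (type, flags, tag, queue, timestamp, source, destination, data)
note_on = (alsaseq.SND_SEQ_EVENT_NOTEON, 0, 0, 0, (0, 0), (0, 0), (0, 0),
           (0, 60, 127, 0, 0))  # channel 0, middle C, velocity 127
alsaseq.output(note_on)
```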
als.bcs
# als.bcs

Extract data and metadata from text data files that were created by the Beamline Controls System (BCS) at the Advanced Light Source (ALS).

## Purpose

Facilitate the extraction of header information and scan input variables from common BCS data file formats, such as:

- Time Scan
- Single Motor Scan
- Trajectory Scan

## bcs.find module

Find BCS data files that match provided criteria (date, scan type, scan number, etc.) within their native directory structure.

## bcs.data module

Read scan output information and data from selected BCS data files.

## bcs.scan module

Find and read scan input information from BCS Trajectory Scan data files.

## bcs.ingest module

Read header information from selected BCS data files.

## Installation

Using pip:

```
# PyPI
python -m pip install als.bcs

# -- OR --
# Local copy of the project repository
python -m pip install -r requirements.txt -e .

# -- OR --
# Local tarball, uses static version file generated during build
python -m pip install als.bcs-0.2.0.tar.gz
```

## Copyright Notice

als.bcs: Extract metadata from text data files that were created by the Beamline Controls System (BCS) at the Advanced Light Source (ALS), Copyright (c) 2022, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.

If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Intellectual Property Office at [email protected].

NOTE: This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.
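The module split above suggests a find-then-read workflow. The sketch below is purely illustrative: the function names `find_files` and `read_data` are hypothetical stand-ins, since the package's actual entry points are not documented here.

```python
# Hypothetical usage sketch -- the function names below are illustrative
# only, NOT the documented als.bcs API.
from als.bcs import find, data

# 1) Locate Trajectory Scan files for a given date (hypothetical call).
files = find.find_files(date="2022-06-01", scan_type="Trajectory Scan")

# 2) Read scan output data from each matched file (hypothetical call).
for path in files:
    scan = data.read_data(path)
    print(path, scan)
```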
alsdac
No description available on PyPI.
alsina2011
# alsina2011

Data and formalisms from Alsina et al. (2011).

## History

- creation (2023-03-09): First release on PyPI.
alsi-py
# alsi-py

Python client for the ALSI API (Aggregated LNG Storage Inventory).

Documentation of the API can be found at: https://alsi.gie.eu/GIE_API_documentation_v004.pdf

Documentation of the client API can be found at: https://roiti-ltd.github.io/alsi-py/

## Installation

```
python3 -m pip install alsi-py
```

## Usage

The package is split in two clients:

- `AlsiRawClient`: Returns data in raw JSON format.
- `AlsiPandasClient`: Returns parsed data in the form of a pandas dataframe.

```python
from alsi.raw_client import AlsiRawClient
from alsi.pandas_client import AlsiPandasClient
from alsi.mappings import Area
from datetime import datetime
import asyncio

API_KEY = '<API_KEY>'

country_code = Area.ES  # Could also be a string: 'ES' or 'Spain'
company_code = '21X000000001254A'
facility_code = '21W0000000000370'

async def main():
    client = AlsiRawClient(api_key=API_KEY)

    # Functions that return JSON.
    client.query_data_for_facility(facility_code, company_code, country_code)
    client.query_agg_data_for_europe_or_noneurope(europe='eu')
    client.query_agg_data_by_country(contry_code='BE')
    client.query_data_by_company_and_country(company_code, country_code)

    # Filter results by time
    client.query_agg_data_by_country(
        country_code, start=datetime(2017, 1, 1), end=datetime(2018, 1, 1), limit=10
    )

    # Create pandas client. All functions are the same as the raw client.
    pandas_client = AlsiPandasClient(api_key=API_KEY)

    # At the end of the code, make sure to close the client session:
    await client.close_session()
    # or
    await pandas_client.close_session()

asyncio.run(main())
```

For more information regarding company codes, facility codes and country codes visit: https://alsi.gie.eu/#/api

## Running unit tests

Tell pytest where to look for unit tests and create an environment variable for the ALSI API key:

```
export PYTHONPATH=./alsi
export ALSI_KEY='...'
```

Run unit tests in coverage mode:

```
python -m pytest ./tests --import-mode=append --cov
```

## Contributing

Pull the repository:

```
git clone https://github.com/ROITI-Ltd/alsi-py.git
cd ./alsi-py
```

Set up your working environment:

```
python3 -m venv env
source env/bin/activate
```

Install required packages:

```
pip3 install -r requirements.txt
pip3 install -r requirements-dev.txt
```

Bumping the package version after making changes:

```
bumpversion major|minor|patch|build
```

For more general guidelines on contributing see: Contributing to alsi-py.

## Inspiration

Many thanks to the entsoe-py library for serving as inspiration for this library.
alsldlawef
No description available on PyPI.
alsmodule-pkg
# Ambient Light Sensor (ALS) Package

Copyright (C) 2019 Jean-Jacques Puig

This module implements a trivial interface to access ambient light sensors available on some Apple computers (iMac...).

Usage is very simple; the `als` module exports only one module method, `getSensorReadings()`, which returns raw sample values as a list of two (Long) integers.

```python
import als

print(als.getSensorReadings())
```
also
## Also

Ever had lots of methods that do the same thing?! You want to set them to the same thing but don't want to do something lame like

```python
def method(self):
    pass

othermethod = method
```

Rather, you want to do it with style like

```python
@also('othermethod')
def method(self):
    pass
```

Then do I have a solution for you!

## Installation

```
pip install also
```

## Usage

```python
from also import also, AlsoMetaClass

class Foo:
    __metaclass__ = AlsoMetaClass

    @also('getThing')
    @also('get_thing')
    def getthing(self):
        return 'go bears'

foo = Foo()
assert (foo.getthing() == foo.get_thing() == foo.getThing() == 'go bears')
```
also-anomaly-detector
# also_anomaly_detector

Attribute-wise Learning for Scoring Outliers (ALSO) is an unsupervised anomaly detection algorithm for multidimensional data.

## Description

Python implementation of ALSO. ALSO, or Attribute-wise Learning for Scoring Outliers, is an unsupervised anomaly detection algorithm for multidimensional data, as described in Paulheim, H., Meusel, R. A decomposition of the outlier detection problem into a set of supervised learning problems. Mach Learn 100, 509–531 (2015). https://doi.org/10.1007/s10994-015-5507-y

I am not the author, nor in any way affiliated with the authors of the paper above. This is an independent python implementation of the ALSO approach.

## Usage

For a local install, `cd` to the root of this repository and simply:

```
python setup.py develop
```

## Documentation

Documentation should be available at https://eliavw.github.io/also_anomaly_detector.

## Administration

Open the folder `note/deploy` for notebooks that contain annotated scripts for the different administrative tasks you may want to undertake with this software project.
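The paper's decomposition is easy to sketch independently of this package's API (which is not shown above): train one regressor per attribute to predict it from the remaining attributes, then score each instance by its weighted, variance-normalized prediction errors. Below is a minimal illustration with scikit-learn; the weighting is a simplified take on the paper's scheme, not this package's actual implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def also_scores(X, n_splits=5):
    """Score each row of X by how poorly each attribute is predicted
    from the remaining attributes (higher score = more anomalous)."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    errors = np.zeros((n, d))
    weights = np.zeros(d)
    for j in range(d):
        others = np.delete(X, j, axis=1)
        target = X[:, j]
        # Out-of-sample predictions of attribute j from the other attributes
        pred = cross_val_predict(RandomForestRegressor(n_estimators=50),
                                 others, target, cv=n_splits)
        # Normalize squared errors by the target's variance so attributes
        # on different scales are comparable
        var = target.var() or 1.0
        errors[:, j] = (target - pred) ** 2 / var
        # Weight attributes by how learnable they are (simplified version
        # of the paper's weighting: 1 minus the mean relative error)
        weights[j] = max(0.0, 1.0 - errors[:, j].mean())
    return np.sqrt((errors * weights).sum(axis=1) / max(weights.sum(), 1e-12))

scores = also_scores(np.random.rand(200, 4))
print(scores[:5])
```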
alsobrowser
No description available on PyPI.
alsobrowsercodelib
No description available on PyPI.
alsobrowserlib
No description available on PyPI.
alsocodelib
No description available on PyPI.
alsolib
# alsolib v3.4

Developer: 苦逼者钟宇轩

Note: this library is an original work, carefully crafted by 火焰工作室 (Flame Studio). All rights reserved; infringement will be pursued.

## alsolib's most important feature

Want to know who is viewing your Xueersi (学而思) project? Here it is:

```python
from alsolib.xes import *

a = getnowuser()
print(a)
# Logged in: {'state': True, 'user_id': "7821237", 'user_name': "苦逼者钟宇轩"}
# Not logged in: {'state': False, 'user_id': "未登录", 'user_name': "未登录"}
```

## Main library (alsolib)

Note: throughout alsolib, 0 means success and -1 means failure. Built-in functions are imported with `from alsolib import *`.

Baidu Baike:

```python
a = baidubaike("***")
print(a)
```

Translation:

```python
a = translate("Hello")
print(a)
```

Weather:

```python
a = getweather("guangdong", "guangzhou")  # province, city (in English)
print(a)
```

Built-in web page browser (`title` is a custom window title):

```python
a = getweather("http://www.asunc.cn/", "title")  # the URL must start with http:// or https://
print(a)
```

## Xueersi library (alsolib.xes)

Note: in alsolib.xes, 0 means success and -1 means failure. Import with:

```python
from alsolib.xes import *

a = LoadWork("a project's pid")
```

Get a project's likes:

```python
b = a.get_likes()
print(b)
```

Get a project's author:

```python
b = a.get_user()
print(b)
```

Get a project's dislike count:

```python
b = a.get_unlikes()
print(b)
```

Get a project's description:

```python
b = a.get_description()
print(b)
```

Get the author's information:

```python
b = a.get_name_as_pid()
print(b)
```

Check whether you have liked this project:

```python
while 1:
    b = a.is_like()
    if b == 0:
        print("You liked it")
    if b == -1:
        print("You disliked it")
    if b == 1:
        print("You neither liked nor disliked it")
```

## hyChat library (alsolib.hychat)

```python
import alsolib.hychat
```

## API library (alsolib.api)

Import:

```python
from alsolib.api import *

a = Alsoai()
```

1. Get an IP's geographic location:

```python
print(a.get_ipaddress(ip, get_point))
# The first parameter is required, the second optional. If get_point is True,
# returns the city name plus the latitude/longitude of the city centre;
# if omitted or False, returns only the city name.
# Returns 0 on success, -1 on failure.
```

2. Get road traffic conditions:

```python
print(a.get_traffic(road, city))
# All parameters are required. Returns the traffic conditions of the road,
# e.g. congested, clear...
# Returns 0 on success, -1 on failure.
```

3. Generate AI audio:

```python
print(a.makevoice(text, filename, mode=1, speed=40))
# The first two parameters are required, the last two optional. The first is
# the text to speak, the second the output path. The third selects the voice
# (1 is male; 5, 6 and 7 are three different female voices), the fourth the
# sound range (0-100).
# Returns 0 on success, -1 on failure.
```

4. AI speech:

```python
print(a.speak(text, mode=1, speed=40))
# The first parameter is required, the others optional. The first is the text
# to speak, the second the voice (1 is male; 5, 6 and 7 are three different
# female voices), the third the sound range (0-100).
# Returns 0 on success, -1 on failure.
```

5. AI chatbot:

```python
print(a.AI_chat(text, is_speak=False, mode=5))
# The first parameter is required, the others optional. The first is what you
# want to say to the bot, the second whether the AI should speak its reply
# aloud, the third the voice (1 is male; 5, 6 and 7 are three different
# female voices).
# Returns 0 and the chatbot's answer on success, -1 on failure,
# and 404 if requests are made too quickly.
```

## Closing

All rights to this documentation belong to 火焰工作室 (Flame Studio). Thanks for reading!
alstat
alstat is advanced log statistics: a collection of utils to analyze logs.

Features:

- Unpacks gzipped logfiles
- Fast

Usage:

This command prints all lines from all log files in the directory /var/log/nginx in the format `http_method status http_referer`:

```
alstat -d /var/log/nginx/ -p "*access*" -f "base" http_method status http_referer
GET 200 http://google.com
.... to many lines
GET 404 http://ya.ru/
PUT 200 http://yandex.com/
```

You can view the list of fields that you can use for display:

```
alstat -d /var/log/nginx/ -p "*access*" -l
Alstat v0.0.1 start at Tue May 8 23:25:24 2012
You can use fieldnames: status, http_protocol, http_method, http_referer, remote_addr, url, time_local, http_user_agent, remote_user, size
```

INSTALLATION

To install alstat use pip or easy_install:

```
pip install alstat
```

or

```
easy_install alstat
```

TODO

- Add group-by fields and count
- Web interface with reports

CONTRIBUTE

Fork https://github.com/Lispython/alstat/, create a commit and a pull request.

THANKS

To David M. Beazley for generators examples.

SEE ALSO

python pypi.
alt
Please see documentation here: https://github.com/kevwo/alt
alt2py
alt2pyalt2py is a python package that provides tools for data preperation, blending and extraction using pandas. This package provides tools similar to Alteryx Designer, so Alteryx users can carry some of their Alteryx skills over to python. Alternatively, use this package to parse yxdb files and create similar logic that can be run entirely in Python.ToolsThe main components of this package are called "Tools". Each tool is a python Class that can be passed keyword arguments to set up the configuration of the tool. Each tool also has an execute function that accepts the pandas dataframe/s that you want to perform that tools functionality on. Tools execute functions do not alter input dataframes.If the tool only has a single output, the execute function will return the altered_dataframe:Single Output Toolsfrom alt2py.tools import [Tool_Name]altered_df = Tool_Name( key_word1 = "value", key_word2 = "value" ).execute(input_dataframe1,input_dataframe2)Multiple Output Toolsfrom alt2py.tools import [Tool_Name]class_instance = Tool_Name( key_word1 = "value", key_word2 = "value" ).execute(input_dataframe1,input_dataframe2) //returns the instance of the class with the outputs added to new attributesoutput_df1 = class_instance.output1 output_df2 = class_instance.output2Sample ("Random Sample" and Create Samples)Key word arguments:train: float or int, default=1If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the validate size.validate: float or int, default=1If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the validate split. If int, represents the absolute number of validate samples. If None, the value is automatically set to the complement of the train size.Iftrainis also None, it will be set to 0.25.shuffle: bool, default=TrueWhether or not shuffling is applied to the data before applying the split.seed: int, default=NoneControls the shuffling applied to the data before applying the split. Pass an int for reproducible output across multiple function calls.Returns:self, with 3 new instance attributes:self.train,self.validateandself.holdClean (Data Cleansing)Key word arguments:filter_na_cols: boolean, default=FalseWhen True, this tool will filter any column that only contains ps.NA.filter_na_rows: boolean, default=FalseWhen True, this tool will filter any column that only contains ps.NA.fields: list, default=[all columns]Fields to apply the cleaning to (excluding params above).replace_na_blank: boolean, default=FalseIf True, this tool will replace pd.NA with an empty string. Only applies to String type columns.replace_na_zero: boolean, default=FalseIf True, this tool will replace pd.NA with a 0. Only applies to numeric type columns.trim_whitespace: boolean, default=FalseRemove leading and trailing whitespace.remove_dup_space: boolean, default=FalseReplace any occurrence of whitespace with a single space, including line endings, tabs, multiple spaces, and other consecutive whitespaces.remove_all_space: boolean, default=FalseRemove any occurrences of whitespace.remove_letters: boolean, default=FalseRemove any occurences of letters.remove_numbers: boolean, default=FalseRemove any occurences of Numbers. Only applies to String type columns.remove_punctuation: boolean, default=FalseRemove these characters: ! " # $ % & ' ( ) * + , \ - . / : ; < = > ? 
@ [ / ] ^ _ ` { | } ~modify_case: boolean, default=FalseIf True, apply the modifier (below) to all String type columns.modifier: string, default=title, accepts="title"|"upper"|"lower"upper: Capitalise all letters in a string.lower: Convert all letters in a string to lowercase.title: Capitalize the 1st letter of all words in a string.Returns:Altered DataframeFilterKey word arguments:expression: str or function, default="True"Pass a string to use Alteryx syntax for filtering. Alternatively pass a Python function that returns a boolean.Returns:self, with 2 new instance attributes:self.train,self.falseFormulaKey word arguments:formulae: [{field,type,size,expression}], default=[]A list of Python dictionariesfield: str, requiredThe name of the field. If this field already exists, it will be updated. If the field doesn't exist, it will be created.type: str, required if the field is new. defaults to the type of the original field if the field already existsThe data type for the field.size: str, default=NoneThe size of the data.expression: str or function, requiredPass a string to use Alteryx syntax for the expression. Alternatively pass a Python function that returns the value.ImputerUse Imputation to replace a specified value within one or more numeric data fields with another specified value.A typical use case is to replace all pd.NA values with the average of the remaining values for fields Q1_Sales and Q2_Sales so the pd.NA values do not affect the final outcome of their forecasting model.Key word arguments:fields: [str], requiredThe fields to apply the imputation to.impute_value: Number or pd.NA, default=pd.NAThe value that you want to be replaced with imputation.impute_fn: str[Agg], default=NoneA string that references an Aggregator.impute_static: number, default=NoneAn int of float to be used as the replacement value. This parameter is only used if there is noimpute_fn.indicator: bool, default=FalseIf True, add a new column of booleans for each selected field to flag that records that were imputed.add_fields: bool, default=FalseIf True, maintain the original fields and add a new column with imputed data. If False, update the original fields to contain the imputed data.new_prefix: str, default=""prefix to add to the column name of the added fields.new_suffix: str, default="_imputed"suffix to add to the column name of the added fields.indicator_prefix: str, default=""prefix to add to the column name of the indicator fields.indicator_suffic: str, default="_indicator"prefix to add to the column name of the indicator fields.MultiFieldBinUse Multi-Field Binning to replicate some of the functionality of the Tile tool—additional features allow the data to be binned on multiple fields. Built primarily for the predictive toolset, Multi-Field Binning only accepts numeric fields for binning. You can bin fields on either equal records or equal intervals.Key word arguments:fields: [str], requiredA list of numeric fields that are used to determine the bins.mode: str, default="count", accepts="count"|"interval"count: Bins will be created such that there are an equal number of records in each bin.interval: The minimum and maximum values of the selected fields are determined. The range is split into equal-sized sub-ranges. 
Records are assigned to bins based on these ranges.bins: int, requiredNumber of bins to split the data into.prefix: str, default=""prefix to add the field_name for the new column.suffix: str, default="_Tile"suffix to add the field_name for the new column.MultiFieldFormulaUse Multi-Field Formula to create or update multiple fields using a single expression.Key word arguments:fields: [str], requiredA list of the field names to apply the formula to.expression: str or fn, default="count"The expression to apply to selected fields. If a string is passed, it will be parsed as an Alteryx expression. Alternatively, you can pass a normal Python function.type: int, requiredThe type to change the new fields to.size: str, default=NoneThe size to change the new fields to.prefix: str, default=""The prefix to add to the field_name as a new column. If both prefix and suffix are empty, update the selected field instead of creating a new one.suffix: str, default=""The suffix to add to the field_name as a new column. If both prefix and suffix are empty, update the selected field instead of creating a new one.MultiRowFormulaUse Multi-Row Formula to take the concept of the Formula tool a step further. This tool allows you to use row data as part of the formula creation and is useful for parsing complex data, and creating running totals, averages, percentages, and other mathematical calculations.The Multi-Row Formula tool can only update one field per tool instance. If you would like to update multiple fields, you need to add a Multi-Row Formula tool for each field to be updated.Key word arguments:field: str, requiredThe field to update or add. If the field name already exists in the dataframe, perform an update on that field. If the field doesn't already exist in.groupings: [str], default=[]The field/s to group each iteration by.expression: str or fn, requiredThe expressions to apply on each iteration. If a string is passed, it will be passed, it will be parsed as an Alteryx expression. Alternatively, you can pass a normal Python function.num_rows: int, default=1The number of rows to lookup backwards and forwards from the current iterations row. #TODO: this parameter can be parsed from the expression. get rid of it.type: int, requiredThe type to assign to the updated/new column.size: str, default=NoneThe size to assign to the updated/new column.unknown: str or pd.NA, default=pd.NA, accepts="null"|"empty"|"nearest"|pd.NAThe value to apply to non existent rows.OverSampleThis tool is used to create a new sample for analysis that has an elevated percentage of a certain value (often a 50-50 split of positive and negative responses is used).Key word arguments:field: str, requiredThe field to oversample in the data.value: field_type, requiredThe value of the field to oversample.sample: int, default=50The percentage (between 1 and 100) of records that should have the desired value in the field of interestRecordIDUse Record ID to create a new column in the data and assign a unique identifier that increments sequentially for each record in the data. 
The Record ID tool generates unique identifiers with numeric values or string values, based on the data type you select.Key word arguments:field: str, requiredThe field name to assign to the new column.start: int, default=1The ID to be assigned to the first record in the dataframe.type: str, default="Int", accepts: "Int"|"String"The type to be assigned to the new column.size: str, default=NoneThe size to be assigned to the new column.position: bool, default=FalseIf True, the new field will be the first columm in the dataframe.SelectKey word arguments:selected: [str], requiredA list of all of the columns to keep in the dataframe.deselected: [str], default=[]A list of all of the columns to remove from the dataframe.keep_unknown: str, default=FalseIf True, any columns that are in the dataframe, but not inselectedordeselected, will be kept in the dataframe.types: {field_name:type}, default={}A dictionary with field_names as keys and the type to remap that field to as the values.renames: {field_name:new_name}, default={}A dictionary with field_names as keys and the new field name to remap that field to as the values.reorder: bool, default=TrueIf True, the columns will be reordered to match theselectedlist. If False, the original order will be maintained.SelectRecordsKey word arguments:conditions: [str], requiredEnter the records or range of records to return.A single digit returns only the entered row. For example, "3" returns only row 3.A minus sign before a number returns row 1 through the entered row. For example, "-2" returns rows 1 and 2.A range of numbers returns the rows in the range, including the entered rows. For example, "17-20" returns rows 17, 18, 19, and 20.A number followed by a plus sign returns the entered row through the last row. For example, "50+" returns row 50 through the last row.Any combination can be used at the same time by entering the ranges as seperate list entries.index: int, default=0, load_xml default=1The base index of the source data. In Pandas, this will usually be 0-based indexing so use index=0. In Alteryx, this will be 1-based indexing, so if the ranges originate from Alteryx, use index=1.SortKey word arguments:fields: [str], requiredA list of fields to sort the dataframe by, in order of priority.order: [bool], requiredThis listMUSTbe the same length as thefieldsparameter. If the nth entry of order is True, the dataframe will be sorted by the nth entry of fields in ascending order. If false, the order will be descending.handle_alpha_numeric: [bool], default=FalseIf False, numeric strings are sorted left to right, character by character. eg. "22" will bebefore"4". If True, numeric strings are sorted from the smallest to the largest number. eg. "22" will beafter"4".TileUse Tile to assign a value (tile) based on ranges in the data. The tool does this based on the user specifying one of 5 methods. 2 fields are added to the data:Tile number is the assigned tile of the record.Tile sequence number is the record number of the record's position within the Tile.Key word arguments:mode: str, required, accepts: "records"|"sum"|"smart"|"manual"|"unique"The type of tiling to use:records: Input records are divided into the specified amount of tiles so that each tile is assigned the same amount of records.sum: Assigns tiles to cover a range of values where each tile has the same total of the Sum field based on the sort order of the incoming records.smart:Creates tiles based on the Standard Deviation of the values in the specified field. 
The tiles assigned indicate whether the record's value falls within the average range (=0), above the average (1), or below the average (-1), etc.manual:The user can specify the cutoffs for the tiles by typing a value on a new line for each range.unique: For every unique value in a specified field or fields, a unique tile is assigned. If multiple fields are specified, a tile is assigned based on that combination of values.field: str, requiredThe field to perform tiling on.num_tiles: int, default=NoneThe amount of tiles to be created.MUSTbe specified formode="records"|"sum".IGNOREDotherwise.no_split: str, default=NoneA tile is not split across this field if selected.IGNOREDifmodeis not "records".manual: [int], default=Falsethe upper limit of each manually defined tile.IGNOREDifmodeis not "manual".tile_name: str, default="Tile_Num"The field name for the tile number of the record.seq_name: str, default="Tile_SequenceNum"The field name for the record number of the record's position within the Tilesmart_tile_name: str, default="SmartTile_Name"The field name for the smart tile description ranging fromExtremely LowtoExtremely High.UniqueUse Unique to distinguish whether a data record is unique or a duplicate by grouping on one or more specified fields, then sorting on those fields.Key word arguments:fields: [str], requiredThe combination of fields that you want to test for uniqueness.Returns:self, with 2 new instance attributes:self.uniqueandself.duplicates.JoinUse Join to combine 2 inputs based on common fields between the 2 dataframes. You can also Join 2 data streams based on record position. This tool can also do a cartesean join (append in alteryx).Key word arguments:how: string, default="cross", accepts="cross"|"position"If cross, the tool will do a join by keys, if position the tool will do a join by record position.left_keys: [str], default=NoneThe field names from the left (first) dataframe.IGNOREDwhenhow="position".right_keys: [str], default=NoneThe field names from the right (second) dataframe.IGNOREDwhenhow="position".MUSTbe the same length asleft_keysReturns:self, with 3 new instance attributes:self.left,self.rightandself.inner.JoinMultipleUse JoinMultiple to combine 2 or more inputs based on a commonality between the input dataframes. By default, the tool outputs a full outer join. Visit Join Tool for more information.Key word arguments:fields: [[str]], default=[]A list of lists of keys. The number of sub-lists must be equal to the number of dataframes passed to execute. The length of each sub-list must be equal.IGNOREDif by_position is True.names: [str], default=["#1","#2"...]The name of each dataframe passed to execute. This is used for renaming columns that have duplicated names.by_position: bool, default=FalseIf False, the tool will do a join by keys. If True, the tool will do a join by record position.inner: bool, default=FalseIf False, returns a full outer join of all the dataframes. If True, only return the inner join of the dataframes.Returns:Joined Dataframe.UnionUse Union to combine 2 or more datasets on column names or positions. In the output, each column contains the rows from each input.Key word arguments:mode: str, default="position", accepts="position"|"name"|"manual"position:Stack data by the column order in the dataframes.name:Stack data by column name.manual:Allows you to manually specify how to stack data. When you choose this method, the manual parameter isREQUIREDsubset: bool, default=TrueIf true, only output the fields that match. 
  If False, keep all fields from all dataframes and populate missing data with pd.NA.
- manual: [], default=[]. A list of lists of field names. The number of sub-lists must be equal to the number of dataframes passed to execute. The length of each sub-list must also be equal; this will contain the order of each dataframe to stack.

Returns: Unioned dataframe.

DateTime

Use DateTime to transform date-time data to and from a variety of formats, including both expression-friendly and human-readable formats. You can also specify the language of your date-time data. When carrying out operations with 2 date-time values of different precision, the higher precision prevails. To format more precise date-time formats as strings, you need to insert a Select tool before you write to a database.

Keyword arguments:

- to_string: bool, default=False. If True, convert a DateTime object to a String. If False, convert a String to a DateTime object.
- field: str, default=None. The name of the field that you want to convert.
- label: str, default="". The name of the newly added field.
- pattern: str, default=None. The pattern to use with the conversion.

Day, Month and Year Formats

- d: Day of the month as digits, without leading zeros for single-digit days.
- day: The full name of the day of the week.
- dd: Day in 2 digits, with leading zeros for single-digit days. On input, leading zeros are optional.
- dy: Day of the week as a 3-letter abbreviation. On input, full names are accepted but Alteryx doesn't check that the day of the week agrees with the rest of the date.
- EEEE: The full name of the day of the week.
- M: A single-digit month, without a leading zero.
- MM: Month as digits, with leading zeros for single-digit months. On input, leading zeros are optional.
- MMM: The abbreviated name of the month.
- MMMM: The name of the month spelled out.
- Mon: A 3-letter abbreviation of the name of the month. On input, full names are also accepted.
- Month: Name of the month. On input, abbreviations are also accepted.
- yy: Year represented only by the last two digits. When converting from a string, two-digit years are mapped into the range from the current year minus 66 years to the current year plus 33 years. For example, in 2016, a two-digit year will be mapped into the range 1950 to 2049. On input, four digits are also accepted.
- yyyy: Year represented by the full 4 digits. On input, 2 digits will also be accepted and mapped as done for the "yy" pattern.

Hour, Minute and Seconds Formats

- ahh: AM/PM (Simplified Chinese only).
- H: Hour, with no leading zeros for single-digit hours (24-hour clock).
- HH or hh: Hours, with leading zeros for single-digit hours (24-hour clock).
- mm: Minutes, with leading zeros for single-digit minutes.
- ss: Seconds, with leading zeros for single-digit seconds.

Separators

On output, separators in the date/time format are used exactly.
On input:

- "-" and "/" are accepted as equivalent.
- White space is ignored.
- ":" and "," must match exactly.

Returns: Altered dataframe.

Regex

Use RegEx (Regular Expression) to leverage regular expression syntax to parse, match, or replace data.

Keyword arguments:

- field: str, default=None, required. The name of the column to parse.
- pattern: str, default=None, required. The regex expression to be used on the selected field.
- case_insensitive: bool, default=True. Whether or not the regex expression should be sensitive to case.
- method: str, default="match", accepts: "match"|"parse"|"tokenize"|"replace". The method to use to parse the text.
  - match: Add a column containing a number: 1 if the expression matched, 0 if it did not.
  - parse: Split the matched expression into new columns, one for each marked group in the pattern.

Returns: Altered dataframe.

CrossTab

Use Cross Tab to pivot the orientation of data in a table by moving vertical data fields onto a horizontal axis and summarizing data where specified.

Keyword arguments:

- groupings: [str], default=[]. A list of field names that should be used to group the data. Data with identical values are grouped together into a single row.
- header: str, required. A field name to get the new headers from. A new column is created for each unique value in the selected column.
- value_field: str, required. A field name for the column that contains the values used to populate the new columns.
- method: str, required, accepts: "Sum"|"Average"|"Count"|"CountWithNulls"|"First"|"Last"|"Concat". The method for aggregating values when combining multiple values in a field.
- sep: str, default="," when method is "Concat". The separator to use for concatenation of multiple fields.

Returns: Altered dataframe.
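Several of these tools map directly onto pandas operations. As an illustration of the Sort tool's fields/order semantics described above, here is a minimal pandas sketch (the dataframe and column names are invented for the example; the wrapper's own call signature may differ):

```python
import pandas as pd

df = pd.DataFrame({"region": ["B", "A", "A", "B"],
                   "sales": [4, 22, 3, 9]})

# Sort-tool semantics: fields gives the sort priority,
# order gives ascending (True) or descending (False) per field.
fields = ["region", "sales"]
order = [True, False]

sorted_df = df.sort_values(by=fields, ascending=order)
print(sorted_df)
```

Note that plain string sorting is the handle_alpha_numeric=False behavior: lexicographically, "22" sorts before "4".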
altadata
The ALTADATA Python library provides convenient access to the ALTADATA API from applications written in the Python language. With this library, developers can build applications around the ALTADATA API without having to deal with accessing and managing requests and responses.

Overview

- Installation
- Retrieving Data
- Documentation
- License

Installation

    pip install altadata

Retrieving Data

You can get the entire dataset with the code below. By default, this function returns a list of dicts.

    client = AltaDataAPI(YOUR_API_KEY)
    data = client.get_data(PRODUCT_CODE).load()

We currently have pandas dataframe support in the library, so users can optionally retrieve their datasets as a pandas dataframe. If the dataframe_functionality parameter is True, the function returns a pandas dataframe. Note: this functionality requires pandas (v0.23 or above) to work.

    client = AltaDataAPI(api_key=YOUR_API_KEY, dataframe_functionality=True)
    data = client.get_data(PRODUCT_CODE).load()

You can also retrieve data using various conditions.

    client = AltaDataAPI(YOUR_API_KEY)
    data = client.get_data(PRODUCT_CODE) \
        .equal(condition_column=COLUMN_NAME, condition_value=CONDITION_VALUE) \
        .sort(order_column=COLUMN_NAME, order_method="desc") \
        .load()

Documentation

Read the documentation online at altadata-python.rtfd.io. Optionally, build documentation from the docs/ folder:

    pip install sphinx
    cd docs
    make html

License

altadata-python is under the MIT license. See the LICENSE file for more info.
altaipony
|docs-badge| |license-badge| |joss-badge| |zenodo-badge|

.. |joss-badge| image:: https://joss.theoj.org/papers/10.21105/joss.02845/status.svg
   :target: https://doi.org/10.21105/joss.02845

.. |zenodo-badge| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.5040830.svg
   :target: https://doi.org/10.5281/zenodo.5040830

.. |docs-badge| image:: https://readthedocs.org/projects/altaipony/badge/?version=latest
   :target: https://altaipony.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

.. |license-badge| image:: https://img.shields.io/github/license/mashape/apistatus.svg
   :target: https://github.com/ekaterinailin/AltaiPony/blob/master/LICENSE
   :alt: GitHub

.. image:: logo.png
   :height: 100px
   :width: 100px
   :alt: Logo credit: Elizaveta Ilin, 2018

AltaiPony
=========

De-trend light curves from the Kepler, K2, and TESS missions, and search them for flares. Inject and recover synthetic flares to account for de-trending and noise loss in flare energy, and determine an energy-dependent recovery probability for every flare candidate. Uses ``lightkurve`` under the hood, as well as ``pandas``, ``numpy``, ``pytest``, ``astropy`` and more.

Find the documentation at altaipony.readthedocs.io_

Installation
^^^^^^^^^^^^

Use pip to install AltaiPony

>>> pip install altaipony

Or install directly from the repository:

>>> git clone https://github.com/ekaterinailin/AltaiPony.git
>>> cd AltaiPony
>>> python setup.py install

Getting Started
^^^^^^^^^^^^^^^

See this notebook_ for an easy introduction, also docs_.

Problems?
^^^^^^^^^

Often, when something does not work in **AltaiPony**, and this documentation is useless, troubleshooting can be done by diving into the extensive **lightkurve** docs_. Otherwise, you can always shoot Ekaterina an email_ or directly open an issue on GitHub_. Many foreseeable problems will be due to bugs in **AltaiPony** or bad instructions on this website.

Contribute to AltaiPony
^^^^^^^^^^^^^^^^^^^^^^^

**AltaiPony** is under active development on Github_. If you use **AltaiPony** in your research and find yourself missing a functionality, I recommend opening an issue on GitHub_ or shooting Ekaterina an email_. Please do either of the two before you open a pull request. This may save you a lot of development time.

How to cite this work
^^^^^^^^^^^^^^^^^^^^^

If you end up using this package for your science, please cite Ilin et al. (2021) [a]_ and Davenport (2016) [b]_.

Please also cite `lightkurve` as indicated in their docs [1]_.

Depending on the methods you use, you may also want to cite

- Maschberger and Kroupa (2009) [2]_ (MMLE power law fit)
- Wheatland (2004) [3]_ (MCMC power law fit)
- Aigrain et al. (2016) [4]_ and their software [5]_ (K2SC de-trending -- DEPRECATED)
- Davenport et al. (2014) [6]_ or Mendoza et al. (2022) [7]_ (injection-recovery analysis)

.. [a] Ekaterina Ilin, Sarah J. Schmidt, Katja Poppenhäger, James R. A. Davenport, Martti H. Kristiansen, Mark Omohundro (2021). "Flares in Open Clusters with K2. II. Pleiades, Hyades, Praesepe, Ruprecht 147, and M67" Astronomy & Astrophysics, Volume 645, id.A42, 25 pp. https://doi.org/10.1051/0004-6361/202039198

.. [b] James R. A. Davenport "The Kepler Catalog of Stellar Flares" The Astrophysical Journal, Volume 829, Issue 1, article id. 23, 12 pp. (2016). https://doi.org/10.3847/0004-637X/829/1/23

.. [1] https://docs.lightkurve.org/about/citing.html
.. [2] Thomas Maschberger, Pavel Kroupa, "Estimators for the exponent and upper limit, and goodness-of-fit tests for (truncated) power-law distributions" Monthly Notices of the Royal Astronomical Society, Volume 395, Issue 2, May 2009, Pages 931–942, https://doi.org/10.1111/j.1365-2966.2009.14577.x

.. [3] Wheatland, Michael S. "A Bayesian approach to solar flare prediction." The Astrophysical Journal 609.2 (2004): 1134. https://doi.org/10.1086/421261

.. [4] Aigrain, Suzanne; Parviainen, Hannu; Pope, Benjamin "K2SC: flexible systematics correction and detrending of K2 light curves using Gaussian process regression" Monthly Notices of the Royal Astronomical Society, Volume 459, Issue 3, p.2408-2419 https://doi.org/10.1093/mnras/stw706

.. [5] Aigrain, Suzanne; Parviainen, Hannu; Pope, Benjamin "K2SC: K2 Systematics Correction." Astrophysics Source Code Library, record ascl:1605.012 https://ui.adsabs.harvard.edu/abs/2016ascl.soft05012A/abstract

.. [6] J. R. A. Davenport et al., "Kepler Flares. II. The Temporal Morphology of White-light Flares on GJ 1243," The Astrophysical Journal, vol. 797, p. 122, Dec. 2014, doi: 10.1088/0004-637X/797/2/122

.. [7] G. T. Mendoza, J. R. A. Davenport, E. Agol, J. A. G. Jackman, and S. L. Hawley, "Llamaradas Estelares: Modeling the Morphology of White-light Flares," The Astronomical Journal, vol. 164, no. 1, p. 17, Jul. 2022, doi: 10.3847/1538-3881/ac6fe6

.. _Appaloosa: https://github.com/jradavenport/appaloosa/
.. _altaipony.readthedocs.io: https://altaipony.readthedocs.io/en/latest/
.. _notebook: https://github.com/ekaterinailin/AltaiPony/blob/master/notebooks/Getting_Started.ipynb
.. _docs: https://altaipony.readthedocs.io/en/latest/
.. _Github: https://github.com/ekaterinailin/AltaiPony/issues/new
.. _email: [email protected]
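A minimal usage sketch, following the quickstart pattern in the docs_ (the target is an arbitrary example, and argument names may differ between versions):

>>> from altaipony.lcio import from_mast
>>> # Fetch a light curve from MAST, then de-trend it and search it for flares
>>> flc = from_mast("TRAPPIST-1", mode="LC", cadence="short", mission="K2")
>>> flc = flc.detrend("savgol")
>>> flc = flc.find_flares()
>>> flc.flares  # flare candidates collected in a pandas DataFrame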
altair
Vega-Altair

Vega-Altair is a declarative statistical visualization library for Python. With Vega-Altair, you can spend more time understanding your data and its meaning. Vega-Altair's API is simple, friendly and consistent and built on top of the powerful Vega-Lite JSON specification. This elegant simplicity produces beautiful and effective visualizations with a minimal amount of code.

Vega-Altair was originally developed by Jake Vanderplas and Brian Granger in close collaboration with the UW Interactive Data Lab. The Vega-Altair open source project is not affiliated with Altair Engineering, Inc.

Documentation

See Vega-Altair's Documentation Site as well as the Tutorial Notebooks. You can run the notebooks directly in your browser by clicking on one of the following badges:

Example

Here is an example using Vega-Altair to quickly visualize and display a dataset with the native Vega-Lite renderer in the JupyterLab:

    import altair as alt

    # load a simple dataset as a pandas DataFrame
    from vega_datasets import data
    cars = data.cars()

    alt.Chart(cars).mark_point().encode(
        x='Horsepower',
        y='Miles_per_Gallon',
        color='Origin',
    )

One of the unique features of Vega-Altair, inherited from Vega-Lite, is a declarative grammar of not just visualization, but interaction. With a few modifications to the example above we can create a linked histogram that is filtered based on a selection of the scatter plot.

    import altair as alt
    from vega_datasets import data

    source = data.cars()

    brush = alt.selection_interval()

    points = alt.Chart(source).mark_point().encode(
        x='Horsepower',
        y='Miles_per_Gallon',
        color=alt.condition(brush, 'Origin', alt.value('lightgray'))
    ).add_params(brush)

    bars = alt.Chart(source).mark_bar().encode(
        y='Origin',
        color='Origin',
        x='count(Origin)'
    ).transform_filter(brush)

    points & bars

Features

- Carefully-designed, declarative Python API.
- Auto-generated internal Python API that guarantees visualizations are type-checked and in full conformance with the Vega-Lite specification.
- Display visualizations in JupyterLab, Jupyter Notebook, Visual Studio Code, on GitHub and nbviewer, and many more.
- Export visualizations to various formats such as PNG/SVG images, stand-alone HTML pages and the Online Vega-Lite Editor.
- Serialize visualizations as JSON files.

Installation

Vega-Altair can be installed with:

    pip install altair

If you are using the conda package manager, the equivalent is:

    conda install altair -c conda-forge

For full installation instructions, please see the documentation.

Getting Help

If you have a question that is not addressed in the documentation, you can post it on StackOverflow using the altair tag.
For bugs and feature requests, please open a Github Issue.

Development

You can find the instructions on how to install the package for development in the documentation.

To run the tests and linters, use

    hatch run test

For information on how to contribute your developments back to the Vega-Altair repository, see CONTRIBUTING.md

Citing Vega-Altair

If you use Vega-Altair in academic work, please consider citing https://joss.theoj.org/papers/10.21105/joss.01057 as

    @article{VanderPlas2018,
        doi = {10.21105/joss.01057},
        url = {https://doi.org/10.21105/joss.01057},
        year = {2018},
        publisher = {The Open Journal},
        volume = {3},
        number = {32},
        pages = {1057},
        author = {Jacob VanderPlas and Brian Granger and Jeffrey Heer and Dominik Moritz and Kanit Wongsuphasawat and Arvind Satyanarayan and Eitan Lees and Ilia Timofeev and Ben Welsh and Scott Sievert},
        title = {Altair: Interactive Statistical Visualizations for Python},
        journal = {Journal of Open Source Software}
    }

Please additionally consider citing the Vega-Lite project, which Vega-Altair is based on: https://dl.acm.org/doi/10.1109/TVCG.2016.2599030

    @article{Satyanarayan2017,
        author = {Satyanarayan, Arvind and Moritz, Dominik and Wongsuphasawat, Kanit and Heer, Jeffrey},
        title = {Vega-Lite: A Grammar of Interactive Graphics},
        journal = {IEEE transactions on visualization and computer graphics},
        year = {2017},
        volume = {23},
        number = {1},
        pages = {341-350},
        publisher = {IEEE}
    }
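As a sketch of the export feature listed above: saving goes through the chart's `save` method. JSON and HTML work out of the box; in recent Altair versions, PNG/SVG/PDF export additionally requires the `vl-convert-python` package:

```python
import altair as alt
import pandas as pd

chart = alt.Chart(pd.DataFrame({"x": [1, 2, 3], "y": [3, 1, 2]})).mark_line().encode(
    x="x", y="y"
)

chart.save("chart.json")  # Vega-Lite JSON specification
chart.save("chart.html")  # stand-alone HTML page
chart.save("chart.png")   # PNG image (needs vl-convert-python installed)
```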
altair-ally
Altair Ally

This package is in early development and its name, API, etc might change rapidly. It is not yet on PyPI.

Altair Ally ("Aly" for short) is a companion package to Altair, which provides a few shortcuts to create common plots for exploratory data analysis (EDA), particularly those involving visualizing an entire dataset. The goal is to encourage good EDA habits by making it quick and simple to create these visualizations, and to provide useful default interactions with the plots.

Thanks to the excellent API in Altair / Vega-Lite, some configuration is possible by using the built-in methods of the returned chart objects, but the general philosophy is that highly customized plots are better made in Altair directly. The package name is a nod to ggally, which complements ggplot2 in a similar, but much more extensive manner.

Check out the documentation for more info and examples.
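As a sketch of the intended shortcut style (function names as in the current documentation; per the note above, the API may still change):

```python
import altair_ally as aly
from vega_datasets import data

iris = data.iris()

aly.dist(iris)  # distribution plots of all numeric columns
aly.corr(iris)  # pairwise correlation plot
aly.pair(iris)  # interactive pairplot of all numeric columns
```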
altair-catplot
# Altair-catplot

A utility to use Altair to generate box plots, jitter plots, and ECDFs, i.e. plots with a categorical variable where a data transformation not covered in Altair is required.

## Motivation

[Altair](https://altair-viz.github.io) is a Python interface for [Vega-Lite](http://vega.github.io/vega-lite). The resulting plots are easily displayed in JupyterLab and/or exported. The grammar of Vega-Lite which is largely present in Altair is well-defined, well-documented, and clear. This is one of many strong features of Altair and Vega-Lite.

There is always a trade-off when using high level plotting libraries. You can rapidly make plots, but they are less configurable. The developers of Altair have (wisely, in my opinion) adhered to the grammar of Vega-Lite. If Vega-Lite does not have a feature, Altair does not try to add it.

The developers of Vega-Lite [have plans to add more functionality](https://github.com/vega/vega-lite/pull/4096/files). Indeed, in the soon to be released (as of August 23, 2018) Vega-Lite 3.0, box plots are included. Adding a jitter transform is also planned. It would be useful to be able to conveniently make jitter and box plots with the current features of Vega-Lite and Altair. I wrote Altair-catplot to fill in this gap until the functionality is implemented in Vega-Lite and Altair.

The box plots and jitter plots I have in mind apply to the case where one axis is quantitative and the other axis is nominal or ordinal (that is, categorical). So, we are making plots with one categorical variable and one quantitative. Hence the name, Altair-catplot.

## Installation

You can install altair-catplot using pip. You will need to have a recent version of Altair and all of its dependencies installed.

    pip install altair_catplot

## Usage

I will import Altair-catplot as `altcat`, and while I'm at it will import the other modules we need.

```python
import numpy as np
import pandas as pd
import altair as alt
import altair_catplot as altcat
```

Every plot is made using the `altcat.catplot()` function. It has the following call signature.

```python
catplot(data=None,
        height=Undefined,
        width=Undefined,
        mark=Undefined,
        encoding=Undefined,
        transform=None,
        sort=Undefined,
        jitter_width=0.2,
        box_mark=Undefined,
        whisker_mark=Undefined,
        box_overlay=False,
        **kwargs)
```

The `data`, `mark`, `encoding`, and `transform` arguments must all be provided. The `data`, `mark`, and `encoding` fields are as for `alt.Chart()`. Note that these are specified as constructor attributes, not as you would using Altair's more idiomatic methods like `mark_point()`, `encode()`, etc.

In this package, I consider a box plot, jitter plot, or ECDF to be transforms of the data, as they are constructed by performing some aggregation or transformation to the data.
The exception is the box plot: in Vega-Lite 3.0+'s specification for box plots, `boxplot` is a mark.

The utility is best shown by example, so below I present several.

## Sample data

To demonstrate usage, I will first create a data frame with sample data for plotting.

```python
np.random.seed(4288233)

data = {'data ' + str(i): np.random.normal(*musig, size=50)
        for i, musig in enumerate(zip([0, 1, 2, 3], [1, 1, 2, 3]))}

df = pd.DataFrame(data=data).melt()
df['dummy metadata'] = np.random.choice(['poodle', 'beagle', 'collie', 'dalmation', 'terrier'],
                                        size=len(df))

df.head()
```

|   | variable | value     | dummy metadata |
|---|----------|-----------|----------------|
| 0 | data 0   | 1.980946  | collie         |
| 1 | data 0   | -0.442286 | dalmation      |
| 2 | data 0   | 1.093249  | terrier        |
| 3 | data 0   | -0.233622 | collie         |
| 4 | data 0   | -0.799315 | dalmation      |

The categorical variable is `'variable'` and the quantitative variable is `'value'`.

## Box plot

We can create a box plot as follows. Note that the mark is a string specifying a box plot (as will be in the future with Altair), and the encoding is specified as a dictionary of key-value pairs.

```python
altcat.catplot(df,
               mark='boxplot',
               encoding=dict(x='value:Q',
                             y=alt.Y('variable:N', title=None),
                             color=alt.Color('variable:N', legend=None)))
```

![png](images/output_10_0.png)

This box plot can be generated in future editions of Altair after Vega-Lite 3.0 is formally released as follows.

```python
alt.Chart(df).mark_boxplot().encode(
    x='value:Q',
    y=alt.Y('variable:N', title=None),
    color=alt.Color('variable:N', legend=None))
```

The resulting plot looks different from what I have shown here, using instead the Vega-Lite defaults. Specifically, the whiskers are black and do not have caps, and the boxes are thinner.
You can check it out in the online Vega-Lite editor.

Because box plots are unique in that they are specified with a mark and not a transform, we could use the `mark` argument above to specify a box plot. We could equivalently do it with the `transform` argument. (Note that this will not be possible when box plots are implemented in Altair.)

```python
box = altcat.catplot(df,
                     encoding=dict(y=alt.Y('variable:N', title=None),
                                   x='value:Q',
                                   color=alt.Color('variable:N', legend=None)),
                     transform='box')
box
```

![png](images/output_12_0.png)

```python
type(box)
```

    altair.vegalite.v2.api.LayerChart

We can independently specify properties of the box and whisker marks using the `box_mark` and `whisker_mark` kwargs. For example, say we wanted our colors to be [Betancourt red](https://betanalpha.github.io/assets/case_studies/principled_bayesian_workflow.html#step_four:_build_a_generative_model19).

```python
altcat.catplot(df,
               mark=dict(type='point', color='#7C0000'),
               box_mark=dict(color='#7C0000'),
               whisker_mark=dict(strokeWidth=2, color='#7C0000'),
               encoding=dict(x='value:Q',
                             y=alt.Y('variable:N', title=None)),
               transform='box')
```

![png](images/output_15_0.png)

## Jitter plot

I try my best to subscribe to the "plot all of your data" philosophy. To that end, a strip plot is a useful way to show all of the measurements. Here is one way to make a strip plot in Altair.

```python
alt.Chart(df).mark_tick().encode(
    x='value:Q',
    y=alt.Y('variable:N', title=None),
    color=alt.Color('variable:N', legend=None))
```

![png](images/output_17_0.png)

The problem with strip plots is that they can have trouble with overlapping data points. A common approach to deal with this is to "jitter," or place the glyphs with small random displacements along the categorical axis. This involves using a jitter transform.
While the current release candidate for Vega-Lite 3.0 has box plot capabilities, it does not have a jitter transform, though that will likely be coming in the future (see [here](https://github.com/vega/vega-lite/issues/396) and [here](https://github.com/vega/vega-lite/pull/4096/files)). A proper transform, where data points are offset but the categorical axis truly keeps a nominal or ordinal value, would be desirable, but is not currently possible. The jitter plot here is a hack wherein the axes are quantitative and the tick labels are actually carefully placed text. This means that the "axis labels" will be wrecked if you try interactivity with the jitter plot. Nonetheless, tooltips still work.

```python
jitter = altcat.catplot(df,
                        height=250,
                        width=450,
                        mark='point',
                        encoding=dict(y=alt.Y('variable:N', title=None),
                                      x='value:Q',
                                      color=alt.Color('variable:N', legend=None),
                                      tooltip=alt.Tooltip(['dummy metadata:N'], title='breed')),
                        transform='jitter')
jitter
```

![png](images/output_19_0.png)

Alternatively, we could color the jitter points with the dummy metadata.

```python
altcat.catplot(df,
               height=250,
               width=450,
               mark='point',
               encoding=dict(y=alt.Y('variable:N', title=None),
                             x='value:Q',
                             color=alt.Color('dummy metadata:N', title='breed')),
               transform='jitter')
```

![png](images/output_21_0.png)

### Jitter-box plots

Even while plotting all of the data, we sometimes want to graphically display summary statistics. We could (in Vega-Lite 3.0) make a strip-box plot, in which we have a strip plot overlayed on a box plot. In the future, you can generate this using Altair as follows.

```python
strip = alt.Chart(df).mark_point(opacity=0.3).encode(
    x='value:Q',
    y=alt.Y('variable:N', title=None),
    color=alt.Color('variable:N', legend=None))

box = alt.Chart(df).mark_boxplot(color='lightgray').encode(
    x='value:Q',
    y=alt.Y('variable:N', title=None))

box + strip
```
The result may be viewed in the online Vega-Lite editor.

The strip-box plots have the same issue
as strip plots and could stand to have a little jitter. Jitter-box plots consist of a jitter plot overlayed with a box plot. Why not just make a box plot and a jitter plot and then compose them using Altair's nifty composition capabilities as I did in the plot I just described? We cannot do that because box plots have a truly categorical axis, but jitter plots have a hacked "categorical" axis that is really quantitative, so we can't overlay. We can try. The result is not pretty.

```python
box + jitter
```

![png](images/output_23_0.png)

Instead, we use `'jitterbox'` for our transform. The default color for the boxes and whiskers is light gray.

```python
altcat.catplot(df,
               height=250,
               width=450,
               mark='point',
               encoding=dict(y=alt.Y('variable:N', title=None),
                             x='value:Q',
                             color=alt.Color('variable:N', legend=None)),
               transform='jitterbox')
```

![png](images/output_25_0.png)

Note that the `mark` kwarg applies to the jitter plot. If we want to make specifications about the boxes and whiskers we need to separately specify them using the `box_mark` and `whisker_mark` kwargs as we did with box plots. Note that if the `box_mark` and `whisker_mark` are specified and their color is not explicitly included in the specification, their color matches the specification for the jitter plot.

```python
altcat.catplot(df,
               height=250,
               width=450,
               mark='point',
               box_mark=dict(strokeWidth=2, opacity=0.5),
               whisker_mark=dict(strokeWidth=2, opacity=0.5),
               encoding=dict(y=alt.Y('variable:N', title=None),
                             x='value:Q',
                             color=alt.Color('variable:N', legend=None)),
               transform='jitterbox')
```

![png](images/output_27_0.png)

## ECDFs

An empirical cumulative distribution function, or ECDF, is a convenient way to visualize a univariate probability distribution. Consider a measurement *x* in a set of measurements *X*. The ECDF evaluated at *x* is defined as

ECDF(*x*) = fraction of data points in *X* that are ≤ *x*.

To generate ECDFs colored by category, we use the `'ecdf'` transform.

```python
altcat.catplot(df,
               mark='line',
               encoding=dict(x='value:Q',
                             color='variable:N'),
               transform='ecdf')
```

![png](images/output_29_0.png)

Note that here we have chosen to represent the ECDF as a line, which is a more formal way of plotting the ECDF. We could, without loss of information, plot the "corners of the steps", which represent the actual measurements that were made. We do this by specifying the mark as `'point'`.

```python
altcat.catplot(df,
               mark='point',
               encoding=dict(x='value:Q',
                             color='variable:N'),
               transform='ecdf')
```

![png](images/output_31_0.png)

This kind of plot can be easily made directly using Pandas and Altair by adding a column to the data frame containing the y-values of the ECDF.

```python
df['ECDF'] = df.groupby('variable')['value'].transform(lambda x: x.rank(method='first') / len(x))

alt.Chart(df).mark_point().encode(
    x='value:Q',
    y='ECDF:Q',
    color='variable:N')
```

![png](images/output_33_0.png)

This, however, is not possible when making a formal line plot of the ECDF.

An added advantage of plotting the ECDF as dots, which represent individual measurements, is that we can color the points. We may instead wish to show the ECDF over all measurements and color the dots by the categorical variable. We do that using the `colored_ecdf` transform.

```python
altcat.catplot(df,
               mark='point',
               encoding=dict(x='value:Q',
                             color='variable:N'),
               transform='colored_ecdf')
```

![png](images/output_35_0.png)

## ECCDFs

We may also make a complementary empirical cumulative distribution, an ECCDF.
This is defined as

ECCDF(*x*) = 1 - ECDF(*x*).

These are often useful when looking for powerlaw-like behavior, in which case you want the ECCDF axis to have a logarithmic scale.

```python
altcat.catplot(df,
               mark='point',
               encoding=dict(x='value:Q',
                             y=alt.Y('ECCDF:Q', scale=alt.Scale(type='log')),
                             color='variable:N'),
               transform='eccdf')
```

![png](images/output_37_0.png)
altair-data-server
Altair data server

This is a data transformer plugin for Altair that transparently serves data for Altair charts via a background WSGI server.

Note that charts will only render as long as your Python session is active.

The data server is a good option when you'll be generating multiple charts as part of an exploration of data.

Usage

First install the package and its dependencies:

    $ pip install altair_data_server

Next import altair and enable the data server:

    import altair as alt
    alt.data_transformers.enable('data_server')

Now when you create an Altair chart, the data will be served in the background rather than embedded in the chart specification.

Once you are finished with exploration and want to generate charts that will have their data fully embedded in the notebook, you can restore the default data transformer:

    alt.data_transformers.enable('default')

and carry on from there.

Remote Systems

Remotely-hosted notebooks (like JupyterHub or Binder) usually do not allow the end user to access arbitrary ports. To enable users to work on that setup, make sure jupyter-server-proxy is installed on the jupyter server, and use the proxied data server transformer:

    alt.data_transformers.enable('data_server_proxied')

Example

You can see this in action, as well as read some of the motivation for this plugin, in the example notebook: AltairDataServer.ipynb. Click the Binder or Colab links above to try it out in your browser.

Known Issues

Because jupyter-server-proxy requires at least Python 3.5, the methods described in Remote Systems do not work for older versions of Python.
altair-easeviz
Altair-EasevizThis Python library is dedicated to providing resources for Vega-Altair, with the aim of enhancing the creation of improved and more accessible graphs. The development of this library involved a thorough exploration of both the Altair and Vega-Lite APIs.InstallationThe library and its dependencies can be easily installed, using:pip install altair-easevizDocumentationDocumentation for this library can be foundhere.FeaturesInitial release of four accessible themes for Vega-AltairGenerate description for charts addedHTML with accessible functions addedModels for creation and customization themes of vega-lite specification addedExampleThis next example shows how to enable one of our four themes. More examples are available in ourdocumentation.importaltairasaltimportpandasaspd# Enable Theme accessible_theme, dark_accessible_theme, filler_pattern_theme, print_themealt.themes.enable('accessible_theme')# Define a Chartsource=pd.DataFrame({'a':['A','B','C','D','E','F','G','H','I'],'b':[28,55,43,91,81,53,19,87,52]})alt.Chart(source).mark_bar().encode(x='a',y='b')Getting HelpFor bugs and feature requests, please open aGithub Issue.
altair-express
Altair ExpressCreate interactive data visualizations in one line of code.Free software: MIT licenseDocumentation:https://altair-express.readthedocs.io.FeaturesTODOCreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template.History0.1.0 (2022-11-01)First release on PyPI.
altair-extra-color-schemes
altair-extra-color-schemesAdditional named color schemes forAltairvia a custom renderer.QuickstartInstallationViapip:pipinstallaltair-extra-color-schemesViaPipenv:pipenvinstallaltair-extra-color-schemesViaPoetry:poetryaddaltair-extra-color-schemesViaPDM:pdmaddaltair-extra-color-schemesViaPyflow:pyflowinstallaltair-extra-color-schemesUsageimportaltairasaltalt.renderers.enable("extra_color_schemes")You can find some example charts in thedemo.ipynbnotebook.Color schemesColor scheme nameTypeSourceNotes"dvs"CategoricalData Visualization Standards (DVS)"Featured Colors" and "Qualitative Colors" > "Example Palette"DevelopmentPoetry(version 1.2.0)Add new color schemes to thealtair_extra_color_schemes/full_template.jinjafilepoetry config virtualenvs.in-project truepoetry installpoetry run jupyter labpoetry run black demo.ipynbpoetry checkDeploymentpoetry version minororpoetry version patchpoetry buildNotesdjLint:pipx install djlintdjlint altair_extra_color_schemes/template.jinja --checkdjlint altair_extra_color_schemes/template.jinja --reformatdjlint altair_extra_color_schemes/template.jinja --profile=jinjadjlint formatter.jinja --reformat --format-css --format-jsDefault color schemesDjHTML:pipx install djhtmldjhtml -i formatter.htmlcurlylint:pipx install curlylintcurlylint altair_extra_color_schemes/full_template.jinjaCloudscape Design System(Figma file)Poetry:Detection of the currently active Python (experimental)documentationCan't install Pandas on Mac M1issuepip --versionNullish coalescing operator (??): Short-circuiting
altair-grid
A Grid.news theme for Python’s Altair statistical visualization library
altair-images
Serving interactive charts with images - based on AltairProject fully supports Google Colab. For more details seeexamplesInstallationpipinstallaltair-imagesHow to useimportnumpyasnpfromaltair_imagesimportplot_with_imageembedding_data=np.load('../tests/pca_data_100.npy')sample_images=np.load('../tests/sample_images_100.npy')labels=np.load('../tests/sample_labels_100.npy')plot_with_image(embedding_data,labels,sample_images)TODOAdd examples in readmeAdd CI for tagging and publishing new version from masterAdd testsExample of line_plot and pictures(speed and frame, weight and body)
altair-latimes
No description available on PyPI.
altair-morberg
altair_morberg

Install

    pip install altair_morberg

How to use

    import altair as alt
    import altair_morberg.core as morberg

    alt.themes.register("morberg_theme", morberg.theme)
    alt.themes.enable("morberg_theme")

Examples using this theme are available in the documentation.
altair-recipes
A collection of ready-made statistical graphics for vega.

vega is a statistical graphics system for the web, meaning the plots are displayed in a browser. As an added bonus, it adds interactions, again through web technologies: select data points, reveal information on hover etc. Interaction and the web are clearly the future of statistical graphics. Even the successor to the famous ggplot for R, ggvis, is based on vega.

altair is a python package that produces vega graphics. Like vega, it adopts an approach to describing statistical graphics known as grammar of graphics which underlies other well known packages such as ggplot for R. It represents an extremely useful compromise of power and flexibility. Its elements are data, marks (points, lines), encodings (relations between data and marks), scales etc.

Sometimes we want to skip all of that and just produce a boxplot (or heatmap or histogram, the argument is the same) by calling:

    boxplot(data.iris(), columns="petalLength", group_by="species")

because:

- It's a well known type of statistical graphics that everyone can recognize and understand on the fly.
- Creativity is nice, in statistical graphics as in many other endeavors, but dangerous: there are more bad charts out there than good ones. The grammar of graphics is no insurance.
- While it's simple to put together a boxplot in altair, it isn't trivial: there are rectangles, vertical lines, horizontal lines (whiskers), points (outliers). Each element is related to a different statistic of the data. It's about 30 lines of code and, unless you run them, it's hard to tell you are looking at a boxplot.
- One doesn't always need the control that the grammar of graphics affords. There are times when I need to see a boxplot as quick as possible. Others, for instance preparing a publication, when I need to control every detail.

The boxplot is not the only example. The scatterplot, the quantile-quantile plot, the heatmap are important idioms that are battle tested in data analysis practice. They deserve their own abstraction. Other packages offering an abstraction above the grammar level are:

- seaborn and the graphical subset of pandas, for example, both provide high level statistical graphics primitives (higher than the grammar of graphics) and they are quite successful (but not web-based).
- ggplot, even if named after the Grammar of Graphics, slipped in some more complex charts, pretending they are elements of the grammar, such as geom_boxplot, because sometimes even R developers are lazy. But a boxplot is not a geom or mark. It's a combination of several ones, certain statistics and so on. I suspect the authors of altair know better than mixing the two levels.

altair_recipes aims to fill this space above altair while making full use of its features. It provides a growing list of "classic" statistical graphics without going down to the grammar level. At the same time it is hoped that, over time, it can become a repository of examples and model best practices for altair, a computable form of its gallery.

There is one more thing. It's nice to have all these famous chart types available at a stroke of the keyboard, but we still have to decide which type of graphics to use and, in certain cases, the association between variables in the data and channels in the graphics (what becomes coordinate, what becomes color etc.). It still is work and things can still go wrong, sometimes in subtle ways. Enter autoplot. autoplot inspects the data, selects a suitable graphics and generates it.
While no claim is made that the result is optimal, it will make reasonable choices and avoid common pitfalls, like overlapping points in scatterplots. While there are interesting research efforts aimed at characterizing the optimal graphics for a given data set, their goal is more ambitious than just selecting from a repertoire of pre-defined graphics types and they are fairly complex. Therefore, at this time autoplot is based on a set of reasonable heuristics derived from decades of experience, such as:

- use stripplot and scatterplot to display continuous data, barcharts for discrete data
- use opacity to counter mark overlap, but not with discrete color maps
- switch to summaries (count and averages) when the amount of overlap is too high
- use facets for discrete data

autoplot is work in progress and perhaps always will be, and feedback is most welcome. A large number of charts generated with it is available at the end of the Examples page and should give a good idea of what it does. In particular, in this first iteration we do not make any attempt to detect if a dataset represents a function or a relation, hence scatterplots are preferred over line plots. Moreover there is no special support for evenly spaced data, such as a time series.

Features

- Free software: BSD license.
- Fully documented.
- Highly consistent API enforced with autosig.
- Near 100% regression test coverage.
- Support for dataframe and vector inputs.
- Support for both wide and long dataframe formats.
- Data can be provided as a dataframe or as a URL pointing to a csv or json file.
- All charts produced are valid altair charts, and can be modified, combined, saved, served, embedded exactly as one.

Chart types

- autocorrelation
- barchart
- boxplot
- heatmap
- histogram, in a simple and multi-variable version
- qqplot
- scatterplot, in the simple and all-vs-all versions
- smoother, smoothing line with IQR range shading
- stripplot

See Examples.
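Putting the two levels together, a short sketch (the boxplot call is the one quoted above; the module alias and dataset are illustrative):

```python
import altair_recipes as ar
from vega_datasets import data

iris = data.iris()

# The "classic graphics" level: one call per chart type
ar.boxplot(iris, columns="petalLength", group_by="species")

# The autoplot level: inspect the data and let the library
# choose a suitable graphics type itself
ar.autoplot(iris)
```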
altair-reveal
Reveal theme for the Altair statistical visualization library.
altair-saver
Altair Saver

This package provides extensions to Altair for saving charts to a variety of output types. Supported output formats are:

- .json/.vl.json: Vega-Lite JSON specification
- .vg.json: Vega JSON specification
- .html: HTML output
- .png: PNG image
- .svg: SVG image
- .pdf: PDF image

Usage

The altair_saver library has a single public function, altair_saver.save(). Given an Altair chart named chart, you can use it as follows:

    from altair_saver import save

    save(chart, "chart.vl.json")            # Vega-Lite JSON specification
    save(chart, "chart.vg.json")            # Vega JSON specification
    save(chart, "chart.html")               # HTML document
    save(chart, "chart.html", inline=True)  # HTML document with all JS code included inline
    save(chart, "chart.png")                # PNG Image
    save(chart, "chart.svg")                # SVG Image
    save(chart, "chart.pdf")                # PDF Image

Renderer

Additionally, altair_saver provides an Altair Renderer entrypoint that can display the above outputs directly in Jupyter notebooks. For example, you can specify a vega-lite mimetype (supported by JupyterLab, nteract, and other platforms) with a PNG fallback for other frontends as follows:

    alt.renderers.enable('altair_saver', ['vega-lite', 'png'])

Installation

The altair_saver package can be installed with:

    $ pip install altair_saver

Saving as vl.json and as html requires no additional setup.

To install with conda, use

    $ conda install -c conda-forge altair_saver

The conda package installs the NodeJS dependencies described below, so charts can be saved to png, svg, and pdf without additional setup.

Additional Requirements

Output to png, svg, and pdf requires execution of Javascript code, which altair_saver can do via one of two backends.

Selenium

The selenium backend supports the following formats: .vg.json, .png, .svg.

To be used, it requires the Selenium Python package, and a properly configured installation of either chromedriver or geckodriver.

On Linux systems, this can be set up as follows:

    $ pip install selenium
    $ apt-get install chromium-chromedriver

Using conda, the required packages can be installed as follows (a compatible version of Google Chrome must be installed separately):

    $ conda install -c python-chromedriver-binary

Selenium supports other browsers as well, but altair-saver is currently only tested with Chrome.

NodeJS

The nodejs backend supports the following formats: .vg.json, .png, .svg, .pdf.

It requires NodeJS, along with the vega-lite, vega-cli, and canvas packages.

First install NodeJS either by direct download or via a package manager, and then use the npm tool to install the required packages:

    $ npm install vega-lite vega-cli canvas

Using conda, node and the required packages can be installed as follows:

    $ conda install -c conda-forge vega-cli vega-lite-cli

These packages are included automatically when installing altair_saver via conda-forge.
altair-smartworks-ecp-client
Altair SmartWorks IoT Edge Compute Platform client
altair-stiles
A theme for Python’s Altair statistical visualization library
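No usage is documented here; assuming it follows the same registration pattern as the altair_morberg theme above, a hypothetical sketch (the module's `theme` attribute name is an assumption):

```python
import altair as alt
import altair_stiles as stiles

alt.themes.register("stiles", stiles.theme)  # attribute name assumed
alt.themes.enable("stiles")
```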
altair_tiles
This package is in an early development stage. You should expect things to break unannounced until we release a version 1.0.0.

You can use altair_tiles to add tiles from any xyz tile provider such as OpenStreetMap to your Altair chart. It is a counterpart to the amazing contextily package which provides this functionality for matplotlib.

You can find the documentation here. For a general introduction to plotting geographic data with Altair, see Geoshape - Vega-Altair and Specifying Data - Vega-Altair.

Installation

    pip install altair_tiles

altair-tiles requires at least Vega version 5.26.0. If you use an IDE such as a Jupyter Notebook or VS Code, you usually don't have to worry about this.

Development

    python -m venv .venv
    source .venv/bin/activate
    pip install -e '.[dev]'

Run linters and tests with

    hatch run test

Build and view the documentation with

    hatch run doc:clean
    hatch run doc:build
    hatch run doc:serve

To run a clean build and publish, run

    hatch run doc:build-and-publish
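A minimal sketch of the basic idea (treat the exact names as indicative while the package is pre-1.0; the `add_tiles` helper and the Mercator-projected geoshape follow the documentation's examples):

```python
import altair as alt
import altair_tiles as til
from vega_datasets import data

# Tile providers serve Web Mercator tiles, so the geoshape layer
# must use a Mercator projection.
states = alt.topo_feature(data.us_10m.url, "states")
geoshape = (
    alt.Chart(states)
    .mark_geoshape(fill=None, stroke="black")
    .project(type="mercator")
)

# Layer an OpenStreetMap tile basemap underneath the geoshapes
chart = til.add_tiles(geoshape)
```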
altair-transform
altair-transform

Python evaluation of Altair/Vega-Lite transforms.

altair-transform requires Python 3.6 or later. Install with:

    $ pip install altair_transform

Altair-transform evaluates Altair and Vega-Lite transforms directly in Python. This can be useful in a number of contexts, illustrated in the examples below.

Example: Extracting Data

The Vega-Lite specification includes the ability to apply a wide range of transformations to input data within the chart specification. As an example, here is a sliding window average of a Gaussian random walk, implemented in Altair:

    import altair as alt
    import numpy as np
    import pandas as pd

    rand = np.random.RandomState(12345)

    df = pd.DataFrame({
        'x': np.arange(200),
        'y': rand.randn(200).cumsum()
    })

    points = alt.Chart(df).mark_point().encode(
        x='x:Q',
        y='y:Q'
    )

    line = alt.Chart(df).transform_window(
        ymean='mean(y)',
        sort=[alt.SortField('x')],
        frame=[5, 5]
    ).mark_line(color='red').encode(
        x='x:Q',
        y='ymean:Q'
    )

    points + line

Because the transform is encoded within the renderer, however, the computed values are not directly accessible from the Python layer.

This is where altair_transform comes in. It includes a (nearly) complete Python implementation of Vega-Lite's transform layer, so that you can easily extract a pandas dataframe with the computed values shown in the chart:

    from altair_transform import extract_data
    data = extract_data(line)
    data.head()

       x         y     ymean
    0  0 -0.204708  0.457749
    1  1  0.274236  0.771093
    2  2 -0.245203  1.041320
    3  3 -0.800933  1.336943
    4  4  1.164847  1.698085

From here, you can work with the transformed data directly in Python.

Example: Pre-Aggregating Large Datasets

Altair creates chart specifications containing the full dataset. The advantage of this is that the data used to make the chart is entirely transparent; the disadvantage is that it causes issues as datasets grow large. To prevent users from inadvertently crashing their browsers by trying to send too much data to the frontend, Altair limits the data size by default. For example, a histogram of 20000 points:

    import altair as alt
    import pandas as pd
    import numpy as np

    np.random.seed(12345)

    df = pd.DataFrame({
        'x': np.random.randn(20000)
    })

    chart = alt.Chart(df).mark_bar().encode(
        alt.X('x', bin=True),
        y='count()'
    )
    chart

    MaxRowsError: The number of rows in your dataset is greater than the maximum allowed (5000). For information on how to plot larger datasets in Altair, see the documentation

There are several possible ways around this, as mentioned in Altair's FAQ. Altair-transform provides another option via the transform_chart() function, which will pre-transform the data according to the chart specification, so that the final chart specification holds the aggregated data rather than the full dataset:

    from altair_transform import transform_chart
    new_chart = transform_chart(chart)
    new_chart

Examining the new chart specification, we can see that it contains the pre-aggregated dataset:

    new_chart.data

       x_binned  x_binned2  count
    0      -4.0       -3.0     29
    1      -3.0       -2.0    444
    2      -2.0       -1.0   2703
    3      -1.0        0.0   6815
    4       0.0        1.0   6858
    5       1.0        2.0   2706
    6       2.0        3.0    423
    7       3.0        4.0     22

Limitations

altair_transform currently works only for non-compound charts; that is, it cannot transform or extract data from layered, faceted, repeated, or concatenated charts.

There are also a number of less-used transform options that are not yet fully supported. These should explicitly raise a NotImplementedError if you attempt to use them.
altair-viewer
Altair Viewer: Offline chart viewer for Altair visualizations. This package provides tools for viewing Altair charts without a web connection in arbitrary Python environments. Charts can be displayed either inline in a Jupyter notebook environment, or in a separate browser window for use in any environment.

Installation

Altair Viewer can be installed from the Python Package Index with pip:

```
$ pip install altair_viewer
```

Usage: General Environments

Altair Viewer provides two top-level functions for displaying charts: display() and show(). Their intended use is slightly different:

```python
import altair_viewer
altair_viewer.display(chart)
```

display(chart) is meant for use in interactive computing environments where a single Python process is used interactively. It will serve a chart viewer at a localhost URL, and any subsequent chart created within the session will appear in the same window. The background server will be terminated when the main Python process terminates, so this is not suitable for standalone scripts.

```python
import altair_viewer
altair_viewer.show(chart)
```

show(chart) is meant for use once at the end of a Python script. It does the same as display(), but automatically opens a browser window, and adds an input prompt to prevent the script (and the server it creates) from terminating.

Usage: IPython & Jupyter

Within Jupyter notebook, IPython terminal, and related environments that support Mimetype-based display, Altair Viewer can be used by enabling the altair_viewer renderer:

```python
import altair as alt
alt.renderers.enable('altair_viewer')
```

This will cause charts at the end of a Jupyter notebook cell to be rendered in a separate browser window, as with the display() and show() methods.

If enabled with inline=True, charts will be rendered inline in the notebook:

```python
import altair as alt
alt.renderers.enable('altair_viewer', inline=True)
```

To display a single inline chart using Altair Viewer in an IPython environment without globally enabling the associated renderer, you can use the display method directly:

```python
import altair_viewer
altair_viewer.display(chart, inline=True)
```

Note that the display based on Altair Viewer will only function correctly as long as the kernel that created the charts is running, as it depends on the background server started by the kernel. In particular, this means that if you save a notebook and reopen it later, charts will not display until the associated cells are re-run.
altair_widgets
This package provides interactive data visualization tools in the Jupyter Notebook. The provided tool lets you select data through HTML widgets and outputs a Vega-Lite plot through Altair. In the HTML widget it is possible to select which columns to plot in various encodings. The widget also supports some basic configuration (e.g., log vs. linear scales).
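A minimal sketch of how the widget is typically invoked, assuming the interact_with entry point from the altair_widgets documentation (verify the name against your installed version):

```python
import pandas as pd
from altair_widgets import interact_with  # entry point per the package docs; an assumption here

# Any tidy dataframe works; a tiny inline example for illustration.
df = pd.DataFrame({
    "horsepower": [130, 165, 150, 95],
    "mpg": [18.0, 15.0, 16.0, 24.0],
    "origin": ["USA", "USA", "USA", "Japan"],
})

# Opens the widget UI in the notebook; columns and encodings are then
# selected interactively, and the Vega-Lite plot updates accordingly.
interact_with(df)
```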
altamisa
AltamISA: AltamISA is an alternative implementation of the ISA-tools data model and the ISA-Tab file format.

also: Ambrosia peruviana is a species of plant in the family Asteraceae. It occurs from Mexico south to Argentina, being common in the Antilles and the Andes. In its native range, A. peruviana is used as a medicinal plant with analgesic, antiinflammatory, anthelmintic and antiseptic properties. --Ambrosia peruviana, Wikipedia

For the Impatient

```
$ pip install altamisa
## OR
$ conda install altamisa
```

What is ISA and ISA-Tab?

ISA (Investigation-Study-Assay) defines a data model for describing life science experiments (specification). ISA-Tab defines a file format based on TSV (tab-separated values) for storing ISA data in files. In short, experiments are encoded as DAGs (directed acyclic graphs) of samples being taken from sources (e.g., donor individuals) and then subjected to "operations" (e.g., extraction, assays, transformations) leading to different downstream "materials".

Why AltamISA?

Attempting to use the official isa-api Python package in early 2018 led to quite some frustration. Even the official ISA-Tab examples parsed into unexpected graph structures. Attempting bug fixes to isa-api proved difficult because it lacked complete automated tests. Further, the scope of isa-api was much broader (including conversion between ISA-Tab and other formats), so we expected high maintenance costs (development had apparently stalled).

Quick Facts

- Programming Language: Python 3 (with full type annotations)
- License: MIT
- Test Coverage: >90%
- Documentation: see here
- Code Style: black, 100 characters/line

History

- v0.2.8: Mostly meta adjustments.
- v0.2.7: Added an exception for duplicate node annotations.
- v0.2.6: Minor fixes regarding investigation file names and comments.
- v0.2.5: Minor fixes of validation and warnings. Fixed the optional parameter filename of AssayReader.
- v0.2.4: Ensured that input order is output order. This is true except for the corner case where materials are not located in "blocks". Such corner cases would require storing the tabular representation (and keeping it in sync) at all times and would not yield a robustly usable implementation. NB: the input is also not sorted, as the test adjusted with this patch shows. Added the optional parameter filename to the various readers. Exposed the RefTableBuilder class with a slightly changed interface.
- v0.2.3: Minor fixes and additions with a focus on improving the export.
- v0.2.2: Updated documentation for JOSS.
- v0.2.1: Added JOSS paper draft. Fixed a problem with writing empty lines on Windows (#52). Updated documentation with examples for manual model creation. Fixed authorship documentation. Fixed package (#58).
- v0.2.0: Switched to attrs instead of using NamedTuple. This gets rid of some warts regarding constructor overriding but should offer the same functionality otherwise. Various updates to the documentation.
- v0.1.0: First public release. Started out with an ISA-Tab parser and a NamedTuple-based data model.
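As a short sketch of parsing an ISA-Tab investigation with AltamISA: the InvestigationReader name follows the AltamISA documentation, but the file path is a placeholder and the attribute access should be verified against the docs for your version.

```python
from altamisa.isatab import InvestigationReader

# Parse an ISA-Tab investigation file into the AltamISA data model.
# "i_investigation.txt" is a placeholder path for your own export.
with open("i_investigation.txt", "rt") as f:
    investigation = InvestigationReader.from_stream(f).read()

# Studies are plain attrs-based records (see the v0.2.0 note above);
# the .info.path attribute access is assumed from the documentation.
for study in investigation.studies:
    print(study.info.path)
```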
altapay
This is an unofficial Python SDK for Valitor (formerly AltaPay/Pensio), https://altapay.com/. The SDK is maintained by Coolshop.com, https://www.coolshop.com/.

Documentation

Documentation is available at Read the Docs.

Requirements: Python (3.5, 3.6, 3.7). Other versions of Python may also be supported, but these are the only versions we test against.

Dependencies: requests, six

Installation

The easiest way is using pip:

```
pip install altapay
```

Getting Started

Refer to the introduction in the documentation for some getting-started use cases.

Contributing

Currently, this library only implements the bare minimum of the AltaPay API. It will allow you to create payment links and do basic subscription functionality. If you need anything else, feel free to submit a pull request, or if you have ideas, open an issue. If you do decide to submit a pull request, note that both isort and flake8 (including pep8-naming) are run for all pull requests. You are also advised to write test cases.

Running the Tests

First of all, have tox installed on your system; system-wide is probably the better choice. Once you have tox installed, simply run:

```
tox
```

This will run all tests, against all supported Python versions.

Change Log

1.4.1 (2019-06-26): Python support: As of version 1.4.0, Python versions below 3.5 are no longer supported.

1.4.0 (2019-06-24): Features: Added altapay.Transaction.chargebacks() which returns the chargeback events as custom objects.

1.3.0 (2017-10-20): Features: Added altapay.Transaction.refund() with functionality for refunding a transaction, both full and partial.

1.2.1 (2016-11-02): Bugfix: Fixed a bug where parsing the values "nan", "inf" or "-inf" would lead to a float value when handling XML values. While this could potentially be correct, it's safer to assume that these values are in fact string values.

1.2.0 (2016-04-19): Features: Added altapay.FundingList and altapay.Funding. These two objects interact with the funding list features of AltaPay, and allow you to both list all of your funding files and download individual files. Currently no guide section is written on the subject, but full API documentation is available. Added altapay.CustomReport which mimics the custom reporting module of AltaPay. Using a custom report ID and additional kwargs based on the report, you can either view or download your custom reports.

1.1.0 (2016-04-12): Previously, all calls would use an HTTP GET, since the AltaPay documentation in some cases suggests that this is the case. This works; however, it breaks when payments start to contain many order lines. Because of this, all calls have been changed to HTTP POST, which is the recommended way of communicating with the AltaPay API.

1.0.1 (2016-03-15): API Change: Backwards incompatible: altapay.Transaction.capture() will now return an altapay.Callback object instead of an altapay.Transaction object.

1.0 (2016-03-03): First Production Release. Features: Added altapay.Transaction.release().

1.0.dev9 (2016-02-26): Improvements: Backwards incompatible: Moved altapay.Transaction.create_invoice_reservation() to altapay.Callback.create_invoice_reservation().
The method will now return an altapay.Callback object in order to read the result key (#41).

1.0.dev8 (2016-02-22): Features: Added altapay.Transaction.create_invoice_reservation().

1.0.dev7 (2016-02-17): Improvements: Backwards incompatible: Changed the name of altapay.Transaction.reserve() to altapay.Transaction.reserve_subscription_charge().

1.0.dev6 (2016-02-17): Features: Added altapay.Transaction.reserve() which will reserve an amount on a subscription.

1.0.dev5 (2016-02-11): Improvements: Backwards incompatible: Changed the return type of altapay.Transaction.charge_subscription(). It will now return an altapay.Callback object instead of a list of transactions. Backwards incompatible: Changed the argument of altapay.Callback.transactions() to be keyword only. It will now accept any number of filters, and these will be matched using AND logic.

1.0.dev4 (2016-02-03): Features: Added altapay.Transaction.charge_subscription() which will charge a subscription on a transaction, if this transaction is set up as a subscription. Bugfixes: Fixed a bug where looking up a non-existent transaction ID would result in a KeyError (#32).

0.1.dev3 (2016-01-18): Bugfixes: Added missing apostrophes in the documentation for the callback guide (#24). Fixed a bug where filtering transactions on an altapay.Callback object might result in a KeyError (#25). Improvements: Made it more explicit how attributes on response objects work (#26).

0.1.dev2 (2016-01-14): Features: Added altapay.Transaction and the ability to find a transaction by its transaction ID in the AltaPay service. Added altapay.Transaction.capture() which captures a transaction that has already been loaded. Optionally, parameters can be passed which allow for partial captures (see the AltaPay documentation for the full list of possible arguments). Added a public-facing API for converting an AltaPay XML response (as a string) to a Python dictionary (altapay.utils.xml_to_dict). Added altapay.Callback which wraps a callback response from AltaPay, and automatically wraps the coupled transactions in altapay.Transaction objects. Bugfixes: Fixed a bug where specifying a non-existing terminal while creating an altapay.Payment object would result in altapay.Payment.success returning True. Fixed a bug where running in production mode was not possible. It is now possible by specifying a shop name when instantiating the API.

0.1.dev1 (2016-01-05): Complex payments are now possible. This means it is now possible to send detailed payment information in a Pythonic way using just lists and dictionaries, instead of the PHP-style query params syntax. Documentation now includes a small guide for the available parts of the SDK, which will make it easier to get started without reading the raw API documentation.

0.1.dev0 (2015-12-18): Basic API connection class implemented in altapay.api.API. Basic Payment class implemented in altapay.payment.Payment, which is currently mainly for creating a very basic payment request with the AltaPay service.
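Pulling the class names from the change log above, a loose sketch of a lookup-and-capture flow might read as follows; the constructor arguments, the Transaction.find signature, and the result attribute are assumptions, so check the Read the Docs reference before relying on them:

```python
from altapay import API, Transaction

# Constructor arguments are assumptions based on the change log
# (e.g. production mode additionally requires a shop name, per 0.1.dev2).
api = API(mode='test', account='my-account', password='my-password')

# Look up a transaction by its AltaPay transaction ID (see 0.1.dev2).
transaction = Transaction.find('transaction-id', api=api)

# As of 1.0.1, capture() returns an altapay.Callback object.
callback = transaction.capture()
print(callback.result)  # attribute name assumed; see "attributes on response objects"
```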
altasetting
UNKNOWN
altas-finance-noad
Failed to fetch description. HTTP Status Code: 404
altb
altb: altb is a CLI utility influenced by update-alternatives of Ubuntu. Linked paths are added to $HOME/.local/bin according to the XDG Base Directory Specification. The config file is located at $HOME/.config/altb/config.yaml.

How to start?

Requires Python > 3.9. Execute:

```
pipx install altb
```

To track a new binary use:

```
altb track path <app_name>@<app_tag> /path/to/binary
```

For example:

```
altb track path python@2.7 /bin/python2.7
altb track path python@3.8 /bin/python3.8
# altb track path python ~/Downloads/python  # will also work and generate a new hash for it
```

List all tracked versions:

```
$ altb list -a
python
|---- 2.7 - /bin/python2.7
|---- 3.8 - /bin/python3.8
```

Use a specific version:

```
altb use <app_name>[@<app_tag>]
```

For example, `altb use python@3.8` will link the tracked path to ~/.local/bin/<app_name>, in this case ~/.local/bin/python.

Copy a specific standalone binary automatically to ~/.local/share/altb/versions/<app_name>/<app_name>_<tag>:

```
altb track path helm@3 ~/Downloads/helm --copy
```

You can run custom commands using:

```
altb track command special_command@latest "echo This is a command"
```

This is especially useful for latest developments, for example:

```
altb track command special_command@latest "go run ./cmd/special_command" --working-directory "$HOME/special_command"
```

Docs: Migrations: Migrate to v0.5.0 and above from lower versions.
alt-betterproto
This is a fork of https://github.com/danielgtaylor/python-betterproto with minor modifications. All credits should go to the original author and not to me!

I need this implementation urgently, and cannot wait for an upstream release to include it in my own package. Please do not consider using this package unless you know for sure what you are doing. This package is most probably not going to be maintained. It was developed as a helper utility for the python evmos SDK.

The only changes are:

- Upstream fixes (including a critical one, for me, resulting in binary incompatibility with the reference implementation) since 2.0.0b5;
- Docstring handling: this version uses 88 (black) line length as default and respects long words and urls (does not wrap them), plus preserves original newlines when possible (wraps lines longer than 88 chars, but all original newlines remain in place).

Better Protobuf / gRPC Support for Python

:octocat: If you're reading this on github, please be aware that it might mention unreleased features! See the latest released README on pypi.

This project aims to provide an improved experience when using Protobuf / gRPC in a modern Python environment by making use of modern language features and generating readable, understandable, idiomatic Python code. It will not support legacy features or environments (e.g. Protobuf 2). The following are supported:

- Protobuf 3 & gRPC code generation
- Both binary & JSON serialization is built-in
- Python 3.6+ making use of: Enums; Dataclasses; async/await; Timezone-aware datetime and timedelta objects; Relative imports; Mypy type checking

This project is heavily inspired by, and borrows functionality from:

- https://github.com/protocolbuffers/protobuf/tree/master/python
- https://github.com/eigenein/protobuf/
- https://github.com/vmagamedov/grpclib

Motivation

This project exists because I am unhappy with the state of the official Google protoc plugin for Python.

- No async support (requires additional grpclib plugin)
- No typing support or code completion/intelligence (requires additional mypy plugin)
- No __init__.py module files get generated
- Output is not importable: import paths break in Python 3 unless you mess with sys.path
- Bugs when names clash (e.g. codecs package)
- Generated code is not idiomatic: completely unreadable runtime code-generation; much code looks like C++ or Java ported 1:1 to Python; capitalized function names like HasField() and SerializeToString(); uses SerializeToString() rather than the built-in __bytes__(); special wrapped types don't use Python's None; Timestamp/duration types don't use Python's built-in datetime module

This project is a reimplementation from the ground up focused on idiomatic modern Python to help fix some of the above. While it may not be a 1:1 drop-in replacement due to changed method names and call patterns, the wire format is identical.

Installation

First, install the package. Note that the [compiler] feature flag tells it to install extra dependencies only needed by the protoc plugin:

```
# Install both the library and compiler
pip install "betterproto[compiler]"

# Install just the library (to use the generated code output)
pip install betterproto
```

Betterproto is under active development.
To install the latest beta version, usepip install --pre betterproto.Getting StartedCompiling proto filesGiven you installed the compiler and have a proto file, e.gexample.proto:syntax="proto3";packagehello;// Greeting represents a message you can tell a user.messageGreeting{stringmessage=1;}You can run the following to invoke protoc directly:mkdirlib protoc-I.--python_betterproto_out=libexample.protoor run the following to invoke protoc via grpcio-tools:pipinstallgrpcio-tools python-mgrpc_tools.protoc-I.--python_betterproto_out=libexample.protoThis will generatelib/hello/__init__.pywhich looks like:# Generated by the protocol buffer compiler. DO NOT EDIT!# sources: example.proto# plugin: python-betterprotofromdataclassesimportdataclassimportbetterproto@dataclassclassGreeting(betterproto.Message):"""Greeting represents a message you can tell a user."""message:str=betterproto.string_field(1)Now you can use it!>>>fromlib.helloimportGreeting>>>test=Greeting()>>>testGreeting(message='')>>>test.message="Hey!">>>testGreeting(message="Hey!")>>>serialized=bytes(test)>>>serializedb'\n\x04Hey!'>>>another=Greeting().parse(serialized)>>>anotherGreeting(message="Hey!")>>>another.to_dict(){"message":"Hey!"}>>>another.to_json(indent=2)'{\n"message": "Hey!"\n}'Async gRPC SupportThe generated ProtobufMessageclasses are compatible withgrpclibso you are free to use it if you like. That said, this project also includes support for async gRPC stub generation with better static type checking and code completion support. It is enabled by default.Given an example service definition:syntax="proto3";packageecho;messageEchoRequest{stringvalue=1;// Number of extra times to echouint32extra_times=2;}messageEchoResponse{repeatedstringvalues=1;}messageEchoStreamResponse{stringvalue=1;}serviceEcho{rpcEcho(EchoRequest)returns(EchoResponse);rpcEchoStream(EchoRequest)returns(streamEchoStreamResponse);}Generate echo proto file:python -m grpc_tools.protoc -I . --python_betterproto_out=. echo.protoA client can be implemented as follows:importasyncioimportechofromgrpclib.clientimportChannelasyncdefmain():channel=Channel(host="127.0.0.1",port=50051)service=echo.EchoStub(channel)response=awaitservice.echo(echo.EchoRequest(value="hello",extra_times=1))print(response)asyncforresponseinservice.echo_stream(echo.EchoRequest(value="hello",extra_times=1)):print(response)# don't forget to close the channel when done!channel.close()if__name__=="__main__":loop=asyncio.get_event_loop()loop.run_until_complete(main())which would outputEchoResponse(values=['hello','hello'])EchoStreamResponse(value='hello')EchoStreamResponse(value='hello')This project also produces server-facing stubs that can be used to implement a Python gRPC server. 
To use them, simply subclass the base class in the generated files and override the service methods:importasynciofromechoimportEchoBase,EchoRequest,EchoResponse,EchoStreamResponsefromgrpclib.serverimportServerfromtypingimportAsyncIteratorclassEchoService(EchoBase):asyncdefecho(self,echo_request:"EchoRequest")->"EchoResponse":returnEchoResponse([echo_request.valuefor_inrange(echo_request.extra_times)])asyncdefecho_stream(self,echo_request:"EchoRequest")->AsyncIterator["EchoStreamResponse"]:for_inrange(echo_request.extra_times):yieldEchoStreamResponse(echo_request.value)asyncdefmain():server=Server([EchoService()])awaitserver.start("127.0.0.1",50051)awaitserver.wait_closed()if__name__=='__main__':loop=asyncio.get_event_loop()loop.run_until_complete(main())JSONBoth serializing and parsing are supported to/from JSON and Python dictionaries using the following methods:Dicts:Message().to_dict(),Message().from_dict(...)JSON:Message().to_json(),Message().from_json(...)For compatibility the default is to convert field names tocamelCase. You can control this behavior by passing a casing value, e.g:MyMessage().to_dict(casing=betterproto.Casing.SNAKE)Determining if a message was sentSometimes it is useful to be able to determine whether a message has been sent on the wire. This is how the Google wrapper types work to let you know whether a value is unset, set as the default (zero value), or set as something else, for example.Usebetterproto.serialized_on_wire(message)to determine if it was sent. This is a little bit different from the official Google generated Python code, and it lives outside the generatedMessageclass to prevent name clashes. Note that itonlysupports Proto 3 and thus canonlybe used to check ifMessagefields are set. You cannot check if a scalar was sent on the wire.# Old way (official Google Protobuf package)>>>mymessage.HasField('myfield')# New way (this project)>>>betterproto.serialized_on_wire(mymessage.myfield)One-of SupportProtobuf supports grouping fields in aoneofclause. Only one of the fields in the group may be set at a given time. For example, given the proto:syntax="proto3";messageTest{oneoffoo{boolon=1;int32count=2;stringname=3;}}You can usebetterproto.which_one_of(message, group_name)to determine which of the fields was set. It returns a tuple of the field name and value, or a blank string andNoneif unset.>>>test=Test()>>>betterproto.which_one_of(test,"foo")["",None]>>>test.on=True>>>betterproto.which_one_of(test,"foo")["on",True]# Setting one member of the group resets the others.>>>test.count=57>>>betterproto.which_one_of(test,"foo")["count",57]>>>test.onFalse# Default (zero) values also work.>>>test.name="">>>betterproto.which_one_of(test,"foo")["name",""]>>>test.count0>>>test.onFalseAgain this is a little different than the official Google code generator:# Old way (official Google protobuf package)>>>message.WhichOneof("group")"foo"# New way (this project)>>>betterproto.which_one_of(message,"group")["foo","foo's value"]Well-Known Google TypesGoogle provides several well-known message types like a timestamp, duration, and several wrappers used to provide optional zero value support. Each of these has a special JSON representation and is handled a little differently from normal messages. 
The Python mapping for these is as follows:

| Google Message | Python Type | Default |
|---|---|---|
| google.protobuf.duration | datetime.timedelta | 0 |
| google.protobuf.timestamp | Timezone-aware datetime.datetime | 1970-01-01T00:00:00Z |
| google.protobuf.*Value | Optional[...] | None |
| google.protobuf.* | betterproto.lib.google.protobuf.* | None |

For the wrapper types, the Python type corresponds to the wrapped type, e.g. google.protobuf.BoolValue becomes Optional[bool] while google.protobuf.Int32Value becomes Optional[int]. All of the optional values default to None, so don't forget to check for that possible state. Given:

```proto
syntax = "proto3";

import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
import "google/protobuf/wrappers.proto";

message Test {
  google.protobuf.BoolValue maybe = 1;
  google.protobuf.Timestamp ts = 2;
  google.protobuf.Duration duration = 3;
}
```

You can do stuff like:

```python
>>> t = Test().from_dict({"maybe": True, "ts": "2019-01-01T12:00:00Z", "duration": "1.200s"})
>>> t
Test(maybe=True, ts=datetime.datetime(2019, 1, 1, 12, 0, tzinfo=datetime.timezone.utc), duration=datetime.timedelta(seconds=1, microseconds=200000))

>>> t.ts - t.duration
datetime.datetime(2019, 1, 1, 11, 59, 58, 800000, tzinfo=datetime.timezone.utc)

>>> t.ts.isoformat()
'2019-01-01T12:00:00+00:00'

>>> t.maybe = None
>>> t.to_dict()
{'ts': '2019-01-01T12:00:00Z', 'duration': '1.200s'}
```

Development

Join us on Slack! See how you can help → Contributing

Requirements:

- Python (3.6 or higher)
- poetry: needed to install dependencies in a virtual environment
- poethepoet: for running development tasks as defined in pyproject.toml. Can be installed to your host environment via pip install poethepoet, then executed as simply poe, or run from the poetry venv as poetry run poe

Setup

```
# Get set up with the virtual env & dependencies
poetry install -E compiler

# Activate the poetry environment
poetry shell
```

Code style

This project enforces black python code formatting. Before committing changes run:

```
poe format
```

To avoid merge conflicts later, non-black formatted python code will fail in CI.

Tests

There are two types of tests: standard tests and custom tests.

Standard tests: Adding a standard test case is easy.

- Create a new directory betterproto/tests/inputs/<name>
- add <name>.proto with a message called Test
- add <name>.json with some test data (optional)

It will be picked up automatically when you run the tests. See also: Standard Tests Development Guide.

Custom tests: Custom tests are found in tests/test_*.py and are run with pytest.

Running

Here's how to run the tests:

```
# Generate assets from sample .proto files required by the tests
poe generate

# Run the tests
poe test
```

To run tests as they are run in CI (with tox) run:

```
poe full-test
```

(Re)compiling Google Well-known Types

Betterproto includes compiled versions of Google's well-known types at src/betterproto/lib/google. Be sure to regenerate these files when modifying the plugin output format, and validate by running the tests. Normally, the plugin does not compile any references to google.protobuf, since they are pre-compiled.
To force compilation of google.protobuf, use the option --custom_opt=INCLUDE_GOOGLE. Assuming your google.protobuf source files (included with all releases of protoc) are located in /usr/local/include, you can regenerate them as follows:

```
protoc \
    --plugin=protoc-gen-custom=src/betterproto/plugin/main.py \
    --custom_opt=INCLUDE_GOOGLE \
    --custom_out=src/betterproto/lib \
    -I /usr/local/include/ \
    /usr/local/include/google/protobuf/*.proto
```

TODO

- Fixed length fields; packed fixed-length
- Zig-zag signed fields (sint32, sint64)
- Don't encode zero values for nested types
- Enums
- Repeated message fields
- Maps; maps of message fields
- Support passthrough of unknown fields
- Refs to nested types
- Imports in proto files
- Well-known Google types: support as request input; support as response output; automatically wrap/unwrap responses
- OneOf support: basic support on the wire; check which was set from the group; setting one unsets the others
- JSON that isn't completely naive: 64-bit ints as strings; maps; lists; bytes as base64; Any support; enum strings; well-known types support (timestamp, duration, wrappers); support different casing (orig vs. camel vs. others?)
- Async service stubs: unary-unary; server streaming response; client streaming request
- Renaming messages and fields to conform to Python name standards
- Renaming clashes with language keywords
- Python package
- Automate running tests
- Cleanup!

Community: Join us on Slack!

License: Copyright © 2019 Daniel G. Taylor. http://dgt.mit-license.org/
alt-bucket
UNKNOWN
altbuf
Alternate Buffer: a tiny package to enter and exit the alternate buffer using ANSI escape sequences. There is no need to use ncurses or other legacy packages to do one simple thing: open a separate terminal buffer (window) temporarily, and return the terminal to its previous state afterwards. Tested and working in PowerShell and Gnome-Terminal.
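The package's own API is not documented here, so as an illustration of the underlying mechanism only, here is what entering and leaving the alternate buffer with the standard ANSI escape sequences looks like in plain Python (the constants and helper below are made up for this sketch; they are not altbuf's API):

```python
import sys
import time

ENTER_ALT = "\x1b[?1049h"  # standard xterm sequence: switch to the alternate screen buffer
LEAVE_ALT = "\x1b[?1049l"  # switch back, restoring the previous screen contents

def demo() -> None:
    sys.stdout.write(ENTER_ALT)
    sys.stdout.flush()
    try:
        print("This text lives in the alternate buffer.")
        time.sleep(2)
    finally:
        # Always restore the normal buffer, even if an error occurs.
        sys.stdout.write(LEAVE_ALT)
        sys.stdout.flush()

if __name__ == "__main__":
    demo()
```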
alt_cinder_sch
Alternative classes, such as filters, host managers, etc., for Cinder, the OpenStack Block Storage service. The main purpose of this library is to illustrate the broad range of possibilities of the Cinder scheduler provided by its flexible mechanisms.

Currently there are only two interesting features: the possibility of changing the default provisioning type on volume creation for volumes that don't specify the type via the provisioning:type extra spec, and an alternative calculation of free space consumption.

The scheduler's original approach to space consumption by new volumes is conservative, to prevent backends from filling up due to a sudden burst of volume creations. The alternative approach is more aggressive and is adequate for deployments where the workload is well known and a lot of thin volumes could be requested at the same time.

It's important to notice that even though the schedulers will be able to understand the provisioning:type extra spec, whether this parameter is actually used depends on the backend (see the sketch after the history below).

- Free software: Apache Software License 2.0
- Documentation: https://alt-cinder-sch.readthedocs.io

Features

- Can default capacity calculations to thin or thick.
- Less conservative approach to free space consumption calculations.

Usage

First we'll need to have the package installed:

```
# pip install alt_cinder-sch
```

Then we'll have to configure Cinder's schedulers to use the package:

```
scheduler_host_manager = alt_cinder_sch.host_managers.HostManagerThin
scheduler_default_filters = AvailabilityZoneFilter,AltCapacityFilter,CapabilitiesFilter
scheduler_driver = alt_cinder_sch.scheduler_drivers.FilterScheduler
```

And finally restart the scheduler services.

History

- 0.1.1 (2017-07-03): Fix compatibility with older versions. Fix thick over-subscription value. Improve logging.
- 0.1.0 (2017-07-02): First release on PyPI.
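Here is the promised sketch of attaching that extra spec to a volume type with python-cinderclient; the credentials are placeholders, and set_keys is the usual volume-type extra-spec call, but verify against your OpenStack release:

```python
from cinderclient import client
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Placeholder OpenStack credentials; substitute your own.
auth = v3.Password(
    auth_url='http://controller:5000/v3',
    username='admin', password='secret', project_name='admin',
    user_domain_id='default', project_domain_id='default',
)
cinder = client.Client('3', session=session.Session(auth=auth))

# Attach the extra spec that the alternative schedulers understand,
# so volumes of this type default to thin provisioning.
vtype = next(vt for vt in cinder.volume_types.list() if vt.name == 'my-thin-type')
vtype.set_keys({'provisioning:type': 'thin'})
```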
altcli_helper
CLIHelper: A Command-Line Interface Assistant

CLIHelper is a command-line interface (CLI) tool designed to assist users by generating CLI commands or a series of CLI commands based on user input. Utilizing OpenAI's GPT model, it provides detailed responses and ensures a comprehensive understanding of the required tasks.

Installation

This tool can be installed using Poetry. Ensure you have Poetry installed on your machine; if not, you can install it following the instructions on the official Poetry website. Once Poetry is installed, you can install CLIHelper with the following command:

```
poetry install
```

Usage

CLIHelper is straightforward to use and can be executed directly from the command line.

Basic Command Generation: to generate a CLI command based on your request:

```
clih "Your request here"
```

Example:

```
clih "How to list all files in a directory?"
```

Command Generation with Rationale: if you want to see the rationale behind the recommended commands, in addition to the commands themselves, use the -s or --show-rationale flag:

```
clih -s "Your request here"
clih --show-rationale "Your request here"
```

Example:

```
clih -s "How to list all files in a directory?"
```

This will print out a detailed rationale explaining why certain commands are recommended, followed by the recommended commands themselves.

Contact

For any inquiries or issues, please contact D0rkKnight at [email protected].

License

This project is open source and available under the MIT License.
altcompare
No description available on PyPI.
altcos-common
altcos-common: a library for convenient interaction with altcos repositories.

Examples:

- Getting information about the latest commit
- Getting a commit by version
altdeutsch
Parser of Referenzkorpus Althochdeutsch exports.

Installation

```
$ pip install althochdeutsch
```

License: the software is licensed under the MIT License; the data is licensed under CC BY-NC-SA 3.0. The Referenzkorpus Altdeutsch project is at the origin of the annotations of the data.

Code usage

```python
import os

from altdeutsch.reader import read_export
from altdeutsch import PACKDIR

hildebrandslied = read_export(os.path.join(PACKDIR, "tests", "data", "hildebrandslied.txt"))
print(hildebrandslied["tok"])
```

You now have the Hildebrandslied text!
altdg
AltDG API Python ToolsCommand-line tool with methods to consume theAltDG APIin bulk.©Alternative Data Group. All rights reserved.ContentsAltDG API Python ToolsContentsRequirementsInstallationAuthorizationFree tier keyUsageDomain mapperMerchant mapperProduct mapperCommand arguments (options)DevelopmentUsage as librarySupportRequirementsPython 3.6+See also requirements.txtInstallationRun the following commands in your shell:# install as usual python packagepipinstallaltdg# ... or install "altdg" package directly from repopipinstallgit+https://github.com/altdg/bulk_mapper.git# ... or if you want to get samples for testing, clone the repogitclonehttps://github.com/altdg/bulk_mapper.gitaltdg pipinstall-ealtdgNow everything is ready to run the tool.AuthorizationTo use this tool you must have a valid app key to theAltDG API. Methods are available depending on you account type with AltDG.Free tier keyUse this key to try the API for free:f816b9125492069f7f2e3b1cc60659f0Sign up athttps://developer.altdg.com/to get a non-trial key.UsageA preferred way to run the tool is to load it as module with thepythoncommand.Run the tool with--helpflag to display command's usage:altdg--helpDomain mapperMaps domain names from given text to structured company information.More details inhttps://developer.altdg.com/docs#domain-mapperThis will run all the domains in the provided text file (one per line expected):altdg-edomain-mappersample-domains.txt-k"f816b9125492069f7f2e3b1cc60659f0"Sign up athttps://developer.altdg.com/to get a non-trial key.A CSV output file will be created automatically with the same path as the input file but prepending the current date.sample-domains.txtis a sample list of domains we included in our repo. This file is downloaded as part of this package, no need to re-create it.Merchant mapperMaps strings from transactional purchase text (e.g. credit card transactions) to structured company information.More details inhttps://developer.altdg.com/docs#merchant-mapperaltdg-emerchant-mappersample-merchants.txt-k"f816b9125492069f7f2e3b1cc60659f0"Sign up athttps://developer.altdg.com/to get a non-trial key.A CSV output file will be created automatically with the same path as the input file but prepending the current date.sample-merchants.txtis a sample list of domains we included in our repo. This file is downloaded as part of this package, no need to re-create it.Product mapperMaps strings from product related text (e.g. inventory) to structured company information.More details inhttps://developer.altdg.com/docs#product-mapperaltdg-eproduct-mappersample-products.txt-k"f816b9125492069f7f2e3b1cc60659f0"Sign up athttps://developer.altdg.com/to get a non-trial key.A CSV output file will be created automatically with the same path as the input file but prepending the current date.Command arguments (options)Arguments:-e <endpoint>--endpointType of mapper. Choices are "merchant-mapper", "domain-mapper" and "product-mapper".-k <key>--keyAltDG API application key.-o <filename>--outOutput file path. If not provided, the input file name is used with the ".csv" extension, prepended with the date and time.-F--forceWhen providing a specific out_file, some results may already exist in that file for an input. Use this option to force re-process results that are already in that output file, otherwise existing results won't be processed again. Previous results are NOT overwritten, a new CSV row is added.-n--num-threadsNumber of requests to process in parallel. 
(See --help for max and default.)

-r --num-retries: Number of retries per request. (See --help for max and default.)

-t --timeout: API request timeout (in seconds). (See --help for max and default.)

-th <hint> --type-hint: Improves the accuracy by providing the industry name or any keyword hint relevant to the inputs. E.g. -th "medical"

Usage as library

You may use the AltdgAPI class from your python program:

```python
from altdg.api import AltdgAPI

# initialize Mapper class with your key
mapper = AltdgAPI('domain-mapper', api_key='f816b9125492069f7f2e3b1cc60659f0')

# single query
print(mapper.query('abc.com'))

# single query with hint
print(mapper.query('abc.com', hint='news'))

# bulk query
print(mapper.bulk_query(['yahoo.com', 'amazon.com']))

# bulk query with same hint for all inputs
print(mapper.bulk_query(['yahoo.com', 'amazon.com'], hint='company'))

# bulk query with overwriting hint
print(mapper.bulk_query([
    ('purple mint', 'restaurant'),  # (input, hint) tuple
    'amazon',                       # just input with base hint
], hint='company'))  # base hint
```

Support

Please email [email protected] if you need to contact us directly.
alt-discord-flags
This is an alternative to XuaTheGrate/Flag-Parsing, which is not maintained. ⚠ Please be sure to read the entire README, it explains some important tricks.

Flag Parsing

A util for discord.py bots that allows passing flags into commands. To install, run the following command:

```
pip install alt-discord-flags
```

2.1.0 changes how signatures appear. If you wish to use the legacy signatures, use command.old_signature instead.

Basic example usage:

```python
import discord
from discord.ext import flags, commands

bot = commands.Bot("!")

# Invocation: !flags --count=5 --string "hello world" --user Xua --thing yes

@flags.add_flag("--count", type=int, default=10)
@flags.add_flag("--string", default="hello!")
@flags.add_flag("--user", type=discord.User)
@flags.add_flag("--thing", type=bool)
@flags.command()
async def flags(ctx, **flags):
    await ctx.send("--count={count!r}, --string={string!r}, --user={user!r}, --thing={thing!r}".format(**flags))

bot.add_command(flags)
```

Important note: @flags.add_flag must be placed under @flags.command! @flags.add_flag takes the same arguments as argparse.ArgumentParser.add_argument to keep things simple.

Subcommands are just as simple:

```python
@commands.group()
async def my_group(ctx):
    ...

@flags.add_flag("-n")
@my_group.command(cls=flags.FlagCommand)
async def my_subcommand(ctx, **flags):
    ...
```

Usage of discord.py's consume rest behaviour is not perfect with discord-flags, meaning that you have to use a flag workaround:

```python
@flags.add_flag("message", nargs="+")
@flags.command()
async def my_command(ctx, arg1, **options):
    """You can now access `message` via `options['message']`"""
    message = ' '.join(options['message'])
```
alt-discord-flags-v2
Disclaimer: this is a quick fix to get the library to work on discord.py v2. The changes have not been tested extensively, as I made this module to upgrade my discord.py version and keep flags working. Use at your own risk (taking pull requests and issues, though). Parts of the code were directly copy-pasted from core discord.py.

This is an alternative to CircuitSacul/Flag-Parsing, which is not maintained.

Flag Parsing

A util for discord.py bots that allows passing flags into commands. To install, run the following command:

```
pip install alt-discord-flags-v2
```

2.1.0 changes how signatures appear. If you wish to use the legacy signatures, use command.old_signature instead.

Basic example usage:

```python
import discord
from discord.ext import flags, commands

bot = commands.Bot("!")

# Invocation: !flags --count=5 --string "hello world" --user Xua --thing yes

@flags.add_flag("--count", type=int, default=10)
@flags.add_flag("--string", default="hello!")
@flags.add_flag("--user", type=discord.User)
@flags.add_flag("--thing", type=bool)
@flags.command()
async def flags(ctx, **flags):
    await ctx.send("--count={count!r}, --string={string!r}, --user={user!r}, --thing={thing!r}".format(**flags))

bot.add_command(flags)
```

Important note: @flags.add_flag must be placed under @flags.command! @flags.add_flag takes the same arguments as argparse.ArgumentParser.add_argument to keep things simple.

Subcommands are just as simple:

```python
@commands.group()
async def my_group(ctx):
    ...

@flags.add_flag("-n")
@my_group.command(cls=flags.FlagCommand)
async def my_subcommand(ctx, **flags):
    ...
```

Usage of discord.py's consume rest behaviour is not perfect with discord-flags, meaning that you have to use a flag workaround:

```python
@flags.add_flag("message", nargs="+")
@flags.command()
async def my_command(ctx, arg1, **options):
    """You can now access `message` via `options['message']`"""
    message = ' '.join(options['message'])
```
altdns
No description available on PyPI.
altdphi
altdphi: A Python library for calculating alternative variables for background rejection in new physics searches with missing transverse momentum at the LHC. The variables are described in Tai Sakuma, Henning Flaecher, Dominic Smith, "Alternative angular variables for suppression of QCD multijet events in new physics searches with missing transverse momentum at the LHC", arXiv:1803.07942. Documentation: http://altdphi.readthedocs.io
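As a sketch of how the library is typically driven: the AltDphi class and the attribute names below follow the project's documentation, but treat all of them as assumptions and check the linked docs before use.

```python
import numpy as np
from altdphi import AltDphi  # class name per the project docs; verify before use

# Transverse momenta and azimuthal angles of the jets in one event (toy values).
pt = np.array([120.0, 85.0, 40.0])
phi = np.array([0.1, 2.9, -1.4])

alt = AltDphi(pt=pt, phi=phi)

# Event-level minima of the alternative angular variables
# (attribute names assumed from the documentation).
print(alt.min_omega_tilde, alt.min_chi)
```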
altdss
AltDSS-Python: Modern Python bindings for an alternative implementation of EPRI's OpenDSS.

AltDSS-Python, or just altdss when used from Python, builds on the DSS-Python backend infrastructure and the experience shared in both DSS-Python and OpenDSSDirect.py to cover many aspects that the classic OpenDSS APIs (DSS-Python, OpenDSSDirect.py, the official OpenDSS COM implementation, and anything based on the DCSL/OpenDSSDirect.DLL) cannot achieve comfortably or with good performance. Many features are only possible due to our alternative engine (AltDSS), used through the AltDSS/DSS C-API library.

This package is available for Windows, Linux and macOS, including Intel and ARM processors, both 32- and 64-bit. As such, it enables OpenDSS to run in many environments the official implementation cannot, from a Raspberry Pi to an HPC cluster, and in cloud environments like Google Colab (some of our notebook examples are ready-to-run on Colab).

AltDSS-Python is part of DSS-Extensions, a larger effort to port the original OpenDSS to support more platforms (OSs, processor architectures) and programming languages, and to extend both the OpenDSS engine and API, represented in the AltDSS engine. For alternatives in other programming languages, including MATLAB, C++, C#/.NET, Rust and Go, please check https://dss-extensions.org/ and our hub repository at dss-extensions/dss-extensions for more documentation, discussions and the FAQ.

```
flowchart TD
    C["AltDSS engine/DSS C-API\n(libdss_capi)"] --> P["DSS-Python: Backend"]
    P --- DSSPY["DSS-Python\n(dss package)"]
    P --- ODDPY["OpenDSSDirect.py\n(opendssdirect package)"]
    P --- ALTDSSPY["AltDSS-Python\n(altdss package)"]
```

AltDSS-Python is one of three Python projects under DSS-Extensions. See "DSS-Extensions — OpenDSS: Overview of Python APIs" for a brief comparison between these and the official COM API. Both OpenDSSDirect.py and DSS-Python expose the classic OpenDSS API (closer to the COM implementation). AltDSS-Python, on the other hand, exposes all OpenDSS objects, batch operations, and a more intuitive API. If required, users can mix all three packages in the same project to access some of their unique features, or just to avoid changing legacy/stable code.

Since the base code is shared, other features from DSS-Python, such as plotting, can be used here. Other examples from DSS-Python or OpenDSSDirect.py can be adapted quite easily too.

What is AltDSS?

To avoid confusion between the official OpenDSS provided by EPRI and the alternative, unofficial implementation provided by the DSS-Extensions group of projects, our implementation of the OpenDSS engine, exposed as a software library (e.g. DLL or shared object), will be slowly renamed to the AltDSS engine.
The classic parts of our DSS C-API are still in place and will remain for usage in DSS-Python and OpenDSSDirect.py.

The "Alt" prefix tries to encompass a few aspects:

- An alternative implementation
- A new approach for the API
- A more searchable name (just "DSS" is too generic nowadays), and one easier to cite

What didn't change:

- It's the same engine that's been developed since 2018 on DSS-Extensions.
- Our cross-validation with the official OpenDSS implementation continues.
- If you have used DSS C-API, DSS-Python, DSS MATLAB, DSS Sharp, OpenDSSDirect.jl or OpenDSSDirect.py since 2018 or 2019 (depending on the package), you were already using "AltDSS".
- If you somehow need compatibility with the official OpenDSS COM implementation at the API level, prefer to stay with the classic API implementations such as DSS-Python and OpenDSSDirect.py, and avoid anything marked as "API Extension".

What is AltDSS-Python?

AltDSS-Python is a new Python package that goes along with DSS-Python and OpenDSSDirect.py to provide a more Pythonic experience. This new package uses many unique aspects of our alternative implementation of the engine and of the API to expose all types of DSS objects currently implemented in AltDSS. When referring to Python, we will call AltDSS-Python just AltDSS, for short. The same will happen for other future language bindings based on the same concepts.

Notable features are:

- All DSS objects implemented in our implementation of the OpenDSS engine are exposed.
- Avoids the active element idiom. Users can interact with multiple objects without being required to activate them.
- The original separate APIs for general circuit elements, PC elements, PD elements and others are encapsulated, together with the access to the DSS properties, in a single context. We hope to achieve a better autocomplete experience in IDEs and easier discoverability of the available functions. For example, a Line object contains all the DSS properties of the Line object (as Python properties), plus inherited methods/properties from common DSS objects, circuit element objects, and PD element objects.
- Due to duplication, the access to the DSS properties replaces many of the old COM-style properties. The names of the properties in Python that reflect the DSS properties are kept as close as possible. A few exceptions are names like %Rs (an invalid identifier in Python), which is exposed as pctRs. We hope that reusing the same names will reduce the cognitive load for users and provide an easier transition between API and GUI (e.g. the main official OpenDSS GUI) usage. That is, a .DSS script is now closer to an AltDSS Python script, and users shouldn't be required to learn two completely different conventions.
- More native types, including more extensive usage of Python enumerations and complex numbers.
- Batches! Manipulate batches of uniform objects (including DSS properties and general functions) or non-uniform collections of objects (general functions only, at the moment), with lower overhead.
- New dedicated EnergyMeter classes, including direct MeterSection inspection.

Note that the general interaction through .DSS scripts is unchanged, and users can use AltDSS-Python together with DSS-Python or OpenDSSDirect.py, since the engine is shared. That is, although it would be recommended to fully use AltDSS-Python once a v1.0 is done, there is no rush to migrate.
Users can enjoy some of the new features piecewise, like batches or the APIs for all DSS object types not exposed in the classic APIs.

Why a new package?

Besides the naming issue, although both DSS-Python and OpenDSSDirect.py have acquired features since 2018, when DSS-Extensions was born, adding a lot of the features from AltDSS would break compatibility in a major way. Initially, more Pythonic and extra features were expected to land in OpenDSSDirect.py. Looking into the list of publications and public repositories using both packages suggests that breaking the API to introduce features is not advisable.

Moreover, many users still see OpenDSSDirect.py and think that it uses OpenDSSDirect.DLL. That is not the case, and even when it was (before August 2018), the Linux and macOS builds of OpenDSS were not supported by EPRI, the original OpenDSS developer. Effectively, OpenDSSDirect.py (OpenDSSDirect.jl is in a similar situation) never used an official binary on non-Windows platforms.

That said, we hope both new and experienced OpenDSS users find it easy to adopt.

Documentation

Visit https://dss-extensions.org/AltDSS-Python/

Especially useful for a quick overview: https://dss-extensions.org/AltDSS-Python/examples/GettingStarted.html
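As a brief taste of the API style described above, here is a hedged sketch based on the getting-started guide; the circuit file path is a placeholder, and the callable-instance shortcut for running DSS commands plus the exact member names should be verified against the documentation linked above:

```python
from altdss import altdss

# Run DSS commands through the default engine instance (path is a placeholder).
altdss('redirect "IEEE13Nodeckt.dss"')
altdss.Solution.Solve()

# Batch access: read the names of all loads at once, without the
# classic activate-then-query idiom of the older APIs.
print(altdss.Load.Name)
```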
altearnrpc
Altearn RPC

Dependencies: Python 3.8+, PyPresence, TQDM. PyPresence and TQDM are installed automatically by pip.

Installation

```
pip install altearnrpc
```

Upgrade

```
pip install -U altearnrpc
```

Getting started

```
python -m altearnrpc
```
alteia
This SDK offers a high-level Python interface toAlteia APIs.Installationpipinstallalteia*requires Python >= 3.6.1, pip >= 20.3Basic usageimportalteiasdk=alteia.SDK(user="YOUR_EMAIL_ADDRESS",password="YOUR_ALTEIA_PASSWORD")projects=sdk.projects.search()forprojectinprojects:print(project.name)# My awesome project📕 DocumentationReference documentationJupyter notebook tutorialsContributingPackage installation:poetryinstall(Optional) To install pre-commit hooks (and detect problems before the pipelines):pipinstallpre-commit pre-commitinstall pre-commitrun--all-files pre-commitautoupdate# Optional, to update pre-commit versions
alteia-cli
alteiaCLI for Alteia Platform.Usage:$alteia[OPTIONS]COMMAND[ARGS]...Options:-p, --profile TEXT: Alteia CLI Profile [env var: ALTEIA_CLI_PROFILE; default: default]--version: Display the CLI version and exit--verbose: Display more info during the run--install-completion: Install completion for the current shell.--show-completion: Show completion for the current shell, to copy it or customize the installation.--help: Show this message and exit.Commands:analytic-configurations: Interact with configurations of analytics.analytics: Interact with analytics.configure: Configure platform credentials.credentials: Interact with Docker registry credentials.products: Interact with products.alteia analytic-configurationsInteract with configurations of analytics.Usage:$alteiaanalytic-configurations[OPTIONS]COMMAND[ARGS]...Options:--help: Show this message and exit.Commands:assign: Assign an analytic configuration set to a...create: Create a new configuration set for an...delete: Delete one or many analytic configuration...export: Export one configuration of a configuration...list: List the analytic configuration sets and...unassign: Unassign an analytic configuration set from a...update: Update a configuration set.alteia analytic-configurations assignAssign an analytic configuration set to a company.All analytic configurations that are currently part of this analytic configuration set (and the potential future ones), are assigned to the company.Usage:$alteiaanalytic-configurationsassign[OPTIONS]CONFIG_SET_IDArguments:CONFIG_SET_ID: Identifier of the configuration set to assign [required]Options:-c, --company TEXT: Identifier of the company the configuration set will be assigned to [required]--help: Show this message and exit.alteia analytic-configurations createCreate a new configuration set for an analytic.A configuration set is composed of configurations, each being applied to a different version range of the associated analytic.Usage:$alteiaanalytic-configurationscreate[OPTIONS]Options:-c, --config-path PATH: Path to the Configuration file (YAML or JSON file) [required]-n, --name TEXT: Configuration set name (will be prompt if not provided)-a, --analytic TEXT: Analytic name (will be prompt if not provided)-v, --version-range TEXT: Version range of the analytic on which this first configuration can be applied-d, --description TEXT: Configuration set description text--help: Show this message and exit.alteia analytic-configurations deleteDelete one or many analytic configuration set(s) and the associated configuration(s).Usage:$alteiaanalytic-configurationsdelete[OPTIONS]IDSArguments:IDS: Identifier of the configuration set to delete, or comma-separated list of configuration set identifiers [required]Options:--help: Show this message and exit.alteia analytic-configurations exportExport one configuration of a configuration set. Output can be a JSON or YAML format.Usage:$alteiaanalytic-configurationsexport[OPTIONS]CONFIG_SET_IDArguments:CONFIG_SET_ID: Identifier of the configuration set to export value [required]Options:-v, --version-range TEXT: Specify the exact version range from the applicable analytic version ranges. Optional if only one configuration exists in the configuration set-f, --format [json|yaml]: Optional output format [default: json]-o, --output-path PATH: Optional output filepath to export the configuration. If the filepath already exists, it will be replaced. 
If not specified, configuration will be displayed in stdout--help: Show this message and exit.alteia analytic-configurations listList the analytic configuration sets and their configurations.Usage:$alteiaanalytic-configurationslist[OPTIONS]Options:-n, --limit INTEGER RANGE: Max number of configuration sets returned. [default: 100]--name TEXT: Configuration set name (or a part of) to match--analytic TEXT: Exact analytic name to match--desc: Print description rather than configurations [default: False]--help: Show this message and exit.alteia analytic-configurations unassignUnassign an analytic configuration set from a company.All configurations currently part of this analytic configuration set, are unassigned from the company.Usage:$alteiaanalytic-configurationsunassign[OPTIONS]CONFIG_SET_IDArguments:CONFIG_SET_ID: Identifier of the configuration set to unassign [required]Options:-c, --company TEXT: Identifier of the company the configuration set is assigned to [required]--help: Show this message and exit.alteia analytic-configurations updateUpdate a configuration set. A configuration set is composed of configurations, each being applied to a different version range of the associated analytic.To add a new configuration (file), use --add-config with the path to the new configuration file (YAML or JSON file) and --version-range with the version range of the analytic you want this new configuration to be applied.To replace an existing configuration (file), use --replace-config with the path to the new configuration file (YAML or JSON file) and --version-range with the exact version range attached to the configuration to replace.To remove a configuration from a configuration set, use --remove-config and --version-range with the exact version range attached to the configuration to remove.To change the version range for an existing configuration, do an "add" and then a "remove" (an export may be necessary to do the "add" with the same configuration file).Usage:$alteiaanalytic-configurationsupdate[OPTIONS]CONFIG_SET_IDArguments:CONFIG_SET_ID: Identifier of the configuration set to update [required]Options:-n, --name TEXT: New configuration set name-d, --description TEXT: New configuration set description-a, --add-config PATH: Add new configuration. Specify the path to the new configuration file, and --version-range option with the version range of the analytic you want this new configuration to be applied. Do not use with --replace-config-u, --replace-config PATH: Replace a configuration. Specify the path to the new configuration file, and --version-range option with the exact version range from the applicable analytic version ranges. Do not use with --add-config-v, --version-range TEXT: Version range of the analytic on which a configuration can be applied. Must be used with one of --add-config, --replace-config or --remove-config-r, --remove-config TEXT: Remove a configuration. 
Specify the exact version range from the applicable analytic version ranges--help: Show this message and exit.alteia analyticsInteract with analytics.Usage:$alteiaanalytics[OPTIONS]COMMAND[ARGS]...Options:--help: Show this message and exit.Commands:create: Create a new analytic.delete: Delete an analytic.disable: Disable an analytic on companiesenable: Enable an analytic on companiesexpose: Expose an analyticlist: List the analytics.list-exposed: List exposed analyticsshare: Share an analytic (DEPRECATED: use expose...unexpose: Unexpose an analyticunshare: Unshare an analytic (DEPRECATED: use unexpose...alteia analytics createCreate a new analytic.Usage:$alteiaanalyticscreate[OPTIONS]Options:--description PATH: Path of the Analytic description (YAML file). [required]--company TEXT: Company identifier.--help: Show this message and exit.alteia analytics deleteDelete an analytic.Usage:$alteiaanalyticsdelete[OPTIONS]ANALYTIC_NAMEArguments:ANALYTIC_NAME: [required]Options:--version TEXT: Version range of the analytic in SemVer format. If not provided, all the versions will be deleted.--help: Show this message and exit.alteia analytics disableDisable an analytic on companiesUsage:$alteiaanalyticsdisable[OPTIONS]ANALYTIC_NAMEArguments:ANALYTIC_NAME: [required]Options:--company TEXT: Identifier of the company to disable the analytic, or list of such identifiers (comma separated values).When providing the identifier of the root company of your domain, the analytic is disabled by default for all the companies of the domain (equivalent to using the --domain option).--domain TEXT: Use this option to make the analytic disabled by default for all companies of the specified domains (comma separated values) (equivalent to using the --company option providing the root company identifier(s) of these domains).Apart from this default behavior on domain, the analytic can be enabled or disabled separately on each company of the domain.--help: Show this message and exit.alteia analytics enableEnable an analytic on companiesUsage:$alteiaanalyticsenable[OPTIONS]ANALYTIC_NAMEArguments:ANALYTIC_NAME: [required]Options:--company TEXT: Identifier of the company to enable the analytic, or list of such identifiers (comma separated values).When providing the identifier of the root company of your domain, the analytic is enabled by default for all the companies of the domain (equivalent to using the --domain option).--domain TEXT: Use this option to make the analytic enabled by default for all companies of the specified domains (comma separated values) (equivalent to using the --company option providing the root company identifier(s) of these domains).Apart from this default behavior on domain, the analytic can be enabled or disabled separately on each company of the domain.--help: Show this message and exit.alteia analytics exposeExpose an analyticUsage:$alteiaanalyticsexpose[OPTIONS]ANALYTIC_NAMEArguments:ANALYTIC_NAME: [required]Options:--domain TEXT: To expose the analytic on the specified domains (comma separated values) if you have the right permissions on these domains.By default, without providing this option, the analytic will be exposed on your domain if you have the right permissions on it.--help: Show this message and exit.alteia analytics listList the analytics.Usage:$alteiaanalyticslist[OPTIONS]Options:-n, --limit INTEGER RANGE: Max number of analytics returned. [default: 100]--all: If set, display all kinds of analytics (otherwise only custom analytics are displayed). 
[default: False]
--help: Show this message and exit.

alteia analytics list-exposed

List exposed analytics.

Usage:

$ alteia analytics list-exposed [OPTIONS]

Options:

--all: If set, display all kinds of analytics (otherwise only custom analytics are displayed). [default: False]
--domain TEXT: If set, filter exposed analytics on the specified domains (comma-separated values) if you have the right permissions on these domains. By default, without this option, it filters on your own domain.
--help: Show this message and exit.

alteia analytics share

Share an analytic (DEPRECATED: use expose instead).

Usage:

$ alteia analytics share [OPTIONS] ANALYTIC_NAME

Arguments:

ANALYTIC_NAME: [required]

Options:

--version TEXT: Range of versions in SemVer format. If not provided, all versions will be shared.
--company TEXT: Identifier of the company to share the analytic with. When providing the identifier of the root company of your domain, the analytic is shared with all companies of the domain (equivalent to using the --domain option).
--domain / --no-domain: Share the analytic with the root company of your domain. This has the effect of sharing the analytic with all companies of your domain and is equivalent to using the --company option with the id of the root company. [default: False]
--help: Show this message and exit.

alteia analytics unexpose

Unexpose an analytic.

Usage:

$ alteia analytics unexpose [OPTIONS] ANALYTIC_NAME

Arguments:

ANALYTIC_NAME: [required]

Options:

--domain TEXT: Unexpose the analytic from the specified domains (comma-separated values) if you have the right permissions on these domains. By default, without this option, the analytic is unexposed from your own domain if you have the right permissions on it.
--help: Show this message and exit.

alteia analytics unshare

Unshare an analytic (DEPRECATED: use unexpose instead).

Usage:

$ alteia analytics unshare [OPTIONS] ANALYTIC_NAME

Arguments:

ANALYTIC_NAME: [required]

Options:

--version TEXT: Range of versions in SemVer format. If not provided, all versions will be unshared.
--company TEXT: Identifier of the company to unshare the analytic from.
--domain / --no-domain: Unshare the analytic from the root company of your domain. This is equivalent to using the --company option with the id of the root company. Note that if you specifically shared the analytic with a company of your domain, the analytic will still be shared with that company. [default: False]
--help: Show this message and exit.

alteia configure

Configure platform credentials.

You can configure multiple credential profiles by specifying a different profile name for each one.

Usage:

$ alteia configure [OPTIONS] [PROFILE]

Arguments:

[PROFILE]: Alteia CLI profile to configure [env var: ALTEIA_CLI_PROFILE; default: default]

Options:

--help: Show this message and exit.

alteia credentials

Interact with storage credentials.

Usage:

$ alteia credentials [OPTIONS] COMMAND [ARGS]...

Options:

--help: Show this message and exit.

Commands:

create: Create a new credential entry.
delete: Delete a credential entry by its name.
list: List the existing credentials.

alteia credentials create

Create a new credential entry.

Usage:

$ alteia credentials create [OPTIONS]

Options:

--filepath PATH: Path of the credential JSON file.
[required]
--company TEXT: Company identifier.
--help: Show this message and exit.

alteia credentials delete

Delete a credential entry by its name.

Usage:

$ alteia credentials delete [OPTIONS] NAME

Arguments:

NAME: [required]

Options:

--help: Show this message and exit.

alteia credentials list

List the existing credentials.

Usage:

$ alteia credentials list [OPTIONS]

Options:

--company TEXT: Company identifier.
--type TEXT: Type of credentials [default: docker].
--help: Show this message and exit.

alteia products

Interact with products.

Usage:

$ alteia products [OPTIONS] COMMAND [ARGS]...

Options:

--help: Show this message and exit.

Commands:

cancel: Cancel a running product.
list: List the products.
logs: Retrieve the logs of a product.

alteia products cancel

Cancel a running product.

Usage:

$ alteia products cancel [OPTIONS] PRODUCT_ID

Arguments:

PRODUCT_ID: [required]

Options:

--help: Show this message and exit.

alteia products list

List the products.

Usage:

$ alteia products list [OPTIONS]

Options:

-n, --limit INTEGER RANGE: Max number of products returned [default: 10]
--analytic TEXT: Analytic name
--company TEXT: Company identifier
--status [pending|processing|available|rejected|failed]: Product status
--all: If set, also display products from platform analytics (otherwise only products from custom analytics are displayed). [default: False]
--help: Show this message and exit.

alteia products logs

Retrieve the logs of a product.

Usage:

$ alteia products logs [OPTIONS] PRODUCT_ID

Arguments:

PRODUCT_ID: [required]

Options:

-f, --follow: Follow logs. [default: False]
--help: Show this message and exit.

Generated with python -m typer_cli alteia_cli/main.py utils docs --name alteia
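For orientation, the list and update commands documented above compose into a simple workflow: find a configuration set, then attach a new configuration file to it. The invocation below is illustrative only; the configuration set identifier, file path and version range are hypothetical placeholders:

$ alteia analytic-configurations list --limit 10 --analytic my-analytic
$ alteia analytic-configurations update my-config-set-id --add-config ./new-config.yaml --version-range ">=1.2.0 <2.0.0"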
alteia-cli-dataflow-management
alteia datastreams

Usage:

$ alteia datastreams [OPTIONS] COMMAND [ARGS]...

Options:

--install-completion: Install completion for the current shell.
--show-completion: Show completion for the current shell, to copy it or customize the installation.
--help: Show this message and exit.

Commands:

aggregate-partial-results: Aggregate a datastream's outputs using its aggregation parameters.
complete: Complete a datastream.
create: Create a datastream from a datastream template.
describe: Describe a datastream and its datastream files status.
get: Get a datastream description in YAML format.
list: List datastreams.
list-partial-aggregations: List ongoing aggregations for a datastream.
monitor-assets: Monitor the assets of a datastream.
trigger: Trigger a datastream in order to synchronise its files with its source.

alteia datastreams aggregate-partial-results

Aggregate a datastream's outputs using its aggregation parameters.

Usage:

$ alteia datastreams aggregate-partial-results [OPTIONS] DATASTREAM_ID

Arguments:

DATASTREAM_ID: Datastream ID [required]

Options:

--force-command / --no-force-command: Force the partial aggregation command even if another one is running. [default: no-force-command]
--help: Show this message and exit.

alteia datastreams complete

Complete a datastream.

Usage:

$ alteia datastreams complete [OPTIONS] DATASTREAM_ID

Arguments:

DATASTREAM_ID: Datastream ID [required]

Options:

--help: Show this message and exit.

alteia datastreams create

Create a datastream from a datastream template.

Usage:

$ alteia datastreams create [OPTIONS]

Options:

--description PATH: Path of the datastream description (YAML or JSON file). [required]
--help: Show this message and exit.

alteia datastreams describe

Describe a datastream and its datastream files status.

Usage:

$ alteia datastreams describe [OPTIONS] DATASTREAM_ID

Arguments:

DATASTREAM_ID: Datastream ID [required]

Options:

--help: Show this message and exit.

alteia datastreams get

Get a datastream description in YAML format.

Usage:

$ alteia datastreams get [OPTIONS] DATASTREAM_ID

Arguments:

DATASTREAM_ID: Datastream ID [required]

Options:

--help: Show this message and exit.

alteia datastreams list

List datastreams.

Usage:

$ alteia datastreams list [OPTIONS]

Options:

--company TEXT: Company ID.
--limit INTEGER: Limit the number of results. [default: 10]
--asset-schema-repository TEXT: Asset schema repository name.
--asset-schema TEXT: Asset schema name.
--asset-schema-repository-id TEXT: Asset schema repository id.
--asset-schema-id TEXT: Asset schema id.
--help: Show this message and exit.

alteia datastreams list-partial-aggregations

List ongoing aggregations for a datastream.

Usage:

$ alteia datastreams list-partial-aggregations [OPTIONS] DATASTREAM_ID

Arguments:

DATASTREAM_ID: Datastream ID [required]

Options:

--help: Show this message and exit.

alteia datastreams monitor-assets

Monitor the assets of a datastream.

Usage:

$ alteia datastreams monitor-assets [OPTIONS] DATASTREAM_ID

Arguments:

DATASTREAM_ID: Datastream ID [required]

Options:

--help: Show this message and exit.

alteia datastreams trigger

Trigger a datastream in order to synchronise the datastream files with its source.

Usage:

$ alteia datastreams trigger [OPTIONS] DATASTREAM_ID

Arguments:

DATASTREAM_ID: Datastream ID [required]

Options:

--max-nb-files-sync INTEGER: Maximum number of files to synchronize. [default: 20]
--fill-runnings-files / --no-fill-runnings-files: Synchronize files in order to reach the maximum number of files.
[default: no-fill-runnings-files]
--help: Show this message and exit.

alteia datastreamtemplates

Usage:

$ alteia datastreamtemplates [OPTIONS] COMMAND [ARGS]...

Options:

--install-completion: Install completion for the current shell.
--show-completion: Show completion for the current shell, to copy it or customize the installation.
--help: Show this message and exit.

Commands:

create: Create a datastream template.
delete: Delete a datastream template.
list: List datastream templates.

alteia datastreamtemplates create

Create a datastream template.

Usage:

$ alteia datastreamtemplates create [OPTIONS]

Options:

--description PATH: Path of the datastream template description (YAML file). [required]
--company TEXT: Company identifier. [required]
--help: Show this message and exit.

alteia datastreamtemplates delete

Delete a datastream template.

Usage:

$ alteia datastreamtemplates delete [OPTIONS] DATASTREAMSTEMPLATE

Arguments:

DATASTREAMSTEMPLATE: Datastream template ID [required]

Options:

--help: Show this message and exit.

alteia datastreamtemplates list

List datastream templates.

Usage:

$ alteia datastreamtemplates list [OPTIONS]

Options:

--company TEXT: Company ID.
--limit INTEGER: Limit the number of results. [default: 10]
--asset-schema-repository TEXT: Asset schema repository name.
--asset-schema TEXT: Asset schema name.
--help: Show this message and exit.
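Taken together, a typical dataflow round trip chains the commands documented above: create a template, create a datastream from it, trigger synchronization, then inspect progress. The identifiers and file names below are hypothetical placeholders:

$ alteia datastreamtemplates create --description ./template.yaml --company my-company-id
$ alteia datastreams create --description ./datastream.yaml
$ alteia datastreams trigger my-datastream-id --max-nb-files-sync 10
$ alteia datastreams describe my-datastream-id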
altena
# altena: Feature extraction for categorical variables

Under construction…
altendpy
Miscellaneous extras that I like to use when working with Python.
altendpyqt5
Miscellaneous extras that I like to use when working with PyQt5.
alteon2f5
No description available on PyPI.
alteon-sdk
Introduction

Provides an API for developers to interact with Alteon structures in a Python environment via a REST backend:

- access to Alteon management & configuration functions
- an abstract configuration model for Alteon elements via consistent declarative-style configurators

In this model, consistent object structures are exposed to the user per configurator type. Each configurator handles base commands against a device (READ, READ_ALL, DELETE, UPDATE & DEPLOY). The package handles the binding and translation between the Alteon device and abstract objects, and it works both ways: abstract <-> Alteon structure. In other words, it translates abstract objects into Alteon configuration and reads Alteon configuration back into abstract types. Multiple-choice values (enums) are consumed dynamically from the beans package, so a developer can choose to work with string values, ints or enums directly.

The SDK requires Python > 3.6.

Minimum supported Alteon versions: 31.0.10.0, 32.2.2.0.

The direct device API, configurators and management functions are available via the Alteon client module:

from radware.alteon.client import AlteonClient
from radware.alteon.beans.SlbNewCfgEnhRealServerTable import *

alteon_client_params = dict(
    validate_certs=False,
    user='admin',
    password='admin',
    https_port=443,
    server='172.16.1.1',
    timeout=15,
)

client = AlteonClient(**alteon_client_params)

# read bean from device:
bean = SlbNewCfgEnhRealServerTable()
bean.Index = 'real_1'
print(client.api.device.read(bean))

# work with Configurators:
client.api.mgmt.config.commit()
print(client.api.mgmt.info.software)
print(client.api.conf.type.dns_responders.read_all())

server_params = ServerParameters()
server_params.index = 'real1'
server_params.ip_address = '3.3.3.3'
client.api.conf.execute('deploy', server_params, dry_run=True, write_on_change=True, get_diff=True)

Another way is to use the desired configurator directly:

from radware.alteon.sdk.configurators.server import *

connection = AlteonConnection(**alteon_client_params)

server_configurator = ServerConfigurator(connection)
server_params = ServerParameters()
server_params.index = 'real1'
server_params.ip_address = '3.3.3.3'
server_params.availability = 5
server_params.server_ports = [56, 78]
server_params.weight = 5
server_params.server_type = EnumSlbRealServerType.remote_server
server_params.state = EnumSlbRealServerState.enabled
server_configurator.deploy(server_params)

Or the configuration manager:

from radware.sdk.configurator import DeviceConfigurator, DeviceConfigurationManager
from radware.alteon.sdk.configurators.ssl_key import SSLKeyConfigurator

ssl_key_configurator = SSLKeyConfigurator(**alteon_client_params)
cfg_mng = DeviceConfigurationManager()
result = cfg_mng.execute(ssl_key_configurator, DeviceConfigurator.READ_ALL, None, passphrase=passphrase)
print(result.content_translate)

Further details & docs will be added later.

Installation

pip install alteon-sdk

Design Principles

- 46 configurators: some indexed & others are a summary
- management functions for fact collection, carrying out operational tasks and managing device configuration
- Alteon direct device API
- Alteon client: aggregates all configuration & management modules along with the device API
- bean package compiled from the MIB (auto-generated)
- each configurator is standalone and can be initiated for an AdcConnection
- abstraction <-> beans automatic attribute binding + specific processing when needed
- abstraction <-> Alteon configuration bi-directional translation (deploy from abstract, read Alteon config into abstract)
- defined dry_run delete procedure for "special configurators": relevant when there is no delete procedure, mostly for global configuration
- identify duplicate
entries within a structure

Authors

Alteon SDK was created by Leon Meguira.

Copyright

Copyright 2019 Radware LTD

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
alter
[Github-flavored Markdown](https://github.com/lyy910203/Alter/blob/master/README.md)
alteraparser
Introduction

Alteraparser is a library that provides functions to define a grammar that can be passed to a parser.

Basic Usage

Code sample:

from alteraparser.parser import Parser
from alteraparser import char_range, fork, many, grammar, ...

ALPHA = fork(char_range('a', 'z'), char_range('A', 'Z'))
NUM = char_range('0', '9')
ALPHA_NUM = fork(ALPHA, NUM)
...
variable = fork([ALPHA, many(ALPHA_NUM)]).set_name('var')
...
my_grammar = grammar(variable, ...)
my_parser = Parser(my_grammar)
ast = my_parser.parse_file("my_code.txt")

Changes

0.5.0.a2:

- added transform_ast method to enable transformation of AST nodes
alteredai
No description available on PyPI.
alteredcarbon
No description available on PyPI.
altered-states
Altered States is a reversible state tool. It changes values in an object, and when the changes are no longer needed, it can reverse them.

Read more here: https://github.com/Plexical/altered.states/blob/master/README.rst

1.0.1

- Fixed issue in setup.py script
- Added missing changelog

1.0.0

- Altered States now runs on Python 3 (tested on 2.7, 3.5 and 3.6)
- Dropped support for Python 2.6
- Experimental support for Kenneth Reitz's Pipenv tool
- Corrected invalid use of os.modules in examples

0.8.6

- Better handling of objects that override __getitem__ (thanks to @merwok).
- Dropped support for Python 2.5 (no sane way to solve issue #4 there).

0.8.5

- Added a new API entry point: alter(), which can be used to perform a two-step reversible alteration.

0.8.2

- Updated test suites to [email protected] for fixtures (now requires py.test > 2.3)
- Fixed a bug causing os.environ not to be patchable.
- Fixing bug #2 means switching the dict-like object check from isinstance(x, dict) to hasattr(x, '__getitem__'). This change is thought not to break backwards compatibility, but if you encounter unexpected behaviour in dict/object detection this might be it. I'd be very interested to know about it if you do.

0.8.1

- Alias Expando as E for optional terseness.

0.8.0

- Initial release.
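The linked README has full examples; as a quick illustration, here is a minimal sketch of the idea. The import path and the state() context-manager name are assumptions inferred from the project description (the changelog above also mentions a two-step alter() entry point and an E alias for Expando), so verify them against the linked README before relying on them:

import os
from altered import state  # assumed entry point

# Temporarily change a value; the original is restored when the block exits.
with state(os.environ, EDITOR='nano'):
    print(os.environ['EDITOR'])  # 'nano' inside the block
# outside the block, the previous EDITOR value (or its absence) is back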
alterego
Planning
alter-ego-llm
alter_ego

alter_ego is a library that allows you to run experiments with LLMs. ego is a command-line helper included with this package.

alter_ego allows you to run microexperiments using a simple shorthand. You can also create more advanced experiments. For turn-based interactive experiments, a builder is available.

Read our paper. You can browse our source code in this repository. Here are autogenerated docs that can make it easier to find what you're looking for.

Getting started

Prerequisites

Install Python, at least version 3.8. If you are on Windows, make sure to install Python into the PATH.

Create a virtual environment and activate it. On Linux and macOS, this is very simple. Just open a terminal and execute the following commands:

user@host:~$ python -m venv env
user@host:~$ source env/bin/activate

Note: In this document, you may have to replace python by python3 and pip by pip3. This depends on your system's settings.

On Windows, consider using this tutorial to create and activate a virtual environment. Some editors also do this for you.

Install alter_ego using

(env) user@host:~$ pip install -U alter_ego_llm

Note how (env) signals that we are in the virtual environment created earlier.

For the remainder of this document, we assume that your editor's current directory is also your terminal's present working directory. From within your terminal, you can find out the present working directory using pwd — this should show the very same directory as opened in your editor.

Using alter_ego with GPT

Note: If you do not want to use GPT for now, simply change GPTThread to CLIThread in the examples below and skip this section.

Obtain an API key from OpenAI. Here is more information. Your API key looks as follows: sk-*** Copy this to your clipboard.

Create a new file in your editor. Put the content of your clipboard into the file openai_key in your current directory. The file must not have a file extension—it is literally just called openai_key.

Developing a simple microexperiment

New: 📺 WATCH VIDEO TUTORIAL

Let's create a minimal experiment using alter_ego's shorthand feature. Create a new file in your editor, first_experiment.py. Here's its code:

import alter_ego.agents
from alter_ego.utils import extract_number
from alter_ego.experiment import factorial

def agent():
    return alter_ego.agents.GPTThread(model="gpt-3.5-turbo", temperature=1.0)

prompt = "Estimate the public approval rating of {{politician}} during the {{time}} of their presidency. Only return a single percentage from 0 to 100."

data = factorial(
    prompt,
    politician=["George W. Bush", "Barack Obama"],
    time=["1st year", "8th year"],
).run(agent, extract_number, times=1)

for row in data:
    print(row)

Note how we use variables within the prompt. The crucial feature of alter_ego is how these variables are automatically replaced based on treatment.

In the terminal, run

(env) user@host:~$ python first_experiment.py

This will take a few seconds and give you output similar to this:

{'politician': 'George W. Bush', 'time': '1st year', 'result': None}
{'politician': 'George W. Bush', 'time': '8th year', 'result': None}
{'politician': 'Barack Obama', 'time': '1st year', 'result': 63}
{'politician': 'Barack Obama', 'time': '8th year', 'result': 8}

As you see, GPT did not give a valid response for George W. Bush. Let's debug by changing the line with run to:

).run(agent, extract_number, times=1, keep_retval=True)

Rerunning our script gives:

{'politician': 'George W. Bush', 'time': '1st year', 'result': 62, 'retval': 'Approximately 62%.'}
{'politician': 'George W. Bush', 'time': '8th year', 'result': None, 'retval': "It is difficult to provide an accurate estimate without conducting a specific poll or analysis. However, based on historical data and trends, it is common for a president's approval rating to decline over the course of their second term. Taking into account various factors such as the economic recession and the ongoing Iraq War during George W. Bush's final year in office (2008), it is reasonable to estimate his public approval rating to be around 25-35%. Please note that this estimation is subjective and might not perfectly reflect the actual public sentiment at that time."}

(Here, I have shown only two rows of the output.)

As you see, in the cases where GPT returned only a single number, alter_ego was able to correctly extract it. Unfortunately, GPT 3.5 tends to refuse requests to just return a single number. GPT 4 works better. If you have access to GPT 4 over the API, you can change model="gpt-3.5-turbo" to model="gpt-4". This is the resulting output (where I have omitted the retval once again):

{'politician': 'George W. Bush', 'time': '1st year', 'result': 57}
{'politician': 'George W. Bush', 'time': '8th year', 'result': 34}
{'politician': 'Barack Obama', 'time': '1st year', 'result': 57}
{'politician': 'Barack Obama', 'time': '8th year', 'result': 55}

Here you can view the documentation for run. run allows you to quickly execute an experiment defined by what highfalutin scientists call a "factorial design." This is because the possibilities of politician (George W. Bush, Barack Obama) were "multiplied" by the possibilities for time (1st year, 8th year).

The nice thing about these microexperiments is that you can easily carry the output forward to Pandas, Polars, etc.—this is because data is only a "list of dicts," and as such it is trivial to convert to a DataFrame. This allows you to analyze data received straight from an LLM.

Of course, you will often want to set the temperature to 0.0 or another low value. This depends on the nature of your use-case.

Using the builder to construct a turn-based experiment

New: 📺 WATCH VIDEO TUTORIAL

We offer a web app to build simple experiments between multiple LLMs. The builder can be found here, with its source code being available here.

For now, just read through the app (it showcases an example of a framed ultimatum game) and scroll down. Copy the code shown below "Export or import scenario" on the web app into a new file. Call that file built.json in your current project directory. The file must be called built.json.

Open a terminal and execute

(env) user@host:~$ ego run built

This will show "System instructions" for two separate players. Note how they vary: one player (the first one) is the proposer and the second player is the responder.

The proposer is now asked to put in a proposal in JSON. Let's do it:

{"keep": 4.2}

As you see, the responder is notified and can now ACCEPT or REJECT. Let's accept:

ACCEPT

This completes the experiment. You will see something like:

Experiment c6627c4e-f17f-4cdc-ba47-462eced3e489 OK

Let's look at the data that was generated. We can get it in CSV format by executing:

(env) user@host:~$ ego data built c6627c4e-f17f-4cdc-ba47-462eced3e489 > data.csv

(You need to replace c6627c4e-f17f-4cdc-ba47-462eced3e489 with your actual experiment ID.)

This should tell you that 2 lines were written.
If you open data.csv in your preferred spreadsheet calculator, you will see the following output:

| choice        | convo                                | experiment                           | i | round | tainted | thread                               | thread_type | treatment |
|---------------|--------------------------------------|--------------------------------------|---|-------|---------|--------------------------------------|-------------|-----------|
| {"keep": 4.2} | 03ba9edf-99c0-46d6-8c26-42c26683197c | c6627c4e-f17f-4cdc-ba47-462eced3e489 | 1 | 1     | False   | 1ce5faaf-cc1f-438f-8231-8b7e0d96fb07 | CLIThread   | take      |
| "ACCEPT"      | 03ba9edf-99c0-46d6-8c26-42c26683197c | c6627c4e-f17f-4cdc-ba47-462eced3e489 | 2 | 1     | False   | 9043cd57-3e83-42d2-8d62-c783725e05e7 | CLIThread   | take      |

This is obviously easy to post-process in whatever statistics software you use.

If you re-run the experiment, enter garbage instead of the expected inputs and re-export the data, you will see that the tainted column becomes True. You can check for tainted to verify that inputs were received and processed as expected. Note that once any Thread responds invalidly, the Conversation will be stopped and all Threads will have tainted set to True. Thus, ego data's output may be partial.

You can run your scenario five times by doing

(env) user@host:~$ ego run -n 5 built

Needless to say, you can replace 5 by any integer whatsoever. Feel free to experiment with our builder.

Note: Experts can set the environment variable BUILT_FILE to have ego use a different file name from built.json.

Using alter_ego with oTree

New: 📺 WATCH VIDEO TUTORIAL

oTree is a relatively popular framework for web-based experiments. You can attach Threads (i.e., LLMs) and Conversations (i.e., bundles of LLMs) to oTree objects (participants, players, subsessions or sessions). This basically works as follows (in your app's __init__.py):

from alter_ego.agents import *
from alter_ego.utils import from_file
from alter_ego.exports.otree import link as ai

...

def creating_session(subsession):
    for player in subsession.get_players():
        ai(player).set(GPTThread(model="gpt-3.5-turbo", temperature=1.0))

Here, each player would get their own personal GPT agent. If you want to assign such an agent to a group, just do this:

def creating_session(subsession):
    for group in subsession.get_groups():
        ai(group).set(GPTThread(model="gpt-3.5-turbo", temperature=1.0))

Then, within your code, you can access the agent using a context manager. Here's an example of a simple live_method-based chat if we attached the Thread to the player object:

class Chat(Page):
    def live_method(player, data):
        if isinstance(data, str):
            with ai(player) as llm:
                # this submits the player's message and gets the response
                response = llm.submit(data, max_tokens=500)
            # note: if you put the AI on the "group" object or somewhere other than
            # the player, you may want to change this
            return {player.id_in_group: response}

If you want to set the system prompt, you can do:

def before_next_page(player, timeout_happened):
    with ai(player) as llm:
        # this sets the "system" prompt
        llm.system("You are participating in an experiment.")

You can do whatever you want, but always remember to open the LLM's context (using with ai(...) as llm) before performing any action. (This extra step is necessary because of oTree's ORM, which otherwise couldn't notice changes deep down in the Thread.)

You can also attach a whole Conversation to the aforementioned oTree objects. We provide a simple Chat in this repository; see the directory otree/ego_chat. Remember to put your API key into your oTree project folder. alter_ego saves message histories automatically in .ego_output in your oTree project folder.

Developing full-fledged experiments

New: 📺 WATCH VIDEO TUTORIAL

You can use the primitives exposed by this library to develop full-fledged experiments that go beyond the capabilities of our builder.
The directory scenarios/ contains a bunch of examples, including the code for our paper's machine–machine interaction example (ego_prereg.py). Watch the video tutorial to get a feeling for what's possible.

Citation

When using any part of alter_ego in a scientific context, cite the following work:

@article{ego,
  title = {Integrating Machine Behavior into Human Subject Experiments: A User-friendly Toolkit and Illustrations},
  author = {Engel, Christoph and Grossmann, Max R. P. and Ockenfels, Axel},
  year = {2023},
}

License

alter_ego is © Max R. P. Grossmann et al., 2023. It is licensed under LGPLv3+. Please see LICENSE for details.

This program is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose. See the GNU Lesser General Public License for more details.
alterity
Alterity

Documentation: https://john-james-ai.github.io/alterity
Source Code: https://github.com/john-james-ai/alterity
PyPI: https://pypi.org/project/alterity/

Outlier detection.

Installation

pip install alterity

Development

- Clone this repository
- Requirements: Poetry, Python 3.7+
- Create a virtual environment and install the dependencies:

poetry install

- Activate the virtual environment:

poetry shell

Testing

pytest

Documentation

The documentation is automatically generated from the content of the docs directory and from the docstrings of the public signatures of the source code. The documentation is updated and published as a Github project page automatically as part of each release.

Releasing

Trigger the Draft release workflow (press Run workflow). This will update the changelog & version and create a GitHub release which is in Draft state. Find the draft release from the GitHub releases and publish it. When a release is published, it'll trigger the release workflow, which creates the PyPI release and deploys the updated documentation.

Pre-commit

Pre-commit hooks run all the auto-formatters (e.g. black, isort), linters (e.g. mypy, flake8), and other quality checks to make sure the changeset is in good shape before a commit/push happens.

You can install the hooks with (runs for each commit):

pre-commit install

Or if you want them to run only for each push:

pre-commit install -t pre-push

Or if you want to run all checks manually for all files:

pre-commit run --all-files

This project was generated using the wolt-python-package-cookiecutter template.
alterize
No description available on PyPI.
altern
A Python package to interact with the Altern API.

Install

The easiest way to install the altern package is either via pip:

$ pip install altern

or manually, by downloading the source and running the setup.py script:

python setup.py install

License

MIT
alternat
alternat: Automate your image alt-text generation workflow.

Resources

Homepage and Reference: https://alternat.readthedocs.io/

Description

alternat automates the image alt-text generation workflow by offering ready-to-use methods for downloading images ("Collection" in alternat lingo) and then generating alt-text. alternat features are grouped into two tasks: Collection and Generation.

Collection

Collection offers convenience methods to download images. It uses puppeteer (headless Chrome) to automate the website crawling and image download process.

Generation

Generation offers convenience methods to generate alt-texts, via the following drivers:

- Azure API - Uses the Azure API for image captioning and OCR. Note that Azure is a paid service.
- Google API - Uses the Google API for image captioning and OCR. Note that Google is a paid service.
- Open Source - Uses free open-source alternatives for OCR and image captioning.

Supported video and image file formats

jpeg, jpg and png are supported.

Installation

Installation using Docker

1. Download and install Docker Desktop for Mac using this link: docker-desktop
2. Clone this repo: https://github.com/keplerlab/alternat.git
3. Change your directory to your cloned repo.
4. Open a terminal and run the following commands:

cd <path-to-repo> // you need to be in your repo folder
docker-compose build

5. Start the docker container using this command:

docker-compose up

6. In a new terminal window, open a terminal inside the docker container for running alternat from the command line:

docker-compose exec alternat bash

You can use this command line to execute the collect or generate command-line application like this.

Installation from PyPI, source and Anaconda Python

Please refer to the OS-specific installation guides for macOS, ubuntu and Windows respectively.

Running the generate task using the command line:

If you want to generate alternate text for any image, or for a folder containing multiple images, you can use the command-line option, which we call the generation stage. To run the generation stage alone, use the following commands:

# To run a single file, results will be collected under "results/generate"
# The image extensions supported are: .jpg, .jpeg, .png.
python app.py generate --output-dir-path="./results" --input-image-file-path="./sample/images_with_text/sample1.png"

or

# To run for an entire directory, results will be collected under "results/generate"
# The image extensions supported are: .jpg, .jpeg, .png.
python app.py generate --input-dir-path="./sample/images_with_text" --output-dir-path="./results"

or

# To generate alt-text using a specific driver (like azure, google or open source)
# Do not forget to add the credentials to their respective config files when using azure and google
# azure needs SUBSCRIPTION_KEY and ENDPOINT URL
# google needs ABSOLUTE_PATH_TO_CREDENTIALS_FILE (a credential json file)
python app.py generate --output-dir-path="./results" --input-image-file-path="./sample/images_with_text/sample1.png" --driver-config-file-path="./sample/generator_driver_conf/azure.json"

Sample images are located at sample/images and sample/images_with_text.

Running the collect task using the command line:

The first stage is called the collection stage. It can be used to crawl and download images from any website or website URL. To run the collection stage, use the following commands (a combined collect-then-generate example is sketched at the end of this section):

Use case: Download images from a single page

# To run the collection
python app.py collect --url=<WEBSITE_URL> --output-dir-path=./DATADUMP

Use case: Download images recursively for a given site

# To run the collection
python app.py collect --download-recursive --url=<WEBSITE_URL> --output-dir-path=./DATADUMP

Known Issues / Troubleshooting

Please refer to the FAQ/Troubleshooting section inside the alternat documentation, or raise a Github issue.

Attributions

- For open-source OCR we are using the EasyOCR project (https://github.com/JaidedAI/EasyOCR) by Rakpong Kittinaradorn.
- For open-source caption generation we are using model training and inference scripts based on the method at https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning by Sagar Vinodababu.
- For web crawling we are using an apify wrapper over the puppeteer library (https://apify.com/).
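Putting the two stages together, a typical end-to-end run first collects images from a site and then generates alt-text for the downloaded folder. The URL and directory names below are placeholders (adjust --input-dir-path to wherever the collected images actually land):

# 1. Crawl a page and download its images into ./DATADUMP
python app.py collect --url=https://example.com --output-dir-path=./DATADUMP

# 2. Generate alt-text for everything that was downloaded
python app.py generate --input-dir-path=./DATADUMP --output-dir-path=./results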
alternative-lib
No description available on PyPI.
alternatives
***********************
Alternatives api
***********************

Alternatives basics
---------------------

Alternatives api is just syntax sugar for selecting alternative variants from some set of values.

The main thing is the Alternative class, which implements a late call on callbacks. This allows boolean logic to be applied to callbacks without their being executed::

    a1 = Alternative(lambda: a)
    b1 = Alternative(lambda: b)

    a_b = a1 | b1

    assert isinstance(a_b, Alternative)
    assert bool(a_b) is (a or b)

Also, as the example above shows, an alternative can provide a python truth value when needed, by calling the callbacks and evaluating the boolean expression.

Alternatives usage
---------------------

How it may be used::

    package(one_of({
        os_centos() or os_windows(): 'php',
        os_ubuntu(): 'php5'
    }))

Here one of the values will be selected: php or php5. An Alternative is an object; that's why we can use it as a dictionary key.

In the example above you can see the one_of() method:

.. automodule:: pywizard.api
   :members: one_of
   :imported-members:

Another style of evaluating alternatives is all_of:

.. automodule:: pywizard.api
   :members: all_of
   :imported-members:

And nobody restricts you from creating your own alternatives selection style. Use the source of one_of and all_of as a reference.

Checks declaration
-----------------------------

*Check* is just a python function that returns a boolean, with an annotation added to it::

    @alternative
    def os_linux():
        """os_linux()

        Checks if it is a linux-like os.
        """
        info = os_info()
        return 'linux' in info['platform']

.. note::

   As the function is annotated, there is some extra work needed to provide correct documentation for it. @alternative handles copying __doc__ transparently for your function. But you should take care to specify the correct method signature in the first line of the docstring.