package | package-description |
---|---|
admk | Algebraic Dynamics Monge-Kantorovich solver for the solution of Optimal Transport on Graphs. It contains (part of) the work described in "Fast Iterative Solution of the Optimal Transport Problem on Graphs". Consider citing this paper if you find the code inside this repository useful. Free software: MIT license. Installation: pip install admk. You can also install the in-development version with: pip install https://github.com/enricofacca/admk/archive/main.zip. Documentation: https://admk.readthedocs.io/. Development: To run all the tests run: tox. Note, to combine the coverage data from all the tox environments run: Windows: set PYTEST_ADDOPTS=--cov-append, then tox; Other: PYTEST_ADDOPTS=--cov-append tox. Changelog: 0.1.1 (2022-06-10) Working code. 0.1.0 (2022-06-10) Working code. 0.0.0 (2022-01-12) First release on PyPI. |
admmsolver | A fast and general ADMM solver. Installation: pip3 install admmsolver. Implementation note. How to develop using VS Code + docker: Install the Remote Development extension of VS Code. Clone the repository and open the cloned directory in VS Code. You will be asked if you want to reopen the directory in a container. Say yes! (The first time you start a docker container, a docker image is built. This takes a few minutes.) Once you log in to the container, all required Python packages are pre-installed and PYTHONPATH is set to src. To run all unit tests and static type checks by mypy, use ./bin/runtests.sh. Note: The cloned git repository on your host file system is mounted on the working directory in the container. Files created in the container will be persisted even after the container stops. The full list of what is installed in the container can be found in .devcontainer/Dockerfile. If you customize the Dockerfile and build an image for debugging, execute the command shown below on your host machine: docker build -f .devcontainer/Dockerfile . |
admobilizeapis | Protobuf message definition for AdMobilize services |
admobilize-malos | A simple Python coroutine-based driver for communicating with malos-vision. License: This application follows the GNU General Public License, as described in the LICENSE file. Installing: The package is available on PyPI, so you can easily install it via pip: $ pip install matrix-io-malos. Running the CLI client: The library includes a simple command line client to start reading data from your MALOS service right away:
# Get the malosclient help screen
$ malosclient --help
# Get IMU data to STDOUT from a locally running MALOS service
$ malosclient IMU
# Get HUMIDITY data to STDOUT from a remotely running MALOS service
$ malosclient -h 192.168.0.100 HUMIDITY
# Get FACE detection data using a serialized driver config file
$ malosclient --driver-config-file ~/driver_config.proto VISION
Using the MalosDriver: The MALOS driver works as an async generator, so in your code
you can do the following:
import asyncio
import sys

from matrix_io.malos.driver import IMU_PORT, UV_PORT
from matrix_io.proto.malos.v1 import driver_pb2
from matrix_io.proto.malos.v1 import sense_pb2
from matrix_io.malos.driver import MalosDriver

async def imu_data(imu_driver):
    async for msg in imu_driver.get_data():
        print(sense_pb2.Imu().FromString(msg))
        await asyncio.sleep(1.0)

async def uv_data(uv_driver):
    async for msg in uv_driver.get_data():
        print(sense_pb2.UV().FromString(msg))
        await asyncio.sleep(1.0)

async def status_handler(driver):
    type_mapping = {
        driver_pb2.Status.MESSAGE_TYPE_NOT_DEFINED: "Not Defined",
        driver_pb2.Status.STARTED: "Started",
        driver_pb2.Status.STOPPED: "Stopped",
        driver_pb2.Status.CONFIG_RECEIVED: "Config Received",
        driver_pb2.Status.COMMAND_EXECUTED: "Command Executed",
        driver_pb2.Status.STATUS_CRITICAL: "Critical",
        driver_pb2.Status.STATUS_ERROR: "Error",
        driver_pb2.Status.STATUS_WARNING: "Warning",
        driver_pb2.Status.STATUS_INFO: "Info",
        driver_pb2.Status.STATUS_DEBUG: "Debug",
    }
    async for msg in driver.get_status():
        print(type_mapping[msg.type])
        if msg.uuid:
            print("UUID: {}".format(msg.uuid))
        if msg.message:
            print("MESSAGE: {}".format(msg.message))
        await asyncio.sleep(1.0)

# Driver configuration
driver_config = driver_pb2.DriverConfig()

# Create the drivers
imu_driver = MalosDriver('localhost', IMU_PORT)
uv_driver = MalosDriver('localhost', UV_PORT)

# Create loop and initialize keep-alive
loop = asyncio.get_event_loop()
loop.run_until_complete(imu_driver.configure(driver_config))
loop.run_until_complete(uv_driver.configure(driver_config))
loop.create_task(imu_driver.start_keep_alive())
loop.create_task(uv_driver.start_keep_alive())

# Initialize data and error handlers
loop.create_task(imu_data(imu_driver))
loop.create_task(uv_data(uv_driver))
loop.create_task(status_handler(imu_driver))
loop.create_task(status_handler(uv_driver))

try:
    loop.run_forever()
except KeyboardInterrupt:
    print('Shutting down. Bye, bye !', file=sys.stderr)
finally:
    loop.stop()
    asyncio.gather(*asyncio.Task.all_tasks()).cancel()
    loop.run_until_complete(loop.shutdown_asyncgens())
    loop.close()

Who can answer questions about this library? Heitor Silva <[email protected]>, Maciej Ruckgaber <[email protected]>. More Documentation… |
adm-osc | ADM-OSC: An industry initiative for the standardization of Object Based Audio (OBA) positioning data in live production ecosystems, by implementing the Audio Definition Model (ADM) over Open Sound Control (OSC). Project Originators: L-Acoustics, FLUX::SE, Radio-France. Project Contributors: d&b Audiotechnik, DiGiCo, Lawo, Magix, Merging Technologies, Meyer Sound, Steinberg. Context: Immersive audio is gaining ground in different industries, from music streaming to gaming, from live sound to broadcast. ADM, or Audio Definition Model, is becoming a popular standard metadata model in some of these industries, with serial ADM used in broadcast or ADM BWF or XML files used in the studio. Motivation and goals: To facilitate the sharing of audio object metadata between a live ecosystem and a broadcast or studio ecosystem. To define a basic layer of interoperability between Object Editors and Object Renderers while not aiming at replacing more complete manufacturer-specific protocols or grammars. To define a direct translation of the most relevant ADM Object Properties onto a communication protocol widely used in the live industry, OSC. Keeping the grammar scope aligned with the ADM properties. Share this proposal with the EBU so they can become a relay, publish and support this initiative. Extend this small grammar to more ADM properties (beds, etc.) in the future. Approach: Bijective mapping of the Object subset of ADM with a standard OSC grammar. Why OSC? Lightweight network protocol. Easy to implement. Human readable. Supports wildcards and bundles. Specification: Open Sound Control website. Available in a plethora of professional audio hardware and software devices. General principles: Sender (client): Object Editor sending positioning data to one or more receivers. Position data is always normalized. Receiver (server): Handles the (optional) local scaling of normalized data: x, y, z, distance. The receiver can be a DAW, an ADM renderer, an object editor, or a bridge (ADM-OSC <-> sADM). Current status: The current dictionary covers most Object properties from the Audio Definition Model.
A more complete dictionary is being discussed to cover the remaining parts of the Audio Definition model.
An OSC Live test tool (talker and listener) is now available. Current Specification: see the repository. Current Discussions: see the issues. Current development & test tools: Specifications Tester Desktop application (Jose Gaudin / Meyer Sound), download from the resources directory. Validator, Test and Stress Test Python Module (Gael Martinet / FLUX:: SE): the adm_osc module is available to install through pip: python3 -m pip install adm_osc. Quick examples:
from adm_osc import OscClientServer
# create a basic client/server that implements basic ADM-OSC communication with stable parameters
# + command monitoring and analysis
cs = OscClientServer(address='127.0.0.1', out_port=9000, in_port=9001)
# send some individual parameters
cs.send_object_position_azimuth(object_number=1, v=-30.0)
cs.send_object_position_elevation(object_number=1, v=0.0)
cs.send_object_position_distance(object_number=1, v=2.0)
# or pack them
cs.send_object_polar_position(object_number=1, pos=[-30.0, 0.0, 2.0])
# in cartesian coordinates
cs.send_object_cartesian_position(object_number=1, pos=[-5.0, 8.0, 0.0])
# see documentation for full list of available functions
# when receiving an adm osc command its analysis will be printed on the command output window
#
# e.g.
#
# >> received valid adm message for obj :: 1 :: gain (0.7943282127380371)
# >> received valid adm message for obj :: 1 :: position aed (20.33701515197754, 0.0, 0.8807612657546997)
# >> received valid adm message for obj :: 1 :: position xyz (-0.2606865465641022, 0.8273822069168091, 0.0)
# >>
# >> ERROR: unrecognized ADM address : "/adm/obj/1/bril" ! unknown command "/bril/"
# >> ERROR: arguments are malformed for "/adm/obj/1/gain :: (1.4791083335876465,)":
# >> argument 0 "1.4791083335876465" out of range ! it should be less or equal than "1.0"

from adm_osc import TestClient
# create a test client, assume default address (local: '127.0.0.1')
# the test client can be used to test how the receiver will handle all kinds of parameters and parameter value ranges
sender = TestClient(out_port=9000)
# all stable parameters for a specific object
sender.set_object_stable_parameters_to_minimum(object_number=1)
sender.set_object_stable_parameters_to_maximum(object_number=1)
sender.set_object_stable_parameters_to_default(object_number=1)
sender.set_object_stable_parameters_to_random(object_number=1)
# all stable parameters for a range of objects
sender.set_objects_stable_parameters_minimum(objects_range=range(1, 64))
sender.set_objects_stable_parameters_maximum(objects_range=range(1, 64))
sender.set_objects_stable_parameters_default(objects_range=range(1, 64))
sender.set_objects_stable_parameters_random(objects_range=range(1, 64))
# all stable parameters for all objects
sender.set_all_objects_stable_parameters_minimum()
sender.set_all_objects_stable_parameters_maximum()
sender.set_all_objects_stable_parameters_default()
sender.set_all_objects_stable_parameters_random()
# see documentation for full list of available functions

from adm_osc import StressClient
# create a stress client, assume default address (local: '127.0.0.1')
# the stress client will send a huge amount of data to stress test the receivers
sender = StressClient(out_port=9000)
# do stress test in cartesian coordinates
sender.stress_cartesian_position(number_of_objects=64, duration_in_second=60.0, interval_in_milliseconds=10.0)
# do stress test in polar coordinates
sender.stress_polar_position(number_of_objects=64, duration_in_second=60.0, interval_in_milliseconds=10.0)

Full documentation: see the source directory. Currently supported in: SPAT Revolution (FLUX::SE), L-ISA Controller (L-Acoustics), Ovation (Merging Technologies), Nuendo
(Steinberg), SpaceMap Go (Meyer Sound), QLAB 5 (Figure 53), Space Controller (Sound Particles). |
admt-distributions | No description available on PyPI. |
adnap | ADNAP: Reverse engineer the Panda dynamics model. Install: pip install adnap. Requirements: The dependency panda-model requires the POCO C++ libraries and Eigen3 to be installed. On Ubuntu install them by running: sudo apt-get install libpoco-dev libeigen3-dev. Usage: Point the environment variable to the libfranka shared library downloaded with panda-model: export PANDA_MODEL_PATH=<path-to-libfrankamodel.so>. Run optimization with 10 random samples from the Panda state-space and save results in params.npy: adnap-optimize -n 10 -o params.npy. Evaluate the optimized physical parameters against the shared library on 1000 random samples: adnap-evaluate -n 1000 params.npy |
adnar-scraper | No description available on PyPI. |
adnbidder | A simple client for controlling bidding in the Adnuntius platform |
adnd2e-combat-simulator | Summary: The AD&D Second Edition Combat Simulator will run simple simulated battles to
determine the statistical likelihood of success or failure by the party. The
simulator does not account for player creativity and uses a very simple method
to determine how the battle will go. Usage: First create a combatants.yaml file with the players and monsters
information. The example combatants.example.yaml file included in the package
is a good place to start. Next simulate the war: battle [BATTLES]. The BATTLES argument indicates how many times to simulate the battle. Default
is 1. Configuration: Add all details about the combatants into the combatants.yaml file (a hypothetical sketch appears at the end of this entry). The
example combatants.example.yaml illustrates the syntax. ac: The values in the AC dictionary are added together to determine the AC. For
example a shield would have an AC value of 1 because it reduces AC by 1.
Studded leather would have a value of 3 because it gives AC 7. If a
combatant had both (3 + 1), their AC would be 6. Default is no modifier or AC
10. attack: This list enumerates the attacks that the combatant will use. The values are
the names of the attacks in the attacks section. Each combatant uses all of
their attacks each round. For example a monster with an attack list of “claw”,
“claw”, “bite” would make all 3 attacks in a single round. attacks: This list contains all the possible attacks a combatant might use. damage: This can either be a string or a dictionary. If it’s a string it applies to
targets of all sizes. If it’s a dictionary the size of the target is mapped to
a damage string. tohit: The values in the To Hit dictionary are added together to determine the total
modifier for the to hit roll. For example if a fighter specialized in a bastard
sword and had a magical bastard sword +1, the 1 from specialization and 1 from
magic would be added to the d20 die roll. Default is no modifier. rof: Rate of fire can be a number of attacks/shots per round (e.g. 2 or 3) or a
ratio of attacks/shots per round (2/1 or 3/2). Default is 1/1. qty: The number of the given type of monster to include in the battle. hd: The hit dice of the monster. This can be a traditional Hit Die number (e.g. 3)
which is the number of 1d8 dice to roll to determine the monster's hit points,
or it can be a traditional Hit Die number with a modifier (e.g. 3 + 2), or it
can just be a description of dice and modifiers (e.g. 1d6 + 2 or 2d8) |
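Below is a hypothetical combatants.yaml sketch assembled from the field descriptions in this entry. The top-level layout and names here are assumptions; consult combatants.example.yaml in the package for the authoritative syntax.
# hypothetical sketch only -- see combatants.example.yaml for the real syntax
players:
  - name: Fighter
    ac:
      studded leather: 3    # gives AC 7
      shield: 1             # reduces AC by 1; together: AC 6
    attack: [bastard sword]
    attacks:
      bastard sword:
        tohit:
          specialization: 1
          magic: 1          # +1 magical weapon
        damage:
          small: 2d4 + 2    # damage may be mapped by target size
          large: 2d8 + 2
        rof: 3/2            # three attacks every two rounds
monsters:
  - name: Orc
    qty: 4
    hd: 1d8 + 1
    attack: [claw, claw, bite]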
adn-gocd-cli | Example Package: This is a simple example package. You can use Github-flavored Markdown to write your content. |
adnipy | adnipy: Process ADNI study data with adnipy. Adnipy is a Python package designed for working with the ADNI database.
It also offers some handy tools for file operations. Free software: MIT license. Documentation: https://adnipy.readthedocs.io. Credits: This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template. History: 0.0.1 (2019-09-05) First release on GitHub. First release on PyPI. 0.1.0 (2019-10-25) Improved documentation. Added pandas dataframe class extension for ADNI |
adnmtf | NMTF: Non-Negative Matrix and Tensor Factorizations. Developed in collaboration with Advestis (Github). NMF Example:
from adnmtf import NMF
import numpy as np

w = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]])
h = np.array([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6], [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]])
m0 = w.dot(h)
my_nmfmodel = NMF(n_components=2)
estimator_ = my_nmfmodel.fit_transform(m0)
estimator_ = my_nmfmodel.predict(estimator_)

In this example, the matrix to be factorized is generated by the dot product of W (a 6x2 matrix with rows [1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]) and H (a 2x6 matrix with rows [0.1, 0.2, 0.3, 0.4, 0.5, 0.6] and [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]). NMF instantiates an NMF class with 2 components. fit_transform calls the functions below in the given order: nmtf_base module: non_negative_factorization, nmf_init, r_ntf_solve; nmtf_core module: ntf_solve, ntf_solve_simple, ntf_update. predict derives from fit_transform outputs ordered sample and
feature indexes for future use in ordered heatmaps. predict calls nmf_predict and build_clusters in the nmtf_base module. NTF Example:
from adnmtf import NTF
import pandas as pd

DATA_PATH = ...
df = pd.read_csv(DATA_PATH)
m0 = df.values
n_blocks = 5
my_ntfmodel = NTF(n_components=5)
estimator_ = my_ntfmodel.fit_transform(m0, n_blocks)
estimator_ = my_ntfmodel.predict(estimator_)

In this example, the tensor to be factorized is read from the file data_ntf.csv.
The tensor has 5 layers in the 3rd dimension and is formatted as a table
with 5 blocks concatenated horizontally. NTF instantiates an NTF class with 5 components. fit_transform calls the functions below in the given order: nmtf_base module: non_negative_tensor_factorization, ntf_init, r_ntf_solve; nmtf_core module: ntf_solve, ntf_solve_simple, ntf_update. predict derives from fit_transform outputs ordered sample and
feature indexes for future use in ordered heatmaps. predict calls nmf_predict and build_clusters in the nmtf_base module. Articles: Peer-reviewed articles: (researchgate) A Tale of Two Matrix Factorizations; (researchgate) Fast Local Algorithms for Large Scale Nonnegative Matrix and Tensor Factorizations; (nature) Learning the parts of objects by non-negative matrix factorization. Blog articles: (Medium) Using Non-negative matrix factorization to classify companies; (offconvex) Tensor Methods in Machine Learning |
adnotatio-server | No description available on PyPI. |
ad-notify | # notify
"notify" is an in-house library for making notifications easy. Currently, Email notification is supported.
# DEMO
none
# Features
none
# Requirement
Python 3.6.8
# Usage
none
# Note
none
# Author
Toshiaki Kosuga, Aderans Internal Information System Department
# License
“ecbeing-downloader” is under the [MIT license](https://en.wikipedia.org/wiki/MIT_License).
“ecbeing-downloader” is Confidential. |
adnovum-test-adn-pkg | Example Package: This is a simple example package. You can use Github-flavored Markdown to write your content. |
adnpy | ADNpy aims to be an easy-to-use Python library for interacting with the App.net API. Installation: To install ADNpy, simply: $ pip install adnpy. Documentation is available at http://adnpy.readthedocs.org/. Quick Start: In order to use ADNpy, you'll need an access token. If you don't already have one, first create an app, and then generate an access token for your app.
import adnpy

adnpy.api.add_authorization_token(<Access Token Here>)

# Create a post
post, meta = adnpy.api.create_post(data={'text': 'Hello App.net from adnpy!'})

# Take a look at recent checkins
posts, meta = adnpy.api.get_explore_stream('checkins')
for post in posts:
    print post

# You can even paginate through checkins using the cursor method.
# Cursors will obey rate limits (by blocking until retries are
# permitted), and will allow you to page through the entire stream.
for post in adnpy.cursor(adnpy.api.get_explore_stream, 'checkins'):
    print post |
adns | adns-python is a Python module that interfaces to the adns asynchronous resolver library. http://www.gnu.org/software/adns/ |
adns-python | adns-python is a Python module that interfaces to the adns asynchronous
resolver library. http://www.gnu.org/software/adns/ |
adn-tools | No description available on PyPI. |
adn-torch | No description available on PyPI. |
adnuntius | Interface and tools for using the Adnuntius API |
adnuntius-bidder | Failed to fetch description. HTTP Status Code: 404 |
ado | UNKNOWN |
ado2hugo | ADO2Hugo: With this program you will be able to export Azure DevOps wikis to Hugo. Azure DevOps offers two kinds of wiki: Project (managed by the platform) or "Wiki as code" (markdown files published directly from a branch in your repository). Currently, only wiki projects are supported, and it does not make sense to support the "wiki as code" option. The program uses the Azure DevOps API to iterate over an organization's projects and, in accordance with Hugo's folder structure, exports all wiki pages with their corresponding attachments. I have used the Geekdoc theme. Installation: pip install ado2hugo. Usage: ado2hugo -h
usage: ado2hugo [-h] [--organization ORGANIZATION] [--pat PAT] [--project PROJECT] [-v] site_dir

positional arguments:
  site_dir              Site directory

optional arguments:
  -h, --help            show this help message and exit
  --organization ORGANIZATION
                        Organization
  --pat PAT             Personal access token
  --project PROJECT     Project name
  -v, --verbose         Verbose

The ORGANIZATION and PAT options can also be set as environment variables (see the example after this entry). ado2hugo --organization <YOUR_ORGANIZATION> --pat <YOUR_PAT> <YOUR_SITE_DIRECTORY>. Be careful, because before its execution this program deletes all files in the /static and /content folders of the supplied path. Development: You must run the following commands to develop locally: Create a pipenv local environment with pipenv install --dev. Install pre-commit hooks with pre-commit install. For executing __main__.py, you must use python -m src.ado2hugo, so relative imports will work; if you use python __main__.py you will receive the error ImportError: attempted relative import with no known parent package,
more information at https://napuzba.com/a/import-error-relative-no-parent |
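A quick environment-variable invocation sketch; the exact variable names are an assumption based on the option names above, since the entry only states that ORGANIZATION and PAT can come from the environment:
# variable names are assumed, not confirmed by the docs
export ORGANIZATION=my-organization
export PAT=xxxxxxxxxxxx
ado2hugo -v ./my-hugo-site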
ado-asana-sync | ado-asana-sync: This project aims to synchronize work items between Azure DevOps (ADO) and Asana. It's currently in development and not ready for use. Breaking changes will occur as needed. How to use: Get the latest container image from the Github Container Registry. Configure the environment variables with the relevant values: ADO_PAT - Your Personal Access Token for ADO to access the work items. ADO_URL - The full URL of your Azure DevOps instance. ASANA_TOKEN - Your Personal Access Token for Asana to access the work items. ASANA_WORKSPACE_NAME - Name of the Asana workspace to sync with. CLOSED_STATES - Comma separated list of states that will be considered closed. THREAD_COUNT - Number of projects to sync in parallel. Must be a positive integer. SLEEP_TIME - Duration in seconds to sleep between sync runs. Must be a positive integer. SYNCED_TAG_NAME - Name of the tag in Asana to append to all synced items. Must be a valid Asana tag name. Run the container with the configured environment variables (see the sketch after this entry). The application will start syncing work items between ADO and Asana based on the configured settings. Development: Commit message style: This repo uses Conventional Commits to ensure the build numbering is generated correctly. Manual testing: To test the application manually, you can use the following steps: Create a new ADO work item and ensure it is synced to Asana. Rename the Asana task and ensure it is reverted back to the ADO name. Rename the ADO task and ensure it is synced to Asana. Remove the Synced tag from an item in Asana and ensure it is replaced. Delete the synced tag from the Asana workspace and from the appdata.json file and ensure it is re-created and assigned to all synced tasks. Mark an Asana task as complete and ensure it is re-opened. Mark an ADO task as complete and ensure it is marked as complete in Asana. Re-open an ADO task and ensure it is re-opened in Asana. Reference: ADO: azure-devops PyPi, azure-devops GitHub, azure-devops API reference, azure-devops samples. Asana: Asana PyPi, Asana GitHub, Asana API Reference |
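As referenced above, a minimal sketch of running the container with the documented environment variables; the image path under the registry is an assumption, since the entry only names the GitHub Container Registry:
# image path is hypothetical -- check the project's GitHub Container Registry page
docker run -d \
  -e ADO_PAT="<your-ado-pat>" \
  -e ADO_URL="https://dev.azure.com/<your-org>" \
  -e ASANA_TOKEN="<your-asana-token>" \
  -e ASANA_WORKSPACE_NAME="My Workspace" \
  -e CLOSED_STATES="Closed,Removed,Done" \
  -e THREAD_COUNT=4 \
  -e SLEEP_TIME=300 \
  -e SYNCED_TAG_NAME="synced" \
  ghcr.io/<owner>/ado-asana-sync:latest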
adobe-aam | Adobe Audience Manager - Python Extension: This is a Python wrapper for the Adobe Audience Manager API. To get started: Generate a JWT Authentication using Adobe IO. This package requires you to create a .json document with the following credential details: client ID, client secret, tech account ID, and organization ID. In a separate file, you also need to generate a public/private key pair. credentials.json:
{"client_id": "...", "client_secret": "...", "tech_acct_id": "...", "org_id": "..."}
Once you have these documents, you can install the package and log in. Terminal: pip install adobe_aam. Python:
import adobe_aam as aam
aam.Login('path/to/credentials.json', 'path/to/private.key')
Your authentication token should be tied to a Product Profile, which controls the actions you can execute and the objects on which you can act. If you are unable to perform an action supported by this package, the error is likely due to a permissions issue within the credentials setup. Here are some examples. Python:
# Get traits by folder and sort
aam.Traits.get_many(folderId=12345, sortBy='createTime', descending=True)
# Get trait by sid
aam.Traits.get_one(sid=12345)
# Get traits by integration code and simplify resulting dataframe
aam.Traits.get_many(ic='code', condense=True)
# Get trait limits of account
aam.Traits.get_limits()
# Create traits from csv
aam.Traits.create_from_csv('path/to/traits_to_create.csv')
If you're new to Python and want to output the results of an AAM API call, you can try something like the following. Python:
import pandas as pd
output = aam.Traits.get_one(sid=12345)
output.to_csv('path/to/your_aam_output.csv')
Coverage: Every standard API call for AAM can be found on Swagger.
Traits: Create x, Get x, Update x, Delete x
Segments: Create x, Get x, Update x, Delete x
Trait Folder: Get x
Segment Folder: Get x
Destinations: Create -, Get -, Update -, Delete -
Derived Signals: Create -, Get -, Update -, Delete -
Datasources: Create -, Get -, Update -, Delete -
Custom reporting will be added according to the roadmap. Examples:
# Get traits trends for all SIDs in a folder
aam.Reports.traits_trend(startDate="2021-02-21", endDate="2021-02-23", folderId=12345)
# Get traits trends for one SID
aam.Reports.traits_trend(startDate="2021-02-21", endDate="2021-02-23", sid=[12345]) |
adobe-analytic-API-V2.0-azure | Adobe Analytics Python Class: Download Reports data utilising the Adobe.io version 2.0 API locally. To integrate with Cloud (Azure), please check Integrate with Azure. Authentication methods supported by the package: 1. JWT 2. OAuth (tested only through Jupyter Notebook!). Authentication via JSON Web Token (JWT aka Service Account): We're going to use JWT aka Service Account as the method for authentication since it's designed for machine-to-machine communication. As such, authentication can be completely automated on platforms such as Azure after it's built. Compare this to OAuth 2.0 based authentication, which requires user input at some interval. You might want the user to authenticate from time to time, but my goal is to build a data ingestion pipeline that doesn't require any user interaction once it's built. JWT Requirements & Adobe.io access: In order to run the package, first you need to gain access to a service account from Adobe.io or request an existing certificate from Principle Publisher. The method used is JWT authentication. More instructions on how to create the integration at: https://www.adobe.io/authentication/auth-methods.html#!AdobeDocs/adobeio-auth/master/JWT/JWT.md. To obtain JWT credentials from the Adobe Developer Console: In Projects > Credential Details > Get the Client ID and Client Secret. In Projects > Credential Details > Generate a public/private keypair. When you click the button you'll download a zip file that contains a public key file and a private key file. You can open these in any text editor to see what they look like. Keep the private key file handy, we'll refer to it later in our Python code. Or you can request a JWT certificate from Principle Publisher. Sample certificate:
{'CLIENT_SECRET': 'xxxx', 'ORG_ID': 'xxxx@AdobeOrg', 'API_KEY': 'xxxxx', 'TECHNICAL_ACCOUNT_ID': '[email protected]', 'TECHNICAL_ACCOUNT_EMAIL': '[email protected]', 'PUBLIC_KEYS_WITH_EXPIRY': {'xxxxxx': 'mm/dd/yyyy'}}
After you have completed the integration, you will find the following information available: Organization ID (ORG_ID): It is in the format of <organisation id>@AdobeOrg. Technical Account ID (TECHNICAL_ACCOUNT_ID): <tech account id>@techacct.adobe.com. Client ID (API_KEY): Like a username for the API; information is available on completion of the Service Account integration. Client Secret (CLIENT_SECRET): Like a password for the API; information is available on completion of the Service Account integration. Account ID (TECHNICAL_ACCOUNT_ID): Instructions on how to obtain it at https://youtu.be/lrg1MuVi0Fo?t=96. Report suite (GLOBAL_COMPANY_ID): Report suite ID from which you want to download the data.
Usually it is 'canada5'. Private Key: Like a signature for your password. JWT Payload: Some specific details that Adobe wants you to show them to trade for the Access Token. Make sure that the integration is associated with an Adobe Analytics product profile that is granted access to the necessary metrics, dimensions and segments. Package installation: pip install requirements.txt. Samples: Initial setup - JWT: After you have configured the integration and downloaded the package, the following setup is needed:
ADOBE_ORG_ID = os.environ['ADOBE_ORG_ID']
SUBJECT_ACCOUNT = os.environ['SUBJECT_ACCOUNT']
CLIENT_ID = os.environ['CLIENT_ID']
CLIENT_SECRET = os.environ['CLIENT_SECRET']
PRIVATE_KEY_LOCATION = os.environ['PRIVATE_KEY_LOCATION']
GLOBAL_COMPANY_ID = os.environ['GLOBAL_COMPANY_ID']
REPORT_SUITE_ID = os.environ['REPORT_SUITE_ID']
Next initialise the Adobe client:
aa = analytics_client(
    adobe_org_id=ADOBE_ORG_ID,
    subject_account=SUBJECT_ACCOUNT,
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    account_id=GLOBAL_COMPANY_ID,
    private_key_location=PRIVATE_KEY_LOCATION)
aa.set_report_suite(report_suite_id=REPORT_SUITE_ID)
Initial setup - OAuth: Import the package and initiate the required parameters:
import analytics_client
client_id = '<client id>'
client_secret = '<client secret>'
global_company_id = '<global company id>'
Initialise the Adobe client:
aa = analytics_client(
    auth_client_id=client_id,
    client_secret=client_secret,
    account_id=global_company_id)
Perform the authentication:
aa._authenticate()
For a demo notebook, please refer to the Jupyter Notebook - OAuth example. Report Configurations: Set the date range of the report (format: YYYY-MM-DD):
aa.set_date_range(date_start='2019-12-01', date_end='2019-12-31')
To configure specific hours for the start and end date:
aa.set_date_range(date_start='2020-12-01', date_end='2020-12-01', hour_start=4, hour_end=5)
If hour_end is set, then only data up to that hour in the last day will be retrieved instead of the full day. Global segments: To add a segment, you need the segment ID (currently only this option is supported). To obtain the ID, you need to activate the Adobe Analytics Workspace debugger (https://github.com/AdobeDocs/analytics-2.0-apis/blob/master/reporting-tricks.md). Then inspect the JSON request window and locate the segment ID under the 'globalFilters' object. To apply the segment:
aa.add_global_segment(segment_id='s300000938_60d228c474f05e635fba03ff')  # add segment 'SC Labs (E/F)(v12)' to the report request body
Request with 2 metrics and 1 dimension:
aa.add_metric(metric_name='metrics/visits')
aa.add_metric(metric_name='metrics/orders')
aa.add_dimension(dimension_name='variables/mobiledevicetype')
data = aa.get_report()
Output: a table with columns itemId_lvl_1, value_lvl_1, metrics/visits and metrics/averagetimeuserstay (rows such as Other, Tablet, Mobile Phone). Request with 2 metrics and 2 dimensions:
aa.add_metric(metric_name='metrics/visits')
aa.add_metric(metric_name='metrics/averagetimespentonsite')
aa.add_dimension(dimension_name='variables/devicetype')
aa.add_dimension(dimension_name='variables/evar5')
data = aa.get_report_multiple_breakdowns()
Output: Each item in level 1 (i.e. Tablet) is broken down by the dimension in level 2 (i.e. eng, fra). The package downloads all possible combinations; in a similar fashion more dimensions can be added. The result is a table with columns itemId_lvl_1, value_lvl_1, itemId_lvl_2, value_lvl_2, metrics/visits and metrics/averagetimespentonsite. Upload result to Azure Blob Storage: Now, to connect to the Azure blob to upload the result, we must provide the following parameters. You can find them on the "Access keys" page of the Azure blob storage account. To obtain the parameters, open the home page of Azure Portal and select the Azure Blob storage account (stsaebdevca01):
conn_string = os.environ['conn_string']
accountName = os.environ['accountName']
accountKey = os.environ['accountKey']
containerName = os.environ['containerName']
Now we can initiate the blob client and upload our result as a csv into the container:
blob = BlobClient.from_connection_string(conn_str=conn_string, container_name=containerName, blob_name='blob_parent/blob_name')
blob.upload_blob(str(data.to_csv()), overwrite=True)
Unit Test: Run the following to unit test the code:
py.test Adobe-Azure-analytics-api-v2.0/tests/test_core.py
# or
pytest
Next Steps: Integrate with Azure. Connect with Power BI. Issues, Bugs and Suggestions: Known missing features: No support for filtering. No support for top N. No support for custom sorting. |
adobe-analytics | UNKNOWN |
adobe-analytics-api-20 | adobe-analytics-api - An Adobe Analytics API 2.0 library for python. Installation: You can install this through pip: pip install adobe-analytics-api_20. Usage:
from adobe_analytics import api
import authentication as auth

config = {
    'client_id': 'CLIENT_ID',
    'client_secret': 'CLIENT_SECRET',
    'org_id': 'ORG_ID',
    'tech_account': 'TECH_ACCOUNT',
    'keyfile_path': 'KEYFILE_PATH',
    'company_id': 'COMPANY_ID'
}

definition = {
    "reportsuite": "report-suite",
    "start_date": "2019-10-01",
    "end_date": "2019-10-01",
    "dimensions": ["Day", "variables/evar50"],
    "metrics": [{"name": "metrics/orders"}, {"name": "metrics/revenue"}],
    "segments": ["Name|id of a Segment"]
}

jwt_token = auth.getToken(config["org_id"], config["tech_account"], config["client_id"], 3)
print(jwt_token)
jwt = auth.encrypt_jwt(jwt_token, config["keyfile_path"])
print(jwt)
token = auth.authorize(config["client_id"], config["client_secret"], jwt)
print(token)
response = api.report(
    token['access_token'],
    config["client_id"],
    config["company_id"],
    definition["reportsuite"],
    definition["dimensions"],
    definition["metrics"],
    definition["start_date"],
    definition["end_date"],
    segments=definition["segments"])
print(response) |
adobecli | A linux-like shell |
adobe-color-swatch | Adobe Color Swatch. Description: swatch.py is a Python 3 command line interface created to extract Color
Swatch data from .aco files and save them as a simple .csv. It can also
work in reverse and generate a .aco file based on a .csv data file. Installation: Install from the GitHub repository: pip3 install git+https://github.com/kdybicz/adobe-color-swatch. Usage: Extract .aco:
usage: swatch extract [-h] -i INPUT -o OUTPUT [-v]
Extract .aco input file to a .csv output file
optional arguments:
-h, --help show this help message and exit
-i INPUT, --input INPUT
input file
-o OUTPUT, --output OUTPUT
output file
  -v, --verbose  increase output verbosity
Generate .aco:
usage: swatch generate [-h] -i INPUT -o OUTPUT [-v]
generate .aco output file based on .csv input file
optional arguments:
-h, --help show this help message and exit
-i INPUT, --input INPUT
input file
-o OUTPUT, --output OUTPUT
output file
  -v, --verbose  increase output verbosity
Specification: The .aco file format parser and generator were created based on the Adobe Color Swatch File Format Specification.
The script supports both version 1 and 2 of the Color Swatch format. The .csv file uses a custom format (a small illustrative parser appears at the end of this entry):
name,space_id,color
RGB Magenta 16-bit,0,#FF00FF
RGB Magenta 32-bit,0,#FFFF0000FFFF
CMYK Magenta 16-bit,2,#FF00FFFF
CMYK Magenta 32-bit,2,#FFFF0000FFFFFFFF
75% Gray,8,#1D4C
Color space IDs. Supported color spaces:
ID 0 - RGB: supports 16 and 32 bit channels, so accordingly 6 or 12 bytes of color information
ID 1 - HSB: supports 16 and 32 bit channels, so accordingly 6 or 12 bytes of color information
ID 2 - CMYK: supports 16 and 32 bit channels, so accordingly 8 or 16 bytes of color information
ID 8 - Grayscale: supports a 16 or 32 bit channel, so accordingly 2 or 4 bytes of color information
NOT supported color spaces:
ID 3 - Pantone matching system; ID 4 - Focoltone colour system; ID 5 - Trumatch color; ID 6 - Toyo 88 colorfinder 1050; ID 7 - Lab; ID 10 - HKS colors
Validation: To validate that the .aco file generation is working properly I decided on
the following process: export a few default Color Swatches from Adobe Photoshop 2022; extract them to .csv files and make sure the data in those files matches
what is in Adobe Photoshop; generate new .aco files from the .csv acquired in the previous step; compare the original .aco files with the ones regenerated from .csv using:
hexdump examples/utf.aco > utf.aco.hex
hexdump utf-new.aco > utf-new.aco.hex
diff utf.aco.hex utf-new.aco.hex -y
Finally, import the new .aco files into Adobe Photoshop and compare them with the original
Swatches. Notes: I'm aware that the original .aco files contain some additional bytes at the end
of the files; those bytes will not be present in .aco files generated
by the script. These bytes might be related to Custom color spaces,
which are not supported by this script. Nevertheless, I was able to successfully import the generated .aco files back into
Adobe Photoshop and use them in my work! Development: Testing and linting: For all supported environments: tox --parallel. Note: running tests for all supported Python versions requires
the Python interpreters for those versions to be installed. For a particular environment: tox -e py39. For running tests in a development environment:
tox --devenv venv -e py39
. venv/bin/activate
pytest
Local installation: Install the project in editable mode: pip3 install -e . Deployment: Building the packages: ./venv/bin/python setup.py sdist bdist_wheel. Checking if the built packages are valid: twine check dist/*. Uploading to PyPI: twine upload -r pypi dist/* |
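As referenced in the Specification section above, a small illustrative parser for the custom .csv format; it assumes only what the entry states (channel counts per color space, hex color data split evenly across channels) and is not part of the package:
# hypothetical helper, not part of adobe-color-swatch
CHANNELS = {0: 3, 1: 3, 2: 4, 8: 1}  # RGB, HSB, CMYK, Grayscale

def parse_row(row):
    # split a 'name,space_id,color' row into name, space id and channel values
    name, space_id, color = row.rsplit(",", 2)
    space_id = int(space_id)
    digits = color.lstrip("#")
    width = len(digits) // CHANNELS[space_id]  # hex digits per channel
    channels = [int(digits[i:i + width], 16) for i in range(0, len(digits), width)]
    return name, space_id, channels

print(parse_row("CMYK Magenta 16-bit,2,#FF00FFFF"))
# ('CMYK Magenta 16-bit', 2, [255, 0, 255, 255])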
adobo | # Description
adobo is a Python framework for single-cell gene expression data analysis. adobo can be composed into scripts, used in interactive workflows and much more.
# Getting started
Full documentation and tutorials are available here:
* https://oscar-franzen.github.io/adobo/
# Contact, bugs, etc
* Oscar Franzén, <[email protected]> |
adoc | Seehttps://github.com/saalaa/adoc |
adoc-math | // Header
# adoc-math
:toc: macro
// Links
:example: https://github.com/hacker-dom/adoc-math/raw/main/example/adoc-math-example.pdf[Example]
:adoc: https://docs.asciidoctor.org/asciidoc/latest[AsciiDoc]
:markdown: https://daringfireball.net/projects/markdown/[Markdown]
:latex: https://www.latex-project.org[LaTeX]
:adoctor: https://github.com/asciidoctor/asciidoctor[Asciidoctor]
:adoctor-pdf: https://github.com/asciidoctor/asciidoctor-pdf[Asciidoctor-Pdf]
:adoctorjs: https://github.com/asciidoctor/asciidoctor.js[Asciidoctor.js]
:adoc-stem: https://docs.asciidoctor.org/asciidoc/latest/stem/[AsciiDoc STEM]
:adoctor-pdf-stem: https://docs.asciidoctor.org/pdf-converter/latest/stem[Asciidoctor-Pdf STEM]
:mathjax: https://github.com/mathjax/MathJax-src[MathJax]
:katex: https://github.com/KaTeX/KaTeX[KaTeX]
:adoc-math: https://github.com/hacker-dom/adoc-math[adoc-math]
:adoctor-math: https://github.com/asciidoctor/asciidoctor-mathematical[asciidoctor-mathematical]
:amath: http://asciimath.org[AsciiMath]

Use MathJax (Latex or AsciiMath) in your AsciiDoc projects 🤟🚀

toc::[]

## 📝 {example}

## 📝 Installation

adoc-math has zero dependencies! So it's fine to install it globally footnote:[Theoretically, the only time this could cause issues is if you have another package with the name adoc-math (it obviously has to have a different PyPI name, because adoc-math is already taken 😛). But this is not very likely.] 😛

[source,bash]
----
pip3 install --user --upgrade adoc-math
adoc-math-setup # will call `npm i -g mathjax@3` and `npm link`
----

## 📝 Overview

### 🔍 Background

I think of {adoc} as a markup syntax somewhere between {markdown} and {latex}. It originated with a https://github.com/asciidoc-py/asciidoc-py[Python implementation], but afaik that isn't actively developed, and the reference implementation is {adoctor} in Ruby.

{adoc} allows you to write a document and then output it in:

* html ({adoctor})
* pdf ({adoctor-pdf})

and many other formats! There is even an {adoctorjs} version (an automated translation of the Ruby code to JavaScript).

### 🔍 LaTeX

Putting LaTeX equations in places other than a TeX document is not so easy. There are two main libraries for this:

* {mathjax}
** It uses native browser fonts and a lot of Css to replicate {latex} in the browser.
* {katex}
** Similar to {mathjax}, built by Khan Academy.

### 🔍 STEM

STEM stands for Science, Technology, Engineering, Mathematics, basically {latex}. There are two sections in the {adoc} documentation on STEM:

* {adoc-stem}
* {adoctor-pdf-stem}

TLDR:

* In {adoctor} (i.e. Html output), you can include math with `stem:[x+y]`. In the browser, {mathjax} is used to render the math, and frankly, it looks beautiful.
* Since {mathjax} uses browser fonts and Css, it doesn't work in Pdfs. There is an official {adoctor-math} package that provides this support. However, it is extremely quirky, and the output doesn't look very good (see a comparison of {adoc-math} and {adoctor-math} in the {example}).
** Some more references:
*** https://github.com/asciidoctor/asciidoctor-mathematical/issues/45

### 🔍 Architecture

That's where `adoc-math` comes in! I decided on:

* a Python package that searches for naturally-looking latex cells (e.g. `$a+b$`), calls {mathjax} to create an svg, and replaces the cells with an image of the svg

I couldn't use {katex} because only {mathjax} has an Svg output (see https://github.com/KaTeX/KaTeX/issues/375).

Unfortunately, {mathjax} 3 doesn't come with a Node CLI package like https://github.com/mathjax/mathjax-node-cli/[MathJax 2].
So I implemented xref:./adoc_math/d_mathjax_wrapper.js[a wrapper] over the library.

### 🔍 Usage

[cols="2*"]
|===
| Inline cells:
a|
----
$x + y$ [...options]
----
| Block cells:
a|
----
$$ [...options]
x + y
$$
----
|===

For more examples, see the {example} and the snippet after this entry.

## 📝 FAQ

> Why isn't `adoc-math` written in Ruby?

I don't speak Ruby 😞 If you would like to translate this library to Ruby, or at least write an AsciiDoc macro that can get replaced by an image, so we can get rid of the extra metacompilation step, I'd be more than happy to help!

> What about Windows?

I tried to be conscious of non-Posix platforms, but haven't tested it on Windows. Any behavioral discrepancies would be considered valid issues.

> Can I reference a cell, or add a caption to a block cell?

Yes! Check out the {example}.

> It's annoying having to uncomment the source math to edit it.

You can use a `pre-post` pattern. `pre.adoc` will be your source code, and `post.adoc` will be the output of `adoc-math` / input to `asciidoctor(-pdf)?`. Run `cpy pre.adoc post.adoc` before every invocation of `adoc-math`.

> How come inline cells become part of the sentence when they are on a separate line?

In {adoc}, you need to separate two blocks with at least one _empty_ line. 🙂

> Does `adoc-math` work with an Html output?

This first version is geared towards Pdf output. Happy to add more powerful support for Html outputs in the future (e.g., just use the native `stem:[]` macro for Html, so we can use basic {mathjax} with browser fonts and Css instead of svgs).

> Can I use a different font?

{mathjax} currently http://docs.mathjax.org/en/v3.2-latest/output/fonts.html[doesn't provide support for multiple fonts].

> Can I make my math thinner/thicker?

The created svgs have a property called `stroke-width` that can adjust this. Unfortunately, it is currently set to 0, so it is not possible to make it thinner. In theory it should be possible to make it *thicker* by increasing that value. xref:./adoc_math/e_svg_transforming.py[svg_transforming.py] would be the place for that; or create an issue and I'll add it.

## 📝 Debugging

> I get a MODULE_NOT_FOUND error.

MathJax probably cannot be found. Try running `adoc-math-setup`.

> My AsciiMath fractions are too large!

It seems that {amath} interprets fractions in `displaystyle` rather than `textstyle` (`\dfrac{}{}` rather than `\tfrac{}{}` or even `\frac{}{}`, see https://tex.stackexchange.com/a/135395/31626[StackExchange]).

I haven't found a good solution to this yet. If you have any ideas, please let me know! Note that if you have a singleton fraction (`$a/b$ amath`) you can scale it down with `$a/b$ amath, scale = 60%` (or just use `tex`). |
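As mentioned in the Usage section, a tiny .adoc sketch of both cell kinds; it uses only the constructs shown in this entry (the tex/amath qualifiers and the scale option), and placing options on the block's opening line follows the `$$ [...options]` pattern from the usage table:
Some prose with an inline cell $e = m c^2$ tex in the middle.

$$ amath, scale = 60%
sum_(i=1)^n i = (n(n+1))/2
$$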
adodbapi | Project: adodbapi. A Python DB-API 2.0 (PEP-249) module that makes it easy to use Microsoft ADO for connecting with databases and other data sources using either CPython or IronPython. Home page: <http://sourceforge.net/projects/adodbapi>
Features:
* 100% DB-API 2.0 (PEP-249) compliant (including most extensions and recommendations).
* Includes pyunit testcases that describe how to use the module.
* Fully implemented in Python -- runs in Python 2.5+, Python 3.0+ and IronPython 2.6+.
* Licensed under the LGPL license, which means that it can be used freely even in commercial programs subject to certain restrictions.
* The user can choose between paramstyles: 'qmark' 'named' 'format' 'pyformat' 'dynamic'
* Supports data retrieval by column name, e.g.:
for row in myCurser.execute("select name,age from students"):
    print("Student", row.name, "is", row.age, "years old.")
* Supports user-definable system-to-Python data conversion functions (selected by ADO data type, or by column).
Prerequisites: C Python 2.7 or 3.5 or higher and pywin32 (Mark Hammond's python for windows extensions), or IronPython 2.7 or higher (works in IPy2.0 for all data types except BUFFER).
Installation:
* (C-Python on Windows): Install pywin32 ("pip install pywin32") which includes adodbapi.
* (IronPython on Windows): Download adodbapi from http://sf.net/projects/adodbapi. Unpack the zip. Open a command window as an administrator. CD to the folder containing the unzipped files. Run "setup.py install" using the IronPython of your choice.
NOTE: If you do not like the new default operation of returning Numeric columns as decimal.Decimal, you can select other options by the user defined conversion feature. Try:
adodbapi.apibase.variantConversions[adodbapi.ado_consts.adNumeric] = adodbapi.apibase.cvtString
or:
adodbapi.apibase.variantConversions[adodbapi.ado_consts.adNumeric] = adodbapi.apibase.cvtFloat
or:
adodbapi.apibase.variantConversions[adodbapi.ado_consts.adNumeric] = write_your_own_convertion_function
Notes for 2.6.2: The definitive source has been moved to https://github.com/mhammond/pywin32/tree/master/adodbapi. Remote has proven too hard to configure and test with Pyro4. I am moving it to unsupported status until I can change to a different connection method.
What's new in version 2.6: A cursor.prepare() method and support for prepared SQL statements. Lots of refactoring, especially of the Remote and Server modules (still to be treated as Beta code). The quick start document 'quick_reference.odt' will export as a nice-looking pdf. Added paramstyles 'pyformat' and 'dynamic'. If your 'paramstyle' is 'named' you _must_ pass a dictionary of parameters to your .execute() method. If your 'paramstyle' is 'format', 'pyformat' or 'dynamic', you _may_ pass a dictionary of parameters -- provided your SQL operation string is formatted correctly.
What's new in version 2.5: Remote module (works on Linux!) allows a Windows computer to serve ADO databases via PyRO. Server module: PyRO server for ADO. Run using a command like: C:>python -m adodbapi.server (the server has simple connection string macros: is64bit, getuser, sql_provider, auto_security). Brief documentation included; see adodbapi.rtf in the adodbapi/examples folder. New connection method conn.get_table_names() --> list of names of tables in the database. Vastly refactored. Data conversion things have been moved to the new adodbapi.apibase module. Many former module-level attributes are now class attributes.
(Should be more thread-safe.) Connection objects are now context managers for transactions and will commit or rollback. Cursor objects are context managers and will automatically close themselves. Autocommit can be switched on and off. Keyword and positional arguments on the connect() method work as documented in PEP 249. Keyword arguments from the connect call can be formatted into the connection string. New keyword arguments defined, such as: autocommit, paramstyle, remote_proxy, remote_port. *** Breaking change: variantConversion lookups are simplified; the following will raise KeyError: oldconverter = adodbapi.variantConversions[adodbapi.adoStringTypes]. Refactor as: oldconverter = adodbapi.variantConversions[adodbapi.adoStringTypes[0]].
License: LGPL, see http://www.opensource.org/licenses/lgpl-license.php
Documentation: Look at adodbapi/quick_reference.md, http://www.python.org/topics/database/DatabaseAPI-2.0.html, read the examples in adodbapi/examples and look at the test cases in the adodbapi/test directory.
Mailing lists: The adodbapi mailing lists have been deactivated. Submit comments to the pywin32 or IronPython mailing lists. The bug tracker on sourceforge.net/projects/adodbapi may be checked (infrequently). Please use: https://github.com/mhammond/pywin32/issues. A short usage sketch follows this entry. |
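A minimal connect-and-query sketch based on the DB-API 2.0 interface and the features this entry names (context-manager connections and cursors, access by column name); the connection string targets a hypothetical SQL Server database:
import adodbapi

# hypothetical ADO connection string -- adjust provider/server/database for your setup
conn_str = ("Provider=SQLOLEDB.1; Integrated Security=SSPI; "
            "Initial Catalog=School; Data Source=localhost")
with adodbapi.connect(conn_str) as conn:   # connections are context managers (v2.5+)
    with conn.cursor() as cur:             # cursors close themselves on exit
        cur.execute("select name, age from students where age > ?", [18])
        for row in cur.fetchall():
            print("Student", row.name, "is", row.age, "years old.")  # column-name access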
adofaipy | This is a library that makes automating events in ADOFAI levels more convenient. List of Classes:
LevelDict: Initialize with LevelDict(filename, encoding) (encoding is optional; the default is utf-8-sig).
LevelDict.filename : str - The filename of the file from which the LevelDict was obtained.
LevelDict.encoding : str - The encoding of the file from which the LevelDict was obtained.
LevelDict.leveldict : dict - The specified file in the form of nested dictionaries and lists.
LevelDict.angleData : list[float] - A list of all tile angles in the level.
LevelDict.actions : list[Action] - A list of all actions (tile-based events) in the level.
LevelDict.decorations : list[Decoration] - A list of all decorations (including objects and text) in the level.
LevelDict.nonFloorDecos : list[Decoration] - A list of all decorations in the level that are not tied to any particular tile.
LevelDict.settings : Settings - The level settings, as a Settings object.
LevelDict.tiles : list[Tile] - A list of all tiles in the level. (See the Tile class.)
LevelDict.appendTile(self, angle : float) -> None: Adds a single tile to the end of the level.
LevelDict.appendTiles(self, angles : list[float]) -> None: Adds a list of tiles to the end of the level.
LevelDict.insertTile(self, angle : float, index : int) -> None: Adds a single tile to the level before the specified index.
LevelDict.insertTiles(self, angles : list[float], index : int) -> None: Adds a list of tiles to the level before the specified index.
LevelDict.addAction(self, event : Action) -> int: Adds the given action to the level. Returns the index of the event within the tile.
LevelDict.addDecoration(self, event : Decoration) -> int: Adds the given decoration to the level. Returns the index of the event within the tile / within the list of non-floor decorations.
LevelDict.getActions(self, condition : Callable) -> list[Action]: Returns a list of actions in the level that meet the given condition. Returns a list of all actions if condition is not specified.
LevelDict.getDecorations(self, condition : Callable) -> list[Decoration]: Returns a list of decorations in the level that meet the given condition. Returns a list of all decorations if condition is not specified.
LevelDict.removeActions(self, condition : Callable) -> list[Action]: Removes all actions in the level that meet the given condition. Returns a list of removed actions.
LevelDict.removeDecorations(self, condition : Callable) -> list[Decoration]: Removes all decorations in the level that meet the given condition. Returns a list of removed decorations.
LevelDict.popAction(self, tile, index) -> Action: Removes the action at the specified tile at the specified index. Returns the event.
LevelDict.popDecoration(self, tile, index) -> Decoration: Removes the decoration at the specified tile at the specified index. Returns the event.
LevelDict.replaceFieldAction(self, condition : Callable, field : str, new) -> None: Changes the value of "field" to "new" in all actions that meet the given condition.
LevelDict.replaceFieldDecoration(self, condition : Callable, field : str, new) -> None: Changes the value of "field" to "new" in all decorations that meet the given condition.
LevelDict.writeDictToFile(self, leveldict : dict, filename : str): Writes the given dictionary to the specified file. Overwrites the original file if filename is not specified. Use this if you are working with LevelDict.leveldict.
LevelDict.writeToFile(self, filename : str=None) -> None: Writes the level to the specified file. Overwrites the original file if filename is not specified.
Settings: Part of a LevelDict object. The properties of this class are equivalent to the parameters in the settings field of a .adofai file.
Tile: A list of Tiles is contained within a LevelDict object.
Tile.angle : float - The angle that the tile points towards (0 degrees is facing right, 90 degrees is facing upwards).
Tile.actions : list[Action] - A list of actions which are present on that particular tile.
Tile.decorations : list[Decoration] - A list of decorations which are present on that particular tile.
Action: An event that goes on a tile (one with a purple icon). An Action object behaves like a dict. The keys depend on the event type. Check any entry in the actions field of a .adofai file for more information on the fields used by that event type. Action objects are found in a list of actions in a Tile object.
Decoration: A decoration, object decoration, or text decoration (anything found in the decorations menu on the left sidebar). A Decoration object behaves like a dict. The keys depend on the event type. Check any entry in the decorations field of a .adofai file for more information on the fields used by that event type. Decoration objects are found in a list of decorations in a Tile object. If the decoration is not tied to any tile, it is found in the list of non-floor decos. (A short usage sketch follows this entry.)
# Changelog
3.0.1 (2023/12/02): Minor bugfix. Fixed markdown bug on README and CHANGELOG. Removed unnecessary files.
3.0.0 (2023/12/01): Major update. Completely overhauled file structure to use a class-based system. Too much to list! Read the docs for more info.
2.0.3 (2023/09/23): Minor bugfixes. Fixed addEvent() not detecting addObject and addText events. Fixed removeEvents() not modifying leveldict. Fixed typo in replaceField(). Added logo to README.
2.0.2 (2023/09/03): Minor bugfix. Fixed markdown bug on README and CHANGELOG for real this time.
2.0.1 (2023/09/03): Minor bugfix. Fixed markdown bug on README and CHANGELOG (hopefully).
2.0.0 (2023/09/03): Major update. Completely overhauled file reading to use dictionaries instead of strings. Added getFileDict(). Added 3 new utility functions: searchEvents(), removeEvents() and replaceField(). getAngles(), setAngles() and all event functions are now deprecated. Updated documentation. README and CHANGELOG now use markdown.
0.1.1 (2023/07/17): Minor bugfixes. Fixed encoding incompatibility. Fixed output string for moveDecorations().
0.1.0 (2023/06/15): Minor update. Added dynamic pivot offset, parallax offset, masking and blending fields to addDecoration() and moveDecorations(). Added angleOffset to setSpeed().
0.0.3 (2023/06/14): Minor bugfix: fixed filename __init__.py.
0.0.2 (2023/06/13): Minor bugfix: 're' is no longer a dependency.
0.0.1 (2023/05/28): First Release |
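As referenced above, a short sketch against the documented LevelDict API; the filename and the "eventType"/"SetSpeed" key and value in the filter are illustrative assumptions:
from adofaipy import LevelDict

level = LevelDict("mylevel.adofai")    # hypothetical file
level.appendTiles([0, 90, 180, 90])    # add four tiles by angle

# Action behaves like a dict; the key/value below are assumed for illustration
speed_events = level.getActions(lambda a: a.get("eventType") == "SetSpeed")
print(len(speed_events), "speed changes found")

level.writeToFile()                    # overwrites the original file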
adolet-db | No description available on PyPI. |
adol-Py | No description available on PyPI. |
adom | adom: A domonic-like wrapper around selectolax.
from dom import *
print(html(body(h1('Hello, World!'))))
# <html><body><h1>Hello, World!</h1></body></html>

from window import *
window.location = "https://www.google.com"
print(window.document.css('a'))
# [<Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>, <Node a>]

Install: python3 -m pip install adom
CLI: To see the version: domonic -v. To use css selectors on a website: adom -q https://google.com a |
adonai-client | No description available on PyPI. |
adoniram | AdoniramA Server-Client Communication Framework |
adop | PyPI: https://pypi.org/project/adop/. Downloads: https://gitlab.com/fholmer/adop/-/packages. Documentation: https://fholmer.gitlab.io/adop. Source Code: https://gitlab.com/fholmer/adop. License: BSD License. Summary: Automatic deployment on-prem from zip archives. Features: A REST API to upload, download and deploy zip-files. Listens for webhook requests, to continuously deploy zip-files on commits. Includes “package manager”-like commands for uploading and installing zip-files. Warning: This is a beta version. Not ready for production. Installation: Open a command line and install using pip: $ pip install adop[server]. Usage: adop is available as a console script and library module:
$ adop -h
$ python -m adop -h
Serve the REST-API:
$ adop serve-api
Serving on http://127.0.0.1:8000
Find the generated authorization token. Windows:
> type %USERPROFILE%\.adop\adop.ini | findstr write_token
Linux:
$ cat ~/.adop/adop.ini | grep write_token
Test the REST-API with curl. Windows:
> set ADOP_TOKEN=copy-paste-token-here
> curl -H "Token: %ADOP_TOKEN%" "http://127.0.0.1:8000/api/v1/test"
Linux:
$ export ADOP_TOKEN=copy-paste-token-here
$ curl -H "Token: $ADOP_TOKEN" "http://127.0.0.1:8000/api/v1/test"
Upload and deploy a zip-library:
$ curl \
  -H "Token: $ADOP_TOKEN" \
  -H "Zip-Tag: 0.1.0" \
  --data-binary "@work/mylib.zip" \
  "http://127.0.0.1:8000/api/v1/deploy/zip/mylib"
Zip file layout: Zip files with exactly one root directory are valid and can be distributed.
The root directory name must be unique if many zip files are to be distributed. Example of a valid zip file layout:
/mylib/
    /README.rst
    /main.py
    /mypackage1/
        /__init__.py
        /__main__.py
    /mypackage2/
        /__init__.py
        /__main__.py
The following example is not valid:
/README.rst
/mylib1/
    /__init__.py
    /__main__.py
/mylib2/
    /__init__.py
    /__main__.py
API Endpoints (Description: Method Endpoint):
Check that the API is available: GET /api/v1/test
Shasum for all deployed zip-files: GET /api/v1/state
Shasum for given deployed root: GET /api/v1/state/<root>
Known tags for given root: GET /api/v1/tags/<root>
Check specific tag for given root: GET /api/v1/tags/<root>/<tag>
List available zip-files: GET /api/v1/list/zip
List available zip-files for given root: GET /api/v1/list/zip/<root>
Start auto-fetch routine if enabled: GET /api/v1/trigger/fetch
Start auto-fetch routine if enabled: POST /api/v1/trigger/fetch/<root>
Download zip-file with given root: GET /api/v1/download/zip/<root>
Upload a zip-file without deploying it: POST/PUT /api/v1/upload/zip/<root>
Upload and deploy a zip-file: POST/PUT /api/v1/deploy/zip/<root>
Deploy a preloaded zip-file: GET /api/v1/deploy/zip/<root>
Zip-file unpacking progress: GET /api/v1/progress
Here <root> is the name of the root directory in the zip-file.
Headers (Header: Description; Endpoints):
Token: The authorization token for this API. All endpoints.
Zip-Sha256: Content hash of the zip-file to deploy. GET /api/v1/deploy/zip
Zip-Tag: Tag the Shasum. Optional. POST/PUT /api/v1/upload/zip; GET/POST/PUT /api/v1/deploy/zip
Zip-Root: Name of root directory in zip-file. Optional. POST/PUT /api/v1/upload/zip; POST/PUT /api/v1/deploy/zip
Result: The result is encoded as a json object. Most endpoints will return an object
withresultandresult_codeas keywords.$curl\-H"Token: paste-token-here"\http://127.0.0.1:8000/api/v1/test{
"result": "It works",
"result_code": 0
}Endpoints that take a long time will stream a progress log until
the result is returned.$curl\-H"Token: paste-token-here"\--data-binary"@work/mylib.zip"\http://127.0.0.1:8000/api/v1/deploy/zip/mylib// root: mylib
// store data
// verify data
// verify root dir
// verify zip data
// zip root: 'mylib'
// unpack zip data
// remove untracked files
{"root": "mylib", "result": "Success", "result_code": 0}The Json specification does not support comments,
so the client must ignore lines prefixed with//before decoding.$curl\-H"Token: paste-token-here"\--data-binary"@work/mylib.zip"\http://127.0.0.1:8000/api/v1/deploy/zip/mylib\|grep-v//\|python-mjson.tool{
"root": "mylib",
"result": "Success",
"result_code": 0
}Status and result codesHTTP statusresult_codeDescripton2000OK. Indicates that the request has succeeded.2001Fail. The request has succeeded but result was
unsuccessful.2002In progress. The request as been interrupted and
returned to early to give the final result code.4014Unauthorized. Invalid token.5005Internal ErrorClient sideDefine requirements in arequires.inifile[requires]
mylib = tag:0.1.0Define the remote and install locations:$exportADOP_TOKEN=copy-paste-token-here$adopconfigadd-remotemyserverhttp://127.0.0.1:8000/api/v1-eADOP_TOKEN$adopconfigadd-installmylibs./lib/auto./lib/.cacheAnd then install:$adopzipinstallrequires.ini--remotemyserver--installmylibs |
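For clients that prefer Python over curl, the endpoints and headers documented in the adop entry above translate directly into an HTTP call. Here is a hedged sketch using the requests library; the host, token and file path are placeholders taken from the examples above, not a prescribed client.

import requests

TOKEN = "copy-paste-token-here"
BASE = "http://127.0.0.1:8000/api/v1"

# Upload and deploy a zip-library, mirroring the curl example above
with open("work/mylib.zip", "rb") as fh:
    resp = requests.post(
        f"{BASE}/deploy/zip/mylib",
        headers={"Token": TOKEN, "Zip-Tag": "0.1.0"},
        data=fh,
    )

# Long-running endpoints stream "//" progress lines before the JSON result,
# so drop the comment lines before decoding, as the docs above require
payload = [ln for ln in resp.text.splitlines() if not ln.startswith("//")]
print(payload[-1])  # e.g. {"root": "mylib", "result": "Success", "result_code": 0}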
ado-pipeline-helper | ADO Pipeline helper

Python package and command-line tool for helping with writing Azure DevOps pipelines.

Features (none of these are implemented, mind you, as of now):

- validate pipeline (load .azure-pipeline, resolve templates, send to run endpoint with yamlOverride and preview=True)
- validate library groups (see if value exists)
- MAYBE: validate schedule cron
- warning about templating syntax errors (like missing $ before {{ }})

Limitations

Can't resolve {{ }} expressions, only simple {{ parameter.<key> }} ones. I started working on a custom resolver but it was a lot of work. You can see it on the branch "expression resolver" under ado_pipeline_helper/src/ado_pipeline_helper/resolver/expression.py

Useful links

ADO Yaml Reference |
adopt-a-doodle | adopt-a-doodleA Python package to make the creation of Doodle actors easier!Dependenciesadopt_a_doodle depends on Panda3D. If you have not installed it already, you can do so with the following command:pip install Panda3DInstallingTo install the latest version of adopt_a_doodle, open your favorite command terminal and use the following command:pip install adopt_a_doodleIf for whatever reason you are unable to install adopt_a_doodle through pip, you can also install it through thelatest source distribution released on GitHub. Download the tar.gz file and open your favorite command terminal. Navigate to wherever the file was downloaded and run the following command:pip install [file]UsageWith adopt_a_doodle, the creation of Doodle actors becomes much easier.Like with any other Panda3D Toontown project, you must first extract the Phase Files. You can do so with the following command, with [x] being replaced by the id of the phase file:multify.exe -xf phase_[x].mfWith adopt_a_doodle, you will need phase_4, phase_5, and phase_5.5. Once these files are extracted, drop them into the same directory you want to have your Python files in. Your directory should look similar to this:| phase_4
| phase_5
| phase_5.5
| example_doodle.py

Next, go into phase_4/models and find TT_pets-mod.bam. This is the model file for doodles. Drop this file into your main working directory, which should now look like this:

| phase_4
| phase_5
| phase_5.5
| example_doodle.py
| TT_pets-mod.bam

Now that all the necessary files are here, you can open the Python file containing your scene and start to program! Here's an example scene:

from direct.directbase.DirectStart import base
import adopt_a_doodle

example_doodle = adopt_a_doodle.adopt(adopt_a_doodle.Doodle(
    color=(0.546875, 0.28125, 0.75, 1.0),
    eye_color=(0.242188, 0.742188, 0.515625, 1.0),
    pattern=adopt_a_doodle.Pattern(
        ears="phase_4/maps/BeanCatEar3Yellow.jpg",
        body="phase_4/maps/BeanbodyLepord2.jpg",
        legs="phase_4/maps/BeanFootYellow1.jpg",
        tail="phase_4/maps/BeanLongTailLepord.jpg"),
    animation=adopt_a_doodle.Animation(
        file="phase_5/models/char/TT_pets-speak.bam",
        anim_loop=True,
        loop_from=None,
        loop_to=None,
        loop_restart=None,
        pose=False,
        pose_frame=None),
    eyelashes=False,
    hair=None,
    ears="catEars",
    nose=None,
    tail="longTail"))
example_doodle.setPos(0, 5, -1.2)
example_doodle.setH(180)
example_doodle.reparentTo(render)
base.run()

This code will produce the following doodle:

Documentation

You can find documentation for adopt_a_doodle in the rustydoodle lib.rs file.

License

Code in adopt_a_doodle is licensed under the MIT License. |
adopt-pytorch | No description available on PyPI. |
adopy | ADOpy

ADOpy is a Python implementation of Adaptive Design Optimization (ADO; Myung, Cavagnaro, & Pitt, 2013), which computes optimal designs dynamically in an experiment. Its modular structure permits easy integration into existing experimentation code. ADOpy supports Python 3.6 or above and relies on NumPy, SciPy, and Pandas.

Features

- Grid-based computation of optimal designs using only three classes: adopy.Task, adopy.Model, and adopy.Engine.
- Easily customizable for your own tasks and models
- Pre-implemented Task and Model classes including:
  - Psychometric function estimation for 2AFC tasks (adopy.tasks.psi)
  - Delay discounting task (adopy.tasks.ddt)
  - Choice under risk and ambiguity task (adopy.tasks.cra)
- Example code for experiments using PsychoPy (link)

Installation

# Install from PyPI
pip install adopy

# Install from Github (developmental version)
pip install git+https://github.com/adopy/adopy.git@develop

Resources: Getting started, Documentation, Bug reports

Citation

If you use ADOpy, please cite this package along with the specific version.
It greatly encourages contributors to continue supporting ADOpy.Yang, J., Pitt, M. A., Ahn, W., & Myung, J. I. (2020).
ADOpy: A Python Package for Adaptive Design Optimization.Behavior Research Methods, 1-24.https://doi.org/10.3758/s13428-020-01386-4AcknowledgementThe research was supported by National Institute of Health Grant R01-MH093838 to Mark A. Pitt and Jay I. Myung, the Basic Science Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Science, ICT, & Future Planning (NRF-2018R1C1B3007313 and NRF-2018R1A4A1025891), the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01367, BabyMind), and the Creative-Pioneering Researchers Program through Seoul National University to Woo-Young Ahn.ReferencesMyung, J. I., Cavagnaro, D. R., and Pitt, M. A. (2013).
A tutorial on adaptive design optimization.Journal of Mathematical Psychology, 57, 53–67. |
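Since the adopy entry above names only the three core classes, here is a rough, hedged sketch of how they typically fit together. The constructor arguments, grid values and the logistic model are illustrative assumptions rather than verbatim from this page, so consult the package documentation for the exact signatures.

import numpy as np
from adopy import Task, Model, Engine

# A two-alternative task: one design variable, binary response (names assumed)
task = Task(name='2AFC', designs=['intensity'], responses=[0, 1])

def prob_correct(intensity, threshold, slope):
    # Simple logistic psychometric function, chosen only for illustration
    return 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))

model = Model(name='logistic', task=task,
              params=['threshold', 'slope'], func=prob_correct)

engine = Engine(task=task, model=model,
                grid_design={'intensity': np.linspace(0, 1, 21)},
                grid_param={'threshold': np.linspace(0, 1, 21),
                            'slope': np.linspace(1, 10, 10)})

design = engine.get_design()       # optimal next design point
engine.update(design, response=1)  # feed back the observed response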
ado-py | ado-py: do stuff with Python. Quickly access functions from the command line. Automate stuff and save time. Inspired by make. 😂

To avoid repetition of lines in the terminal, we often create a make alias and call make func. In make you write stuff in shell; in ado, you write in Python.

Installation

pip install ado-py

Usage

- Create a do.py file in your directory and write functions in it. Note: the functions in do.py should not take any arguments; for user input use the input() function.
- Call any function from the terminal by running ado func. Running only ado prints the docstring of do.py. |
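As a concrete illustration of the ado-py workflow above, a do.py might look like the sketch below; the function body is invented for the example, while the no-arguments rule and the input() pattern come from the entry itself. Running ado hello would then execute the function, and running plain ado would print the module docstring.

"""Small helpers for this project; run them with ado <name>."""

def hello():
    # Functions in do.py take no arguments; ask the user interactively instead
    name = input("Name to greet: ")
    print("Hello, " + name + "!")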
adorable | adorable: make the UI of your project adorable.

Basic Usage

import adorable
from adorable import color

RED = color.from_name("red")
print(RED.fg("Hello adorable World"))

BLUE = color.from_hex(0x0AF)
DARK = color.from_rgb((38, 38, 38))
col = BLUE.on(DARK)
adorable.printc("Hello", "World", style=col)

Links: Documentation, Source Code, PyPI |
adorad | No description available on PyPI. |
adoreta | adoreta_log

Installing the package:

pip install adoreta

Examples:

Logging your logs. Default name of the log file is logs.csv:

from adoreta.log import Log
log = Log()
log.write("Please log this text to show this in the future")

Custom name of the log file:

from adoreta.log import Log
log = Log("custom_filename.csv")  # any custom file name you want
log.write("This text will go in the custom named file")

Displaying your logs. Default logs file display (logs.csv):

from adoreta.log import Log
log = Log()
log.show()

Custom logs file display:

from adoreta.log import Log
log = Log("custom_filename.csv")
log.show() |
adorn | Features

adorn is a configuration tool for python code. adorn can currently:

- instantiate an object
- check that a config can instantiate an object

Example

from adorn.orchestrator.base import Base
from adorn.params import Params
from adorn.unit.complex import Complex
from adorn.unit.constructor_value import ConstructorValue
from adorn.unit.parameter_value import ParameterValue
from adorn.unit.python import Python


@Complex.root()
class Example(Complex):
    pass


@Example.register(None)
class Parent(Example):
    def __init__(self, parent_value: str) -> None:
        super().__init__()
        self.parent_value = parent_value


@Parent.register("child")
class Child(Parent):
    def __init__(self, child_value: int, **kwargs) -> None:
        super().__init__(**kwargs)
        self.child_value = child_value


base = Base([
    ConstructorValue(),
    ParameterValue(),
    Example(),
    Python()
])

params = Params({
    "type": "child",
    "child_value": 0,
    "parent_value": "abc"
})

# well specified configuration
# we can type check from any level in the class hierarchy
assert base.type_check(Example, params) is None
assert base.type_check(Parent, params) is None
assert base.type_check(Child, params) is None

# instantiate
# we can instantiate from any level in the class hierarchy
example_obj = base.from_obj(Example, params)
assert isinstance(example_obj, Child)

parent_obj = base.from_obj(Parent, params)
assert isinstance(parent_obj, Child)

child_obj = base.from_obj(Child, params)
assert isinstance(child_obj, Child)

Installation

You can install Adorn via pip from PyPI:

$ pip install adorn

Contributing

Contributions are very welcome. To learn more, see the Contributor Guide.

License

Distributed under the terms of the Apache 2.0 license, Adorn is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

Credits

This project was generated from @cjolowicz's Hypermodern Python Cookiecutter template. |
adotsdot | adotsdot (Almost Surely)

Purpose

Provide the ability to obtain the differences between two JSON serializable Python dictionaries (e.g., configuration files) while treating lists as unordered. ** Supporting this functionality requires that the dictionaries in a list of dictionaries have a field that can be used as a unique marker across all elements in the list.

Usage

Assuming prev_state and curr_state are JSON serializable Python dictionaries,

from adotsdot import diff
node = diff(prev_state, curr_state)

will generate the root of a tree of nodes that represent the changes in state. This information can be displayed in the terminal via the str procedure or transformed into a JSON serializable Python dictionary via node.as_dict(). |
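To make the adotsdot usage above concrete, here is a minimal sketch. The diff function, str display and node.as_dict() are documented in the entry; the sample dictionaries (with a "name" field acting as the unique marker the entry requires for lists of dictionaries) and the exact shape of the returned dictionary are illustrative assumptions.

from adotsdot import diff

# Two JSON-serializable config states (illustrative data)
prev_state = {"version": 1, "services": [{"name": "web", "replicas": 2}]}
curr_state = {"version": 2, "services": [{"name": "web", "replicas": 3}]}

node = diff(prev_state, curr_state)

print(str(node))          # human-readable change tree in the terminal
changes = node.as_dict()  # JSON-serializable representation of the changes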
ad-pack | No description available on PyPI. |
adparallelengine | StatusCompatibilitiesContactadparallelengineA wrapper around several ways of doing map multiprocessing in Python. One can use :Daskconcurrent.futuresmpi4py.futures
The underlying engine is also available in a serial mode, for debugging purposesInstallationpip install adparallelengine[all,mpi,dask,support_shared,k8s]UsageBasic useCreating the engine is done this way:fromadparallelengineimportEnginefromtransparentpathimportPathif__name__=="__main__":which="multiproc"# Can also be "serial", "dask", "mpi" or "k8s"engine=Engine(kind=which,path_shared=Path("tests")/"data"/"shared")Then using the engine is done this way:fromadparallelengineimportEngineimportpandasaspdfromtransparentpathimportPathdefmethod(df):return2*df,3*dfif__name__=="__main__":which="multiproc"# Can also be "serial", "dask", "mpi" or "k8s"engine=Engine(kind=which,# max_workers=10 One can limit the number of workers. By default, os.cpu_count() or MPI.COMM_WORLD.size is used)results=engine(method,# The method to use...[pd.DataFrame([[1,2]]),pd.DataFrame([[3,4]]),pd.DataFrame([[5,6]])]# ...on each element of this iterable)Note that AdParallelEnginesupports generatorsif thelengthargument is given :fromadparallelengineimportEnginedefdummy_prod(xx):return2*xxdeffib(limit):"""Fibonacci generator"""a,b=0,1whilea<limit:yieldaa,b=b,a+bx=fib(25)# will have 9 elements: 0, 1, 1, 2, 3, 5, 8, 13, 21if__name__=="__main__":which="multiproc"# Can also be "serial", "dask", "mpi" or "k8s"engine=Engine(kind=which,# max_workers=10 One can limit the number of workers. By default, os.cpu_count() or MPI.COMM_WORLD.size is used)results=engine(dummy_prod,x,length=9,batch=4)At no moment the engine will cast it to list, instead a custom iterator class is created to properly batch the generator
and loop through it only once, when the computation actually happens.GatheringResults will be a list of tuples, each containing two dataframes, becausemethodreturns a tuple of two dataframes.
One could have used the keyword "gather" to flatten this list inside the engine :results=engine(method,[pd.DataFrame([[1,2]]),pd.DataFrame([[3,4]]),pd.DataFrame([[5,6]])],gather=True)BatchingBy default, one process will executemethodon a single element of the iterable. This can result in significant
overhead if your iterable is much bigger than the number of workers, in which case the keyword "batched" can be used:

results = engine(
    method,
    [pd.DataFrame([[1, 2]]), pd.DataFrame([[3, 4]]), pd.DataFrame([[5, 6]])],
    batched=True
)

In that case, sublists of elements are given to each process so that there are exactly as many batches as there are workers (unless the iterable is too small, of course). Doing this can also have its own problem, namely a load imbalance where some processes finish much quicker than others. One
can optionally use more batches than the number of workers by giving an integer instead of a boolean to the "batched"
keyword :# Using 16 batchesresults=engine(method,[pd.DataFrame([[1,2]]),pd.DataFrame([[3,4]]),pd.DataFrame([[5,6]])],batched=16)other keyword argumentsThemethodcan accept other keyword arguments, for exampledefmethod(df,s):return2*df*s,3*df*sThose can be given when calling the engine and will be passed to each process. For example :fromadparallelengineimportEngineimportpandasaspdfromtransparentpathimportPathdefmethod(df,s):return2*df*s,3*df*sif__name__=="__main__":which="multiproc"# Can also be "serial", "dask", "mpi" or "k8s"engine=Engine(kind=which,path_shared=Path("tests")/"data"/"shared")some_series=pd.Series([10,20])results=engine(method,[pd.DataFrame([[1,2]]),pd.DataFrame([[3,4]]),pd.DataFrame([[5,6]])],s=some_series)Large objects given to keyword argumentsIfmethodis given large objects as keyword arguments, passing the object to workers could imply a significant loss
of time. I observed that doing out-of-core learning can sometime be quicker, despite the I/O that it implies. It
can even save a bit of memory. You can use it by using the "share" keyword argument :results=engine(method,[pd.DataFrame([[1,2]]),pd.DataFrame([[3,4]]),pd.DataFrame([[5,6]])],share={"s":some_series})Here, "some_series" will be written to disk by the engine, and only a path will be given to each process, which will then
read it when starting. For now, only pandas dataframes and series, and numpy arrays, are supported for sharing. The directory
where the shared objects are written is by default the local temp dir, by one can specify some other location by giving
the "path_shared" keyword argument when creating the engine (NOT when calling it!).Method to run in each processesWhen using multiprocessing with numpy, one has to use the "spawn" multiprocessing context to avoid the GIL. By doing so
however, any environment variable or class attributes defined in the main process is forgotten in the child processes,
since the code is imported from scratch. So, one might need to re-load some variables and re-set some class attributes
inside each process. This can be done in an additional method that can be given to engine. The complete example below
shows how it is done.Complete exampleThe code below shows an example of how to use the engine. Heremethodaccepts two other arguments, one that can be a
pandas' dataframe or series, and one that is expected to be a float. It returns a tuple of two dataframes.If the parallelization is done using Python's native multiprocessing, do not forget to useif __name__ == "__main__"like in the example !importsysfromtypingimportUnionimportpandasaspdimportnumpyasnpfromtransparentpathimportPathfromadparallelengineimportEngineclassDummy:some_attr=0defmethod_in_processes(a):Dummy.some_attr=adefmethod(element:pd.DataFrame,some_other_stuff:Union[float,pd.DataFrame,pd.Series,np.ndarray],some_float:float,):return(element*some_other_stuff+some_float+Dummy.some_attr,3*(element*some_other_stuff+some_float+Dummy.some_attr))if__name__=="__main__":Dummy.some_attr=1dfs=[pd.DataFrame([[0,1],[2,3]]),pd.DataFrame([[4,5],[6,7]]),pd.DataFrame([[8,9],[10,11]]),pd.DataFrame([[12,13],[14,15]]),pd.DataFrame([[16,17],[18,19]]),pd.DataFrame([[21,22],[23,24]]),]s=pd.Series([2,3])f=5.0which=sys.argv[1]gather=Trueifsys.argv[2]=="True"elseFalsebatched=Trueifsys.argv[3]=="True"elseFalseifsys.argv[3]=="False"elseint(sys.argv[3])share=Trueifsys.argv[4]=="True"elseFalseifshareisTrue:share_kwargs={"share":{"some_other_stuff":s}}else:share_kwargs={"some_other_stuff":s}engine=Engine(kind=which,path_shared=Path("tests")/"data"/"shared")res=engine(method,dfs,init_method={"method":method_in_processes,"kwargs":{"a":1}},some_float=f,gather=gather,batched=batched,**share_kwargs) |
adpasswd | adpasswd.py: Pure Python command line interface to change Active Directory passwords via LDAP.

SETUP: you need a config file. Config files can either be in the current working directory, or in ~/. Config files are always named .adpasswd.cfg and are INI style. Example:

[ad]
host: ad.blah.com
port: 636
binddn: cn=Administrator,CN=Users,DC=ad,DC=blah,DC=com
bindpw: changemequickly
searchdn: DC=ad,DC=blah,DC=com

All of the options above MUST exist, and be configured properly for this to work. Once you have a config file setup, then it's EASY to use:

adpasswd.py username [password]

You can call it with a password or not; if you don't, you will be prompted for one. You get NO OUTPUT (but a successful return) if everything went well (good for scripts!). If things went wrong, you will be told about it.

Bug reports, etc., please use launchpad: https://launchpad.net/adpasswd

CREDITS: ldaplib.py originally from [email protected]: http://sourceforge.net/projects/ldaplibpy/ Big thanks for doing all the hard work!

FYI: I no longer use this code in production, nor really maintain it. If you love/use or care about this code, feel free to adopt it or take over ownership. |
adpbulk | adpbulkSummaryPerforms pseudobulking of anAnnDataobject based on columns available in the.obsdataframe. This was originally intended to be used to pseudo-bulk single-cell RNA-seq data to higher order combinations of the data as to use existing RNA-seq differential expression tools such asedgeRandDESeq2. An example usage of this would be pseudobulking cells based on their cluster, sample of origin, or CRISPRi guide identity. This is intended to work on both individual categories (i.e. one of the examples) or combinations of categories (two of the three, etc.)InstallationFrom PyPIpipinstalladpbulkFrom Githubgitclonehttps://github.com/noamteyssier/adpbulkcdadpbulk
pipinstall.
pytest-vUsageThis package is intended to be used as a python module.Single Category Pseudo-BulkThe simplest use case is to aggregate on a single category. This will aggregate all the observations belonging to the same class within the category and return a pseudo-bulked matrix with dimensions equal to the number of values within the category.fromadpbulkimportADPBulk# initialize the objectadpb=ADPBulk(adat,"category_name")# perform the pseudobulkingpseudobulk_matrix=adpb.fit_transform()# retrieve the sample meta data (useful for easy incorporation with edgeR)sample_meta=adpb.get_meta()Multiple Category Pseudo-BulkA common use case is to aggregate on multiple categories. This will aggregate all observations beloging to the combination of classes within two categories and return a pseudo-bulked matrix with dimensions equal to the number of values of nonzero intersections between categories.fromadpbulkimportADPBulk# initialize the objectadpb=ADPBulk(adat,["category_a","category_b"])# perform the pseudobulkingpseudobulk_matrix=adpb.fit_transform()# retrieve the sample meta data (useful for easy incorporation with edgeR)sample_meta=adpb.get_meta()Pseudo-Bulk using raw countsSome differential expression software expects the counts to be untransformed counts. SCANPY uses the.rawattribute in itsAnnDataobjects to store the initialAnnDataobject before transformation. If you'd like to perform the pseudo-bulk aggregation using these raw counts you can provide theuse_raw=Trueflag.fromadpbulkimportADPBulk# initialize the object w. aggregation on the `.raw` attributeadpb=ADPBulk(adat,["category_a","category_b"],use_raw=True)# perform the pseudobulkingpseudobulk_matrix=adpb.fit_transform()# retrieve the sample meta data (useful for easy incorporation with edgeR)sample_meta=adpb.get_meta()Alternative Aggregation OptionsIt may also be useful to aggregate using an alternative function besides the sum - this option will allow you to choose between sum, mean, and median as an aggregation function.fromadpbulkimportADPBulk# initialize the object w. an alternative aggregation option# aggregation options are: sum, mean, and median# default aggregation is sumadpb=ADPBulk(adat,"category",method="mean")# perform the pseudobulkingpseudobulk_matrix=adpb.fit_transform()# retrieve the sample meta data (useful for easy incorporation with edgeR)sample_meta=adpb.get_meta()Alternative Formatting OptionsfromadpbulkimportADPBulk# initialize the object w. 
alternative name formatting optionsadpb=ADPBulk(adat,["category_a","category_b"],name_delim=".",group_delim="::")# perform the pseudobulkingpseudobulk_matrix=adpb.fit_transform()# retrieve the sample meta data (useful for easy incorporation with edgeR)sample_meta=adpb.get_meta()ExampleAnnDataFunctionHere is a function to generate anAnnDataobject to test the module or to play with the object if unfamiliar.importnumpyasnpimportpandasaspdimportanndataasaddefbuild_adat(SIZE_N=100,SIZE_M=100):"""creates an anndata for testing"""# generates random values (mock transformed data)mat=np.random.random((SIZE_N,SIZE_M))# generates random values (mock raw count data)raw=np.random.randint(0,1000,(SIZE_N,SIZE_M))# creates the observations and categoriesobs=pd.DataFrame({"cell":[f"b{idx}"foridxinnp.arange(SIZE_N)],"cA":np.random.choice(np.random.choice(5)+1,SIZE_N),"cB":np.random.choice(np.random.choice(5)+1,SIZE_N),"cC":np.random.choice(np.random.choice(5)+1,SIZE_N),"cD":np.random.choice(np.random.choice(5)+1,SIZE_N),}).set_index("cell")# creates the variables (genes) and categoriesvar=pd.DataFrame({"symbol":[f"g{idx}"foridxinnp.arange(SIZE_M)],"cA":np.random.choice(np.random.choice(5)+1,SIZE_M),"cB":np.random.choice(np.random.choice(5)+1,SIZE_M),"cC":np.random.choice(np.random.choice(5)+1,SIZE_M),"cD":np.random.choice(np.random.choice(5)+1,SIZE_M),}).set_index("symbol")# Creates the `AnnData` objectadat=ad.AnnData(X=mat,obs=obs,var=var)# Creates an `AnnData` object to simulate the `.raw` attributeadat_raw=ad.AnnData(X=raw,obs=obs,var=var)# Sets the `.raw` attributeadat.raw=adat_rawreturnadatadat=build_adat() |
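The adpbulk entry above repeatedly points at edgeR and DESeq2 as the downstream consumers of the pseudobulk matrix; here is a minimal, hedged sketch of that hand-off. fit_transform() and get_meta() are the methods documented above, while the category names ("sample", "cluster") and the output file names are assumptions for illustration.

from adpbulk import ADPBulk

# Aggregate raw counts per (sample, cluster) combination, as documented above
adpb = ADPBulk(adat, ["sample", "cluster"], use_raw=True)
counts = adpb.fit_transform()  # pseudobulk count matrix
meta = adpb.get_meta()         # per-pseudobulk-sample metadata

# Write both tables out so an R session can load them into edgeR/DESeq2
counts.to_csv("pseudobulk_counts.csv")
meta.to_csv("pseudobulk_meta.csv", index=False)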
adp-connection | A library to help connect to ADP using Openid-Connect and OAuth 2.0 |
adp-connectors | Connectors for Discovery Platform Applications

The current version supports three kinds of API connections: AWS S3, Box, and PostgreSQL. The required secret file formats are as follows.

S3 Connector

{
"aws_access_key_id": "",
"aws_secret_access_key": ""
}Box Connector{
"boxAppSettings": {
"clientID": "",
"clientSecret": "",
"appAuth": {
"publicKeyID": "",
"privateKey": "",
"passphrase": ""
}
},
"enterpriseID": ""
}Postgresql Connector{
"host": "",
"port": 5432,
"database": "",
"user": "",
"password": ""
} |
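The adp-connectors entry documents only the secret-file formats, not the connector classes themselves. As one illustration of how the S3 secret file maps onto a working client, here is a hedged sketch that reads the documented JSON and feeds it to boto3 directly; the file name and the use of boto3 (rather than the package's own connector class, whose API is not shown on this page) are assumptions.

import json
import boto3

# Load an S3 secret file in the format documented above (file name assumed)
with open("s3_secret.json") as fh:
    secret = json.load(fh)

# The two documented keys map directly onto boto3 client credentials
s3 = boto3.client(
    "s3",
    aws_access_key_id=secret["aws_access_key_id"],
    aws_secret_access_key=secret["aws_secret_access_key"],
)
print(s3.list_buckets()["Buckets"])  # quick connectivity check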
ad-physics | ad-physicsprovides the python binding of a C++ implementation for common data types to be used in the context of automated driving (AD).This includes type safe implemenations of e.g. Distance, Speed, Duration and Acceleration and operations on those.In addition, the types define AD specific precision, minima, maxima and input range values.Seeproject webpageordoxygen docufor a full interface description. |
adpil | Description

This toolbox interfaces to PIL; it is used by the ia636 and ia870 toolboxes. The main functions it provides are adshow, adread, and adreadgray.

Requirements

This toolbox requires PIL. To install PIL using pip, use the following command:

pip install PIL --allow-unverified PIL --allow-all-external |
adpix | UNKNOWN |
adp-model-evaluation | Package adding_problem_model_evaluationThe documentation for this package is on the link below:adding_problem_model_evaluation documentation |
adp-py | adpalgorithms and datastructures pedia |
adpred | ADpredA tool for prediction of Transcription Activation Domains from protein sequences.GoalsDocumentationContributingAuthorsLicenceGoalsThe main goal is to identify regions with high AD function probability in protein sequences. Moreover, at these observed regions, a saturated mutagenesis study can reveal insights into the important residues that confer the AD function to that region.ContributingContributions are welcome and encouraged.DocumentationAuthorsAriel ErijmanLicenceADpredis an open source software released under theMIT licence |
adptools | If you have some questions, send an email. ([email protected]) |
adp_userinfo | ADP client library to get the logged-in user’s info |
adpushup-adstxt | This library provides adpushup API for handling ads.txt management.Django:add it to installed apps:INSTALLED_APPS=(...,'adpushup_adstxt',...,)add your user_id and key to YOUR settings.py:ADPUSHUP_API_USER_ID='[email protected]'ADPUSHUP_API_KEY='1234'OPTIONALLY add different WWW_DIR:ADPUSHUP_WWW_DIR='/some/dir/to/put/ads.txt/in/it/'by default it is DjangosROOT_DIR + '/www'ADD it to your urlconf:fromadpushup_adstxt.django_viewsimporthandleurlpatterns+=patterns('',url(r'^adsTxtManagementApiByAdpushup.php',handle)),Testing:fromadpushup_adstxt.utilsimportencode_uri_componentimporttimeimporthmacimporthashlibimportrequestsuser_id='your user id'key='your key'req_time=int(time.time())hash_params="email={}&ts={}".format(encode_uri_component(user_id.encode("UTF-8")),req_time)hash=hmac.new(key,hash_params,hashlib.sha256).hexdigest()res=requests.post('http://localhost:8000/adsTxtManagementApiByAdpushup.php',dict(data='test content',ts=req_time,hash=hash))printres.status_codeprintres.content |
adp-webscrape | A Selenium-based Python script for logging into ADP Resource and downloading reports.

Usage

Installation

With Python 3.6 or greater installed, in a command prompt enter pip install adp-webscrape. You'll also need a recent edition of Firefox and its respective GeckoDriver. The GeckoDriver must be added to PATH, or to the root folder of the project.

Code

Use the following code, replacing my_username, my_password, my_download_path, and my_isi_client_id with relevant information.

- my_username: Your ADP Resource username
- my_password: Your ADP Resource password
- my_download_path: (optional) The path that Selenium's browser will download reports to (e.g. C:\adp-reports). Omitting it defaults to the user's download folder.
- my_isi_client_id: This can be found at the end of the URL for any ezLaborManager page. Most likely, it's going to be your company name (probably spaced out by hyphens if the name is multiple words).
- my_report_index: On the ezLaborManager "My Reports" page, this will be the index of the report you want to download (with the first report starting at index 0) https://i.imgur.com/Tg7kPQV.png
- my_file_prefix: (optional) If you'd like to prefix the name of your files with some word so as to not mix report names, you may do so here.

import atexit
from adpwebscrape import ADPResource

resource = ADPResource('my_username', 'my_password',
                       isi_client_id='my_isi_client_id',
                       download_path=r'my_download_path')
resource.download_my_report(my_report_index, prefix='my_file_prefix')  # returns Filename
atexit.register(resource.quit)

Other

Why no official API? There isn't one. ADP Marketplace has an API, though it is very separate from the reports I've attempted to generate here.

Why Selenium and not regular schmegular requests? Requests to ADP Resource require hidden fields whose contents seem like a pain to generate programmatically. Selenium was chosen because it handles all of that at the cost of a little performance. Please let me know if you find a better way to do this. |
adqol-persist | # ADQ Persistence Framework for Dynamo DBThis project was made to optimize the Dynamo DB Persistence implementation code to a basic Entity CRUD operations for Python ADQ projects[The source for this project is available here][src].Most of the configuration for a Python project is done in thesetup.pyfile,
an example of which is included in this project. You should edit this file
accordingly to adapt this sample project to your needs.This is the README file for the project.The file should use UTF-8 encoding and can be written using
[reStructuredText][rst] or [markdown][md use] with the appropriate [key set][md
use]. It will be used to generate the project webpage on PyPI and will be
displayed as the project homepage on common code-hosting services, and should be
written for that purpose.Typical contents for this file would include an overview of the project, basic
usage examples, etc. Generally, including the project changelog in here is not a
good idea, although a simple “What’s New” section for the most recent version
may be appropriate.[packaging guide]:https://packaging.python.org[distribution tutorial]:https://packaging.python.org/tutorials/packaging-projects/[src]:https://github.com/jhmjesus/adqol-persist[rst]:http://docutils.sourceforge.net/rst.html[md]:https://tools.ietf.org/html/rfc7764#section-3.5“CommonMark variant”
[md use]:https://packaging.python.org/specifications/core-metadata/#description-content-type-optional |
adqsetup | adqsetup

SPDX-License-Identifier: BSD-3-Clause
Copyright (c) 2022, Intel Corporation

Dependencies: Python 2.7-3.10

Distro Support:
Redhat: 7.1-9.x
Fedora: 28-35
Ubuntu: 19.04-22.04
Debian: 11

Installation

Python Package Index (pip):

python -m pip install adqsetup

Included with driver:

python scripts/adqsetup/adqsetup.py install

Usage

The basic usage is:

adqsetup [options] <command> [parameters ...]

Please see the output of adqsetup help for a complete list of
command line options.Commandshelp:Show help messageexamples:Create an 'examples' subdirectoryThe examples subdirectory - created in the current directory -
contains a set of sample config filesapply {filename}:Apply a config file{filename}:Config file (relative or full path)If empty or '-', config file is read from stdin.create { [{name}] { {key} {value} }... }...:Create a config from the command lineEach section consisting of a bracketed name and one or more {key} {value} pairs.[{name}]:User-defined name of sectionMust be unique within a configuration, '[globals]' is reserved but can be used.{key}{value}:Configuration ParameterOne or more space-seperated key and value pairs.
See the above Class Configuration Parameter list for possible keys and values.reset:Remove ADQ traffic classes and filtersAttempts to perform a cleanup of any ADQ-related setup.
Note: '--priority=skbedit' option must be included to remove the egress filters.persist {filename}:Persist a config file across rebootsCreates a systemd service unit set to run once on boot after the network is running.
One config per network interface, new configs overwrite old ones.{filename}:Config file (relative or full path)If empty or '-', config file is read from stdin.install:Install the adqsetup scriptInstalls the current script at /usr/local/binConfiguration ParametersGlobals Sectionarpfilter: (bool)Enable selective ARP activitybpstop: (bool)Channel-packet-clean-bp-stop featurebpstop-cfg: (bool)Channel-packet-clean-bp-stop-cfg featurebusypoll: (integer)busy_poll valuebusyread: (integer)busy_read valuecpus: (integer list|'auto')CPUs to use for handling 'default'
traffic, default 'auto'numa: (integer|'local'|'remote'|'all')Numa node to use for 'default'
traffic, default 'all' (prefer local)dev: (string)Network interface device to configureoptimize: (bool)Channel-inspect-optimize featurepriority: ('skbedit')Method to use for setting socket priority, default nonequeues: (integer)Number of queues in 'default' traffic class, default 2txring: (integer)Transmit ring buffer sizetxadapt: (bool)Adaptive transmit interrupt coalescingtxusecs: (integer)Usecs for transmit interrupt coalescingrxring: (integer)Receive ring buffer sizerxadapt: (bool)Adaptive receive interrupt coalescingrxusecs: (integer)Usecs for receive interrupt coalescingUser-defined Section (for each application or traffic class)addrs: (string list)Local IP addresses of trafficcpus: (integer list|'auto')CPUs to use for handling traffic,
default 'auto'mode: ('exclusive'|'shared')Mode for traffic classnuma: (integer|'local'|'remote'|'all')Numa node to use for traffic,
default 'all' (prefer local)pollers: (integer)Number of independent pollers, default 0poller-timeout: (integer)Independent poller timeout value,
default 10000ports: (integer list)Local IP ports of trafficprotocol: ('tcp'|'udp')IP Protocol of trafficqueues: (integer)Number of queues in traffic classremote-addrs: (string list)Remote IP addresses of trafficremote-ports: (integer list)Remote IP ports of trafficSample Usageadqsetup help
adqsetup examples
adqsetup apply memcached.conf
adqsetup --dev=eth4 apply nginx.conf
adqsetup --dev=eth3 persist eth3.conf
cat memcached.conf | adqsetup apply
adqsetup create [myapp] queues 4 ports 11211
adqsetup --verbose create \
[globals] priority skbedit \
[myapp] queues 2 ports 11211
adqsetup --verbose create \
[app1] mode shared queues 4 ports 6379-6382
[app2] queues 2 ports 11211 pollers 2Sample Usage Bash Script#!/bin/bash
QUEUES=8
# this will loop through a range
# of busy_poll values
for BP in {10000..50000..5000}; do
adqsetup create [globals] busypoll $BP [nginx] queues $QUEUES ports 80
# run test here
doneSample Usage With Pipes From Bash Script#!/bin/bash
QUEUES=8
# this will loop through a range
# of busy_poll values
for BP in {10000..20000..5000}; do
adqsetup apply <<EOF
[globals]
dev=eth2
busypoll=$BP
[nginx]
queues=$QUEUES
ports=80
EOF
# run test here
doneSample Usage With Pipes From External Scriptpython makeconf.py | adqsetup --json applymakeconf.pyimport json
conf = {
"globals": {
"dev": "eth2",
"busypull": 10000
},
"app1": {
"queues": 4,
"ports": "80,443"
}
}
print(json.dumps(conf))NotesTo load/use a different device driver while creating the setup,
the--driverparameter may be used. Device driver path is the full path
to the .ko file (ex: ice-1.9.x/src/ice.ko). Interfacemustbe set to
come up automatically with an ip address (via NetworkManager or other).
adqsetup will wait up to three seconds for this to occur before erroring out.
Conversely, you can load the driver and setup the interface manually
before running the adqsetup.The independentpollersargument passed to adqsetup doesn’t map directly
to theqps_per_pollerarguments passed to the driver. adqsetup
allows the user to specify how many pollers for a particular TC instead of
having to specify qps_per_poller.adqsetup 1.x required updated versions of the 'tc', 'ethtool', and 'devlink'
commands to be installed on the system. With adqsetup 2.x and onward, this
requirement has been removed.Common IssuesIf you get a/usr/bin/env: ‘python’: No such file or directoryerror
when you run the script, please install Python. If you have already installed
Python, then trywhereis pythonand you should see a message like:python: /usr/bin/python2.7 /usr/bin/python3.6 /usr/bin/python3.6m /usr/bin/python3.9on the first line of the output. Either run the version you wish to use
manually:python3.6 adqsetup.py help, or create a 'python' symbolic
link on the path:ln -s /usr/bin/python3.6 /usr/local/bin/pythonMany advanced features, such aspollersand the per-tc flow director may
not be supported by older versions of the driver or kernel. adqsetup
will attempt to use an equivalent fallback feature, and if none are available
a descriptive error will be provided. Please refer to the ADQ Config Guide for
more information.Other IssuesPlease run the malfunctioning config with the command line--debugoption,
which should include a short stack trace at the end of the output. Send the
configuration file (if used), full commmand line, and program output to your
Intel support contact.JSON Supportadqsetup accepts configurations in the JSON format from either a file or stdin
with the--jsonoption. Parameters are the same as listed above, using the
following basic structure:{
"globals": {
"dev": "eth4",
"priority": "skbedit"
},
"app1": {
"queues": 2,
"ports": 11211
},
"app2": {
"queues": 4,
"mode": "shared",
"ports": "6379-6382"
}
} |
adr | ADR-py

A shout-out to the excellent adr-tools project, on which ADR-py is based.

This Python script is designed to help software development teams document their architecture decisions using Architecture Decision Records (ADRs). ADRs are a lightweight and effective way to capture important decisions made during the design and development of a software system, and to keep track of their rationale and implications over time.

The script creates ADR files in a predefined format, following the principles of Michael Nygard's ADR template. Each ADR file is a Markdown document with a unique name that includes a sequential number and a title, which is automatically generated based on the information provided by the user.

Prerequisites

- Python 3.11 installed on your system.
- Basic knowledge of command-line interface (CLI) usage.

Installation

pip install adr

How to Use

Usage:

$ adr [OPTIONS] COMMAND [ARGS]...

Options:

- --install-completion: Install completion for the current shell.
- --show-completion: Show completion for the current shell, to copy it or customize the installation.
- --help: Show this message and exit.

Commands:

- init: Initialize ADR directory with first ADR in given PATH
- new: Create new ADR with given NAME

init

Initialize ADR directory with first ADR in given PATH.

Usage:

$ adr init [OPTIONS] [PATH]

Arguments:

- [PATH]: Path in which ADRs should reside. If not provided, the path will be extracted from pyproject.toml

Options:

- --help: Show this message and exit.

new

Create new ADR with given NAME.

Usage:

$ adr new [OPTIONS] NAME

Arguments:

- NAME: Name of new ADR. Longer names (with spaces) should be put in quotation marks. [required]

Options:

- --help: Show this message and exit.

ADR Template

The generated ADR files follow the template proposed by Michael Nygard in his book "Documenting Architecture Decisions." The template consists of the following sections:

- Title: The title of the ADR.
- Status: The current status of the decision (e.g., proposed, accepted, rejected).
- Context: The context and background information that led to the decision.
- Decision: The decision made and its rationale.
- Consequences: The potential consequences and trade-offs of the decision.

Benefits of ADRs

Using ADRs has several benefits for software development teams, including:

- Documentation: ADRs provide a written record of important architectural decisions, making it easier for team members to understand the reasons behind past decisions.
- Communication: ADRs serve as a communication tool for discussing and documenting design decisions, facilitating collaboration among team members.
- Decision-making: ADRs encourage thoughtful decision-making by requiring the team to consider the context, rationale, and potential consequences of each decision.
- Transparency: ADRs promote transparency by making architectural decisions visible and accessible to the entire team, fostering a culture of shared understanding and accountability.
- Knowledge sharing: ADRs help capture the collective knowledge and experience of the team, enabling future team members to learn from past decisions and avoid repeating mistakes. |
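Putting the two adr commands documented above together, a typical first session might look like this; the directory and the ADR title are arbitrary examples, and quoting the multi-word title follows the NAME rule stated above.

$ adr init docs/adr
$ adr new "Use PostgreSQL as the primary datastore"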
adranis-sigma | No description available on PyPI. |
adr.ca | ADRAircraft Design Resources aims to help engineers on conceptual design analysis, giving them the tools necessary to easily simulate different aircraft designs.InstallationRegular usagegit clone https://github.com/CeuAzul/ADR.git
cd ADR
pip install setuptools
pip install ./Developmentgit clone https://github.com/CeuAzul/ADR.git
cd ADR
pip install setuptools
pip install -e ./UsageTo run an analysis modify the inputs onparameters.pyas needed and runmain.py.ContributorsThis project exists thanks to all the people who contribute. |
adre | Adre - ADR Extended

Usage

pip install adre
adre init
adre new "Babby's first ADR"
adre serve |
adrenaline | adrenaline

Simple Python module to prevent your computer from going to sleep. Supports Windows and macOS at the moment; Linux support is coming soon (hopefully).

Usage

The module provides a context manager named prevent_sleep(). The computer will not go to sleep while the execution is in this context:

from adrenaline import prevent_sleep


with prevent_sleep():
    # do something important here
    ...

Optionally, you can also prevent the screen from turning off:

with prevent_sleep(display=True):
    # do something important here
    ...

Command line interface

You can also use this module from the command line as follows:

$ python -m adrenaline

The command line interface will prevent sleep mode as long as it is running.

Acknowledgments

Thanks to Michael Lynn for figuring out how to do this on macOS. Thanks to Niko Pasanen for the Windows version. |
adrest | Adrest is Another Django REST, a Django application for simply building HTTP REST APIs. Documentation is under construction.

Requirements

- Python 2.7
- Django (1.5, 1.6, 1.7)

Installation

ADRest should be installed using pip:

pip install adrest

Quick start

from adrest import Api, ResourceView
api = Api('v1')
@api.register
class BookResource(ResourceView):
class Meta:
allowed_methods = 'get', 'post'
model = 'app.book'
urlpatterns = api.urlsSetupAdrest settings (default values):# Enable logs
ADREST_ACCESS_LOG = False
# Auto create adrest access key for User
ADREST_AUTO_CREATE_ACCESSKEY = False
# Max resources per page in list views
ADREST_LIMIT_PER_PAGE = 50
# Display django standart technical 500 page
ADREST_DEBUG = False
# Limit request number per second from same identifier, null is not limited
ADREST_THROTTLE_AT = 120
ADREST_THROTTLE_TIMEFRAME = 60
# We do not restrict access for OPTIONS request
ADREST_AUTHENTICATE_OPTIONS_REQUEST = False

Note

Add 'adrest' to INSTALLED_APPS

Use adrest

See test/examples in ADREST sources.

Bug tracker

If you have any suggestions, bug reports or annoyances please report them to the issue tracker at https://github.com/klen/adrest/issues

Contributing

Development of adrest happens at github: https://github.com/klen/adrest

Contributors

- klen (Kirill Klenov)

License

Licensed under a GNU lesser general public license. |
adrf | Async Django REST framework

Async support for Django REST framework.

Requirements

- Python 3.8+
- Django 4.1+

We highly recommend and only officially support the latest patch release of each Python and Django series.

Installation

Install using pip...

pip install adrf

Add 'adrf' to your INSTALLED_APPS setting.

INSTALLED_APPS = [
    ...
    'adrf',
]

Examples

Async Views

When using Django 4.1 and above, this package allows you to work with async class and function based views.

For class based views, all handler methods must be async, otherwise Django will raise an exception. For function based views, the function itself must be async.

For example:

from adrf.views import APIView

class AsyncAuthentication(BaseAuthentication):
    async def authenticate(self, request) -> tuple[User, None]:
        return user, None

class AsyncPermission:
    def has_permission(self, request, view) -> bool:
        if random.random() < 0.7:
            return False
        return True

class AsyncThrottle(BaseThrottle):
    def allow_request(self, request, view) -> bool:
        if random.random() < 0.7:
            return False
        return True

    def wait(self):
        return 3

class AsyncView(APIView):
    authentication_classes = [AsyncAuthentication]
    permission_classes = [AsyncPermission]
    throttle_classes = [AsyncThrottle]

    async def get(self, request):
        return Response({"message": "This is an async class based view."})


from adrf.decorators import api_view

@api_view(['GET'])
async def async_view(request):
    return Response({"message": "This is an async function based view."})

Async ViewSets

For viewsets, all handler methods must be async too.

views.py

from django.contrib.auth import get_user_model
from rest_framework.response import Response

from adrf.viewsets import ViewSet


User = get_user_model()


class AsyncViewSet(ViewSet):

    async def list(self, request):
        return Response({"message": "This is the async `list` method of the viewset."})

    async def retrieve(self, request, pk):
        user = await User.objects.filter(pk=pk).afirst()
        return Response({"user_pk": user and user.pk})

urls.py

from django.urls import path, include
from rest_framework import routers

from . import views

router = routers.DefaultRouter()
router.register(r"async_viewset", views.AsyncViewSet, basename="async")

urlpatterns = [
    path("", include(router.urls)),
]

Async Serializers

serializers.py

from adrf.serializers import Serializer
from rest_framework import serializers

class AsyncSerializer(Serializer):
    username = serializers.CharField()
    password = serializers.CharField()
    age = serializers.IntegerField()

views.py

from . import serializers
from adrf.views import APIView

class AsyncView(APIView):
    async def get(self, request):
        data = {
            "username": "test",
            "password": "test",
            "age": 10,
        }
        serializer = serializers.AsyncSerializer(data=data)
        serializer.is_valid()
        return Response(await serializer.adata) |
adria | adria-pyProgrammer-friendly static site generator |
adriamanu-test-radarly | Failed to fetch description. HTTP Status Code: 404 |
adrian | No description available on PyPI. |
adrianAppTeste | Teste |
adrian.cgen | Failed to fetch description. HTTP Status Code: 404 |
adrian-databricks | No description available on PyPI. |
adrian-geotools | Failed to fetch description. HTTP Status Code: 404 |
adrianna | A powerful lib for development in machine learning

Install:

$ pip install adrianna -U

Examples:

Bases

Neuro V1 (binary)

from adrianna.neuro.base_v1 import NeuralNetwork
import numpy as np

# Usage example
if __name__ == "__main__":
    # Input and output data
    X = np.array([[3, 1, 2], [1, 24, 5], [2, 42, 5], [2, 23, 3]])
    y = np.array([[1], [0], [1], [0]])

    # Creation and training of the neural network
    input_size = X.shape[1]   # number of columns of the lists
    hidden_size = 4           # number of neurons in the hidden layer
    output_size = y.shape[1]  # number of outputs

    neural_net = NeuralNetwork(input_size, hidden_size, output_size, learning_rate=0.1)
    neural_net.train(X, y, epochs=10000)

    # Making predictions
    predictions = neural_net.predict(X)
    predictions = np.round(predictions).astype(int)

    print("Predictions:")
    print(predictions)

Results:

...
Epoch 8000, Loss: 0.2617957
Epoch 8100, Loss: 0.1830651
Epoch 8200, Loss: 0.1800935
Epoch 8300, Loss: 0.2585192
Epoch 8400, Loss: 0.3310709
Epoch 8500, Loss: 0.3215849
Epoch 8600, Loss: 0.1803035
Epoch 8700, Loss: 0.1802555
Epoch 8800, Loss: 0.1807692
Epoch 8900, Loss: 0.3312975
Epoch 9000, Loss: 0.1800601
Epoch 9100, Loss: 0.2642676
Epoch 9200, Loss: 0.1930688
Epoch 9300, Loss: 0.3279387
Epoch 9400, Loss: 0.1871483
Epoch 9500, Loss: 0.1809427
Epoch 9600, Loss: 0.2635577
Epoch 9700, Loss: 0.1855325
Epoch 9800, Loss: 0.1796770
Epoch 9900, Loss: 0.1800897

Predictions:
[[1]
 [0]
 [1]
 [1]] |
adrianopdf | This is the homepage of our project. |
adrianpdf | This is the home page of our project |
adrians-geotools | No description available on PyPI. |
adrmdr | No description available on PyPI. |
adroit | adroit- Ansible Docker Role TestingHeavily opinionated tool for testing Ansible roles using Docker containers.Assumptions and limitationsThese are the current assumptions about your Ansible codebase which might prevent you from using Adroit. They are subject to change or improve.You only deploy (or only want to test) on modern systems with systemd as their init system.You have abaserole which other roles can build upon. (If you don't need this, you can just have an emptyroles/basedirectory).With the exception of depending on the base role, your Ansible roles are atomic, indepentent, and can be applied individually.include_roleand dependencies defined inmetashould still work, though.Feel free to open a Github issue about any limitations that prevent you from using Adroit.How it worksAdroit builds acore imagebased on your distro of choice.A container based on the core image is created. Thebaserole will be applied to the container, and it is saved as thebase image.For each role you want to test, a container based on the base image is started, and the role under test will be applied.Adroit will check if the role playbook fails, and will also run the playbook a second time to test for idempotency - if there are any changes on the second run, we consider it a failure.PrecautionsTo properly test Ansible using Docker containers, systemd needs to be running inside the containers. This requires the containers to run in privileged mode. There is a security risk involved here, check your base images and playbooks accordingly.UsageIn a virtualenv or whatever you prefer:pip install adroitIn the root directory of your Ansible tree structure, which should at least contain arolesdirectory, run this command:adroit-ddebian:stretchmyroleWheredebian:stretchis the image you want to base your tests on. Currently supported are Debian, Ubuntu and CentOS.Customizing your roles for testingCertain tasks simply cannot be ran inside a Docker container - for example, mounting/procwithhidepid=2. You should add awhenclause to these tasks. Example:-when:ansible_virtualization_type != 'docker'import_tasks:configure_network.ymlIf you need certain variables to be set which aren't indefaultsorvarsbut should be set during testing, you can create a file likeroles/myrole/testing/test_vars.ymland it will be applied when testing that particular role.LicenseThe contents of this repository is released under theMIT license. See the LICENSE file included for details. |
adropinthebucket | adropinthebucketTable of ContentsInstallationLicenseInstallationpip install adropinthebucketLicenseadropinthebucketis distributed under the terms of theMITlicense. |
adrt | Approximate Discrete Radon TransformFast approximate discrete Radon transform forNumPyarrays.Documentation:https://adrt.readthedocs.io/en/latest/Source Code:https://github.com/karlotness/adrtBug Reports:https://github.com/karlotness/adrt/issuesThis library provides an implementation of an approximate discrete
Radon transform (ADRT) and related routines as a Python module
operating on NumPy arrays. Implemented routines include: the forward
ADRT, a back-projection operation, and several inverse transforms. The
package documentation contains usage examples, and sample applications.

Installation

Install from PyPI using pip:

$ python -m pip install adrt

For further details on installation or building from source, consult the documentation.

References

This implementation is based on descriptions in several publications:

- Martin L. Brady, A Fast Discrete Approximation Algorithm for the Radon Transform Related Databases, SIAM Journal on Computing, 27.
- William H. Press, Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines, Proceedings of the National Academy of Sciences, 103.
- Donsub Rim, Exact and fast inversion of the approximate discrete Radon transform from partial data, Applied Mathematics Letters, 102.

License

This software is distributed under the 3-clause BSD license. See
LICENSE.txt for the license text.We also make available several pre-built binary copies of this
software. The binary build for Windows includes additional license
terms for runtime code included as part of the software. Review the
LICENSE.txt file in the binary build package for more information. |
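The adrt entry above names the forward ADRT and a back-projection among its routines; as a hedged sketch, basic usage might look like the following. The function names adrt.adrt and adrt.bdrt, and the square power-of-two input size, are assumptions drawn from the package's own documentation rather than from this page.

import numpy as np
import adrt

# The ADRT operates on square arrays; a power-of-two size is assumed here
img = np.random.default_rng(0).random((32, 32))

sinogram = adrt.adrt(img)   # forward approximate discrete Radon transform
back = adrt.bdrt(sinogram)  # back-projection of the transform output
print(sinogram.shape)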
adr-tools-python | READMEThis is a project to get a python equivalent of the adr-tools by npryce ongithub. The tool can make and list and change Architecture Decision Records. For more information on Architecture Decision Records see the page ofJoel Parker Henderson on ADRs.Installationpip install adr-tools-pythonorpython3 -m pip install adr-tools-python --userBy adding a--upgradeflag, the tool can be updated if a new version is availableUsageadr-initWithadr-init, the directory structure can be initialized. Default, a subdircectorydoc/adris generated, but if a different directory is wished for, this can be input:adr-init fooIn this case, adrs will be stored in a local folderfoo/. In the main directory, a file called.adr-diris generated to indicate toadr-toolsthat a different location than the defaultdoc/adr/is used. This behaviour was copied from, and should be compatible with the originaladr-tools.adr-initalways creates a new adr to say that adrs will be used.adr-newA subject should be given for a new adr:> adr-new create equal animals
> adr-list
doc/adr/0001-record-architecture-decisions.md
doc/adr/0002-create-equal-animals.md
>ADRs can be superceded from the command line using the-soption, and be linked by using the-loption.From the documentation ofadr-tools:Multiple -s and -l options can be given, so that the new ADR can supercedeor link to multiple existing ADRs.E.g. to create a new ADR with the title "Use MySQL Database":adr new Use MySQL DatabaseE.g. to create a new ADR that supercedes ADR 12:adr new -s 12 Use PostgreSQL DatabaseE.g. to create a new ADR that supercedes ADRs 3 and 4, and amends ADR 5:adr new -s 3 -s 4 -l "5:Amends:Amended by" Use Riak CRDTs to cope with scaleThe same funcitonality is also available in this python versionadr-listSee above, lists the adrs.Serving the adrsIf you want the ADRs to be served on a webpage, please look for the python package [adr-viewer](https://pypi.org/project/adr-viewer/Source, contributionThe source code is available onbitbucket. If you're interested in collaborating let me know, and/or send a merge request.ThanksThanks to Michael Nygard for the originalidea of ADRs, WesleyKS for his work onadre(which was inspiring, but not the road I followed), and of course to Npryce for making and documenting thebash toolchainI tried to replicate in Python. |
adrubix | Status Compatibilities Contact

AdRubix

Package allowing to create RubixHeatmap objects for plotting complex, highly customizable heatmaps with metadata. The interest of such a visualization is to highlight clusters in data and to track any patterns vis-à-vis metadata.

► You can easily test the AdRubix tool on your data with this friendly Streamlit GUI before integrating it into your projects code-wise.

Example of a heatmap created using AdRubix:

Input

Three input files (CSV) or pandas DataFrames (in any combination) are expected:

Main data

Generally comes clusterized: for example, by applying AdNMTF to raw data.
Example A (see figure above): rows = genes, columns = cell groups for each patient
Example B: rows = biomarkers at different timepoints, columns = patients

Metadata for rows

Index of these metadata should correspond to the index of main data
(at least partially, in which case the plot will only keep the matching rows).Example A : column 1 = gene group, column 2 = geneExample B : column 1 = timepoint, column 2 = biomarkerMetadata for columnsIndex of these metadata should correspond to thecolumnsof main data
(at least partially, in which case the plot will only keep the matching columns).Example A : column 1 = patient, column 2 = cell typeExample B : column 1 = score (Y/N), column 2 = treatment, column 3 = clusterThe resulting plot layout is composed of the following elements, all rendered usingholoviews.HeatMap()and fine-tuned via Bokeh plot parameters :#### [CA] ####
[RA] [MP] [RL]
#### [CL] ####[MP]main plot(with colorbar on the right)[RA]row annotations(from metadata for rows)[CA]column annotations(from metadata for columns) : can be duplicated under the main plot for long DFs[RL]row legend(RA explained) : optional[CL]column legend(CA explained) : optional####white space fillerOutputplot()method of the class will save :HTML plotwith an interactive toolbar enabling zooming into main heatmap and metadataPNG imagecorresponding to the HTML plot (without toolbar) : ifsave_pngevaluates to TrueWithplot_save_pathspecified, HTML and PNG are saved according to it,
With plot_save_path specified, HTML and PNG are saved according to it; otherwise, HTML only is saved in the current working directory, to be able to show the plot.

HTML toolbar

The image above gives an example of the toolbar for an AdRubix HTML plot. It comprises the following Bokeh tools, top to bottom:

Box Zoom (activated by default) : drag & drop to select a rectangular area for zooming in
Pan : drag to move a zoomed-in image around
Wheel Zoom : zoom in or out with your mouse wheel
Reset to the initial view (after any combination of zoom and pan)
Crosshairs from mouse location (activated by default)

You can activate/deactivate any zoom, pan or crosshairs tool by clicking on it.

WARNING. When using the row_labels_for_highlighting parameter, zoom can only work linked between main data and column annotations. With row_labels_for_highlighting=None, zoom is always linked between main data and both row and column annotations.

Requirements for saving PNG

To be able to save plots as PNG files, ideally you should have :

Firefox web browser and geckodriver installed on your machine
Folders with the executables of Firefox and geckodriver added to your system PATH environment variable

Adding new locations to system PATH on a Windows machine
Adding new locations to system PATH on a Linux machine
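On Linux, for instance, this can be as simple as appending a line like the following to your shell profile (the install locations shown are illustrative):

export PATH="$PATH:/opt/firefox:/opt/geckodriver"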
Main parameters

Default values are bolded, where applicable.

Data input and plot output

data (DF) or data_file (CSV file name)
metadata_rows (DF) or metadata_rows_file (CSV file name)
metadata_cols (DF) or metadata_cols_file (CSV file name)
data_path required if any of the [...]_file parameters are used. Do not forget a slash at the end of the path. Also, if you work on a Windows machine, be sure to use double backslashes \\ instead of single slashes.
[ optional ] plot_save_path = path to the HTML file to be saved, including its name. If None is provided, HTML is saved in the current working directory under the name <your_python_script_name>.html and automatically opened in a web browser.
[ optional ] save_png = True/False or 1/0. PNG image will be saved in the same folder as HTML, under the same name except for the extension .png

Data scaling and normalization + Dataprep

NB. It is still preferred that you do data scaling and/or normalization externally, before using RubixHeatmap, in order to have more control and transparency over your data.

NB. If you go for it, for one axis you must choose between scale_along and normalize_along.
You cannot use both simultaneously along the same axis.

[ optional ] color_scaling_quantile = quantile for getting rid of outliers (in %), default 95, accepted 80...100. Applied to both the scale_along and normalize_along options.
When applied to scale_along, color_scaling_quantile=95 will cap top (> 95% quantile) values.
When applied to normalize_along, color_scaling_quantile=95 will cap both top (> 97.5% quantile) and bottom (< 2.5% quantile) values before normalizing data (see below).
[ optional ] scale_along = "columns"/"rows" or 0/1 for scaling and capping data along the specified axis. Default : None = do nothing.
[ optional ] normalize_along = "columns"/"rows" or 0/1 for scaling and capping + normalizing data along the specified axis : (x - median(x) by column or row) / MAD(x) by column or row, where MAD is the median absolute deviation. Default : None = do nothing.
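As a rough illustration of what normalize_along="rows" computes according to the formula above (a minimal pandas sketch of the math, not AdRubix's actual implementation):

import pandas as pd

def normalize_rows(df: pd.DataFrame, quantile: float = 95) -> pd.DataFrame:
    # Symmetric capping: quantile=95 clips at the 2.5% and 97.5% quantiles
    tail = (100 - quantile) / 2 / 100
    capped = df.clip(df.quantile(tail, axis=1), df.quantile(1 - tail, axis=1), axis=0)
    # Row-wise (x - median(x)) / MAD(x)
    med = capped.median(axis=1)
    mad = capped.sub(med, axis=0).abs().median(axis=1)
    return capped.sub(med, axis=0).div(mad, axis=0)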
[ optional ] data_rows_to_drop, data_cols_to_drop = lists of the names of rows/columns in main data not intended to be plotted. Nonexistent names will be skipped without raising an error.

Colorbar

[ optional ] colorbar_title (no title by default)
[ optional ] colorbar_height, colorbar_location = "top"/"center"/"bottom" (always to the right of the main plot)
[ optional ] show_colorbar = True/False

Metadata

[ optional ] show_metadata_rows = True/False
[ optional ] show_metadata_rows_labels = True/False (font size is adapted to main dataframe length and to heatmap height, between 5pt and 10pt)
[ optional ] show_metadata_cols = True/False
[ optional ] duplicate_metadata_cols = True/False/None (if None, set automatically to True for DFs longer than 70 rows)

Legends

[ optional ] show_rows_legend = True/False
[ optional ] show_cols_legend = True/False

Plot dimensions (in terms of the main heatmap)

[ optional ] heatmap_width, heatmap_height : either sizes in pixels, or one size and the other "proportional".
If neither is specified, plot dimensions will be proportional to the DF size (6 screen pixels per row or column).

Colormaps (must be known by holoviews)

NB. A separator is a row or column, or a group of rows or columns (depending on the DF size and heatmap size), inserted in the main dataframe and plotted in a specified color in order to visually separate meaningful blocks of data.

[ optional ] colormap_main (default "coolwarm" / "YlOrRd" for non-negative data)
[ optional ] colormap_metarows (default "Glasbey")
[ optional ] colormap_metacols (default "Category20")
[ optional ] nan_color (default "black") = hex color string "#xxxxxx" or named HTML color for filling NaN values in the main heatmap
[ optional ] sep_color (default "white") = hex color string "#xxxxxx" or named HTML color for filling separators in the main heatmap
[ optional ] sep_value = None / "min" / "median" / "adapt" = plot separators filled with sep_color, or with the color corresponding to the minimum or the median value of the DF, respectively. "adapt" will try to choose between "min" and "median", depending on data range and normalization.

Plot enhancement

[ optional ] metadata_rows_sep = insert row separators in the main DF and the metadata-rows DF before plotting, according to the specified column (between groups of labels with identical values).
[ optional ] metadata_cols_sep = insert column separators in the main DF and the metadata-cols DF before plotting, according to the specified row (between groups of labels with identical values).
[ optional ] row_labels_for_highlighting = list of keywords for identifying row labels to be highlighted (in red italic to the right of the heatmap). See the WARNING in the HTML toolbar section.

Example of usage

from adrubix import RubixHeatmap
import pandas as pd

main_data = pd.DataFrame(index=[...], columns=[...], data=[...])

hm = RubixHeatmap(
    data_path="/home/user/myproject/data/",
    data=main_data,
    metadata_rows_file="meta_rows.csv",
    metadata_cols_file="meta_cols.csv",
    plot_save_path="/home/user/myproject/output/plot.html",
    save_png=True,
    scale_along="columns",
    colorbar_title="my colorbar",
    colorbar_location="top",
    show_metadata_rows_labels=True,
    show_rows_legend=False,
    # duplicate_metadata_cols=False,
    colormap_main="fire",
    heatmap_width=1500,
    heatmap_height="proportional",
    data_rows_to_drop=["useless_row_1", "useless_row_2"],
    row_labels_for_highlighting=["row_keyword_A", "row_keyword_B"],
    metadata_rows_sep="Group",
    metadata_cols_sep="Subject",
    nan_color="orange",
    sep_color="green",
    # sep_value="median"
)
hm.plot() |
adr-viewer | adr-viewer

Show off your Architecture Decision Records with an easy-to-navigate web page, either as a local web-server or generated static content.

Examples

Example above using Nat Pryce's adr-tools project
This project exposes its own Architecture Decision Records here

Installation

From PyPI

$ pip install adr-viewer

From local build

adr-viewer requires Python 3.7 or higher (with Pip)

$ git clone https://github.com/mrwilson/adr-viewer
$ pip install -r requirements.txt
$ python setup.py install

Usage

Usage: adr-viewer [OPTIONS]

Options:
  --adr-path TEXT  Directory containing ADR files.  [default: doc/adr/]
  --output TEXT    File to write output to.  [default: index.html]
  --serve          Serve content at http://localhost:8000/
  --port INT       Custom server port  [default: 8000]
  --help           Show this message and exit.

The default for --adr-path is doc/adr/ because this is the default path generated by adr-tools.

Supported Record Types
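Returning to the options above, a typical invocation might look like this (directory and port are illustrative):

$ adr-viewer --adr-path doc/adr/ --serve --port 8080

The --serve flag starts the built-in server instead of writing index.html. |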
adr_writer | Overview

Create Architecture Decision Records by answering questions, to ensure consistency in formatting and thought processes.

Usage

Installation:

pip3 install adr_writer
# or
python3 -m pip install adr_writer

Bash:

python </path/to>/adr_writer

Python:

import adr_writer
adr_writer.main()

Examples

Execution
Record output
Workflow |
ads | A Python Module to Interact with NASA’s ADS that Doesn’t Suck™

If you’re in astro research, then you pretty much need NASA’s ADS. It’s tried and true, and people go crazy on the rare occasions when it goes down.

Docs: https://ads.readthedocs.io/
Repo: https://github.com/andycasey/ads
PyPI: https://pypi.python.org/pypi/ads

Quickstart

>>> import ads
>>> ads.config.token = 'secret token'
>>> papers = ads.SearchQuery(q="supernova", sort="citation_count")
>>> for paper in papers:
>>> print(paper.title)
[u'Maps of Dust Infrared Emission for Use in Estimation of Reddening and Cosmic Microwave Background Radiation Foregrounds']
[u'Measurements of Omega and Lambda from 42 High-Redshift Supernovae']
[u'Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant']
[u'First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Determination of Cosmological Parameters']
[u'Abundances of the elements: Meteoritic and solar']

Running Tests

> cd /path/to/ads
> python -m unittest discover |
ads1015 | ADS1015 4 channel differential/single-ended ADC

Installing

Stable library from PyPi:

Just run python3 -m pip install ads1015

Latest/development library from GitHub:

git clone https://github.com/pimoroni/ads1015-python
cd ads1015-python
./install.sh --unstable

1.0.0

Enhancement: Repackage to pyproject.toml/hatchling
Isort and black code formatting

0.0.8

Add thread-safe wrapper around ADC reads
Minor spelling fixes

0.0.7

Fix setting data rate
Add support for ADS1115
Add new detect_chip_type function

0.0.6

Added support for all addresses ads1015 supports
Genericized implementation away from pimoroni breakout
Typo fixes in docstring
Fix get_multiplexer so that it returns a value

0.0.5

Fix to support alternate i2c address
Typo fixes in DocString and comment

0.0.4

Port to i2cdevice>=0.0.6 set/get API

0.0.3

Fixed timeout in wait_for_conversion
Aliased timeout exception to ads1015.ADS1015TimeoutError

0.0.2

Fixed Python 2.7 bug with missing TimeoutError

0.0.1

Initial Release
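A minimal single-shot read, sketched from the project's published examples (method names and the channel string are assumptions and may differ between the versions listed above):

from ads1015 import ADS1015

ads = ADS1015()
print(ads.detect_chip_type())      # chip detection added in 0.0.7
ads.set_mode("single")             # single-shot conversions
ads.set_programmable_gain(2.048)   # full-scale range, in volts
ads.set_sample_rate(1600)
reference = ads.get_reference_voltage()
value = ads.get_voltage(channel="in0/ref")
print(value, reference)

If a call raises AttributeError, check the examples shipped with your installed version. |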
ads1118 | # ADS1118
A library that uses hardware SPI to communicate with an ADS1118 |