amazon-kinesis-utils
A library of useful utilities for Amazon Kinesis.

Reference

See: https://amazon-kinesis-utils.readthedocs.io/en/latest/

Usage

```python
# import the submodule you want to use with a from import
from amazon_kinesis_utils import kinesis

def lambda_handler(event, context):
    raw_records = event['Records']

    # kinesis.parse_records parses aggregated/non-aggregated records,
    # with or without gzip compression. It even unpacks CloudWatch Logs
    # subscription filter messages.
    for payload in kinesis.parse_records(raw_records):
        # kinesis.parse_records is a generator, so we only have one payload
        # in memory on every iteration
        print(f"Decoded payload: {payload}")
```

Contributing

Make sure to have the following tools installed:

- pre-commit
- Sphinx for docs generation

macOS:

```
$ brew install pre-commit
# set up pre-commit hooks by running the below command in the repository root
$ pre-commit install
# install sphinx
$ pip install sphinx sphinx_rtd_theme
```
amazon-lex-bot-deploy
Amazon Lex Bot Deploy

The sample code provides a deploy function and an executable to easily deploy an Amazon Lex bot based on a Lex Schema file.

License Summary

This sample code is made available under a modified MIT license. See the LICENSE file.

Deploy and export Amazon Lex Schema bots easily. Maintain bots in source control, and share and use them in CI/CD processes:

```
pip install amazon-lex-bot-deploy
```

then:

```
lex-bot-deploy --example BookTrip
```

To get the JSON schema easily:

```
lex-bot-get-schema --lex-bot-name BookTrip
```

And you can specify which schema you would like to deploy, obviously:

```
lex-bot-deploy -s BookTrip_Export.json
```

For an example of how to use the API, check the CLI command: https://github.com/aws-samples/amazon-lex-bot-deploy/blob/master/bin/lex-bot-deploy

Free software: MIT-0 license (https://github.com/aws/mit-0)
Documentation: https://lex-bot-deploy.readthedocs.io

Features

Let me know :-)

Thoughts:
- make creation of permissions optional
- allow mapping of Lambda endpoints or allow options to map aliases to Lambda functions (tbd)

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.1.8 (2020-04-02): fix dependency on legacy "from botocore.vendored import requests" (thx @davidfsmith)
0.1.7 (2020-01-10): fix honoring TTL (thx @AlFalahTaieb)
0.1.0 (2018-12-07): First release on PyPI.
amazon-lex-bot-test
Test Amazon Lex bots easily by defining business requirements through conversations:

```
lex-bot-test --example BookTrip
lex-bot-test --test-file <your-test-file> --alias test --region us-west-2 --verbose
```

Sample Test Definitions

This should give you an idea of how to define your test cases. Essentially, you can test the response attributes and use Python patterns in the test conditions. (This is not an example from the samples, as those don't have session attributes.)

```yaml
name: test-lex-bot
description: Regression tests for the Amazon Lex bot
botName: your-bot-name
botAlias: test
waitBetweenRequestsMillis: 0
sequences:
  - name: book a car with all defaults
    description: book a car with all defaults
    sequence:
      - utterance: "book a car with all my defaults"
        postConditions:
          message:
            - "Ok, Pickup of economy class car tomorrow and drop off in 2 days in new york?"
      - utterance: "yes"
        postConditions:
          dialogState: ReadyForFulfillment
          slots:
            CarType: economy
            DriverAge: "38"
            PickUpCity: new york
            PickUpDate: ".*"
            ReturnDate: ".*"
          sessionAttributes:
            - name: cart
              pattern: ".*total: $33.35.*"
```

Free software: MIT-0 license
Documentation: https://amazon-lex-bot-test.readthedocs.io

Features

TODO

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.1.0 (2018-12-12): First release on PyPI.
amazon-lex-helper
No description available on PyPI.
amazon-login
No description available on PyPI.
amazon-mail-sender
Failed to fetch description. HTTP Status Code: 404
amazon-management-page-parser
Currently only supports seller profile page parsing. All fields use the same encoding, 'utf-8'.

Seller profile fields:

- name
- storefront_url
- feedbacks
  - 30days: positive, neutral, negative, count
  - 90days: positive, neutral, negative, count
  - 12mons: positive, neutral, negative, count
  - lifetime: positive, neutral, negative, count

Content

amazon_management_page_parser.seller_profile_parser.SellerProfileParser (see the usage sketch at the end of this entry)

Installation

The simplest way is to install it via pip:

```
pip install amazon-management-page-parser
```

Run Test

```
pip install -r requirements-dev.txt
tox
```
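A minimal usage sketch for the SellerProfileParser named above. The constructor argument (raw profile-page HTML) and the parse() method are assumptions for illustration; the description does not document the actual signatures.

```python
# a minimal sketch, assuming SellerProfileParser accepts the raw seller
# profile-page HTML and exposes a parse() method returning the fields
# listed above -- both assumptions, not documented API
from amazon_management_page_parser.seller_profile_parser import SellerProfileParser

with open('seller_profile.html', encoding='utf-8') as f:
    parser = SellerProfileParser(f.read())  # hypothetical constructor signature

profile = parser.parse()  # hypothetical method
print(profile['name'], profile['storefront_url'])
```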
amazon-mws
** DISCLAIMER ** This API is in constant development. Do not rely on it too much until it is in a stable release. All except the last two APIs (InboundShipment and OutboundShipment) are complete. Help towards completing these last two APIs would be greatly appreciated. I will mark this as stable 1.0 once all tests have been completed.

Forked from [czpython/python-amazon-mws](https://github.com/czpython/python-amazon-mws).

# Python Amazon MWS

Python Amazon MWS is a Python interface for the Amazon MWS API. I wrote it to help me upload my products to Amazon. However, seeing its potential, I decided to expand it in order for it to cover most (if not all) operations in the Amazon MWS.

This is still an ongoing project. If you would like to contribute, see below :).

It is based on [amazon-mws-python](http://code.google.com/p/amazon-mws-python).

Checkout the documentation [here](https://python-amazon-mws.readthedocs.org/latest/). You can read the official Amazon MWS documentation [here](https://developer.amazonservices.com/).

# To-Do

- Improve README
- Create tests
- Finish InboundShipments & OutboundShipments APIs
- Finish Docs
amazon-omics-tools
AWS HealthOmics Tools

Tools for working with the Amazon Omics Service.

Using the Omics Transfer Manager

Installation

Amazon Omics Tools is available through pypi. To install, type:

```
pip install amazon-omics-tools
```

Basic Usage

The TransferManager class makes it easy to download files for an Omics reference or read set. By default the files are saved to the current directory, or you can specify a custom location with the directory parameter.

```python
import boto3
from omics.common.omics_file_types import ReadSetFileName, ReferenceFileName, ReadSetFileType
from omics.transfer.manager import TransferManager
from omics.transfer.config import TransferConfig

REFERENCE_STORE_ID = "<my-reference-store-id>"
SEQUENCE_STORE_ID = "<my-sequence-store-id>"

client = boto3.client("omics")
manager = TransferManager(client)

# Download all files for a reference.
manager.download_reference(REFERENCE_STORE_ID, "<my-reference-id>")

# Download all files for a read set to a custom directory.
manager.download_read_set(SEQUENCE_STORE_ID, "<my-read-set-id>", "my-sequence-data")
```

Download specific files

Specific files can be downloaded via the download_reference_file and download_read_set_file methods. The client_fileobj parameter can be either the name of a local file to create for storing the data, or a TextIO or BinaryIO object that supports write methods.

```python
# Download a specific reference file.
manager.download_reference_file(REFERENCE_STORE_ID, "<my-reference-id>", ReferenceFileName.INDEX)

# Download a specific read set file with a custom filename.
manager.download_read_set_file(
    SEQUENCE_STORE_ID,
    "<my-read-set-id>",
    ReadSetFileName.INDEX,
    "my-sequence-data/read-set-index",
)
```

Upload specific files

Specific files can be uploaded via the upload_read_set method. The fileobjs parameter can be either the name of a local file, or a TextIO or BinaryIO object that supports read methods. For paired end reads, you can define fileobjs as a list of files.

```python
# Upload a specific read set file.
read_set_id = manager.upload_read_set(
    "my-sequence-data/read-set-file.bam",
    SEQUENCE_STORE_ID,
    "BAM",
    "name",
    "subject-id",
    "sample-id",
    "<my-reference-arn>",
)

# Upload paired end read set files.
read_set_id = manager.upload_read_set(
    ["my-sequence-data/read-set-file_1.fastq.gz", "my-sequence-data/read-set-file_2.fastq.gz"],
    SEQUENCE_STORE_ID,
    "FASTQ",
    "name",
    "subject-id",
    "sample-id",
    "<my-reference-arn>",
)
```

Subscribe to events

Transfer events (on_queued, on_progress, and on_done) can be observed by defining a subclass of OmicsTransferSubscriber and passing in an object which can receive events.

```python
class ProgressReporter(OmicsTransferSubscriber):
    def on_queued(self, **kwargs):
        future: OmicsTransferFuture = kwargs["future"]
        print(f"Download queued: {future.meta.call_args.fileobj}")

    def on_done(self, **kwargs):
        print("Download complete")

manager.download_read_set(SEQUENCE_STORE_ID, "<my-read-set-id>", subscribers=[ProgressReporter()])
```

Threads

Transfer operations use threads to implement concurrency. Thread use can be disabled by setting the use_threads attribute to False. If thread use is disabled, transfer concurrency does not occur; accordingly, the value of the max_request_concurrency attribute is ignored.

```python
# Disable thread use/transfer concurrency
config = TransferConfig(use_threads=False)
manager = TransferManager(client, config)
manager.download_read_set(SEQUENCE_STORE_ID, "<my-read-set-id>")
```

Using the Omics URI Parser

Basic Usage

The OmicsUriParser class makes it easy to parse Omics read set and reference URIs, extracting the fields relevant for calling AWS Omics APIs.

Read set file URI

Read set file URIs come in the following format:

```
omics://<AWS_ACCOUNT_ID>.storage.<AWS_REGION>.amazonaws.com/<SEQUENCE_STORE_ID>/readSet/<READSET_ID>/<SOURCE1/SOURCE2>
```

For example:

```
omics://123412341234.storage.us-east-1.amazonaws.com/5432154321/readSet/5346184667/source1
omics://123412341234.storage.us-east-1.amazonaws.com/5432154321/readSet/5346184667/source2
```

Reference file URI

Reference file URIs come in the following format:

```
omics://<AWS_ACCOUNT_ID>.storage.<AWS_REGION>.amazonaws.com/<REFERENCE_STORE_ID>/reference/<REFERENCE_ID>/source
```

For example:

```
omics://123412341234.storage.us-east-1.amazonaws.com/5432154321/reference/5346184667/source
```

```python
import boto3
from omics.uriparse.uri_parse import OmicsUriParser, OmicsUri

READSET_URI_STRING = "omics://123412341234.storage.us-east-1.amazonaws.com/5432154321/readSet/5346184667/source1"
REFERENCE_URI_STRING = "omics://123412341234.storage.us-east-1.amazonaws.com/5432154321/reference/5346184667/source"

client = boto3.client("omics")

readset = OmicsUriParser(READSET_URI_STRING).parse()
reference = OmicsUriParser(REFERENCE_URI_STRING).parse()

# use the parsed fields from the URIs to call omics APIs:
manager = TransferManager(client)

# Download all files for a reference.
manager.download_reference(reference.store_id, reference.resource_id)

# Download all files for a read set to a custom directory.
manager.download_read_set(readset.store_id, readset.resource_id, readset.file_name)

# Download a specific read set file with a custom filename.
manager.download_read_set_file(
    readset.store_id,
    readset.resource_id,
    readset.file_name,
    "my-sequence-data/read-set-index",
)
```

Using the Omics Rerun tool

Basic Usage

The omics-rerun tool makes it easy to start a new run execution from a CloudWatch Logs manifest.

List runs from manifest

The following example lists all workflow run ids which were completed on July 1st (UTC time):

```
> omics-rerun -s 2023-07-01T00:00:00 -e 2023-07-02T00:00:00
1234567 (2023-07-01T12:00:00.000)
2345678 (2023-07-01T13:00:00.000)
```

Rerun a previously-executed run

To rerun a previously-executed run, specify the run id you would like to rerun:

```
> omics-rerun 1234567
StartRun request:
{
  "workflowId": "4974161",
  "workflowType": "READY2RUN",
  "roleArn": "arn:aws:iam::123412341234:role/MyRole",
  "parameters": {
    "inputFASTQ_2": "s3://omics-us-west-2/sample-inputs/4974161/HG002-NA24385-pFDA_S2_L002_R2_001-5x.fastq.gz",
    "inputFASTQ_1": "s3://omics-us-west-2/sample-inputs/4974161/HG002-NA24385-pFDA_S2_L002_R1_001-5x.fastq.gz"
  },
  "outputUri": "s3://my-bucket/my-path"
}
StartRun response:
{
  "arn": "arn:aws:omics:us-west-2:123412341234:run/3456789",
  "id": "3456789",
  "status": "PENDING",
  "tags": {}
}
```

It is possible to override a request parameter from the original run. The following example tags the new run, which is particularly useful as tags are not propagated from the original run.

```
> omics-rerun 1234567 --tag=myKey=myValue
StartRun request:
{
  "workflowId": "4974161",
  "workflowType": "READY2RUN",
  "roleArn": "arn:aws:iam::123412341234:role/MyRole",
  "parameters": {
    "inputFASTQ_2": "s3://omics-us-west-2/sample-inputs/4974161/HG002-NA24385-pFDA_S2_L002_R2_001-5x.fastq.gz",
    "inputFASTQ_1": "s3://omics-us-west-2/sample-inputs/4974161/HG002-NA24385-pFDA_S2_L002_R1_001-5x.fastq.gz"
  },
  "outputUri": "s3://my-bucket/my-path",
  "tags": {
    "myKey": "myValue"
  }
}
StartRun response:
{
  "arn": "arn:aws:omics:us-west-2:123412341234:run/4567890",
  "id": "4567890",
  "status": "PENDING",
  "tags": {
    "myKey": "myValue"
  }
}
```

Before submitting a rerun request, it is possible to dry-run to view the new StartRun request:

```
> omics-rerun -d 1234567
StartRun request:
{
  "workflowId": "4974161",
  "workflowType": "READY2RUN",
  "roleArn": "arn:aws:iam::123412341234:role/MyRole",
  "parameters": {
    "inputFASTQ_2": "s3://omics-us-west-2/sample-inputs/4974161/HG002-NA24385-pFDA_S2_L002_R2_001-5x.fastq.gz",
    "inputFASTQ_1": "s3://omics-us-west-2/sample-inputs/4974161/HG002-NA24385-pFDA_S2_L002_R1_001-5x.fastq.gz"
  },
  "outputUri": "s3://my-bucket/my-path"
}
```

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.
amazon-orders
Amazon Orders

amazon-orders is an unofficial library that provides a command line interface alongside a programmatic API that can be used to interact with Amazon.com's consumer-facing website.

This works by parsing website data from Amazon.com. A nightly build validates functionality to ensure its stability, but as Amazon provides no official API to use, this package may break at any time. This package only supports the English version of the website.

Installation

amazon-orders is available on PyPI and can be installed using pip:

```
pip install amazon-orders
```

That's it! amazon-orders is now available as a Python package and from the command line.

Basic Usage

Execute amazon-orders from the command line with:

```
amazon-orders --username <AMAZON_EMAIL> --password <AMAZON_PASSWORD> history
```

Or use amazon-orders programmatically:

```python
from amazonorders.session import AmazonSession
from amazonorders.orders import AmazonOrders

amazon_session = AmazonSession("<AMAZON_EMAIL>", "<AMAZON_PASSWORD>")
amazon_session.login()

amazon_orders = AmazonOrders(amazon_session)
orders = amazon_orders.get_order_history(year=2023)

for order in orders:
    print(f"{order.order_number} - {order.grand_total}")
```

Documentation

For more advanced usage, amazon-orders's official documentation is available at http://amazon-orders.readthedocs.io.

Contributing

If you would like to get involved, be sure to review the Contribution Guide. Want to contribute financially? If you've found amazon-orders useful, sponsorship would also be greatly appreciated!
amazon-paapi5
Amazon Product Advertising API 5.0 wrapper for Python

A simple Python wrapper for the latest version of the Amazon Product Advertising API. This module allows you to get product information from Amazon using the official API in an easier way. Like Bottlenose, you can use a cache reader and writer to limit the number of API calls.

Features

- Object-oriented interface for simple usage
- Get multiple products at once
- Configurable query caching
- Compatible with Python versions 3.6 and up
- Support for AU, BR, CA, FR, IN, IT, JP, MX, ES, TR, AE, UK and US Amazon Product Advertising API endpoints
- Configurable throttling for batches of queries

Ask for new features through the issues section. Full documentation on Read the Docs.

Installation

You can install or upgrade the module with:

```
pip install amazon-paapi5 --upgrade
```

Usage guide

Search items:

```python
from amazon.paapi import AmazonAPI

amazon = AmazonAPI(KEY, SECRET, TAG, COUNTRY)
products = amazon.search_items(keywords='harry potter')
print(products['data'][0].image_large)
print(products['data'][1].prices.price)
```

Get multiple products' information:

```python
from amazon.paapi import AmazonAPI

amazon = AmazonAPI(KEY, SECRET, TAG, COUNTRY)
products = amazon.get_items(item_ids=['B01N5IB20Q', 'B01F9G43WU'])
print(products['data']['B01N5IB20Q'].image_large)
print(products['data']['B01F9G43WU'].prices.price)
```

Get variations:

```python
from amazon.paapi import AmazonAPI

amazon = AmazonAPI(KEY, SECRET, TAG, COUNTRY)
products = amazon.get_variations(asin=['B01N5IB20Q', 'B01F9G43WU'])
```

Get browse nodes:

```python
from amazon.paapi import AmazonAPI

amazon = AmazonAPI(KEY, SECRET, TAG, COUNTRY)
browseNodes = amazon.get_browse_nodes(browse_node_ids=['473535031'])
```

Use cache reader and writer:

```python
from amazon.paapi import AmazonAPI

DATA = []

def custom_save_function(url, data, http_info):
    DATA.append({'url': url, 'data': data, 'http_info': http_info})

def custom_retrieval_function(url):
    for item in DATA:
        if item["url"] == url:
            return {'data': item['data'], 'http_info': item['http_info']}
    return None

amazon = AmazonAPI(KEY, SECRET, TAG, COUNTRY,
                   CacheReader=custom_retrieval_function,
                   CacheWriter=custom_save_function)
products = amazon.search_items(keywords='harry potter')
```

Changelog

Version 1.1.2: License MIT.
Version 1.1.1: Add additional parameters to API calls.
Version 1.1.0: CacheReader and CacheWriter available for all the search functions; definition of AmazonException to get exceptions during the API calls; constants definition; AmazonProduct and AmazonBrowseNode definition; uniform data structure returned by all the API calls.
Version 1.0.0: CacheReader and CacheWriter; enable throttling.
Version 0.1.0: First release.
amazon-page-parser
Currently supported detail fields are:

- title - Title
- author - Array of authors
- feature_bullets - Array of feature bullets
- book_description - Description under offer information
- product_description - Description under editorial review
- images - Array of images
- star - Average star rating of customer reviews
- reviews - Customer review count
- rank - Sales rank in top browse node
- categories - Browse node trees; multiple tree paths are concatenated by ';'
- details - Details as key/value pairs

Offer listing fields are:

- price - Product price
- shipping_price - Shipping price
- condition - Condition
- subcondition - Subcondition
- condition_comments - Condition comments
- available - Whether the product is currently available or needs to be pre-ordered
- prime - Whether shipping supports the Prime option
- expected_shipping - Whether shipping supports the expected option
- seller_name - Seller name
- seller_rating - Seller rating
- seller_feedbacks - Seller feedback count
- seller_stars - Seller stars count
- offer_listing_id - Offer listing ID

Tracking fields are:

- carrier - Carrier name
- tracking_id - Tracking number
- is_shipped - Whether the order is shipped

(A usage sketch follows at the end of this entry.)

Installation

The simplest way is to install it via pip:

```
pip install amazon-page-parser
```

Run Test

```
pip install -r requirements-dev.txt
tox
```
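A minimal sketch of how the detail fields above might be extracted. The DetailParser name, its HTML-string constructor, and the parse() method are hypothetical illustrations; this description does not document the package's actual entry points.

```python
# a minimal sketch -- DetailParser, its constructor and parse() method are
# hypothetical names for illustration, not documented API of this package
from amazon_page_parser.parsers import DetailParser

with open('product_detail_page.html', encoding='utf-8') as f:
    parser = DetailParser(f.read())  # hypothetical constructor signature

detail = parser.parse()  # hypothetical: returns the detail fields listed above
print(detail['title'], detail['star'], detail['reviews'])
```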
amazon.partiql
This is the README.md file for partiql-python.
amazon_pay
No description available on PyPI.
amazon-photos
Amazon Photos API

Table of Contents: Installation, Setup, Examples, Search, Nodes, Restrictions, Range Queries, Notes, Known File Types, Custom Image Labeling (Optional)

It is recommended to use this API in a Jupyter Notebook, as the results from most endpoints are a DataFrame which can be neatly displayed and efficiently manipulated with vectorized ops. This becomes increasingly important if you have "large" amounts of data (e.g. >1 million photos/videos).

Installation

```
pip install amazon-photos -U
```

Output Examples

(Sample `ap.db` output elided: a DataFrame with one row per photo/video and columns including dateTimeDigitized, id, name, model, apertureValue, focalLength, width, height, and size.)

`ap.print_tree()` prints the folder hierarchy:

```
~
├── Documents
├── Pictures
│   ├── iPhone
│   └── Web
│       ├── foo
│       └── bar
├── Videos
└── Backup
    ├── LAPTOP-XYZ
    │   └── Desktop
    └── DESKTOP-IJK
        └── Desktop
```

Setup

[Update] Jan 04 2024: To avoid confusion, setting env vars is no longer supported. One must pass cookies directly as shown below.

Log in to Amazon Photos and copy the following cookies:

- session-id
- ubid*
- at*

Canada/Europe (where xx is the TLD, top-level domain):

- ubid-acbxx
- at-acbxx

United States:

- ubid_main
- at_main

E.g.

```python
from amazon_photos import AmazonPhotos

ap = AmazonPhotos(
    ## US
    # cookies={
    #     'ubid_main': ...,
    #     'at_main': ...,
    #     'session-id': ...,
    # },

    ## Canada
    # cookies={
    #     'ubid-acbca': ...,
    #     'at-acbca': ...,
    #     'session-id': ...,
    # },

    ## Italy
    # cookies={
    #     'ubid-acbit': ...,
    #     'at-acbit': ...,
    #     'session-id': ...,
    # },
)
```

Examples

A database named ap.parquet will be created during the initial setup. This is mainly used to reduce upload conflicts by checking your local file(s) md5 against the database before sending the request.

```python
from amazon_photos import AmazonPhotos

ap = AmazonPhotos(
    # see cookie examples above
    cookies={...},
    # optionally cache all intermediate JSON responses
    tmp='tmp',
    # pandas options
    dtype_backend='pyarrow',
    engine='pyarrow',
)

# get current usage stats
ap.usage()

# get entire Amazon Photos library
nodes = ap.query("type:(PHOTOS OR VIDEOS)")

# query Amazon Photos library with more filters applied
nodes = ap.query(
    "type:(PHOTOS OR VIDEOS) AND things:(plant AND beach OR moon) "
    "AND timeYear:(2023) AND timeMonth:(8) AND timeDay:(14) AND location:(CAN#BC#Vancouver)"
)

# sample first 10 nodes
node_ids = nodes.id[:10]

# move a batch of images/videos to the trash bin
ap.trash(node_ids)

# get trash bin contents
ap.trashed()

# permanently delete a batch of images/videos
ap.delete(node_ids)

# restore a batch of images/videos from the trash bin
ap.restore(node_ids)

# upload media (preserves local directory structure and copies to Amazon Photos root directory)
ap.upload('path/to/files')

# download a batch of images/videos
ap.download(node_ids)

# convenience method to get photos only
ap.photos()

# convenience method to get videos only
ap.videos()

# get all identifiers calculated by Amazon
ap.aggregations(category="all")

# get specific identifiers calculated by Amazon
ap.aggregations(category="location")
```

Search

Undocumented API; current endpoints valid as of Dec 2023. For valid location and people IDs, see the results from the aggregations() method.

- ContentType (str): "JSON"
- _ (int): e.g. 1690059771064
- asset (str): "ALL", "MOBILE", "NONE", "DESKTOP"; default: "ALL"
- filters (str): e.g. "type:(PHOTOS OR VIDEOS) AND things:(plant AND beach OR moon) AND timeYear:(2019) AND timeMonth:(7) AND location:(CAN#BC#Vancouver) AND people:(CyChdySYdfj7DHsjdSHdy)"; default: "type:(PHOTOS OR VIDEOS)"
- groupByForTime (str): "day", "month", "year"
- limit (int): 200
- lowResThumbnail (str): "true", "false"; default: "true"
- resourceVersion (str): "V2"
- searchContext (str): "customer", "all", "unknown", "family", "groups"; default: "customer"
- sort (str): "['contentProperties.contentDate DESC']", "['contentProperties.contentDate ASC']", "['createdDate DESC']", "['createdDate ASC']", "['name DESC']", "['name ASC']"; default: "['contentProperties.contentDate DESC']"
- tempLink (str): "false", "true"; default: "false"

Nodes

Docs last updated in 2015.

- isRoot (Boolean): Only lower case "true" is supported.
- name (String; sort allowed): This field does an exact match on the name and a prefix query. Consider node1 { "name" : "sample" } and node2 { "name" : "sample1" }. The query filter name:sample will return node1; name:sample* will return node1 and node2.
- kind (String; sort allowed): To search for all the nodes whose kind is FILE: kind:FILE
- modifiedDate (Date, ISO8601 format; sort allowed): To search for all the nodes modified after a given time: modifiedDate:{"2014-12-31T23:59:59.000Z" TO *]
- createdDate (Date, ISO8601 format; sort allowed): To search for all the nodes created on a date: createdDate:2014-12-31T23:59:59.000Z
- labels (String array): Only equality can be tested with arrays. If labels contains ["name", "test", "sample"], labels can be searched for one value or a combination of values. To get all nodes whose labels contain name and test: labels:(name AND test)
- description (String): To search all the nodes for a description with value 'test': description:test
- parents (String array): Only equality can be tested with arrays. If parents contains ["id1", "id2", "id3"], parents can be searched for one value or a combination of values. To get all nodes whose parents contain id1 and id2: parents:id1 AND parents:id2
- status (String; sort allowed): For searching nodes with AVAILABLE status: status:AVAILABLE
- contentProperties.size (Long; sort allowed)
- contentProperties.contentType (String; sort allowed): In a prefix query, only the major content-type (e.g. image*, video*, etc.) is supported as a prefix.
- contentProperties.md5 (String)
- contentProperties.contentDate (Date, ISO8601 format; sort allowed): Range queries and equals queries can be used with this field.
- contentProperties.extension (String; sort allowed)

Restrictions

Max # of filter parameters allowed is 8.

- Equality filters: createdDate, description, isRoot, kind, labels, modifiedDate, name, parentIds, status
- Range filters: contentProperties.contentDate, createdDate, modifiedDate
- Prefix filters: contentProperties.contentType, name

Range Queries

- GreaterThan: {"valueToBeTested" TO *}
- GreaterThan or Equal: ["valueToBeTested" TO *]
- LessThan: {* TO "valueToBeTested"}
- LessThan or Equal: {* TO "valueToBeTested"]
- Between: ["valueToBeTested_LowerBound" TO "valueToBeTested_UpperBound"]
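For instance, the range-query syntax above can be combined with the filter strings accepted by ap.query shown earlier. A minimal sketch; that the photo search endpoint accepts contentProperties.contentDate filters the same way node search does is an assumption here, not something the tables above state.

```python
# a minimal sketch: combine the documented range-query syntax with ap.query
# (assumes the search endpoint accepts contentProperties.contentDate range
# filters like node search does -- an assumption, not documented above)
nodes = ap.query(
    'type:(PHOTOS) AND contentProperties.contentDate:'
    '["2023-01-01T00:00:00.000Z" TO "2023-12-31T23:59:59.000Z"]'
)
```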
Notes

https://www.amazon.ca/drive/v1/batchLink

This endpoint is called when downloading a batch of photos/videos in the web interface. It returns a URL to download a zip file, and a request is then made to that URL to download the content. When making a request to download data for 1200 nodes (the max batch size), it turns out to be much slower (~2.5 minutes) than asynchronously downloading 1200 photos/videos individually (~1 minute).

Known File Types

Category and extensions:

- pdf: .pdf
- doc: .doc, .docx, .docm, .dot, .dotx, .dotm, .asd, .cnv
- mp3: .mp3, .m4a, .m4b, .m4p, .wav, .aac, .aif, .mpa, .wma, .flac, .mid, .ogg
- xls: .xls, .xlm, .xll, .xlc, .xar, .xla, .xlb, .xlsb, .xlsm, .xlsx, .xlt, .xltm, .xltx, .xlw
- ppt: .ppt, .pptx, .ppa, .ppam, .pptm, .pps, .ppsm, .ppsx, .pot, .potm, .potx, .sldm, .sldx
- txt: .txt, .text, .rtf
- markup: .xml, .htm, .html
- zip: .zip, .rar, .7z
- img: .jpg, .jpeg, .png, .bmp, .gif, .tif, .svg
- vid: .mp4, .m4v, .qt, .mov, .mpg, .mpeg, .3g2, .3gp, .flv, .f4v, .asf, .avi, .wmv
- exe: .swf, .exe, .dll, .ax, .ocx, .rpm

Custom Image Labeling (Optional)

Categorize your images into folders using computer vision models.

```
pip install amazon-photos[extras] -U
```

See the Model List for a list of all available models.

Sample Models

- Very Large: eva02_base_patch14_448.mim_in22k_ft_in22k_in1k
- Large: eva02_large_patch14_448.mim_m38m_ft_in22k_in1k
- Medium: eva02_small_patch14_336.mim_in22k_ft_in1k, vit_base_patch16_clip_384.laion2b_ft_in12k_in1k, vit_base_patch16_clip_384.openai_ft_in12k_in1k, caformer_m36.sail_in22k_ft_in1k_384
- Small: eva02_tiny_patch14_336.mim_in22k_ft_in1k, tiny_vit_5m_224.dist_in22k_ft_in1k, edgenext_small.usi_in1k, xcit_tiny_12_p8_384.fb_dist_in1k

```python
run(
    'eva02_base_patch14_448.mim_in22k_ft_in22k_in1k',
    path_in='images',
    path_out='labeled',
    thresh=0.0,  # threshold for predictions, 0.9 means you want very confident predictions only
    topk=5,  # window of predictions to check if using exclude or restrict; if set to 1, only the top prediction will be checked
    exclude=lambda x: re.search('boat|ocean', x, flags=re.I),  # function to exclude classification of these predicted labels
    restrict=lambda x: re.search('sand|beach|sunset', x, flags=re.I),  # function to restrict classification to only these predicted labels
    dataloader_options={
        'batch_size': 4,  # *** adjust ***
        'shuffle': False,
        'num_workers': psutil.cpu_count(logical=False),  # *** adjust ***
        'pin_memory': True,
    },
    accumulate=False,  # accumulate results in path_out; if False, everything in path_out will be deleted before running again
    device='cuda',
    naming_style='name',  # use human-readable label names; optionally use the label index or synset
    debug=0,
)
```
amazon-product-review-scraper
Amazon Product Review Scraper

Python package to scrape product review data from Amazon.

Quickstart

```
pip install amazon-product-review-scraper
```

```python
from amazon_product_review_scraper import amazon_product_review_scraper

review_scraper = amazon_product_review_scraper(amazon_site="amazon.in", product_asin="B07X6V2FR3")
reviews_df = review_scraper.scrape()
reviews_df.head(5)
```

Parameters

- amazon_site: Examples: amazon.in, amazon.com, amazon.co.uk
- product_asin: Product ASIN (Amazon Standard Identification Number). An ASIN is a 10-character alphanumeric unique identifier that is assigned to each product on Amazon. Examples: https://www.amazon.in/Grand-Theft-Auto-V-PS4/dp/B00L8XUDIC/ref=sr_1_1, https://www.amazon.in/Renewed-Sony-Cybershot-DSC-RX100-Digital/dp/B07XRVR9B9/ref=lp_20690678031_1_14?srs=20690678031&ie=UTF8&qid=1598553991&sr=8-14
- sleep_time (optional): Number of seconds to wait before scraping the next page. (Amazon might intervene with a CAPTCHA if it receives too many requests in a small period of time.)
- start_page (optional)
- end_page (optional)

(A sketch using these optional parameters follows below.)
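A minimal sketch of the optional parameters; treating sleep_time, start_page, and end_page as constructor keyword arguments is an assumption based on the parameter list above, not a documented signature.

```python
# a minimal sketch -- passing the optional parameters as constructor keyword
# arguments is an assumption based on the parameter list above
review_scraper = amazon_product_review_scraper(
    amazon_site="amazon.in",
    product_asin="B07X6V2FR3",
    sleep_time=5,   # wait 5 seconds between pages to avoid CAPTCHAs
    start_page=1,
    end_page=10,
)
reviews_df = review_scraper.scrape()
```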
amazon-purchase
amazon_purchase

Install

```
pip install amazon_purchase --upgrade
```

Make A Purchase

```python
from amazon_purchase import AMAZON

amazon = AMAZON(username, password)

link = "amazon link to the item you wish to purchase"
amazon.purchase(link)
```
amazonpy
amazonpy

Amazon scraping library

Description

A scraping library for fetching product information from Amazon (JP).

Installation

```
pip install amazonpy
```

Upgrade

```
pip install --upgrade amazonpy
```

How to use

```python
from amazonpy import Amazon

amazon = Amazon()

# Fetch product info from a URL.
product = amazon.get_product_by_url("https://www.amazon.co.jp/dp/B07T17NSJH/")

print(product.id)  # Get the product ID.
# B07T17NSJH

print(product.title)  # Get the title.
# [コーチ] COACH バッグ ショルダーバッグ 斜めがけ MAE CROSSBODY レザー F34823 アウトレット [並行輸入品]

print(product.description)  # Get the description.
# ■品 番:34823 IMTAU ■サイズ:約高さ23x幅25xマチ5cm ショルダー約103-119cm(取り外し不可) ■重 さ:約400g ■仕 様:開閉 :ファスナー式 内側 :ポケット1 外側 :ポケット1 ■素 材:レザー ■カラー:Taupe 金具ゴールド ■付 属:箱なし、保存袋なし ■バッグ内の商品はサンプル品につき、付属しておりません。

print(product.price)  # Get the selling price.
# 17570

print(product.another_type)  # Get the IDs of other variants (e.g. colors) of the same product.
# ['B08TW5VZX3', 'B07PZJYLM7', 'B07ZYFV9ZW', 'B0854LBQG2']
```

Donation

Please donate to me via PayPal.
amazon-review-analyzer
Amazon Review Analyzer

Welcome to the Amazon Review Analyzer, a Python program designed to help you make informed decisions about purchasing products on Amazon.

Overview

This program takes an Amazon product link as input and analyzes the reviews, providing insights into whether the product is a good buy. However, the ultimate decision to purchase rests entirely with you.

How It Works

1. Input the Product Link: When you provide the Amazon product link, the program initiates a web scraping process to extract reviews.
2. Scraping and Data Storage: The scraper extracts reviews from the amazon.in/product-reviews/asin page. The unique ASIN (Amazon Standard Identification Number) is extracted from the link using Python's regular expressions. Due to pagination, the program iterates through the pages, storing the reviews in a CSV file named reviews.csv.
3. User Agent: Replace the headers variable in scrapper.py with your user agent. To find out what your user agent is, simply google "my user agent".
4. Sentiment Analysis: The program uses the vaderSentiment Python package to analyze the tone of the reviews. Additionally, the demoji package is employed to handle emojis present in the reviews.

Run the Amazon Review Analyzer

To use the Amazon Review Analyzer, install the amazon_review_analyzer package:

```
pip install amazon_review_analyzer
```

```python
from amazon_review_analyzer import get_sentiment

# Prompts the user to enter an Amazon URL in the terminal
# and displays the sentiment analysis results.
get_sentiment()
```
amazonreviewanalyzer-preprocess
No description available on PyPI.
amazon-reviews
UNKNOWN
amazonreviewscrap
Amazon review scraper. It works without any IP block, using Selenium and bs4.
amazon_review_scraper
No description available on PyPI.
amazons3
UNKNOWN
amazon-sagemaker-haystack
amazon-sagemaker-haystack

Table of Contents: Installation, Contributing, License

Installation

```
pip install amazon-sagemaker-haystack
```

Contributing

hatch is the best way to interact with this project; to install it:

```
pip install hatch
```

With hatch installed, to run all the tests:

```
hatch run test
```

Note: You need to export your AWS credentials for the SageMaker integration tests to run (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). If those are missing, the integration tests will be skipped.

To only run unit tests:

```
hatch run test -m "not integration"
```

To only run integration tests:

```
hatch run test -m "integration"
```

To run the linters ruff and mypy:

```
hatch run lint:all
```

License

amazon-sagemaker-haystack is distributed under the terms of the Apache-2.0 license.
amazon-sagemaker-jupyter-scheduler
Amazon SageMaker Jupyter Scheduler

A JupyterLab extension to schedule notebooks on Amazon SageMaker from any JupyterLab environment. This extension is built on top of https://pypi.org/project/jupyter-scheduler/.

For more details on usage, please follow the documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-auto-run.html
amazon-sagemaker-sql-editor
Amazon SageMaker SQL Editor

Package which provides SQL editor functionality for the SageMaker SQL extension.
amazon-sagemaker-sql-execution
Amazon SageMaker SQL Execution

Provides SQL execution components for Amazon SageMaker.
amazon-sagemaker-sql-magic
Amazon SageMaker SQL Magic

Package which provides SQL magic commands for the SageMaker SQL extension.
amazon-scrape
Scrape Amazon product data such as product name, product images, number of reviews, price, product URL, and ASIN.

Requirements

Python 2.7 and later.

Setup

You can install this package by using the pip tool and installing:

```
$ pip install amazon-scrape
```

Or:

```
$ easy_install amazon-scrape
```

Scraper Help

Execute the command amazon_scraper --help in the terminal:

```
usage: amazon_scraper [-h] [--locale LOCALE] [--keywords KEYWORDS] [--url URL] [--proxy_api_key PROXY_API_KEY] [--pages PAGES] [-r]

optional arguments:
  -h, --help           show this help message and exit
  --locale LOCALE      Amazon locale (e.g., "com", "co.uk", "de", etc.)
  --keywords KEYWORDS  Search keywords
  --url URL            Amazon URL
  --proxy_api_key      Scraper API Key
  --pages PAGES        Number of pages to scrape
  -r, --review         Scrape reviews
```

Usage Example

```
# Specify locale, keywords, API key, and number of pages to scrape:
amazon_scraper --locale com --keywords "laptop" --proxy_api_key "your_api_key" --pages 10

# Specify only keywords and API key (will default to "co.uk" locale and 20 pages):
amazon_scraper --keywords "iphone" --proxy_api_key "your_api_key"

# Specify a direct Amazon URL and API key (will default to "co.uk" locale and 20 pages):
amazon_scraper --url "https://www.amazon.de/s?k=iphone&crid=1OHYY6U6OGCK5&sprefix=ipho%2Caps%2C335&ref=nb_sb_noss_2" --proxy_api_key "your_api_key"

# Specify locale and Amazon URL (will default to 20 pages):
amazon_scraper --locale de --url "https://www.amazon.de/s?k=iphone&crid=1OHYY6U6OGCK5&sprefix=ipho%2Caps%2C335&ref=nb_sb_noss_2" --proxy_api_key "your_api_key"

# Specify review to scrape product(s) reviews:
amazon_scraper --keywords "watches" --proxy_api_key "your_api_key" --review
```

Create Scraper API Account

Sign up for a Scraper API user account.

License

This project is licensed under the MIT License.

Copyright

Copyright © 2023 Finbarrs Oketunji. All Rights Reserved.
amazonscraper
Package to search for products on Amazon and extract some useful information (title, ratings, number of reviews)
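A minimal sketch of how such a search might look. The search() helper, the max_product_nb parameter, and the product attribute names are assumptions for illustration; none of them are documented in this one-line description.

```python
# a minimal sketch -- the search() helper, max_product_nb parameter, and the
# product attribute names below are assumptions, not documented API
import amazonscraper

products = amazonscraper.search("python programming", max_product_nb=10)
for product in products:
    # hypothetical attributes matching the information the package extracts
    print(product.title, product.rating, product.review_nb)
```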
amazon_scraper
A hybrid web scraper / API client. Supplements the standard Amazon API with web scraping functionality to get extra data, specifically product reviews.

Uses the Amazon Simple Product API to provide API-accessible data. API search functions are imported directly into the amazon_scraper module.

Parameters are in the same style as the Amazon Simple Product API, which in turn uses Bottlenose-style parameters. Hence the non-Pythonic parameter names (ItemId).

The AmazonScraper constructor will pass 'args' and 'kwargs' to Bottlenose (via Amazon Simple Product API). Bottlenose supports AWS regions, queries-per-second limiting, query caching and other nice features. Please view Bottlenose's API for more information on this.

The latest version of python-amazon-simple-product-api (1.5.0 at time of writing) doesn't support these arguments, only Region. If you require these, please use the latest code from their repository with the following command:

```
pip install git+https://github.com/yoavaviram/python-amazon-simple-product-api.git#egg=python-amazon-simple-product-api
```

Caveat

Amazon continually try to keep scrapers from working. They do this by:

- A/B testing (randomly receiving different HTML).
- Huge numbers of HTML layouts for the same product categories.
- Changing HTML layouts.
- Moving content inside iFrames.

Amazon have resorted to moving more and more content into iFrames, which this scraper can't handle. I envisage a time where most data will be inaccessible without more complex logic.

I've spent a long time trying to get these scrapers working and it's a never-ending battle. I don't have the time to continually keep up the pace with Amazon. If you are interested in improving Amazon Scraper, please let me know (creating an issue is fine). Any help is appreciated.

Installation

```
pip install amazon_scraper
```

Dependencies

- python-amazon-simple-product-api
- requests
- beautifulsoup4
- xmltodict
- python-dateutil

Examples

All Products All The Time

Create an API instance:

```python
>>> from amazon_scraper import AmazonScraper
>>> amzn = AmazonScraper("put your access key", "secret key", "and associate tag here")
```

The creation function accepts 'kwargs' which are passed to the 'bottlenose.Amazon' constructor:

```python
>>> from amazon_scraper import AmazonScraper
>>> amzn = AmazonScraper("put your access key", "secret key", "and associate tag here", Region='UK', MaxQPS=0.9, Timeout=5.0)
```

Search:

```python
>>> from __future__ import print_function
>>> import itertools
>>> for p in itertools.islice(amzn.search(Keywords='python', SearchIndex='Books'), 5):
>>>     print(p.title)
Learning Python, 5th Edition
Python Programming: An Introduction to Computer Science 2nd Edition
Python In A Day: Learn The Basics, Learn It Quick, Start Coding Fast (In A Day Books) (Volume 1)
Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython
Python Cookbook
```

Lookup by ASIN/ItemId:

```python
>>> p = amzn.lookup(ItemId='B00FLIJJSA')
>>> p.title
Kindle, Wi-Fi, 6" E Ink Display - for international shipment
>>> p.url
http://www.amazon.com/Kindle-Wi-Fi-Ink-Display-international/dp/B0051QVF7A/ref=cm_cr_pr_product_top
```

Batch lookups:

```python
>>> for p in amzn.lookup(ItemId='B0051QVF7A,B007HCCNJU,B00BTI6HBS'):
>>>     print(p.title)
Kindle, Wi-Fi, 6" E Ink Display - for international shipment
Kindle, 6" E Ink Display, Wi-Fi - Includes Special Offers (Black)
Kindle Paperwhite 3G, 6" High Resolution Display with Next-Gen Built-in Light, Free 3G + Wi-Fi - Includes Special Offers
```

By URL:

```python
>>> p = amzn.lookup(URL='http://www.amazon.com/Kindle-Wi-Fi-Ink-Display-international/dp/B0051QVF7A/ref=cm_cr_pr_product_top')
>>> p.title
Kindle, Wi-Fi, 6" E Ink Display - for international shipment
>>> p.asin
B0051QVF7A
```

Product ratings:

```python
>>> p = amzn.lookup(ItemId='B00FLIJJSA')
>>> p.ratings
[8, 4, 6, 4, 13]
```

Alternative bindings:

```python
>>> p = amzn.lookup(ItemId='B000GRFTPS')
>>> p.alternatives
['B00IVM5X7E', '9163192993', '0899669433', 'B00IPXPQ9O', '1482998742', '0441444814', '1497344824']
>>> for asin in p.alternatives:
>>>     alt = amzn.lookup(ItemId=asin)
>>>     print(alt.title, alt.binding)
The King in Yellow Kindle Edition
The King in Yellow Unknown Binding
King in Yellow Hardcover
The Yellow Sign Audible Audio Edition
The King in Yellow MP3 CD
THE KING IN YELLOW Mass Market Paperback
The King in Yellow Paperback
```

Supplemental text not available via the API:

```python
>>> p = amzn.lookup(ItemId='0441016685')
>>> p.supplemental_text
[u"Bob Howard is a computer-hacker desk jockey ... ", u"Lovecraft's Cthulhu meets Len Deighton's spies ... ", u"This dark, funny blend of SF and ... "]
```

Review API

View lists of reviews:

```python
>>> p = amzn.lookup(ItemId='B0051QVF7A')
>>> rs = p.reviews()
>>> rs.asin
B0051QVF7A
>>> # print the reviews on this first page
>>> rs.ids
['R3MF0NIRI3BT1E', 'R3N2XPJT4I1XTI', 'RWG7OQ5NMGUMW', 'R1FKKJWTJC4EAP', 'RR8NWZ0IXWX7K', 'R32AU655LW6HPU', 'R33XK7OO7TO68E', 'R3NJRC6XH88RBR', 'R21JS32BNNQ82O', 'R2C9KPSEH78IF7']
>>> rs.url
http://www.amazon.com/product-reviews/B0051QVF7A/ref=cm_cr_pr_top_sort_recent?&sortBy=bySubmissionDateDescending
>>> # iterate over reviews on this page only
>>> for r in rs.brief_reviews:
>>>     print(r.id)
'R3MF0NIRI3BT1E'
'R3N2XPJT4I1XTI'
'RWG7OQ5NMGUMW'
...
>>> # iterate over all brief reviews on all pages
>>> for r in rs:
>>>     print(r.id)
'R3MF0NIRI3BT1E'
'R3N2XPJT4I1XTI'
'RWG7OQ5NMGUMW'
...
```

View detailed reviews:

```python
>>> rs = amzn.reviews(ItemId='B0051QVF7A')
>>> # this will iterate over all reviews on all pages
>>> # each review will require a download as it is on a separate page
>>> for r in rs.full_reviews():
>>>     print(r.id)
'R3MF0NIRI3BT1E'
'R3N2XPJT4I1XTI'
'RWG7OQ5NMGUMW'
...
```

Convert a brief review to a full review:

```python
>>> rs = amzn.reviews(ItemId='B0051QVF7A')
>>> # this will iterate over all reviews on all pages
>>> # each review will require a download as it is on a separate page
>>> for r in rs:
>>>     print(r.id)
>>>     fr = r.full_review()
>>>     print(fr.id)
```

Quickly get a list of all reviews on a review page using the all_reviews property. This uses the brief reviews provided on the review page to avoid downloading each review separately. As such, some information may not be accessible:

```python
>>> p = amzn.lookup(ItemId='B0051QVF7A')
>>> rs = p.reviews()
>>> all_reviews_on_page = list(rs)
>>> len(all_reviews_on_page)
10
>>> r = all_reviews_on_page[0]
>>> r.title
'Fantastic device - pick your Kindle!'
>>> fr = r.full_review()
>>> fr.title
'Fantastic device - pick your Kindle!'
```

By ASIN/ItemId:

```python
>>> rs = amzn.reviews(ItemId='B0051QVF7A')
>>> rs.asin
B0051QVF7A
>>> rs.ids
['R3MF0NIRI3BT1E', 'R3N2XPJT4I1XTI', 'RWG7OQ5NMGUMW', 'R1FKKJWTJC4EAP', 'RR8NWZ0IXWX7K', 'R32AU655LW6HPU', 'R33XK7OO7TO68E', 'R3NJRC6XH88RBR', 'R21JS32BNNQ82O', 'R2C9KPSEH78IF7']
```

For individual reviews use the review method:

```python
>>> review_id = 'R3MF0NIRI3BT1E'
>>> r = amzn.review(Id=review_id)
>>> r.id
R3MF0NIRI3BT1E
>>> r.asin
B00492CIC8
>>> r.url
http://www.amazon.com/review/R3MF0NIRI3BT1E
>>> r.date
2011-09-29 18:27:14+00:00
>>> r.author
FreeSpirit
>>> r.text
Having been a little overwhelmed by the choices between all the new Kindles ... <snip>
```

By URL:

```python
>>> r = amzn.review(URL='http://www.amazon.com/review/R3MF0NIRI3BT1E')
>>> r.id
R3MF0NIRI3BT1E
```

User Reviews API

This package also supports getting reviews written by a specific user.

Get reviews that a single author has created:

```python
>>> ur = amzn.user_reviews(Id="A2W0GY64CJSV5D")
>>> ur.brief_reviews
>>> ur.name
>>> fr = list(ur.brief_reviews)[0].full_review()
```

Get reviews for a user, from a review object:

```python
>>> r = amzn.review(Id="R3MF0NIRI3BT1E")
>>> # we can get the reviews directly, or via the API with a URL or ID
>>> ur = r.user_reviews()
>>> ur = amzn.user_reviews(URL=r.author_reviews_url)
>>> ur = amzn.user_reviews(Id=r.author_id)
>>> ur.brief_reviews
>>> ur.name
```

Iterate over the current page's reviews:

```python
>>> ur = amzn.user_reviews(Id="A2W0GY64CJSV5D")
>>> for r in ur.brief_reviews:
>>>     print(r.id)
```

Iterate over all author reviews:

```python
>>> ur = amzn.user_reviews(Id="A2W0GY64CJSV5D")
>>> for r in ur:
>>>     print(r.id)
```

Authors

- Adam Griffiths
- Greg Rehm
amazon-scraper-api
Amazon Scraper

Oxylabs' Amazon Scraper API allows users to easily scrape publicly-available data from any page on Amazon, such as reviews, pricing, product information and more. If you're interested in testing out this powerful tool, you can sign up for a free trial on the Oxylabs website.

Overview

Below is a quick overview of all the available data source values we support with Amazon.

- amazon: Submit any Amazon URL you like. Structured data: depends on the URL.
- amazon_bestsellers: List of best seller items in a taxonomy node of your choice. Structured data: yes.
- amazon_pricing: List of offers available for an ASIN of your choice. Structured data: yes.
- amazon_product: Product page of an ASIN of your choice. Structured data: yes.
- amazon_questions: Q&A page of an ASIN of your choice. Structured data: yes.
- amazon_reviews: Reviews page of an ASIN of your choice. Structured data: yes.
- amazon_search: Search results for a search term of your choice. Structured data: yes.
- amazon_sellers: Seller information of a seller of your choice. Structured data: yes.

URL

The amazon source is designed to retrieve the content from various Amazon URLs. Instead of sending multiple parameters, you can provide us with a direct URL to the required Amazon page. We do not strip any parameters or alter your URLs in any way.

Query parameters (required parameters marked with *):

- source*: Data source (amazon).
- url: Direct URL (link) to an Amazon page.
- user_agent_type: Device type and browser. Default: desktop.
- render: Enables JavaScript rendering.
- callback_url: URL to your callback endpoint.
- parse: true will return structured data, as long as the URL submitted is for one of the page types we can parse. Default: false.

Python code example

In the code example below, we make a request to retrieve the Amazon product page for B0BDJ279KF.

```python
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon',
    'url': 'https://www.amazon.co.uk/dp/B0BDJ279KF',
    'parse': True,
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('YOUR_USERNAME', 'YOUR_PASSWORD'),  # Your credentials go here
    json=payload,
)

# Instead of response with job status and results url, this will return the
# JSON response with results.
pprint(response.json())
```

To see the response example with retrieved data, download this sample output in JSON format.

Search

The amazon_search source is designed to retrieve Amazon search result pages.

Query parameters (required parameters marked with *):

- source*: Data source (amazon_search).
- domain: Domain localization for Amazon. Default: com.
- query: UTF-encoded keyword.
- start_page: Starting page number. Default: 1.
- pages: Number of pages to retrieve. Default: 1.
- geo_location: The "Deliver to" location.
- user_agent_type: Device type and browser. Default: desktop.
- render: Enables JavaScript rendering.
- callback_url: URL to your callback endpoint.
- parse: true will return structured data.
- context:category_id: Search for items in a particular browse node (product category).
- context:merchant_id: Search for items sold by a particular seller.

Python code example

In the code example below, we make a request to retrieve 10 pages of search results for the keyword "adidas" on the amazon.nl marketplace, starting from page 11 and restricted to a particular category and merchant.

```python
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_search',
    'domain': 'nl',
    'query': 'adidas',
    'start_page': 11,
    'pages': 10,
    'parse': True,
    'context': [
        {'key': 'category_id', 'value': 16391843031},
        {'key': 'merchant_id', 'value': '3AA17D2BRD4YMT0X'},
    ],
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('user', 'pass1'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())
```

To see the response example with retrieved data, download this sample output file in JSON format.

Product

The amazon_product data source is designed to retrieve Amazon product pages.

Query parameters (required parameters marked with *):

- source*: Data source (amazon_product).
- domain: Domain localization for Amazon. Default: com.
- query: 10-symbol ASIN code.
- geo_location: The "Deliver to" location.
- user_agent_type: Device type and browser. Default: desktop.
- render: Enables JavaScript rendering.
- callback_url: URL to your callback endpoint.
- parse: true will return structured data.
- context:autoselect_variant: To get accurate pricing/buybox data, set this parameter to true (which tells us to append the th=1&psc=1 URL parameters to the end of the product URL). To get an accurate representation of the parent ASIN's product page, omit this parameter or set it to false. Default: false.

Python code example

In the code example below, we make a request to retrieve the product page for ASIN B09RX4KS1G on the amazon.nl marketplace. In case the ASIN provided is a parent ASIN, we ask Amazon to return a product page of an automatically-selected variation.

```python
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_product',
    'domain': 'nl',
    'query': 'B09RX4KS1G',
    'parse': True,
    'context': [
        {'key': 'autoselect_variant', 'value': True},
    ],
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('user', 'pass1'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())
```

To see the response example with retrieved data, download this sample output file in JSON format.

Offer listing

The amazon_pricing data source is designed to retrieve Amazon product offer listings.

Query parameters (required parameters marked with *):

- source*: Data source (amazon_pricing).
- domain: Domain localization for Amazon. Default: com.
- query: 10-symbol ASIN code.
- start_page: Starting page number. Default: 1.
- pages: Number of pages to retrieve. Default: 1.
- geo_location: The "Deliver to" location.
- user_agent_type: Device type and browser. Default: desktop.
- render: Enables JavaScript rendering.
- callback_url: URL to your callback endpoint.
- parse: true will return structured data.

Python code example

In the code example below, we make a request to retrieve the product offer listing page for ASIN B09RX4KS1G on the amazon.nl marketplace.

```python
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_pricing',
    'domain': 'nl',
    'query': 'B09RX4KS1G',
    'parse': True,
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('user', 'pass1'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())
```

To see what the parsed output looks like, download this JSON file.

Reviews

The amazon_reviews data source is designed to retrieve Amazon product review pages of an ASIN of your choice.

Query parameters (required parameters marked with *):

- source*: Data source (amazon_reviews).
- domain: Domain localization for Amazon. Default: com.
- query: 10-symbol ASIN code.
- geo_location: The "Deliver to" location.
- user_agent_type: Device type and browser. Default: desktop.
- start_page: Starting page number. Default: 1.
- pages: Number of pages to retrieve. Default: 1.
- render: Enables JavaScript rendering.
- callback_url: URL to your callback endpoint.
- parse: true will return structured data.

Python code example

```python
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_reviews',
    'domain': 'nl',
    'query': 'B09RX4KS1G',
    'parse': True,
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('user', 'pass1'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())
```

To see the response example with retrieved data, download this sample output file in JSON format.

Questions & Answers

The amazon_questions data source is designed to retrieve any particular product's Questions & Answers pages.

Query parameters (required parameters marked with *):

- source*: Data source (amazon_questions).
- domain: Domain localization for Amazon. Default: com.
- query: 10-symbol ASIN code.
- geo_location: The "Deliver to" location.
- user_agent_type: Device type and browser. Default: desktop.
- render: Enables JavaScript rendering.
- callback_url: URL to your callback endpoint.
- parse: true will return structured data.

Python code example

```python
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_questions',
    'domain': 'nl',
    'query': 'B09RX4KS1G',
    'parse': True,
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('user', 'pass1'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())
```

To see the response example with retrieved data, download this sample output file in JSON format.

Best Sellers

The amazon_bestsellers data source is designed to retrieve Amazon Best Sellers pages.

Query parameters (required parameters marked with *):

- source*: Data source (amazon_bestsellers).
- domain: Domain localization for Amazon. Default: com.
- query: Department name. Example: Clothing, Shoes & Jewelry.
- start_page: Starting page number. Default: 1.
- pages: Number of pages to retrieve. Default: 1.
- geo_location: The "Deliver to" location.
- user_agent_type: Device type and browser. Default: desktop.
- render: Enables JavaScript rendering.
- callback_url: URL to your callback endpoint.
- parse: true will return structured data.
- context:category_id: Search for items in a particular browse node (product category).

Python code example

```python
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_bestsellers',
    'domain': 'de',
    'query': 'automotive',
    'start_page': 2,
    'parse': True,
    'context': [
        {'key': 'category_id', 'value': 82400031},
    ],
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('user', 'pass1'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())
```

To see the response example with retrieved data, download this sample output file in JSON format.

Sellers

The amazon_sellers data source is designed to retrieve Amazon Sellers pages.

Query parameters (required parameters marked with *):

- source*: Data source (amazon_sellers).
- domain: Domain localization for Amazon. Default: com.
- query: 13-character seller ID.
- geo_location: The "Deliver to" location.
- user_agent_type: Device type and browser. Default: desktop.
- render: Enables JavaScript rendering.
- callback_url: URL to your callback endpoint.
- parse: true will return structured data. Please note that right now we only support parsed output for the desktop device type. However, there is no apparent reason to get sellers pages with any other device type, as seller data is going to be exactly the same across all devices.

Python code example

In the code example below, we make a request to retrieve the seller page for seller ID ABNP0A7Y0QWBN on the amazon.de marketplace.

```python
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_sellers',
    'domain': 'de',
    'query': 'ABNP0A7Y0QWBN',
    'parse': True,
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('user', 'pass1'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())
```
amazon-scraper-by-outscraper
Python SDK that allows scraping Amazon products data and Amazon products reviews.

API Docs | Generate API Token

Installation

Python 3+

```
pip install amazon-scraper-by-outscraper
```

Link to the Python package page.

Initialization

```python
from amazon_scraper_by_outscraper import AmazonClient

client = AmazonClient(api_key='SECRET_API_KEY')
```

Scrape Amazon Products

```python
# Get data by product URL
results = client.get_prodcuts('https://www.amazon.com/dp/1612680194')

# Get data from summary page
results = client.get_prodcuts('https://www.amazon.com/s?k=iphone', limit=24)
```

Scrape Amazon Reviews

```python
# Get reviews from Amazon product by URL
results = client.get_reviews('https://www.amazon.com/dp/1612680194', limit=3)

# Get reviews from Amazon product by ASIN
results = client.get_reviews('1612680194', limit=3)
```

Amazon Products Results Demo

```json
{
  "id": "your-request-id",
  "status": "Success",
  "data": [[{
    "short_url": "https://www.amazon.com/Lacoste-Womens-Stainless-Leather-Calfskin/dp/B00G3XWLR8",
    "asin": "B00G3XWLR8",
    "name": "Lacoste Women's 12.12 Stainless Steel Quartz Watch with Leather Calfskin Strap, Taupe, 16 (Model: 2001150)",
    "rating": null,
    "reviews": null,
    "answered_questions": null,
    "fast_track_message": null,
    "about": "ON TIME, IN STYLE: Once upon a time there was a polo shirt that made history now it has inspired a ladies timepiece to follow in its footsteps. The Lacoste.12.12 Lady is the epitome of sporty chic and comfort, with thoughtful dashes of detail. Game, set and match for style., QUALITY MATERIALS: Women's 36 mm pale rose gold ion-plated stainless steel case with a taupe leather strap featuring a carnation gold dial., QUARTZ MULTIFUNCTION: It’s a battery-powered watch that sends energy through a quartz crystal. Is typically built into three separate dials for the day of the week, date of the month and 24-hour time., DURABLE MINERAL CRYSTAL: Made from glass and protects watch from scratches., 2 YEAR WARRANTY: Lacoste offers a 2-year limited warranty against defects in materials and workmanship that prevent the watch from functioning properly under normal use. Only purchases from an authorized retailer are covered by the manufacturer’s warranty.",
    "description": "The Lacoste legend was born in 1933, when Rene Lacoste revolutionized men's fashion replacing the classical woven fabric, long-sleeved and starched shirts on the courts, by what has now become the classic LACOSTE polo shirt. More than 75 years after its creation, LACOSTE has become a lifestyle brand which allies elegance and comfort. The LACOSTE art of living expresses itself today through a large collection of apparel for women, men and children, footwear, fragrances, leather goods, eyewear, watches, belts, home textiles, and fashion jewelry. LACOSTE founds its success on the essential values of authenticity, performance, and elegance. The crocodile incarnates today the elegance of the champion, Rene Lacoste, as well as of his wife Simone Lacoste and their daughter Catherine Lacoste, both also champions, in everyday life as on the tennis courts and golf courses. The Crocodile's origins The true story of the \"Crocodile\" begins in 1923 after a bet that Rene Lacoste had with the Captain of the French Davis Cup Team, Allan H. Muhr, who promised him an alligator suitcase if he won an important game for the team. This episode was reported in an article in the Boston Evening Transcript, where his nickname of the Crocodile came to life for the first time. The American public grew fond of this nickname which highlighted the tenacity he displayed on the tennis courts, never giving up his prey. His friend Robert George drew him a crocodile which was embroidered on the blazer he wore on the courts. The Legend was born.",
    "categories": ["Clothing, Shoes & Jewelry", "Women", "Watches", "Wrist Watches"],
    "store_title": "Lacoste Store",
    "store_url": "https://www.amazon.com/stores/Lacoste/page/C85490CB-0E64-4F8B-89A8-217111AFF6FE?ref_=ast_bln",
    "price": "$175.00",
    "availability": "In stock soon. Order it now.",
    "strike_price": null,
    "price_saving": null,
    "shipping": "",
    "merchant_info": "",
    "bage": "",
    "currency": null,
    "image_1": "https://m.media-amazon.com/images/I/314jzz9RfsL.jpg",
    "image_2": "https://m.media-amazon.com/images/I/31he4kecs0L.jpg",
    "image_3": "https://m.media-amazon.com/images/I/31FgebhbXEL.jpg",
    "image_4": "https://m.media-amazon.com/images/I/41IYiTJyLIL.jpg",
    "image_5": "https://m.media-amazon.com/images/I/41gZMv+FwoL.jpg",
    "overview": null,
    "details": {
      "details.brand_seller_or_collection_name": "Lacoste",
      "details.model_number": "2001150",
      "details.part_number": "2001150",
      "details.item_shape": "Round",
      "details.dial_window_material_type": "Mineral",
      "details.display_type": "Analog",
      "details.clasp": "Tang Buckle",
      "details.case_material": "Stainless Steel",
      "details.case_diameter": "36 millimeters",
      "details.case_thickness": "9.75 millimeters",
      "details.band_material": "Leather",
      "details.band_size": "Womens Standard",
      "details.band_width": "16 millimeters",
      "details.band_color": "Brown",
      "details.dial_color": "Carnation Gold",
      "details.bezel_material": "Stainless Steel",
      "details.bezel_function": "Stationary",
      "details.calendar": "Day-Date",
      "details.movement": "Quartz",
      "details.water_resistant_depth": "50 Meters",
      "details.warranty": "Manufacturer’s warranty can be requested from customer service. to make a request to customer service.",
      "details.package_dimensions": "3.58 x 3.46 x 3.23 inches; 3.84 Ounces",
      "details.item_model_number": "2001150",
      "details.department": "Womens",
      "details.date_first_available": "January 13, 2021",
      "details.manufacturer": "Lacoste",
      "details.asin": "B00G3XWLR8",
      "details.country_of_origin": "China"
    }
  }]]
}
```

Amazon Reviews Results Demo

```json
{
  "id": "your-request-id",
  "status": "Success",
  "data": [[{
    "query": "https://www.amazon.com/dp/1612680194",
    "id": "R2VYT9ETWPTAWU",
    "product_asin": "1612680194",
    "title": "Everything",
    "body": "I read this book about 11 years ago at 27 years old , had no money, I followed the advice in this book and now have 15 rental properties paid off free and clear, my assets more than cover all my expenses. I just bought this book again, I'm in the middle of reading it again now 11 years later and can't put it down. I hate reading btw. I plan on reading this book at least three more times over the next 20 years so I can keep all info fresh in my mind. People always ask me about success. I tell them to read this book...whats crazy is that they don't read it. You can lead a horse to water but can't make it drink. The book changed my life and it will change yours. Do you want change or do you just want to talk and think about change? There is a big difference , do it.",
    "rating": 5,
    "rating_text": "5.0 out of 5 stars",
    "helpful": "1,331 people found this helpful",
    "comments": null,
    "date": "Reviewed in the United States on March 17, 2018",
    "bage": "Verified Purchase",
    "official_comment_banner": "",
    "url": "https://www.amazon.com/gp/customer-reviews/R2VYT9ETWPTAWU/ref=cm_cr_arp_d_rvw_ttl?ie=UTF8&ASIN=1612680194",
    "img_url": null,
    "variation": "",
    "total_reviews": 65459,
    "overall_rating": 4.7,
    "author_title": "Ilive4him24",
    "author_descriptor": "",
    "author_url": "https://www.amazon.com/gp/profile/amzn1.account.AGQCR5JZP3V6Y743KX3UYJBRRVOA/ref=cm_cr_arp_d_gw_btm?ie=UTF8",
    "author_profile_img": "https://images-na.ssl-images-amazon.com/images/S/amazon-avatars/default._CR0,0,1024,1024_SX48_.png",
    "product_name": "Rich Dad Poor Dad: What the Rich Teach Their Kids About Money That the Poor and Middle Class Do Not!",
    "product_url": "https://www.amazon.com/dp/1612680194"
  },
  {
    "query": "https://www.amazon.com/dp/1612680194",
    "id": "R1T9953QMMGUEX",
    "product_asin": "1612680194",
    "title": "Make sure you Select the Book Size",
    "body": "I owned this book in the past and wanted to reorder it to read it again. Instead of getting the book I expected, I received a tiny, hand sized book, with print that is too small and that is, frankly, hard to open all the way in order to read the words near the binder. So the book is utterly useless. With all the complaints about this tiny book, I'm not sure why that is the book that automatically comes up when you search for the book. Instead, the normal sized book should be the default, and then people can select the pocket sized book if they want. So I would say that the content of the book is excellent. DO purchase the book; however, BE SURE TO SELECT THE LARGER, PAPERBACK VERSION if that's what you want (sorry for the all caps, just want to make sure people see that part).",
    "rating": 1,
    "rating_text": "1.0 out of 5 stars",
    "helpful": "851 people found this helpful",
    "comments": null,
    "date": "Reviewed in the United States on January 23, 2018",
    "bage": "Verified Purchase",
    "official_comment_banner": "",
    "url": "https://www.amazon.com/gp/customer-reviews/R1T9953QMMGUEX/ref=cm_cr_arp_d_rvw_ttl?ie=UTF8&ASIN=1612680194",
    "img_url": null,
    "variation": "",
    "total_reviews": 65459,
    "overall_rating": 4.7,
    "author_title": "judysardenspeaker",
    "author_descriptor": "",
    "author_url": "https://www.amazon.com/gp/profile/amzn1.account.AHBNQFY6SXYTRVWW7RUKDYY4RBBA/ref=cm_cr_arp_d_gw_btm?ie=UTF8",
    "author_profile_img": "https://images-na.ssl-images-amazon.com/images/S/amazon-avatars-global/default._CR0,0,1024,1024_SX48_.png",
    "product_name": "Rich Dad Poor Dad: What the Rich Teach Their Kids About Money That the Poor and Middle Class Do Not!",
    "product_url": "https://www.amazon.com/dp/1612680194"
  },
  {
    "query": "https://www.amazon.com/dp/1612680194",
    "id": "RIGBUZ8E2S6UT",
    "product_asin": "1612680194",
    "title": "A great foundation book for beginning to improve your financial intelligence",
    "body": "This is an enhanced reprint of the original, with additional study questions/ discussion and review added at the end of every chapter. I bought the original about 18 years ago and it changed my families destiny for the better. I am glad the reprint came out as it prompted me to reread it and deepen my understanding. Some people complain that this book does not give a step by step process for change. I would counter that one size shoe does not fit all feet. There are many individual paths to wealth, and Kiyosaki sets the guiding stars to navigate by, but you have to walk your own individual road.
```
Some key concepts of this book are: 1) Assets put money in your pocket even when you are on vacation. Liabilities take money out of your pocket, therefore your house is a liability [unless you rent out rooms and the garage as one person I know did while rebuilding his asset base]. 2) Wealthy people buy assets first, and then let their assets buy their luxuries from the surplus cash flow. 3) Wealthy people continuously increase their assets by reinvesting their surplus cash flow in more assets. 4) There are 3 primary asset classes: Real Estate, Businesses, and Paper assets (stocks bonds notes, etc) 5) Cash Flow is more important than Net Worth. Net Worth is similar to potential energy, to use it you have to spend it, then it is gone. Cash Flow is like power from a hydroelectric dam, constantly replenished. The rich don't work for money, they work for assets. The tax laws are fair from the standpoint that the laws that the rich spent billions of dollars to have modified and interpreted apply to everyone who learns how to use them. A great foundation book for beginning to improve your financial intelligence so that you don't work 4 or more month's of every year for the Tax man, more months for the banks that hold your mortgage and credit cards, and whatever is left making the company you work for wealthy. Good luck on your journey to being Rich, poor, or middle class.","rating":4,"rating_text":"4.0 out of 5 stars","helpful":"1,186 people found this helpful","comments":null,"date":"Reviewed in the United States on June 19, 2017","bage":"Verified Purchase","official_comment_banner":"","url":"https://www.amazon.com/gp/customer-reviews/RIGBUZ8E2S6UT/ref=cm_cr_arp_d_rvw_ttl?ie=UTF8&ASIN=1612680194","img_url":null,"variation":"","total_reviews":65459,"overall_rating":4.7,"author_title":"Eugene C.","author_descriptor":"","author_url":"https://www.amazon.com/gp/profile/amzn1.account.AGUEMAJSJVAUZR2OUSFBBNJM3KQQ/ref=cm_cr_arp_d_gw_btm?ie=UTF8","author_profile_img":"https://images-na.ssl-images-amazon.com/images/G/01/x-locale/common/grey-pixel.gif","product_name":"Rich Dad Poor Dad: What the Rich Teach Their Kids About Money That the Poor and Middle Class Do Not!","product_url":"https://www.amazon.com/dp/1612680194"}]]}
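Both demo responses share the same envelope: a status field plus a data field holding one inner list per queried product. A small helper to flatten that shape; this is a sketch written against the demo JSON above, not an official SDK helper:

```python
def iter_records(response: dict):
    """Yield individual product/review dicts from an envelope shaped like the demos above."""
    if response.get('status') != 'Success':
        raise RuntimeError(f"request failed with status {response.get('status')!r}")
    for query_results in response.get('data', []):  # one inner list per queried URL/ASIN
        yield from query_results

# Usage with a trimmed-down version of the reviews demo shape:
envelope = {'id': 'your-request-id', 'status': 'Success',
            'data': [[{'title': 'Everything', 'rating': 5}]]}
for record in iter_records(envelope):
    print(record['rating'], record['title'])
```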
amazon-scraper-products-extractor
The Amazon Scraper API is a powerful Python library that allows you to scrape product information from Amazon's website. It provides easy-to-use functions to extract product details, prices, reviews, and more. Use it to gather data for market research, price tracking, or any application that requires Amazon product information.Key Features:Retrieve product details, including title, description, and ASIN.Get pricing information, including the current price and any discounts.Scrape customer reviews, ratings, and review summaries.Perform searches and extract search results.This library is designed to be user-friendly and provides extensive documentation and examples to get you started quickly.
amazon-scraper-toolkit
No description available on PyPI.
amazon-search-scraper-bc
amazon_search_scraper_bc

amazon_search_scraper_bc is a Python library for getting Amazon search data and product details data in your project.

Installation

Use the package manager pip to install amazon_search_scraper_bc.

```
pip install amazon_search_scraper_bc
```

Usage

You need to have webdrivers downloaded for it to work and give the program the path of the drivers.

To get the search list:

```python
from amazon_search_scraper_bc.scraper import AmazonScraper
import os

pwd = os.path.abspath(os.getcwd())
# Location of your webdriver, e.g.:
path = pwd + "/webdrivers/chromedriver_linux64/chromedriver"
# You can select which driver to use ['chrome', 'firefox']; it will open the browser in headless mode.
amasc = AmazonScraper("chrome", path)
print(amasc.amazon_product_search("Mobile"))
```

To get product details:

```python
from amazon_search_scraper_bc.scraper import AmazonScraper
import os

pwd = os.path.abspath(os.getcwd())
path = pwd + "/webdrivers/chromedriver_linux64/chromedriver"
amasc = AmazonScraper("chrome", path)
url = "https://www.amazon.in/Redmi-9A-2GB-32GB-Storage/dp/B08696XB4B/ref=sr_1_3?dchild=1&keywords=mobile&qid=1629271762&sr=8-3"
print(amasc.get_single_product_details(url))
```

To change the Amazon URL:

```python
from amazon_search_scraper_bc.scraper import AmazonScraper
import os

pwd = os.path.abspath(os.getcwd())
path = pwd + "/webdrivers/chromedriver_linux64/chromedriver"
# By default it takes https://www.amazon.in .
# When calling get_single_product_details you need to send the full URL.
amasc = AmazonScraper("chrome", path, "https://www.amazon.com")
```

To see the time it is taking:

```python
from amazon_search_scraper_bc.scraper import AmazonScraper
import os

pwd = os.path.abspath(os.getcwd())
path = pwd + "/webdrivers/chromedriver_linux64/chromedriver"
amasc = AmazonScraper("chrome", path)
# page_no needs to be greater than 0
page_no = 1
time_show = True
print(amasc.amazon_product_search("Mobile", page_no, time_show))
```

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

License

Copyright [2021] [Barno Chakraborty]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
amazon-selenium
No description available on PyPI.
amazonsessender
Mail sender which uses the Amazon SES client
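The package's own interface is not documented here. For orientation only, this is what sending a plain message through the underlying Amazon SES client looks like with boto3 directly (a generic sketch, not this package's API; the addresses are placeholders and must be verified while the account is in SES sandbox mode):

```python
import boto3

ses = boto3.client('ses', region_name='us-east-1')

# Plain-text mail via the SES v1 SendEmail API.
ses.send_email(
    Source='sender@example.com',
    Destination={'ToAddresses': ['recipient@example.com']},
    Message={
        'Subject': {'Data': 'Hello from SES'},
        'Body': {'Text': {'Data': 'This message was sent with boto3.'}},
    },
)
```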
amazon-ses-template-editor
Create, preview, test and upload Amazon SES templates

Console command to edit, test and upload Amazon SES templates.

Currently AWS SES has an API endpoint to create email templates with handlebars syntax and an API endpoint to send emails with a template name and a dictionary of template variables, but it does not provide any UI to create and edit templates. This script allows you to manage your email templates from the command line.

Installation

```
pip install amazon-ses-template-editor
```

Usage

```
usage: amazon-ses-template-editor.py [-h] [-c CONFIG] {upload,test,preview} ...

positional arguments:
  {upload,test,preview}
    upload              Uploads templates from configuration file to SES using your system credentials
    upload_test         Uploads templates for testing purposes
    test                Sends emails to your email address so you can test layout
    preview             Starts minimal http server for email template testing

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        Path to configuration file, default ./config.toml
```

Uploading emails

```
usage: amazon-ses-template-editor.py upload [-h] [-t TEMPLATE]

optional arguments:
  -h, --help            show this help message and exit
  -t TEMPLATE, --template TEMPLATE
                        Uploads only one template with given name
```

Testing emails

```
usage: amazon-ses-template-editor.py upload_test [-h] [-t TEMPLATE]
usage: amazon-ses-template-editor.py test [-h] [-t TEMPLATE]

optional arguments:
  -h, --help            show this help message and exit
  -t TEMPLATE, --template TEMPLATE
                        Uploads only one template with given name
```

Config example

```toml
[[templates]]
name = 'weekly-email'
html = "templates/weekly-email.hb2"
title = 'Your links weekly report'

[[templates]]
name = 'confirmation-email'
html = "templates/confirmation-email.hb2"
title = 'Please verify your email'

[partials]
footer = 'partials/footer.hb2'

[tests]
from = '[email protected]'
to = ['[email protected]', '[email protected]']

[[test]]
template = 'weekly-email'

[test.data]
encodedEmail = '[email protected]'

[test.data.user]
id = 12345
name = 'Test test'

[[test.data.domains]]
[test.data.domains.domain]
hostname = 'test.com'

[test.data.domains.stats]
links = 50
clicks = 50
humanClicks = 50

[[test.data.domains.stats.device]]
deviceName = "Desktop"
score = 12345
```

Author

Andrii Kostenko, Short.cm Inc (https://short.cm)
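Once uploaded, a template can be exercised without this tool, straight from boto3. A minimal sketch, assuming the 'confirmation-email' template from the config above already exists in your SES account and the addresses are placeholders:

```python
import json
import boto3

ses = boto3.client('ses', region_name='us-east-1')

ses.send_templated_email(
    Source='noreply@example.com',
    Destination={'ToAddresses': ['user@example.com']},
    Template='confirmation-email',  # template name from the config example above
    TemplateData=json.dumps({'encodedEmail': 'user@example.com'}),  # variables for the handlebars template
)
```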
amazon-short-url
Amazon Short URL

A script to shorten Amazon product page URLs.

Usage

```
amazon-short-url [url-of-product-page]
```

Installation

With pipx:

```
pipx install amazon-short-url
```

With pip:

```
pip install amazon-short-url
```
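The README does not show the shortening logic itself. The usual approach is to keep only the host and the /dp/ASIN segment of a product URL; the sketch below illustrates that idea and is not necessarily this package's exact implementation:

```python
import re
from urllib.parse import urlparse

def shorten_amazon_url(url: str) -> str:
    """Reduce a product URL to https://<host>/dp/<ASIN> if an ASIN is present."""
    parsed = urlparse(url)
    # ASINs are 10 uppercase alphanumeric characters, found after /dp/ or /gp/product/.
    match = re.search(r'/(?:dp|gp/product)/([A-Z0-9]{10})', parsed.path)
    if not match:
        raise ValueError('no ASIN found in URL')
    return f'https://{parsed.netloc}/dp/{match.group(1)}'

print(shorten_amazon_url('https://www.amazon.com/Some-Product-Name/dp/B016LMC90O/ref=sr_1_1'))
# -> https://www.amazon.com/dp/B016LMC90O
```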
amazon-sns-extended-client
Amazon SNS Extended Client Library for PythonImplements the functionality ofamazon-sns-java-extended-client-libin PythonGetting StartedSign up for AWS-- Before you begin, you need an AWS account. For more information about creating an AWS account, seecreate and activate aws account.Minimum requirements-- Python 3.x (or later) and pipDownload-- Download the latest preview release or pick it up from pip:pip install amazon-sns-extended-clientOverviewsns-extended-client allows for publishing large messages through SNS via S3. This is the same mechanism that the Amazon libraryamazon-sns-java-extended-client-libprovides.Additional attributes available onboto3SNSclient,TopicandPlatformEndpointobjects.large_payload_support -- the S3 bucket name that will store large messages.use_legacy_attribute -- ifTrue, then all published messages use the Legacy reserved message attribute (SQSLargePayloadSize) instead of the current reserved message attribute (ExtendedPayloadSize).message_size_threshold -- the threshold for storing the message in the large messages bucket. Cannot be less than0or greater than262144. Defaults to262144.always_through_s3 -- ifTrue, then all messages will be serialized to S3. Defaults toFalses3 -- the boto3 S3resourceobject to use to store objects to S3. Use this if you want to control the S3 resource (for example, custom S3 config or credentials). Defaults toboto3.resource("s3")on first use if not previously set.UsageNote:The s3 bucket must already exist prior to usage, and be accessible by whatever credentials you have availableEnabling support for large payloads (>256Kb)importboto3importsns_extended_client# Low level clientsns=boto3.client('sns')sns.large_payload_support='bucket-name'# boto SNS.Topic resourceresource=boto3.resource('sns')topic=resource.Topic('topic-arn')# Ortopic=resource.create_topic(Name='topic-name')topic.large_payload_support='my-bucket-name'# boto SNS.PlatformEndpoint resourceresource=boto3.resource('sns')platform_endpoint=resource.PlatformEndpoint('endpoint-arn')platform_endpoint.large_payload_support='my-bucket-name'Enabling support for large payloads (>64K)importboto3importsns_extended_client# Low level clientsns=boto3.client('sns')sns.large_payload_support='BUCKET-NAME'sns.message_size_threshold=65536# boto SNS.Topic resourceresource=boto3.resource('sns')topic=resource.Topic('topic-arn')# Ortopic=resource.create_topic('topic-name')topic.large_payload_support='bucket-name'topic.message_size_threshold=65536# boto SNS.PlatformEndpoint resourceresource=boto3.resource('sns')platform_endpoint=resource.PlatformEndpoint('endpoint-arn')platform_endpoint.large_payload_support='my-bucket-name'platform_endpoint.message_size_threshold=65536Enabling support for large payloads for all messagesimportboto3importsns_extended_client# Low level clientsns=boto3.client('sns')sns.large_payload_support='my-bucket-name'sns.always_through_s3=True# boto SNS.Topic resourceresource=boto3.resource('sns')topic=resource.Topic('topic-arn')# Ortopic=resource.create_topic(Name='topic-name')topic.large_payload_support='my-bucket-name'topic.always_through_s3=True# boto SNS.PlatformEndpoint resourceresource=boto3.resource('sns')platform_endpoint=resource.PlatformEndpoint('endpoint-arn')platform_endpoint.large_payload_support='my-bucket-name'platform_endpoint.always_through_s3=TrueSetting a custom S3 resourceimportboto3frombotocore.configimportConfigimportsns_extended_client# Low level 
clientsns=boto3.client('sns')sns.large_payload_support='my-bucket-name'sns.s3=boto3.resource('s3',config=Config(signature_version='s3v4',s3={"use_accelerate_endpoint":True}))# boto SNS.Topic resourceresource=boto3.resource('sns')topic=resource.Topic('topic-arn')# Ortopic=resource.topic(Name='topic-name')topic.large_payload_support='my-bucket-name'topic.s3=boto3.resource('s3',config=Config(signature_version='s3v4',s3={"use_accelerate_endpoint":True}))# boto SNS.PlatformEndpoint resourceresource=boto3.resource('sns')platform_endpoint=resource.PlatformEndpoint('endpoint-arn')platform_endpoint.large_payload_support='my-bucket-name'platform_endpoint.s3=boto3.resource('s3',config=Config(signature_version='s3v4',s3={"use_accelerate_endpoint":True}))Setting a custom S3 KeyPublish Message Supports user defined S3 Key used to store objects in the specified Bucket.To use custom keys add the S3 key as a Message Attribute in the MessageAttributes argument with the MessageAttribute.Key - "S3Key"sns.publish(Message="message",MessageAttributes={"S3Key":{"DataType":"String","StringValue":"--S3--Key--",}},)Using SQSLargePayloadSize as reserved message attributeInitial versions of the Java SNS Extended Client used 'SQSLargePayloadSize' as the reserved message attribute to determine that a message is an S3 message.In the later versions it was changed to use 'ExtendedPayloadSize'.To use the Legacy reserved message attribute set use_legacy_attribute parameter toTrue.importboto3importsns_extended_client# Low level clientsns=boto3.client('sns')sns.large_payload_support='bucket-name'sns.use_legacy_attribute=True# boto SNS.Topic resourceresource=boto3.resource('sns')topic=resource.Topic('topic-arn')# Ortopic=resource.create_topic(Name='topic-name')topic.large_payload_support='my-bucket-name'topic.use_legacy_attribute=True# boto SNS.PlatformEndpoint resourceresource=boto3.resource('sns')platform_endpoint=resource.PlatformEndpoint('endpoint-arn')platform_endpoint.large_payload_support='my-bucket-name'platform_endpoint.use_legacy_attribute=TrueCODE SAMPLEHere is an example of using the extended payload utility:Here we create an SNS Topic and SQS Queue, then subscribe the queue to the topic.We publish messages to the created Topic and print the published message from the queue along with the original message retrieved from S3.importboto3fromsns_extended_clientimportSNSExtendedClientSessionfromjsonimportloadss3_extended_payload_bucket="extended-client-bucket-store"# S3 bucket with the given bucket name is a resource which is created and accessible with the given AWS credentialsTOPIC_NAME="---TOPIC-NAME---"QUEUE_NAME="---QUEUE-NAME---"defget_msg_from_s3(body):"""Handy Helper to fetch message from S3"""json_msg=loads(body)s3_client=boto3.client("s3")s3_object=s3_client.get_object(Bucket=json_msg[1].get("s3BucketName"),Key=json_msg[1].get("s3Key"))msg=s3_object.get("Body").read().decode()returnmsgdeffetch_and_print_from_sqs(sqs,queue_url):message=sqs.receive_message(QueueUrl=queue_url,MessageAttributeNames=["All"],MaxNumberOfMessages=1).get("Messages")[0]message_body=message.get("Body")print("Published Message:{}".format(message_body))print("Message Stored in S3 Bucket is:{}\n".format(get_msg_from_s3(message_body)))sns_extended_client=boto3.client("sns",region_name="us-east-1")create_topic_response=sns_extended_client.create_topic(Name=TOPIC_NAME)demo_topic_arn=create_topic_response.get("TopicArn")# create and subscribe an sqs queue to the sns 
clientsqs=boto3.client("sqs")demo_queue_url=sqs.create_queue(QueueName=QUEUE_NAME).get("QueueUrl")demo_queue_arn=sqs.get_queue_attributes(QueueUrl=demo_queue_url,AttributeNames=["QueueArn"])["Attributes"].get("QueueArn")# Set the RawMessageDelivery subscription attribute to TRUE if you want to use# SQSExtendedClient to help with retrieving msg from S3sns_extended_client.subscribe(TopicArn=demo_topic_arn,Protocol="sqs",Endpoint=demo_queue_arn,Attributes={"RawMessageDelivery":"true"})# Below is the example that all the messages will be sent to the S3 bucketsns_extended_client.large_payload_support=s3_extended_payload_bucketsns_extended_client.always_through_s3=Truesns_extended_client.publish(TopicArn=demo_topic_arn,Message="This message should be published to S3")print("\n\nPublished using SNS extended client:")fetch_and_print_from_sqs(sqs,demo_queue_url)# Prints message stored in s3# Below is the example that all the messages larger than 32 bytes will be sent to the S3 bucketprint("\nUsing decreased message size threshold:")sns_extended_client.always_through_s3=Falsesns_extended_client.message_size_threshold=32sns_extended_client.publish(TopicArn=demo_topic_arn,Message="This message should be published to S3 as it exceeds the limit of the 32 bytes",)fetch_and_print_from_sqs(sqs,demo_queue_url)# Prints message stored in s3# Below is the example to publish message using the SNS.Topic resourcesns_extended_client_resource=SNSExtendedClientSession().resource("sns",region_name="us-east-1")topic=sns_extended_client_resource.Topic(demo_topic_arn)topic.large_payload_support=s3_extended_payload_buckettopic.always_through_s3=True# Can Set custom S3 Keys to be used to store objects in S3topic.publish(Message="This message should be published to S3 using the topic resource",MessageAttributes={"S3Key":{"DataType":"String","StringValue":"347c11c4-a22c-42e4-a6a2-9b5af5b76587",}},)print("\nPublished using Topic Resource:")fetch_and_print_from_sqs(sqs,demo_queue_url)PRODUCED OUTPUT:Published using SNS extended client: Published Message: ["software.amazon.payloadoffloading.PayloadS3Pointer", {"s3BucketName": "extended-client-bucket-store", "s3Key": "465d51ea-2c85-4cf8-9ff7-f0a20636ac54"}] Message Stored in S3 Bucket is: This message should be published to S3 Using decreased message size threshold: Published Message: ["software.amazon.payloadoffloading.PayloadS3Pointer", {"s3BucketName": "extended-client-bucket-store", "s3Key": "4e32bc6c-e67e-4e09-982b-66dfbe0c588a"}] Message Stored in S3 Bucket is: This message should be published to S3 as it exceeds the limit of the 32 bytes Published using Topic Resource: Published Message: ["software.amazon.payloadoffloading.PayloadS3Pointer", {"s3BucketName": "extended-client-bucket-store", "s3Key": "347c11c4-a22c-42e4-a6a2-9b5af5b76587"}] Message Stored in S3 Bucket is: This message should be published to S3 using the topic resourceDEVELOPMENTWe have built-in Makefile to run test, format check or fix in one command. Please checkMakefilefor more information.Just run below command, and it will do format check and run unit test:make ciSecuritySeeCONTRIBUTINGfor more information.LicenseThis project is licensed under the Apache-2.0 License.
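On the consuming side, the reserved message attribute described above (ExtendedPayloadSize, or SQSLargePayloadSize in legacy mode) is what tells a reader whether a body is an S3 pointer. A sketch of that check, reusing the get_msg_from_s3 helper from the code sample; this function is our illustration, not part of the library:

```python
def resolve_body(message: dict) -> str:
    """Return the real payload for one received SQS message dict, dereferencing S3 pointers."""
    attributes = message.get('MessageAttributes') or {}
    if 'ExtendedPayloadSize' in attributes or 'SQSLargePayloadSize' in attributes:
        return get_msg_from_s3(message['Body'])  # helper defined in the code sample above
    return message['Body']
```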
amazon-sp-api-clients
amazon-sp-api-clientsThis is a package generated from amazon selling partner open api models.The package is tested in many situations, with fully type hint supported. Enjoy it!AttentionV1.0.0 changes many api, compared with v0.x.x!注意!V1.0.0相较于v0.x.x更改了大量的API!Featuresready to use;provide code to generate clients, in case that amazon update models;type hint;orders api, feed api, report api, and all other apis;automatically manage tokens.Installationpipinstallamazon-sp-api-clientsNoteFor technical support, please [email protected] this lib is only open access but not open source, and now it's time to make it public to serve more developers.If there's any bug, please fell free to open an issue or send a pr.UsageFor saving time, I just paste part of my test code here as a demo.For better understanding, all the fields are the same length of actual fields, and some readable information are kept.fromdatetimeimportdatetimeimportamazon_sp_api_clientsendpoint="https://sellingpartnerapi-eu.amazon.com"marketplace_id="XXXXXXXXXXXXXX"refresh_token="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx""xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx""xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx""xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"role_arn="arn:aws:iam::123456789012:role/xxxxxx"aws_access_key='XXXXXXXXXXXXXXXXXXXX'aws_secret_key="XXXXX/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"client_id='amzn1.application-oa2-client.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'client_secret='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'client_config=dict(role_arn=role_arn,endpoint=endpoint,marketplace_id=marketplace_id,refresh_token=refresh_token,aws_access_key=aws_access_key,aws_secret_key=aws_secret_key,lwa_client_key=client_id,lwa_client_secret=client_secret,)clients=amazon_sp_api_clients.AmazonSpApiClients(**client_config)orders=clients.orders_v0.getOrders(MarketplaceIds=[marketplace_id],CreatedAfter=datetime(2000,1,1).isoformat()).payload.Ordersfororderinorders:print(order.AmazonOrderId,order.LastUpdateDate)ConfigurationThe client configuration can be set both at the initiation and as environment variables.SP_API_ROLE_ARNSP_API_ENDPOINTSP_API_REGIONSP_API_MARKETPLACE_IDSP_API_REFRESH_TOKENSP_API_AWS_ACCESS_KEYSP_API_AWS_SECRET_KEYSP_API_LWA_CLIENT_KEYSP_API_LWA_CLIENT_SECRETBuildThe client is generated in the following steps:download amazon open api repository;copy open api 2 json files from the amazon repository to a single directory;convert open api 2 json files to open api 3 json files;convert open api 3 json files to py clients.The main script of generation is thetest_mainpython file.When convert open api to py clients, I separated the process into 6 steps, which are defined in theswager_client_generator.stagesmodule.If my build is not suitable for your demand, or amazon api model updates but my build do not follow, you can clone this repo, modify theapi.pyttemplate and build it by yourself, and please push a PR, thanks!AcknowledgementThe auth method is partially frompython-amazon-sp-api.NoteIf this library helps you, please give me a star, thanks!如果这个库对您有用,请为我点亮Star,谢谢!
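The configuration list above implies the constructor arguments can be dropped entirely when the SP_API_* variables are exported. If so, initialization reduces to the sketch below (inferred from that statement, not verified against the library):

```python
import amazon_sp_api_clients

# Assumes all SP_API_* variables from the list above are already exported in the
# environment, e.g. in a shell: export SP_API_MARKETPLACE_ID=XXXXXXXXXXXXXX
clients = amazon_sp_api_clients.AmazonSpApiClients()
```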
amazon-sqs-extended-client
Amazon SQS Extended Client Library for PythonImplements the functionality ofamazon-sqs-java-extended-client-libin PythonThe Amazon SQS Extended Client allows clients to manage Amazon SQS message payloads that exceed the 256 KB message size limit, up to a size of 2 GB. In the event of publishing such large messages, the client accomplishes this feat by storing the actual payload in a S3 bucket and by storing the reference of the stored object in the SQS queue. Similarly, the extended-client is also used for retrieving and dereferencing these references of message objects stored in S3. Thus, the library is used for the following purposes:Specify whether payloads are always stored in Amazon S3 or only when a payload's size exceeds 256 KB.Send a message that references a single message object stored in an Amazon S3 bucket.Get the corresponding payload object from an Amazon S3 bucket.Delete the corresponding payload object from an Amazon S3 bucket.Message Attributes for the SQS Extended ClientThe SQS Extended Client makes use of several additional message attributes which helps it to handle large message payloads. This section shall outline the various attributes present:large_payload_support: This consists of the S3 Bucket name that is responsible for storing large messagesalways_through_s3: This decides if all the message should be serialized to the S3 bucket. If set toFalse, messages smaller than256 KBwill not be serialized to the s3 bucket.use_legacy_attribute: This message attribute is present in the header for a message structure and is important for consumers to understand the name of the key in the dictionary which contains the information of the large message payload. If True, then all published messages use the Legacy reserved message attribute (SQSLargePayloadSize) instead of the current reserved message attribute (ExtendedPayloadSize).Global Variables used by the SQS Extended ClientMESSAGE_POINTER_CLASS: The value held by this global variable, or byLEGACY_MESSAGE_POINTER_CLASS, is critical to the functioning of the client as it holds the class name of the pointer that stored the original payload in a S3 bucket.MAX_ALLOWED_ATTRIBUTES: The value held by this global variable denotes the constraint of having a maximum of 10 message attributes for each large message payload.S3_KEY_ATTRIBUTE_NAME: The value held by this global variable denotes the S3 Key, if present, which would be used to store the large message payload.DEFAULT_MESSAGE_SIZE_THRESHOLD: This states the threshold for the size of the messages in the S3 bucket and it cannot be less than 0 or more than 262144 (default value).RESERVED_ATTRIBUTE_NAME: The value held by this global variable, or byLEGACY_RESERVED_ATTRIBUTE_NAME, denotes the attribute name which will be reserved for the purpose of handling large message payloads.Getting StartedSign up for AWS-- Before you begin, you need an AWS account. 
For more information about creating an AWS account, seecreate and activate aws account.Minimum requirements-- Python 3.x (or later) and pipDownload-- Download the latest preview release or pick it up from pip:pip install amazon-sqs-extended-clientUsing the Extended ClientEnsure that the package has been imported in the file you want to run your code from, by doing:Setting up the prerequisites for sending message payloads > 256 KBimport boto3 import sqs_extended_client sqs_extended_client = boto3.client("sqs", region_name="us-east-1") sqs_extended_client.large_payload_support = "BUCKET_NAME_HERE" sqs_extended_client.use_legacy_attribute = False # Creating a SQS Queue and extracting it's Queue URL queue = sqs_extended_client.create_queue( QueueName = "DemoPreparationQueue" ) queue_url = sqs_extended_client.get_queue_url( QueueName = "DemoPreparationQueue" )['QueueUrl'] # Creating a S3 bucket using the S3 client sqs_extended_client.s3_client.create_bucket(Bucket=sqs_extended_client.large_payload_support)Enabling support for payloads > 256 KB# Sending a large message large_message = small_message * 300000 # Shall cross the limit of 256 KB send_message_response = sqs_extended_client.send_message( QueueUrl=queue_url, MessageBody=large_message ) assert send_message_response['ResponseMetadata']['HTTPStatusCode'] == 200# Receiving the large message receive_message_response = sqs_extended_client.receive_message( QueueUrl=queue_url, MessageAttributeNames=['All'] ) assert receive_message_response['Messages'][0]['Body'] == large_message receipt_handle = receive_message_response['Messages'][0]['ReceiptHandle']# Deleting the large message # Set to True for deleting the payload from S3 sqs_extended_client.delete_payload_from_s3 = True delete_message_response = sqs_extended_client.delete_message( QueueUrl=queue_url, ReceiptHandle=receipt_handle ) assert delete_message_response['ResponseMetadata']['HTTPStatusCode'] == 200# Deleting the queue delete_queue_response = sqs_extended_client.delete_queue( QueueUrl=queue_url ) assert delete_queue_response['ResponseMetadata']['HTTPStatusCode'] == 200SecuritySeeCONTRIBUTINGfor more information.LicenseThis project is licensed under the Apache-2.0 License.
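Note that the send snippet above references small_message without defining it; below is a self-contained variant of that step, with an assumed one-character filler:

```python
# Self-contained version of the send step above (small_message was never defined there).
small_message = 's'
large_message = small_message * 300000  # ~293 KB, comfortably over the 256 KB limit

send_message_response = sqs_extended_client.send_message(
    QueueUrl=queue_url,
    MessageBody=large_message,
)
assert send_message_response['ResponseMetadata']['HTTPStatusCode'] == 200
```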
amazon-ss1
Amazon ss Python SDK-long
amazonstoreprice
AmazonStorePrice

This module finds the price of a product from a given Amazon URL. Compatible with all Amazon stores.

Links

Documentation: http://amazonstoreprice.readthedocs.org/
Bug Tracker: https://github.com/Mirio/amazonstoreprice/issues
GitHub: https://github.com/Mirio/amazonstoreprice

Requirements

Python 3.x
Python libraries: [ 'requests', 'beautifulsoup4' ]

How to install

```
pip install amazonstoreprice
```

Install from source:

```
git clone https://github.com/Mirio/amazonstoreprice.git
cd amazonstoreprice
python setup.py install
```

Getting Started

Example:

```python
from amazonstoreprice import AmazonStorePrice

url = "http://www.amazon.it/Inside-Out-Ronnie-Del-Carmen/dp/B016LMC90O/" \
      "ref=sr_1_1?ie=UTF8&qid=1455389197&sr=8-1&keywords=inside+out"
pricelib = AmazonStorePrice()
print(pricelib.getprice(url, retry_ontemp=True))
```

Output:

```
$ python example_getprice.py
15.99
```
amazon-textract-caller
Textract-Calleramazon-textract-caller provides a collection of ready to use functions and sample implementations to speed up the evaluation and development for any project using Amazon Textract.Making it easy to call Amazon Textract regardless of file type and location.Install>python-mpipinstallamazon-textract-callerFunctionsfromtextractcallerimportcall_textractdefcall_textract(input_document:Union[str,bytes],features:Optional[List[Textract_Features]]=None,queries_config:Optional[QueriesConfig]=None,output_config:Optional[OutputConfig]=None,adapters_config:Optional[AdaptersConfig]=None,kms_key_id:str="",job_tag:str="",notification_channel:Optional[NotificationChannel]=None,client_request_token:str="",return_job_id:bool=False,force_async_api:bool=False,call_mode:Textract_Call_Mode=Textract_Call_Mode.DEFAULT,boto3_textract_client=None,job_done_polling_interval=1)->dict:Also useful when receiving the JSON response from an asynchronous job (start_document_text_detection or start_document_analysis)fromtextractcallerimportget_full_jsondefget_full_json(job_id:str=None,textract_api:Textract_API=Textract_API.DETECT,boto3_textract_client=None)->dict:And when receiving the JSON from the OutputConfig location, this method is useful as well.fromtextractcallerimportget_full_json_from_output_configdefget_full_json_from_output_config(output_config:OutputConfig=None,job_id:str=None,s3_client=None)->dict:SamplesCalling with file from local filesystem only with detect_texttextract_json=call_textract(input_document="/folder/local-filesystem-file.png")Calling with file from local filesystem only detect_text and using in Textract Response Parser(needs trp dependency throughpython -m pip install amazon-textract-response-parser)importjsonfromtrpimportDocumentfromtextractcallerimportcall_textracttextract_json=call_textract(input_document="/folder/local-filesystem-file.png")d=Document(textract_json)Calling with Queries for a multi-page document and extract the Answerssample also uses the amazon-textract-response-parserpython -m pip install amazon-textract-caller amazon-textract-response-parserimporttextractcallerastcimporttrp.trp2ast2importboto3textract=boto3.client('textract',region_name="us-east-2")q1=tc.Query(text="What is the employee SSN?",alias="SSN",pages=["1"])q2=tc.Query(text="What is YTD gross pay?",alias="GROSS_PAY",pages=["2"])textract_json=tc.call_textract(input_document="s3://amazon-textract-public-content/blogs/2-pager.pdf",queries_config=tc.QueriesConfig(queries=[q1,q2]),features=[tc.Textract_Features.QUERIES],force_async_api=True,boto3_textract_client=textract)t_doc:t2.TDocument=t2.TDocumentSchema().load(textract_json)# type: ignoreforpageint_doc.pages:query_answers=t_doc.get_query_answers(page=page)forxinquery_answers:print(f"{x[1]},{x[2]}")Calling with Custom Queries for a multi-page document using an adaptersample also uses the amazon-textract-response-parserpython -m pip install amazon-textract-caller amazon-textract-response-parserimporttextractcallerastcimporttrp.trp2ast2importboto3textract=boto3.client('textract',region_name="us-east-2")q1=tc.Query(text="What is the employee SSN?",alias="SSN",pages=["1"])q2=tc.Query(text="What is YTD gross 
pay?",alias="GROSS_PAY",pages=["2"])adapter1=tc.Adapter(adapter_id="2e9bf1c4aa31",version="1",pages=["1"])textract_json=tc.call_textract(input_document="s3://amazon-textract-public-content/blogs/2-pager.pdf",queries_config=tc.QueriesConfig(queries=[q1,q2]),adapters_config=tc.AdaptersConfig(adapters=[adapter1])features=[tc.Textract_Features.QUERIES],force_async_api=True,boto3_textract_client=textract)t_doc:t2.TDocument=t2.TDocumentSchema().load(textract_json)# type: ignoreforpageint_doc.pages:query_answers=t_doc.get_query_answers(page=page)forxinquery_answers:print(f"{x[1]},{x[2]}")Calling with file from local filesystem with TABLES featuresfromtextractcallerimportcall_textract,Textract_Featuresfeatures=[Textract_Features.TABLES]response=call_textract(input_document="/folder/local-filesystem-file.png",features=features)Call with images located on S3 but force asynchronous APIfromtextractcallerimportcall_textractresponse=call_textract(input_document="s3://some-bucket/w2-example.png",force_async_api=True)Call with OutputConfig, Customer-Managed-Keyfromtextractcallerimportcall_textractfromtextractcallerimportOutputConfig,Textract_Featuresoutput_config=OutputConfig(s3_bucket="somebucket-encrypted",s3_prefix="output/")response=call_textract(input_document="s3://someprefix/somefile.png",force_async_api=True,output_config=output_config,kms_key_id="arn:aws:kms:us-east-1:12345678901:key/some-key-id-ref-erence",return_job_id=False,job_tag="sometag",client_request_token="sometoken")Call with PDF located on S3 and force return of JobId instead of JSON responsefromtextractcallerimportcall_textractresponse=call_textract(input_document="s3://some-bucket/some-document.pdf",return_job_id=True)job_id=response['JobId']
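get_full_json is listed under Functions but never exercised in the samples. Combining it with the return_job_id example above would look roughly like this; the import location of Textract_API is assumed from the signatures shown earlier:

```python
from textractcaller import call_textract, get_full_json
from textractcaller.t_call import Textract_API  # import location assumed

response = call_textract(
    input_document="s3://some-bucket/some-document.pdf",
    return_job_id=True,  # start the async job and hand back the JobId immediately
)
textract_json = get_full_json(
    job_id=response['JobId'],
    textract_api=Textract_API.DETECT,  # DetectDocumentText, since no features were requested
)
```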
amazon-textract-geofinder
Textract-Pipeline-GeoFinderProvides functions to use geometric information to extract information.Use cases include:Give context to key/value pairs from the Amazon Textract AnalyzeDocument API for FORMSFind values in specific areasInstall>python-mpipinstallamazon-textract-geofinderMake sure your environment is setup with AWS credentials through configuration files or environment variables or an attached role. (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)ConceptTo find information in a document based on geometry with this library the main advantage over defining x,y coordinates where the expected value should be is the concept of an area.An area is ultimately defined by a box with x_min, y_min, x_max, y_max coordinates but can be defined by finding words/phrases in the document and then use to create the area.From there functions to parse the information in the area help to extract the information. E. g. by defining the area based on the question like 'Did you feel fever or feverish lately?' we can associate the answers to it and create a new key/value pair specific to this question.SamplesGet context for key value pairsSample image:The Amazon Textract AnalyzeDocument API with the FORMS feature returns the following keys:KeyValueFirst Name:ALEJANDROFirst Name:CARLOSRelationship to Patient:BROTHERFirst Name:JANEMarital Status:MARRIEDPhone:646-555-0111Last Name:SALAZARPhone:212-555-0150Relationship to Patient:FRIENDLast Name:ROSALEZCity:ANYTOWNPhone:650-555-0123Address:123 ANY STREETYesSELECTEDYesNOT_SELECTEDDate of Birth:10/10/1982Last Name:DOESex:MYesNOT_SELECTEDYesNOT_SELECTEDYesNOT_SELECTEDState:CAZip Code:12345Email Address:NoNOT_SELECTEDNoSELECTEDNoNOT_SELECTEDYesSELECTEDNoSELECTEDNoSELECTEDNoSELECTEDBut the information to which section of the document the individual keys belong is not obvious. 
Most keys appear multiple times and we want to give them context to associate them with the 'Patient', 'Emergency Contact 1', 'Emergency Contact 2' or specific questions.This Jupyter notebook that walks through the sample:sample notebookMake sure to have AWS credentials setup when starting the notebook locally or use a SageMaker notebook with a role including permissions for Amazon Textract.This code snippet is take from the notebook.python-mpipinstallamazon-textract-helperamazon-textract-geofinderfromtextractgeofinder.ocrdbimportAreaSelectionfromtextractgeofinder.tgeofinderimportKeyValue,TGeoFinder,AreaSelection,SelectionElementfromtextractprettyprinter.t_pretty_printimportget_forms_stringfromtextractcallerimportcall_textractfromtextractcaller.t_callimportTextract_Featuresimporttrp.trp2ast2image_filename='./tests/data/patient_intake_form_sample.jpg'j=call_textract(input_document=image_filename,features=[Textract_Features.FORMS])t_document=t2.TDocumentSchema().load(j)doc_height=1000doc_width=1000geofinder_doc=TGeoFinder(j,doc_height=doc_height,doc_width=doc_width)defset_hierarchy_kv(list_kv:list[KeyValue],t_document:t2.TDocument,page_block:t2.TBlock,prefix="BORROWER"):forxinlist_kv:t_document.add_virtual_key_for_existing_key(key_name=f"{prefix}_{x.key.text}",existing_key=t_document.get_block_by_id(x.key.id),page_block=page_block)# patient informationpatient_information=geofinder_doc.find_phrase_on_page("patient information")[0]emergency_contact_1=geofinder_doc.find_phrase_on_page("emergency contact 1:",min_textdistance=0.99)[0]top_left=t2.TPoint(y=patient_information.ymax,x=0)lower_right=t2.TPoint(y=emergency_contact_1.ymin,x=doc_width)form_fields=geofinder_doc.get_form_fields_in_area(area_selection=AreaSelection(top_left=top_left,lower_right=lower_right))set_hierarchy_kv(list_kv=form_fields,t_document=t_document,prefix='PATIENT',page_block=t_document.pages[0])set_hierarchy_kv(list_kv=form_fields,t_document=t_document,prefix='PATIENT',page_block=t_document.pages[0])print(get_forms_string(t2.TDocumentSchema().dump(t_document)))KeyValue......PATIENT_first name:ALEJANDROPATIENT_address:123 ANY STREETPATIENT_sex:MPATIENT_state:CAPATIENT_zip code:12345PATIENT_marital status:MARRIEDPATIENT_last name:ROSALEZPATIENT_phone:646-555-0111PATIENT_email address:PATIENT_city:ANYTOWNPATIENT_date of birth:10/10/1982Using the Amazon Textact Helper command line tool with the sampleThis will show the full result, like the notebook.>python-mpipinstallamazon-textract-helperamazon-textract-geofinder >cattests/data/patient_intake_form_sample.json|bin/amazon-textract-geofinder|amazon-textract--stdin--pretty-printFORMSKeyValueFirst Name:ALEJANDROFirst Name:CARLOSRelationship to Patient:BROTHERFirst Name:JANEMarital Status:MARRIEDPhone:646-555-0111Last Name:SALAZARPhone:212-555-0150Relationship to Patient:FRIENDLast Name:ROSALEZCity:ANYTOWNPhone:650-555-0123Address:123 ANY STREETYesSELECTEDYesNOT_SELECTEDDate of Birth:10/10/1982Last Name:DOESex:MYesNOT_SELECTEDYesNOT_SELECTEDYesNOT_SELECTEDState:CAZip Code:12345Email Address:NoNOT_SELECTEDNoSELECTEDNoNOT_SELECTEDYesSELECTEDNoSELECTEDNoSELECTEDNoSELECTEDPATIENT_first name:ALEJANDROPATIENT_address:123 ANY STREETPATIENT_sex:MPATIENT_state:CAPATIENT_zip code:12345PATIENT_marital status:MARRIEDPATIENT_last name:ROSALEZPATIENT_phone:646-555-0111PATIENT_email address:PATIENT_city:ANYTOWNPATIENT_date of birth:10/10/1982EMERGENCY_CONTACT_1_first name:CARLOSEMERGENCY_CONTACT_1_phone:212-555-0150EMERGENCY_CONTACT_1_relationship to patient:BROTHEREMERGENCY_CONTACT_1_last 
name:SALAZAREMERGENCY_CONTACT_2_first name:JANEEMERGENCY_CONTACT_2_phone:650-555-0123EMERGENCY_CONTACT_2_last name:DOEEMERGENCY_CONTACT_2_relationship to patient:FRIENDFEVER->YESSELECTEDFEVER->NONOT_SELECTEDSHORTNESS->YESNOT_SELECTEDSHORTNESS->NOSELECTEDCOUGH->YESNOT_SELECTEDCOUGH->NOSELECTEDLOSS_OF_TASTE->YESNOT_SELECTEDLOSS_OF_TASTE->NOSELECTEDCOVID_CONTACT->YESSELECTEDCOVID_CONTACT->NONOT_SELECTEDTRAVEL->YESNOT_SELECTEDTRAVEL->NOSELECTED
amazon-textract-helper
Textractor-Textract-Helperamazon-textract-helper provides a collection of ready to use functions and sample implementations to speed up the evaluation and development for any project using Amazon Textract. It installs a command line tool calledamazon-textractInstall>python-mpipinstallamazon-textract-helperMake sure your environment is setup with AWS credentials through configuration files or environment variables or an attached role. (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)Test>amazon-textract--help usage:amazon-textract[-h](--input-documentINPUT_DOCUMENT|--example|--stdin)[--features{FORMS,TABLES}[{FORMS,TABLES}...]][--pretty-print{WORDS,LINES,FORMS,TABLES}[{WORDS,LINES,FORMS,TABLES}...]][--pretty-print-table-format{csv,plain,simple,github,grid,fancy_grid,pipe,orgtbl,jira,presto,pretty,psql,rst,medi awiki,moinmoin,youtrack,html,unsafehtml,latex,latex_raw,latex_booktabs,latex_longtable,textile,tsv}][--overlay{WORD,LINE,FORM,KEY,VALUE,TABLE,CELL}[{WORD,LINE,FORM,KEY,VALUE,TABLE,CELL}...]][--pop-up-overlay-output][--overlay-output-folderOVERLAY_OUTPUT_FOLDER][--version][--no-stdout][-v|-vv]optionalarguments:-h,--helpshowthishelpmessageandexit--input-documentINPUT_DOCUMENTs3object(s3://)orfilefromlocalfilesystem--exampleusingtheexampledocumenttocallTextract--stdinreceiveJSONfromstdin--features{FORMS,TABLES}[{FORMS,TABLES}...]featurestocallTextractwith.WilltriggercalltoAnalyzeDocumentinsteadofDetectDocumentText--pretty-print{WORDS,LINES,FORMS,TABLES}[{WORDS,LINES,FORMS,TABLES}...]--pretty-print-table-format{csv,plain,simple,github,grid,fancy_grid,pipe,orgtbl,jira,presto,pretty,psql,rst,mediawiki,moinmoin,youtrac k,html,unsafehtml,latex,latex_raw,latex_booktabs,latex_longtable,textile,tsv}whichformattooutputtheprettyprintinformationto.OnlyeffectsFORMSandTABLES--overlay{WORD,LINE,FORM,KEY,VALUE,TABLE,CELL}[{WORD,LINE,FORM,KEY,VALUE,TABLE,CELL}...]defineswhatboundingboxestodrawontheoutput--pop-up-overlay-outputshowsimagewithoverlay--overlay-textshowsimagewithWORDorLINEtextoverlay.WhenbothWORDandLINEoverlayarespecified,WORDtextwillbeoverlayed--overlay-confidenceshowsimagewithconfidenceoverlay--overlay-output-folderOVERLAY_OUTPUT_FOLDERoutputwithboundingboxestofolder--versionprintversioninformation--no-stdoutnooutputtostdout-v>=INFOlevelloggingoutputtostderr-vv>=DEBUGlevelloggingoutputtostderrSample CommandsEasy Start>amazon-textract--examplethis will run the examples document using the DetectDocumentText API. Output will be printed to stdout and look similar to this:{"DocumentMetadata":{"Pages":1},"Blocks":[{"BlockType":"PAGE","Geometry":{"BoundingBox":{"Width":1.0,"Height":1.0,"Left":0.0,"Top":0.0},"Polygon":[{"X":9.33321120033382e-17,"Y":0.0},{"X":1.0,"Y":1.6069064689339292e-16},{"X":1.0,"Y":1.0}],"HTTPHeaders":{"x-amzn-requestid":"12345678-1234-1234-1234-123456789012","content-type":"application/x-amz-json-1.1","content-length":"48177","date":"Thu, 01 Apr 2021 21:50:29 GMT"},"RetryAttempts":0}}It is working.Call with document on S3>amazon-textract--input-document"s3://somebucket/someprefix/someobjectname.png"Output similar to Easy StartCall with document on local file system>amazon-textract--input-document"./somepath/somefilename.png"Output similar to Easy StartWe will continue to use the--exampleparameter to keep it simple and easy to reproduce. 
S3 and local files work the same way, just instead of --example use --input-document .Call with STDIN# first create JSONamazon-textract--example>example.json# now use a stored JSON with the ```amazon-textract``` commandcatexample.json|amazon-textract--stdin-pretty-printLINESCall with FORMS and TABLES>amazon-textract--example--featuresFORMSTABLESThis will call the [AnalyzeDocument API] (https://docs.aws.amazon.com/textract/latest/dg/API_AnalyzeDocument.html) and output will include Output will look similar to "Easy Start" but include FORMS and TABLES informationPretty print the outputPretty print outputs nicely formatted information for words, lines, forms or tables.For example to print the tables identified by Amazon Textract to stdout, use>amazon-textract--example--featuresTABLES--pretty-printTABLESOutput will look like this:|------------|-----------|---------------------|-----------------|-----------------------| | | | Previous Employment | History | | | Start Date | End Date | Employer Name | Position Held | Reason for leaving | | 1/15/2009 | 6/30/2011 | Any Company | Assistant Baker | Family relocated | | 7/1/2011 | 8/10/2013 | Best Corp. | Baker | Better opportunity | | 8/15/2013 | present | Example Corp. | Head Baker | N/A, current employer |to pretty print both, FORMS and TABLES:>amazon-textract--example--featuresFORMSTABLES--pretty-printFORMSTABLESwill outputPhone Number:: 555-0100 Home Address:: 123 Any Street, Any Town, USA Full Name:: Jane Doe Mailing Address:: same as home address |------------|-----------|---------------------|-----------------|-----------------------| | | | Previous Employment | History | | | Start Date | End Date | Employer Name | Position Held | Reason for leaving | | 1/15/2009 | 6/30/2011 | Any Company | Assistant Baker | Family relocated | | 7/1/2011 | 8/10/2013 | Best Corp. | Baker | Better opportunity | | 8/15/2013 | present | Example Corp. 
| Head Baker | N/A, current employer |OverlayAt the moment overlay only works with images, we will add support for PDF soon.The following command runs DetectDocumentText, pretty prints the WORDS in the document to stdout and draws bounding boxes around each WORD and displays the result in a popup window and stores it to a folder called 'overlay-output-folder-name'.amazon-textract--example--pretty-printWORDS--overlayWORD--pop-up-overlay-output--overlay-output-folderoverlay-output-folder-nameThe following command runs AnalyzeDocument for FORMS and TABLES, pretty prints FORMS and TABLES to to stdout and draws bounding boxes around each TABLE-CELL and FORM KEY/VALUE and displays the result in a popup window and stores it to a folder called 'overlay-output-folder-name'.>amazon-textract--example--featuresTABLESFORMS--pretty-printFORMSTABLES--overlayFORMCELL--pop-up-overlay-output--overlay-output-folder../mywonderfuloutputfolderfordocs/The following command draws bounding boxes around each WORD, overlays the detected WORD text, and displays the result in a popup window and stores it to a folder called 'overlay-output-folder-name'.>amazon-textract--example--overlayWORD--overlay-text--pop-up-overlay-output--overlay-output-folderoverlay-output-folder-nameThe following command draws bounding boxes around each LINE, overlays LINE text along with percentage confidence of the detected LINE text, and displays the result in a popup window and stores it to a folder called 'overlay-output-folder-name'.>amazon-textract--example--overlayLINE--overlay-text--overlay-confidence--pop-up-overlay-output--overlay-output-folderoverlay-output-folder-name
amazon-textract-idp-cdk-constructs
Amazon Textract IDP CDK Constructs---All classes are under active development and subject to non-backward compatible changes or removal in any future version. These are not subject to theSemantic Versioningmodel. This means that while you may use them, you may need to update your source code when upgrading to a newer version of this package.ContextThis CDK Construct can be used as Step Function task and call Textract in Asynchonous mode for DetectText and AnalyzeDocument APIs.For samples on usage, look atAmazon Textact IDP CDK Stack SamplesInputExpects a Manifest JSON at 'Payload'. Manifest description:https://pypi.org/project/schadem-tidp-manifest/Example call in Pythontextract_async_task=t_async.TextractGenericAsyncSfnTask(self,"textract-async-task",s3_output_bucket=s3_output_bucket,s3_temp_output_prefix=s3_temp_output_prefix,integration_pattern=sfn.IntegrationPattern.WAIT_FOR_TASK_TOKEN,lambda_log_level="DEBUG",timeout=Duration.hours(24),input=sfn.TaskInput.from_object({"Token":sfn.JsonPath.task_token,"ExecutionId":sfn.JsonPath.string_at('$$.Execution.Id'),"Payload":sfn.JsonPath.entire_payload,}),result_path="$.textract_result")Query ParameterExample:input=sfn.TaskInput.from_object({"Token":sfn.JsonPath.task_token,"ExecutionId":sfn.JsonPath.string_at('$$.Execution.Id'),"Payload":sfn.JsonPath.entire_payload,"Query":[{'Text':'string','Alias':'string','Pages':['string',]},{"Text":"What is the name of the realestate company","Alias":"APP_COMPANY_NAME"},{"Text":"What is the name of the applicant or the prospective tenant","Alias":"APP_APPLICANT_NAME"},]}),Documentation:https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/textract/client/start_document_analysis.htmlTo add a query parameter to the Manifest JSON, we are going to leverage the 'convert_manifest_queries_config_to_caller'. It transforms a list of Query objects (as indicated by the type hint List[tm.Query]) into a QueriesConfig object (as indicated by the return type tc.QueriesConfig).The function expects a list of Query objects as input. Each Query object should have the following attributes:text (required)alias (opt)pages (opt)The function creates a new QueriesConfig object. If the input list is not empty, it creates a list comprehension that generates a new Query object for each Query object in the input list, maintaining the same text, alias, and pages values. If the input list is empty, it simply creates a QueriesConfig object with an empty queries list.OutputAdds the "TextractTempOutputJsonPath" to the Step Function ResultPath. At this location the Textract output is stored as individual JSON files. Use the CDK Construct schadem-cdk-construct-sfn-textract-output-config-to-json to combine them to one single JSON file.example with ResultPath = textract_result (like configured above):"textract_result": { "TextractTempOutputJsonPath": "s3://schademcdkstackpaystuban-schademcdkidpstackpaystu-bt0j5wq0zftu/textract-temp-output/c6e141e8f4e93f68321c17dcbc6bf7291d0c8cdaeb4869758604c387ce91a480" }Spacy ClassificationExpect a Spacy textcat model at the root of the directory. 
Call the script <TO_INSERT) to copy a public one which classifies Paystub and W2.

```
aws s3 cp s3://amazon-textract-public-content/constructs/en_textcat_demo-0.0.0.tar.gz .
```

How to use Workmail Integration

In order to demonstrate this functionality, I have used the architecture below: once an inbound email is delivered to your Amazon WorkMail inbox and a pattern matches, it invokes the rule action, which in this case is the invocation of a Lambda function. You can use my sample code to fetch the inbound email message body and parse it properly as text.

Prerequisites

I have used Python 3.6 as the Lambda function runtime, so some knowledge of Python 3 is required.

Steps

1. First set up an Amazon WorkMail site, set up an organization and create a user by following the steps mentioned in the 'Getting Started' document here. Once the setup process is done, you will have access to the https://your Organization.awsapps.com/mail webmail URL and you can log in using the created user's username/password to access your emails.

2. Now we will create a Lambda function which will be invoked once an inbound email reaches the inbox and an email flow rule pattern is matched (more on this in the steps below). You can use the sample Lambda Python (3.6) code (lambda_function.py) provided in the 'code' folder. It will fetch the inbound email message body and then parse it properly to get the message body as text. Once you have it as text you can perform various operations on it.

3. Inbound email flow rules, also called rule actions, automatically apply to all email messages sent to anyone inside of the Amazon WorkMail organization. This differs from email rules for individual mailboxes. Now we will set up email flow rules to handle email flows based on email addresses or domains. Email flow rules are based on both the sender's and recipient's email addresses or domains. To create an email flow rule, we need to specify a rule action to apply to an email when a specified pattern is matched. Follow the documentation link here to create an email flow rule for the organization you created in step 1. You have to select Action=Run Lambda for your rule. Below is the email flow rule created by me:

4. You can now follow the documentation link here to create the pattern(s) which need to be satisfied in order to invoke the rule action (in this case, our Lambda function). For this sample I have used my email address as the pattern in 'origins' and my domain as the pattern in 'destinations', so the Lambda function will only be invoked if the inbound email sender is my email address and the destination is my domain, but you can set patterns as per your requirements. Below screenshots depict my patterns:
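For illustration, the conversion described in the Query Parameter section above can be re-implemented in a few lines. This is a sketch: the textractmanifest import name is our assumption for the schadem-tidp-manifest package, and the real helper ships with the construct:

```python
from typing import List

import textractcaller as tc
import textractmanifest as tm  # assumed module name of the schadem-tidp-manifest package

def convert_manifest_queries_config_to_caller(queries: List[tm.Query]) -> tc.QueriesConfig:
    """Map manifest Query objects (text, alias, pages) onto a Textract QueriesConfig."""
    if queries:
        return tc.QueriesConfig(
            queries=[tc.Query(text=q.text, alias=q.alias, pages=q.pages) for q in queries])
    return tc.QueriesConfig(queries=[])
```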
amazon-textract-idp-cdk-manifest
just bla for now
amazon-textract-overlayer
Textract-Overlayer
amazon-textract-overlayer provides functions to help overlay bounding boxes on documents.

Install
> python -m pip install amazon-textract-overlayer

Make sure your environment is set up with AWS credentials through configuration files or environment variables or an attached role. (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)

Samples
The primary method provided is get_bounding_boxes, which returns bounding boxes based on the Textract_Type passed in. Mostly taken from the amazon-textract command from the package amazon-textract-helper.
This will return the bounding boxes for the WORD and CELL data types.

from textractoverlayer.t_overlay import DocumentDimensions, get_bounding_boxes
from textractcaller.t_call import Textract_Features, Textract_Types, call_textract

doc = call_textract(input_document=input_document, features=features)
# image is a PIL.Image.Image in this case
document_dimension: DocumentDimensions = DocumentDimensions(doc_width=image.size[0], doc_height=image.size[1])
overlay = [Textract_Types.WORD, Textract_Types.CELL]

bounding_box_list = get_bounding_boxes(textract_json=doc, document_dimensions=document_dimension, overlay_features=overlay)

The actual overlay drawing of bounding boxes for images is in the amazon-textract command from the package amazon-textract-helper and looks like this:

from PIL import Image, ImageDraw

image = Image.open(input_document)
rgb_im = image.convert('RGB')
draw = ImageDraw.Draw(rgb_im)

# check the impl in amazon-textract-helper for ways to associate different colors to types
for bbox in bounding_box_list:
    draw.rectangle(xy=[bbox.xmin, bbox.ymin, bbox.xmax, bbox.ymax], outline=(128, 128, 0), width=2)

rgb_im.show()

To draw bounding boxes within PDF documents, the following code can be used:

import fitz

# for locally stored files
file_path = "<<replace with the local path to your pdf file>>"
doc = fitz.open(file_path)

# for files stored in S3 the streaming object can be used
# doc = fitz.open(stream="<<replace with stream_object_variable>>", filetype="pdf")

# draw boxes
for p, page in enumerate(doc):
    p += 1
    for bbox in bounding_box_list:
        if bbox.page_number == p:
            page.draw_rect([bbox.xmin, bbox.ymin, bbox.xmax, bbox.ymax], color=(0, 1, 0), width=2)

# save file locally
doc.save("<<local path for output file>>")
amazon-textract-pipeline-pagedimensions
Textract-Pipeline-PageDimensionsProvides functions to add page dimensions with doc_width and doc_height to the Textract JSON schema for the PAGE blocks under the custom attribute in the form of:e. g.{'PageDimension': {'doc_width': 1549.0, 'doc_height': 370.0} }Install>python-mpipinstallamazon-textract-pipeline-pagedimensionsMake sure your environment is setup with AWS credentials through configuration files or environment variables or an attached role. (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)SamplesAdd Page dimensions for a local filesample uses amazon-textract-caller amazon-textract-pipeline-pagedimensionspython-mpipinstallamazon-textract-callerfromtextractpagedimensions.t_pagedimensionsimportadd_page_dimensionsfromtextractcaller.t_callimportcall_textractfromtrp.trp2importTDocument,TDocumentSchemaj=call_textract(input_document='<path to some image file>')t_document:TDocument=TDocumentSchema().load(j)add_page_dimensions(t_document=t_document,input_document=input_file)print(t_document.pages[0].custom['PageDimension'])# output will be something like this:# {# 'doc_width': 1544,# 'doc_height': 1065# }Using the Amazon Textact Helper command line tool with PageDimensionsTogether with the Amazon Textract Helper and Amazon Textract Response Parser, we can build a pipeline that includes information about PageDimension and Orientation of pages as a short demonstration on the information that is added to the Textract JSON.>python-mpipinstallamazon-textract-helperamazon-textract-response-parseramazon-textract-pipeline-pagedimensions >amazon-textract--input-document"s3://amazon-textract-public-content/blogs/2-pager-different-dimensions.pdf"|amazon-textract-pipeline-pagedimensions--input-document"s3://amazon-textract-public-content/blogs/2-pager-different-dimensions.pdf"|amazon-textract-pipeline--componentsadd_page_orientation|jq'.Blocks[] | select(.BlockType=="PAGE") | .Custom'{"PageDimension":{"doc_width":1549,"doc_height":370},"Orientation":0}{"PageDimension":{"doc_width":1079,"doc_height":505},"Orientation":0}
amazon-textract-prettyprinter
Textract-PrettyPrinter
Provides functions to format the output received from Textract in more easily consumable formats, incl. CSV or Markdown.

Install
> python -m pip install amazon-textract-prettyprinter

Make sure your environment is set up with AWS credentials through configuration files or environment variables or an attached role. (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)

Samples

Get FORMS and TABLES as CSV

from textractcaller.t_call import call_textract, Textract_Features
from textractprettyprinter.t_pretty_print import Pretty_Print_Table_Format, Textract_Pretty_Print, get_string

textract_json = call_textract(input_document=input_document, features=[Textract_Features.FORMS, Textract_Features.TABLES])
print(get_string(textract_json=textract_json,
                 table_format=Pretty_Print_Table_Format.csv,
                 output_type=[Textract_Pretty_Print.TABLES, Textract_Pretty_Print.FORMS]))

Get string for TABLES using the get_string method

from textractcaller.t_call import call_textract, Textract_Features
from textractprettyprinter.t_pretty_print import Textract_Pretty_Print, get_string

textract_json = call_textract(input_document=input_document, features=[Textract_Features.TABLES])
get_string(textract_json=textract_json, output_type=Textract_Pretty_Print.TABLES)

Print out tables in LaTeX format

from textractcaller.t_call import call_textract, Textract_Features
from textractprettyprinter.t_pretty_print import Pretty_Print_Table_Format, get_tables_string

textract_json = call_textract(input_document=input_document, features=[Textract_Features.FORMS, Textract_Features.TABLES])
get_tables_string(textract_json=textract_json, table_format=Pretty_Print_Table_Format.latex)

Get linearized text from LAYOUT using the get_text_from_layout_json method
Generates a dictionary of linearized text from the Textract JSON response with LAYOUT, and optionally writes linearized plain text files to the local file system or Amazon S3. It can take either per-page JSON from the AnalyzeDocument API, or a single combined JSON with all the pages created from StartDocumentAnalysis output JSONs.

from textractcaller.t_call import call_textract, Textract_Features
from textractprettyprinter.t_pretty_print import get_text_from_layout_json

textract_json = call_textract(input_document=input_document, features=[Textract_Features.LAYOUT, Textract_Features.TABLES])
layout = get_text_from_layout_json(textract_json=textract_json)
full_text = layout[1]
print(full_text)

In addition to textract_json, the get_text_from_layout_json function can take the following additional parameters (a short sketch follows the list):
- table_format (str, optional): Format of tables within the document. Supports all python-tabulate table formats. See tabulate for supported table formats. Defaults to grid.
- exclude_figure_text (bool, optional): If set to True, excludes text extracted from figures in the document. Defaults to False.
- exclude_page_header (bool, optional): If set to True, excludes the page header from the linearized text. Defaults to False.
- exclude_page_footer (bool, optional): If set to True, excludes the page footer from the linearized text. Defaults to False.
- exclude_page_number (bool, optional): If set to True, excludes the page number from the linearized text. Defaults to False.
- skip_table (bool, optional): If set to True, skips including the table in the linearized text. Defaults to False.
- save_txt_path (str, optional): Path to save the output linearized text to files. Either a local file system path or an Amazon S3 path can be specified in s3://bucket_name/prefix format. Files will be saved with <page_number>.txt naming convention.
- generate_markdown (bool, optional): If set to True, generates markdown formatted linearized text. Defaults to False.
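For instance, a call combining several of these documented options might look like the following (a sketch only; the bucket name is a placeholder):

# Sketch: linearize to markdown, drop headers/footers and figure text, and
# write one <page_number>.txt file per page to S3 (placeholder bucket name).
layout = get_text_from_layout_json(
    textract_json=textract_json,
    table_format="github",
    exclude_figure_text=True,
    exclude_page_header=True,
    exclude_page_footer=True,
    generate_markdown=True,
    save_txt_path="s3://my-example-bucket/linearized",
)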
Generate the layout.csv similar to the Textract Web Console
Customers asked for the ability to generate the layout.csv format, which can be downloaded when testing documents in the AWS Web Console. The method get_layout_csv_from_trp2 generates for each page a list of entries: 'Page number', 'Layout', 'Text', 'Reading Order', 'Confidence score'.
- Page number: starting at 1, incrementing for each page
- Layout: the BlockType + a number indicating the sequence for this BlockType starting at 1; for LAYOUT_LIST elements the string "- part of LAYOUT_LIST (index)" is added
- Text: except for LAYOUT_LIST and LAYOUT_FIGURE, the underlying text
- Reading Order: increasing int for each LAYOUT element, starting with 0
- Confidence score: confidence in this being a LAYOUT element
This can be used to generate a CSV (or another format). Below is a sample showing how to generate a CSV.

# taken from the test
# generates the CSV in memory
import csv
import io
import json

from textractprettyprinter import get_layout_csv_from_trp2
from trp.trp2 import TDocument, TDocumentSchema

with open(<some_test_file>) as input_fp:
    trp2_doc: TDocument = TDocumentSchema().load(json.load(input_fp))
    layout_csv = get_layout_csv_from_trp2(trp2_doc)

csv_output = io.StringIO()
csv_writer = csv.writer(csv_output, delimiter=",", quotechar='"', quoting=csv.QUOTE_MINIMAL)
for page in layout_csv:
    csv_writer.writerows(page)
print(csv_output.getvalue())

Sample output:

| Page number | Layout | Text | Reading Order | BlockType | Confidence score |
|---|---|---|---|---|---|
| 1 | LAYOUT_SECTION_HEADER 1 | Amazing Headline!... | 0 | LAYOUT_SECTION_HEADER | 81.25 |
| 1 | LAYOUT_TEXT 1 | Lorem ipsum dolor sit amet, co... | 1 | LAYOUT_TEXT | 99.755859375 |
| 1 | LAYOUT_SECTION_HEADER 2 | Unbelievable stuff... | 2 | LAYOUT_SECTION_HEADER | 90.478515625 |
| 1 | LAYOUT_TEXT 2 | Ut ultrices felis vel mi susci... | 3 | LAYOUT_TEXT | 98.486328125 |
| 1 | LAYOUT_LIST 1 | | 4 | LAYOUT_LIST | 97.16796875 |
| 1 | LAYOUT_TEXT 3 - part of LAYOUT_LIST 1 | Priority list item 1... | 5 | LAYOUT_TEXT | 97.8515625 |
| 1 | LAYOUT_TEXT 4 - part of LAYOUT_LIST 1 | Priority list item 2... | 6 | LAYOUT_TEXT | 98.095703125 |
| 1 | LAYOUT_TEXT 5 - part of LAYOUT_LIST 1 | Another list item 3... | 7 | LAYOUT_TEXT | 98.095703125 |
| 1 | LAYOUT_TEXT 6 - part of LAYOUT_LIST 1 | And a total optional list item... | 8 | LAYOUT_TEXT | 98.73046875 |
| 1 | LAYOUT_LIST 2 | | 9 | LAYOUT_LIST | 69.53125 |
| 1 | LAYOUT_TEXT 7 - part of LAYOUT_LIST 2 | 1. But we... | 10 | LAYOUT_TEXT | 95.751953125 |
| 1 | LAYOUT_TEXT 8 - part of LAYOUT_LIST 2 | 2. can also... | 11 | LAYOUT_TEXT | 96.923828125 |
| 1 | LAYOUT_TEXT 9 - part of LAYOUT_LIST 2 | 3. do numbered... | 12 | LAYOUT_TEXT | 97.36328125 |
| 1 | LAYOUT_TEXT 10 - part of LAYOUT_LIST 2 | 4. lists... | 13 | LAYOUT_TEXT | 96.6796875 |
| 1 | LAYOUT_TEXT 11 | congue ac. Phasellus mollis co... | 14 | LAYOUT_TEXT | 96.044921875 |
| 1 | LAYOUT_TEXT 12 | Quisque a elementum diam. Null... | 15 | LAYOUT_TEXT | 96.484375 |
| 1 | LAYOUT_TEXT 13 | Table Caption 1... | 16 | LAYOUT_TEXT | 86.865234375 |
| 1 | LAYOUT_TABLE 1 | Date Description Amount 12-12-... | 17 | LAYOUT_TABLE | 96.435546875 |
| 1 | LAYOUT_TEXT 14 | Quisque dapibus varius ipsum, ... | 18 | LAYOUT_TEXT | 93.06640625 |
| 1 | LAYOUT_FIGURE 1 | | 19 | LAYOUT_FIGURE | 94.3359375 |
| 1 | LAYOUT_TEXT 15 | Figure Caption 1... | 20 | LAYOUT_TEXT | 63.18359375 |
amazon-textract-response-parser
Textract Response Parser
You can use the Textract response parser library to easily parse the JSON returned by Amazon Textract. The library parses the JSON and provides programming language specific constructs to work with different parts of the document. textractor is an example of a PoC batch processing tool that takes advantage of the Textract response parser library and generates output in multiple formats.

Installation
python -m pip install amazon-textract-response-parser

Pipeline and Serializer/Deserializer

Serializer/Deserializer
Based on the marshmallow framework, the serializer/deserializer allows for creating an object representation of the Textract JSON response.

Deserialize Textract JSON

# j holds the Textract JSON dict
from trp.trp2 import TDocument, TDocumentSchema
t_doc = TDocumentSchema().load(j)

Serialize Textract

from trp.trp2 import TDocument, TDocumentSchema
t_doc = TDocumentSchema().dump(t_doc)

Deserialize Textract AnalyzeId JSON

# j holds the Textract JSON
from trp.trp2_analyzeid import TAnalyzeIdDocument, TAnalyzeIdDocumentSchema
t_doc = TAnalyzeIdDocumentSchema().load(json.loads(j))

Serialize Textract AnalyzeId object to JSON

from trp.trp2_analyzeid import TAnalyzeIdDocument, TAnalyzeIdDocumentSchema
t_doc = TAnalyzeIdDocumentSchema().dump(t_doc)

Pipeline
We added some commonly requested features as easily consumable components that modify the Textract JSON Schema and ideally don't require big changes to any existing workflow.

Order blocks (WORDS, LINES, TABLE, KEY_VALUE_SET) by geometry y-axis
By default Textract does not return the identified elements in any particular order in the JSON response. The sample implementation order_blocks_by_geo of a function using the Serializer/Deserializer shows how to change the structure and order the elements while maintaining the schema. This way no change is necessary to integrate with existing processing.

# the sample code below makes use of the amazon-textract-caller
python -m pip install amazon-textract-caller

from textractcaller.t_call import call_textract, Textract_Features
from trp.trp2 import TDocument, TDocumentSchema
from trp.t_pipeline import order_blocks_by_geo
import trp
import json

j = call_textract(input_document="path_to_some_document (PDF, JPEG, PNG)", features=[Textract_Features.FORMS, Textract_Features.TABLES])
# the t_doc will be not ordered
t_doc = TDocumentSchema().load(j)
# the ordered_doc has elements ordered by y-coordinate (top to bottom of page)
ordered_doc = order_blocks_by_geo(t_doc)
# send to trp for further processing logic
trp_doc = trp.Document(TDocumentSchema().dump(ordered_doc))

Page orientation in degrees
Amazon Textract supports all in-plane document rotations. However the response does not include a single number for the degree; instead each word and line has polygon points which can be used to calculate the degree of rotation.
The following code adds this information as a custom field to the Amazon Textract JSON response.

from trp.t_pipeline import add_page_orientation
import trp.trp2 as t2
import trp as t1

# assign the Textract JSON dict to j
j = <call_textract(input_document="path_to_some_document (PDF, JPEG, PNG)") or your JSON dict>
t_document: t2.TDocument = t2.TDocumentSchema().load(j)
t_document = add_page_orientation(t_document)

doc = t1.Document(t2.TDocumentSchema().dump(t_document))
# page orientation can be read now for each page
for page in doc.pages:
    print(page.custom['PageOrientationBasedOnWords'])

Using the pipeline on command line
The amazon-textract-response-parser package also includes a command line tool to test pipeline components like add_page_orientation or order_blocks_by_geo.
Here is one example of the usage (in combination with the amazon-textract command from amazon-textract-helper and the jq tool (https://stedolan.github.io/jq/)):

> amazon-textract --input-document "s3://somebucket/some-multi-page-pdf.pdf" | amazon-textract-pipeline --components add_page_orientation | jq '.Blocks[] | select(.BlockType=="PAGE") | .Custom'
{"Orientation": 7}
{"Orientation": 11}
...
{"Orientation": -7}
{"Orientation": 0}

Merge or link tables across pages
Sometimes tables start on one page and continue across the next page or pages. This component identifies if that is the case based on the number of columns and whether a header is present on the subsequent table, and can modify the output Textract JSON schema for down-stream processing. Other custom logic is possible to develop for specific use cases.
MergeOptions.MERGE combines the tables and makes them appear as one for post processing, with the drawback that the geometry information is no longer accurate. So overlaying with bounding boxes will not be accurate.
MergeOptions.LINK maintains the geometric structure and enriches the table information with links between the table elements. There is a custom['previus_table'] and custom['next_table'] attribute added to the TABLE blocks in the Textract JSON schema.
Usage is simple:

from trp.t_pipeline import pipeline_merge_tables
from trp.t_tables import MergeOptions, HeaderFooterType
import trp.trp2 as t2

j = <call_textract(input_document="path_to_some_document (PDF, JPEG, PNG)") or your JSON dict>
t_document: t2.TDocument = t2.TDocumentSchema().load(j)
t_document = pipeline_merge_tables(t_document, MergeOptions.MERGE, None, HeaderFooterType.NONE)

Using from command line example:

# from the root of the repository
cat src-python/tests/data/gib_multi_page_table_merge.json | amazon-textract-pipeline --components merge_tables | amazon-textract --stdin --pretty-print TABLES
# compare to cat src-python/tests/data/gib_multi_page_table_merge.json | amazon-textract --stdin --pretty-print TABLES

Add OCR confidence score to KEY and VALUE
It can be useful for some use cases to validate the confidence score for a given KEY or VALUE from an Analyze action with FORMS feature result.
The Confidence property of a BlockType 'KEY_VALUE_SET' expresses the confidence in this particular prediction being a KEY or a VALUE, but not the confidence of the underlying text value.
Simplified example:

{
    "Confidence": 95.5,
    "Geometry": { <...> },
    "Id": "v1",
    "Relationships": [{"Type": "CHILD", "Ids": ["c1"]}],
    "EntityTypes": ["VALUE"],
    "BlockType": "KEY_VALUE_SET"
},
{
    "Confidence": 99.2610092163086,
    "TextType": "PRINTED",
    "Geometry": { <...> },
    "Id": "c1",
    "Text": "2021-Apr-08",
    "BlockType": "WORD"
},

In this example the confidence in the prediction of the VALUE being an actual value in a key/value relationship is 95.5.
The confidence in the actual text representation is 99.2610092163086.
For simplicity, in this example the value consists of just one word, but it is not limited to that and could contain multiple words.
The KV_OCR_Confidence pipeline component adds confidence scores for the underlying OCR to the JSON. After running the component, the example JSON will look like this:

{
    "Confidence": 95.5,
    "Geometry": { <...> },
    "Id": "v1",
    "Relationships": [{"Type": "CHILD", "Ids": ["c1"]}],
    "EntityTypes": ["VALUE"],
    "BlockType": "KEY_VALUE_SET",
    "Custom": {"OCRConfidence": {"mean": 99.2610092163086, "min": 99.2610092163086}}
},
{
    "Confidence": 99.2610092163086,
    "TextType": "PRINTED",
    "Geometry": { <...> },
    "Id": "c1",
    "Text": "2021-Apr-08",
    "BlockType": "WORD"
},

Usage is simple:

from trp.t_pipeline import add_kv_ocr_confidence
import trp.trp2 as t2

j = <call_textract(input_document="path_to_some_document (PDF, JPEG, PNG)") or your JSON dict>
t_document: t2.TDocument = t2.TDocumentSchema().load(j)
t_document = add_kv_ocr_confidence(t_document)
# further processing

Using from command line example and validating the output:

# from the root of the repository
cat "src-python/tests/data/employment-application.json" | amazon-textract-pipeline --components kv_ocr_confidence | jq '.Blocks[] | select(.BlockType=="KEY_VALUE_SET") '

Parse JSON response from Textract

from trp import Document

doc = Document(response)

# Iterate over elements in the document
for page in doc.pages:
    # Print lines and words
    for line in page.lines:
        print("Line: {}--{}".format(line.text, line.confidence))
        for word in line.words:
            print("Word: {}--{}".format(word.text, word.confidence))

    # Print tables
    for table in page.tables:
        for r, row in enumerate(table.rows):
            for c, cell in enumerate(row.cells):
                print("Table[{}][{}] = {}-{}".format(r, c, cell.text, cell.confidence))

    # Print fields
    for field in page.form.fields:
        print("Field: Key: {}, Value: {}".format(field.key.text, field.value.text))

    # Get field by key
    key = "Phone Number:"
    field = page.form.getFieldByKey(key)
    if field:
        print("Field: Key: {}, Value: {}".format(field.key, field.value))

    # Search fields by key
    key = "address"
    fields = page.form.searchFieldsByKey(key)
    for field in fields:
        print("Field: Key: {}, Value: {}".format(field.key, field.value))

Test
Clone the repo and run pytest:

git clone https://github.com/aws-samples/amazon-textract-response-parser.git
cd amazon-textract-response-parser
python -m venv virtualenv
source virtualenv/bin/activate
python -m pip install pip --upgrade
python -m pip install pytest
python -m pip install setuptools
python -m pip install tabulate
python src-python/setup.py install
pytest

Other Resources
- Large scale document processing with Amazon Textract - Reference Architecture
- Batch processing tool
- Code samples

License Summary
This sample code is made available under the Apache License Version 2.0. See the LICENSE file.
amazon-textract-textractor
Textractor is a Python package created to seamlessly work with Amazon Textract, a document intelligence service offering text recognition, table extraction, form processing, and much more. Whether you are making a one-off script or a complex distributed document processing pipeline, Textractor makes it easy to use Textract.

If you are looking for the other amazon-textract-* packages, you can find them using the links below:
- amazon-textract-caller (to simplify calling Amazon Textract without additional dependencies)
- amazon-textract-response-parser (to parse the JSON response returned by Textract APIs)
- amazon-textract-overlayer (to draw bounding boxes around the document entities on the document image)
- amazon-textract-prettyprinter (convert Amazon Textract response to CSV, text, markdown, ...)
- amazon-textract-geofinder (extract specific information from documents with methods that help navigate the document using geometry and relations, e.g. hierarchical key/value pairs)

Installation
Textractor is available on PyPI and can be installed with pip install amazon-textract-textractor. By default this will install the minimal version of Textractor, which is suitable for lambda execution. The following extras can be used to add features:
- pandas (pip install "amazon-textract-textractor[pandas]") installs pandas, which is used to enable DataFrame and CSV exports.
- pdf (pip install "amazon-textract-textractor[pdf]") includes pdf2image and enables PDF rasterization in Textractor. Note that this is not necessary to call Textract with a PDF file.
- torch (pip install "amazon-textract-textractor[torch]") includes sentence_transformers for better word search and matching. This will work on CPU but be noticeably slower than non-machine learning based approaches.
- dev (pip install "amazon-textract-textractor[dev]") includes all the dependencies above and everything else needed to test the code.
You can pick several extras by separating the labels with commas like this: pip install "amazon-textract-textractor[pdf,torch]".

Documentation
Generated documentation for the latest released version can be accessed here: aws-samples.github.io/amazon-textract-textractor/

Examples
While a collection of simplistic examples is presented here, the documentation has a much larger collection of examples with specific case studies that will help you get started.

Setup
These two lines are all you need to use Textract.
The Textractor instance can be reused across multiple requests for both synchronous and asynchronous requests.fromtextractorimportTextractorextractor=Textractor(profile_name="default")Text recognition# file_source can be an image, list of images, bytes or S3 pathdocument=extractor.detect_document_text(file_source="tests/fixtures/single-page-1.png")print(document.lines)#[Textractor Test, Document, Page (1), Key - Values, Name of package: Textractor, Date : 08/14/2022, Table 1, Cell 1, Cell 2, Cell 4, Cell 5, Cell 6, Cell 7, Cell 8, Cell 9, Cell 10, Cell 11, Cell 12, Cell 13, Cell 14, Cell 15, Selection Element, Selected Checkbox, Un-Selected Checkbox]Table extractionfromtextractor.data.constantsimportTextractFeaturesdocument=extractor.analyze_document(file_source="tests/fixtures/form.png",features=[TextractFeatures.TABLES])# Saves the table in an excel document for further processingdocument.tables[0].to_excel("output.xlsx")Form extractionfromtextractor.data.constantsimportTextractFeaturesdocument=extractor.analyze_document(file_source="tests/fixtures/form.png",features=[TextractFeatures.FORMS])# Use document.get() to search for a key with fuzzy matchingdocument.get("email")# [E-mail Address : [email protected]]Analyze IDdocument=extractor.analyze_id(file_source="tests/fixtures/fake_id.png")print(document.identity_documents[0].get("FIRST_NAME"))# 'MARIA'Receipt processing (Analyze Expense)document=extractor.analyze_expense(file_source="tests/fixtures/receipt.jpg")print(document.expense_documents[0].summary_fields.get("TOTAL")[0].text)# '$1810.46'If your use case was not covered here or if you are looking for asynchronous usage examples, seeour collection of examples.CLITextractor also comes with thetextractorscript, which supports calling, printing and overlaying directly in the terminal.textractor analyze-document tests/fixtures/amzn_q2.png output.json --features TABLES --overlay TABLESSeethe documentationfor more examples.TestsThe package comes with tests that call the production Textract APIs. Running the tests will incur charges to your AWS account.AcknowledgementsThis library was made possible by the work of Srividhya Radhakrishna (@srividh-r).ContributingSeeCONTRIBUTING.mdLicenseThis library is licensed under the Apache 2.0 License.Excavator image by macrovector on Freepik
amazon-transcribe
Amazon Transcribe Streaming SDK
The Amazon Transcribe Streaming SDK allows users to directly interface with the Amazon Transcribe Streaming service from their Python programs. The goal of the project is to enable users to integrate directly with Amazon Transcribe without needing anything more than a stream of audio bytes and a basic handler.

This project is still in early alpha, so the interface is still subject to change and may see rapid iteration. It's highly advised to pin to strict dependencies if using this outside of local testing. Please note awscrt is a dependency shared with botocore (the core module of the AWS CLI and boto3). You may need to keep amazon-transcribe at the latest version when installed in the same environment.

Installation
To install from pip:

python -m pip install amazon-transcribe

To install from Github:

git clone https://github.com/awslabs/amazon-transcribe-streaming-sdk.git
cd amazon-transcribe-streaming-sdk
python -m pip install .

To use from your Python application, add amazon-transcribe as a dependency in your requirements.txt file.

NOTE: This SDK is built on top of the AWS Common Runtime (CRT), a collection of C libraries we interact with through bindings. The CRT is available on PyPI (awscrt) as precompiled wheels for common platforms (Linux, macOS, Windows). Non-standard operating systems may need to compile these libraries themselves.

Usage

Prerequisites
If you don't already have local credentials set up for your AWS account, you can follow this guide for configuring them using the AWS CLI.
In essence, you'll need one of these authentication configurations set up in order for the SDK to successfully resolve your API keys:
- Set the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and optionally the AWS_SESSION_TOKEN environment variables
- Set the AWS_PROFILE pointing to your AWS profile directory
- Configure the [default] profile in ~/.aws/credentials
For more details on the AWS shared configuration file and credential provider usage, check the following developer guides:
- Shared Config Overview
- Shared Config Format
- Example Credential Setups
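For example, the environment-variable option from the list above can be set in a shell session like this (placeholder values, not real credentials):

# placeholder values -- substitute your own keys
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_SESSION_TOKEN=xxxxxxxx    # only needed for temporary credentials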
Quick Start
Setup for this SDK will require either live or prerecorded audio. Full details on the audio input requirements can be found in the Amazon Transcribe Streaming documentation.
Here's an example app to get started:

import asyncio
# This example uses aiofile for asynchronous file reads.
# It's not a dependency of the project but can be installed
# with `pip install aiofile`.
import aiofile

from amazon_transcribe.client import TranscribeStreamingClient
from amazon_transcribe.handlers import TranscriptResultStreamHandler
from amazon_transcribe.model import TranscriptEvent
from amazon_transcribe.utils import apply_realtime_delay

"""
Here's an example of a custom event handler you can extend to
process the returned transcription results as needed. This
handler will simply print the text out to your interpreter.
"""

SAMPLE_RATE = 16000
BYTES_PER_SAMPLE = 2
CHANNEL_NUMS = 1

# An example file can be found at tests/integration/assets/test.wav
AUDIO_PATH = "tests/integration/assets/test.wav"
CHUNK_SIZE = 1024 * 8
REGION = "us-west-2"


class MyEventHandler(TranscriptResultStreamHandler):
    async def handle_transcript_event(self, transcript_event: TranscriptEvent):
        # This handler can be implemented to handle transcriptions as needed.
        # Here's an example to get started.
        results = transcript_event.transcript.results
        for result in results:
            for alt in result.alternatives:
                print(alt.transcript)


async def basic_transcribe():
    # Setup up our client with our chosen AWS region
    client = TranscribeStreamingClient(region=REGION)

    # Start transcription to generate our async stream
    stream = await client.start_stream_transcription(
        language_code="en-US",
        media_sample_rate_hz=SAMPLE_RATE,
        media_encoding="pcm",
    )

    async def write_chunks():
        # NOTE: For pre-recorded files longer than 5 minutes, the sent audio
        # chunks should be rate limited to match the realtime bitrate of the
        # audio stream to avoid signing issues.
        async with aiofile.AIOFile(AUDIO_PATH, "rb") as afp:
            reader = aiofile.Reader(afp, chunk_size=CHUNK_SIZE)
            await apply_realtime_delay(
                stream, reader, BYTES_PER_SAMPLE, SAMPLE_RATE, CHANNEL_NUMS
            )
        await stream.input_stream.end_stream()

    # Instantiate our handler and start processing events
    handler = MyEventHandler(stream.output_stream)
    await asyncio.gather(write_chunks(), handler.handle_events())


loop = asyncio.get_event_loop()
loop.run_until_complete(basic_transcribe())
loop.close()

Security
See CONTRIBUTING for more information.

License
This project is licensed under the Apache-2.0 License.
amazontrends
amazontrends

Description
amazontrends

Install
pip install amazontrends
# or
pip3 install amazontrends
amazon-web-services-helpers
No description available on PyPI.
amazon-wishlist-pricewatch
Amazon Wishlist Pricewatch
Periodically check your public Amazon wishlist for price reductions.

This package will send you a notification (SMTP email and/or Telegram) each time a product on your publicly available wishlist reaches a new lowest price. Price still not low enough? You'll only receive another notification for the same product when the price drops further.

Pip install the package, fill in the configuration file and schedule to run with your preferred task scheduler, e.g. Windows Task Scheduler / launchd (Mac OS) / cron (Mac OS / Unix).

Uses the wonderful requests and BeautifulSoup4. No need for the overhead of a headless browser as all data can be gathered from the plain HTML.

Table of Contents
- How It Works
- Getting Started
  - Prerequisites
  - Installation
  - Set Configuration
  - Test Notifications
  - Set Running Schedule
    - Windows
    - Mac OS
    - Unix/Linux
- Config File Documentation
  - Notification Mode
  - Send Test Notification
  - Using Gmail
  - Using Telegram
  - User Agent
- Questions, Suggestions and Bugs
- Contributing / Development
- License

How It Works
Once installed and configured, each run of pricewatch downloads and stores your wishlist as JSON and does price comparisons against items seen in previous runs. When a new lowest price for a product is seen you receive a notification, and the new price is saved to JSON for future runs.
Schedule the script to run as often as you like with Task Scheduler/launchd/cron, and you're good to go.

Getting Started

Prerequisites
Python >= 3.8

Installation
Install with pip (recommended):

pip install amazon-wishlist-pricewatch
pricewatch

Or clone the git repo:

git clone https://github.com/sam0jones0/amazon_wishlist_pricewatch.git
cd amazon_wishlist_pricewatch
pip install -r requirements.txt
cd ./amazon_wishlist_pricewatch
python3 ./pricewatch.py

(Optional) If you want Telegram notifications:

pip install python-telegram-bot

Set Configuration
Fill in the config file located at amazon_wishlist_pricewatch/config.json. If you can't find it, enter pricewatch (or python3 ./pricewatch.py if you cloned the repo) into your console and the location of the file will be printed. Detailed config file documentation here.

Test Notifications
In config.json set send_test_notification to "1" and run pricewatch. A test notification(s) should be sent and pricewatch will exit. Remember to set it back to "0" once you're done.

Set Running Schedule
You can use any task scheduler you like to run pricewatch / pricewatch.py. Here are a few suggestions.

Windows
I recommend using "Windows Task Scheduler". You can use the GUI or, for example, to create a task that runs once each time the system boots, enter the following into an elevated (Run as Administrator) cmd.exe/Powershell:

schtasks /create /tn "Amazon Wishlist Pricewatch" /tr "C:\Path\To\Your\Python\exe\python.exe D:\Path\To\amazon-wishlist-pricewatch\amazon_wishlist_pricewatch\pricewatch.py" /sc onstart /RU WindowsUserName /RP WindowsPassword

More examples and guidance on setting different schedules here.

Mac OS
You can use cron, launchd, automator or any other tool. For example, to use launchd to create a task that runs once each time the system boots, create the following file ~/Library/LaunchAgents/local.amazonwishlistpricewatch.pricewatch.plist and paste in:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.amazonwishlistpricewatch.pricewatch.plist</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/python3</string>
        <string>/path/to/amazon_wishlist_pricewatch/pricewatch.py</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>

More information on launchd here.

Unix/Linux
I assume you'll be fine! Perhaps use cron.
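For instance, a crontab entry along the following lines would run a check every six hours (a sketch only; the path to the pricewatch entry point varies per system, and `which pricewatch` will show yours):

# edit your crontab with `crontab -e`; this runs a check at minute 0 of every 6th hour
0 */6 * * * /usr/local/bin/pricewatch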
Config File Documentation
Annotated example config file contents:

{
  "general": {
    "notification_mode": "12 (1 for email + 2 for telegram)",
    "wishlist_url": "https://www.amazon.co.uk/hz/wishlist/ls/S0M3C0D3",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:87.0) Gecko/20100101 Firefox/87.0",
    "send_test_notification": "0"
  },
  "email": {
    "smtp_server": "YOUR-SMTP-SERVER (e.g. smtp.gmail.com)",
    "smtp_port": "YOUR-SMTP-SSL-PORT (e.g. 465 for gmail)",
    "sending_email": "SENDING EMAIL ADDRESS (e.g. [email protected])",
    "sending_email_pass": "SENDING EMAIL ADDRESS PASSWORD",
    "receiving_emails": ["[email protected]", "[email protected]"]
  },
  "telegram": {
    "chat_id": "1234567890",
    "token": "9876543210:HFusj898IEXAMPLEHDKEIIE83exampleuUJ"
  }
}

Notification Mode
"1" for email. "2" for Telegram. "12" for email and Telegram.

Send Test Notification
Set to "1" to have the script attempt to send a notification to each method specified and then exit. Change back to "0" to have the script run normally.

Using Gmail
If you have 2FA enabled you can create an app password and put that in sending_email_pass. Not recommended, but you can use your usual Google account password if you enable "Less secure app access". I'd recommend creating a new Gmail account if you do this.

Using Telegram
1. Download / install Telegram.
2. Create a bot with Telegram's BotFather bot and keep a note of the token.
3. Create a new group where you will receive notifications and add your new bot to it.
4. Send at least one message to the group.
5. Visit https://api.telegram.org/botXXX:YYYYY/getUpdates replacing XXX:YYYYY with your token from step 2 and take a note of the chat id.
6. Add your chat id and token to config.json.

User Agent
You don't need to change this, but you can. Enter "my user agent" into Google to see your browser's user agent.

Questions, Suggestions and Bugs
Feel free to open an issue here.

Contributing / Development
Contributions welcome. Clone the repo and pip install -r requirements_dev.txt in a new virtual environment. Uses pytest for testing, Mypy for type checking, and black for code formatting.

License
MIT License. Sam Jones
amba-event-stream
amba-event-stream
The Amba Analysis Streams package is used as a Kafka connection wrapper to abstract from infrastructure implementation details by providing functions to connect to Kafka and PostgreSQL. It defines the event model used in the streaming platform and provides base consumer and producer classes. The package is implemented as a Python package that is hosted on pypi.org, and documented with mkdocs.

The consumer and producer are capable of running in multiple processes to allow for parallel processing to better utilize modern CPUs. Both have built-in monitoring capabilities: a counter shared by all processes is updated for each processed event. A thread runs a check function every few seconds that inspects and resets the counter. If no data is processed over a defined period of time (meaning multiple consecutive check function runs), the container is restarted automatically by closing all Python processes. This heartbeat function ensures that even unforeseeable errors, such as container crashes or blocking, are resolved by restarting the container and providing a clean system state (a generic sketch of this pattern is shown at the end of this section).

More information can be found here.

Installation
pip install amba-event-stream

Releasing
Releases are published automatically when a tag is pushed to GitHub.

# Set next version number
export RELEASE=x.x.x

# Create tags
git commit --allow-empty -m "Release $RELEASE"
git tag -a $RELEASE -m "Version $RELEASE"

# Push
git push upstream --tags
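The heartbeat/watchdog described above can be sketched generically as follows. This is an illustration of the pattern, not the package's actual implementation; the interval and idle-check threshold are assumptions:

# Generic sketch of the heartbeat/watchdog pattern described above (not the
# package's actual code). Worker processes bump a shared counter per event;
# a monitor thread checks it periodically and force-exits the process after
# several idle checks, so the container runtime restarts it cleanly.
import multiprocessing
import os
import threading
import time

events_processed = multiprocessing.Value("i", 0)  # shared across worker processes
CHECK_INTERVAL_SECS = 5.0   # assumed: "every few seconds"
MAX_IDLE_CHECKS = 3         # assumed: "multiple consecutive check runs"

def record_event():
    # called by worker processes for every processed event
    with events_processed.get_lock():
        events_processed.value += 1

def watchdog():
    idle_checks = 0
    while True:
        time.sleep(CHECK_INTERVAL_SECS)
        with events_processed.get_lock():
            processed, events_processed.value = events_processed.value, 0
        idle_checks = 0 if processed else idle_checks + 1
        if idle_checks >= MAX_IDLE_CHECKS:
            os._exit(1)  # hard-exit so the container orchestrator restarts us

threading.Thread(target=watchdog, daemon=True).start()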
ambari
Ambari Python client based on the Ambari REST API.

Install
pip install ambari

Command line
ambari -h
ambari localhost:8080 cluster create test typical_triple master1 master2 slave
ambari localhost:8080 service start ZOOKEEPER
ambari localhost:8080 host delete server2

Python module
from ambari.client import Client

client = Client('http://localhost:8080')
for s in client.cluster.services:
    print(s.name)
    s.start()
ambari_client
UNKNOWN
ambari-ldap-manager
UNKNOWN
ambari-lld
This program connects to the Ambari server API and returns all the alerts associated with a given host as JSON that can be easily parsed by Zabbix to create Low-Level-Discovered items.

Usage:

usage: ambari_zabbix_lld [-h] [-a AMBARI_ENDPOINT] [-u USER] [-p PASSWORD]
                         [-n HOSTNAME]

Return a Zabbix LLD JSON resource for all available Ambari checks

optional arguments:
  -h, --help            show this help message and exit
  -a AMBARI_ENDPOINT, --ambari-endpoint AMBARI_ENDPOINT
                        Ambari API address
  -u USER, --user USER  Ambari user
  -p PASSWORD, --password PASSWORD
                        Ambari user password
  -n HOSTNAME, --hostname HOSTNAME
                        Filter alerts based on this hostname

By default -n has a value of * which means that no filters are applied to hostnames. You can pass an empty string if you want to retrieve alerts that are not assigned to any particular host. Obviously you can (and probably want to) pass a hostname to filter only relevant alerts.
The AMBARI_ENDPOINT URI must always begin with http(s)://.
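For example, a typical invocation might look like the following (endpoint, credentials and hostname are placeholders):

ambari_zabbix_lld -a https://ambari.example.com:8443 -u admin -p secret -n worker01.example.com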
ambars-sudoku-solver
Sudoku Helper Solver Package
PyPI package for solving and creating sudoku problems.
For a more in-depth look into the purpose and inspiration of the project, please check out the backend project that this package aims to serve: github.com --> sudoku-helper-solver-backend
The end product of this package ends up in this website: ambardas.nl
amb-distributions
No description available on PyPI.
ambee
Python: Ambee API ClientAsynchronous Python client for the Ambee API.AboutThis is a simple asynchronous Python client library for the Ambee API.Ambee fuses the power of thousands of on-ground sensor data and hundreds of remote imagery from satellites. Their state-of-the-art AI and ML techniques with proprietary models analyze environmental factors such as air quality, soil, micro weather, pollen, and more to help millions worldwide say safe and protect themselves.Get a free API key for 100 requests a day (or paid if you want more) here:https://api-dashboard.getambee.com/#/signupInstallationpipinstallambeeUsageimportasynciofromambeeimportAmbeeasyncdefmain():"""Show example on getting Ambee data."""asyncwithAmbee(api_key="example_api_key",latitude=12,longitude=77)asclient:air_quality=awaitclient.air_quality()print(air_quality)if__name__=="__main__":loop=asyncio.get_event_loop()loop.run_until_complete(main())Changelog & ReleasesThis repository keeps a change log usingGitHub's releasesfunctionality.Releases are based onSemantic Versioning, and use the format ofMAJOR.MINOR.PATCH. In a nutshell, the version will be incremented based on the following:MAJOR: Incompatible or major changes.MINOR: Backwards-compatible new features and enhancements.PATCH: Backwards-compatible bugfixes and package updates.ContributingThis is an active open-source project. We are always open to people who want to use the code or contribute to it.We've set up a separate document for ourcontribution guidelines.Thank you for being involved! :heart_eyes:Setting up development environmentThis Python project is fully managed using thePoetrydependency manager. But also relies on the use of NodeJS for certain checks during development.You need at least:Python 3.7+PoetryNodeJS 12+ (including NPM)To install all packages, including all development requirements:npminstall poetryinstallAs this repository uses thepre-commitframework, all changes are linted and tested with each commit. You can run all checks and tests manually, using the following command:poetryrunpre-commitrun--all-filesTo run just the Python tests:poetryrunpytestAuthors & contributorsThe original setup of this repository is byFranck Nijhof.For a full list of all authors and contributors, checkthe contributor's page.LicenseMIT LicenseCopyright (c) 2021 Franck NijhofPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
ambee-sdk
ambee_sdk
The Ambee SDK for Python provides classes and methods to make use of Ambee's APIs inside your Python code. You can make calls to our APIs, get output in dataframe format, make calls to multiple locations and more.
Read our API documentation: Ambee: Developers tool
Readthedocs - Welcome to ambee-sdk's documentation!

Getting Started

import ambee_sdk as ambee
import datetime

x_api_key = "Your Key Here"
aq = ambee.air_quality(x_api_key=x_api_key)
aq.get_latest(by='latlng', lat=12, lng=77)
amber
Amber
Amber is a Python library that provides an object-oriented interface to REST APIs like Tastypie.

Getting Help
Read the Docs

Installation
$ pip install amber

Requirements
Amber requires the following modules.
- Python 2.5+
- requests
- simplejson (if using Python 2.5, or you desire the speedups for JSON serialization)
amber-actuator
amber_actuator
amber-actuator-amberobotics
No description available on PyPI.
amberai-ice-rpc
No description available on PyPI.
amber-automl
No description available on PyPI.
amberdata
Demo

Install
pip install amberdata

For reference:

from amberdata import Amberdata

ad_client = Amberdata(x_api_key="ENTER YOUR API KEY HERE")

blockchain_address_logs = ad_client.blockchain_address_logs(
    address="0xb47e3cd837dDF8e4c57F05d70Ab865de6e193BBB",
    topic="0x58e5d5a525e3b40bc15abaa38b5882678db1ee68befd2f60bafe3a7fd06db9e3")

print(blockchain_address_logs)
amber-data-utils
Contributing
We welcome contributions and improvements, please see the contribution guidelines.

License
This code is distributed under the MIT license, please see the LICENSE file.
amberelectric
Amber - An entirely new way to buy electricity
Amber is an Australian-based electricity retailer that passes through the real-time wholesale price of energy.
Because of Amber's wholesale power prices, you can save hundreds of dollars a year by automating high power devices like air-conditioners, heat pumps and pool pumps.
This Python library provides an interface to the API, allowing you to react to current and forecast prices, as well as download your historic usage.

Details
API version: 1.0
Package version: 1.0.3

Requirements
Python >= 3.6

Getting started
Not an Amber customer yet? Join here: https://join.amber.com.au/signup
Once your account has been created, you need to create an API token.

Installation
pip install
If the python package is hosted on a repository, you can install directly using:

pip install amberelectric

Usage

Setup and configuration

# Import the library
import amberelectric
from amberelectric.api import amber_api

# These are just for demo purposes...
from pprint import pprint
from datetime import date

# Insert the API token you created at https://app.amber.com.au/developers
configuration = amberelectric.Configuration(access_token='psk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')

# Create an API instance
api = amber_api.AmberApi.create(configuration)

Fetching Sites
All the interesting functions require a site id, so find one of those first - they can be identified by the National Metering Identifier (NMI).

try:
    sites = api.get_sites()
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)

This will return a List of Sites.

Fetching Prices
The API allows you to fetch previous, current and forecast prices by day.
If no start_date or end_date is supplied, it defaults to the current day.
Note: If duration is 30, there will be 48 intervals per channel. A duration of 5 returns 288 intervals.

site_id = sites[0].id
try:
    start_date = date(2021, 6, 1)
    end_date = date(2021, 6, 2)
    range = api.get_prices(site_id, start_date=start_date, end_date=end_date)
    today = api.get_prices(site_id)
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)

You can also just ask for the current price:

site_id = sites[0].id
try:
    current = api.get_current_prices(site_id)
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)

and the current price plus some number of previous and next intervals:

site_id = sites[0].id
try:
    current = api.get_current_price(site_id, next=4)  # returns the current interval and the next 4 forecast intervals
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)

Usage
You can request your usage for a given day.

site_id = sites[0].id
try:
    usage = api.get_usage(site_id, date(2021, 6, 1), date(2021, 6, 1))
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)
amber-electric
Python Library for Amber Electric API

Description
Connects to the Amber Electric API and retrieves market, usage and pricing information.

Note
This is in no way affiliated with Amber Electric.

Issues
I don't know what the usage data for an account looks like at the moment. Until I have an account with active usage data, this part of the API is going to be very light.

Logging / Debugging
This library uses logging; just set the log level and format you need.

Example
The examples below may look a little complex, because this library relies on functions like .auth() and .update() being awaited.

See current market pricing

import asyncio
from amber_electric import AmberElectric

api = AmberElectric(
    latitude=-37.828690,
    longitude=144.997460,
)

async def display_market_pricing():
    await api.market.update()
    print(api.market)

asyncio.get_event_loop().run_until_complete(display_market_pricing())

Get your current pricing

import asyncio
from amber_electric import AmberElectric

api = AmberElectric(
    latitude=-37.828690,
    longitude=144.997460,
    username="[email protected]",
    password="secret"
)

async def get_account_pricing():
    await api.auth()
    await api.price.update()
    print(api.price)

asyncio.get_event_loop().run_until_complete(get_account_pricing())

Get your current usage (WIP)

import asyncio
from amber_electric import AmberElectric

api = AmberElectric(
    latitude=-37.828690,
    longitude=144.997460,
    username="[email protected]",
    password="secret"
)

async def get_account_usage():
    await api.auth()
    await api.usage.update()
    print(api.usage)

asyncio.get_event_loop().run_until_complete(get_account_usage())

Support
amberelectric.py
Amber - An entirely new way to buy electricity
Amber is an Australian-based electricity retailer that passes through the real-time wholesale price of energy.
Because of Amber's wholesale power prices, you can save hundreds of dollars a year by automating high power devices like air-conditioners, heat pumps and pool pumps.
This Python library provides an interface to the API, allowing you to react to current and forecast prices, as well as download your historic usage.

Details
API version: 1.0
Package version: 1.0.0

Requirements
Python >= 3.6

Getting started
Not an Amber customer yet? Join here: https://join.amber.com.au/sign-up
Once your account has been created, you need to create an API token.

Installation
pip install
If the python package is hosted on a repository, you can install directly using:

pip install amberelectric.py

Usage

Setup and configuration

# Import the library
import amberelectric
from amberelectric.api import amber_api

# These are just for demo purposes...
from pprint import pprint
from datetime import date

# Insert the API token you created at https://app.amber.com.au/developers
configuration = amberelectric.Configuration(access_token='psk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')

# Create an API instance
api = amber_api.AmberApi.create(configuration)

Fetching Sites
All the interesting functions require a site id, so find one of those first - they can be identified by the National Metering Identifier (NMI).

try:
    sites = api.get_sites()
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)

This will return a List of Sites.

Fetching Prices
The API allows you to fetch previous, current and forecast prices by day.
If no start_date or end_date is supplied, it defaults to the current day.
Note: If duration is 30, there will be 48 intervals per channel. A duration of 5 returns 288 intervals.

site_id = sites[0].id
try:
    start_date = date(2021, 6, 1)
    end_date = date(2021, 6, 2)
    range = api.get_prices(site_id, start_date=start_date, end_date=end_date)
    today = api.get_prices(site_id)
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)

You can also just ask for the current price:

site_id = sites[0].id
try:
    current = api.get_current_prices(site_id)
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)

and the current price plus some number of previous and next intervals:

site_id = sites[0].id
try:
    current = api.get_current_price(site_id, next=4)  # returns the current interval and the next 4 forecast intervals
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)

Usage
You can request your usage for a given day.

site_id = sites[0].id
try:
    usage = api.get_usage(site_id, date(2021, 6, 1), date(2021, 6, 1))
except amberelectric.ApiException as e:
    print("Exception: %s\n" % e)
amberflo-metering-python
amberflo-metering-pythonAmberflois the simplest way to integrate metering into your application.This is the official Python 3 client that wraps theAmberflo REST API.:heavy_check_mark: FeaturesAdd and update customersAssign and update product plans to customersList invoices of a customerGet a new customer portal session for a customerAdd and list prepaid orders to customersSend meter eventsIn asynchronous batches for high throughput (with optional flush on demand)Or synchronouslyUsing the Amberflo API or the Amberflo supplied AWS S3 bucketQuery usageFine grained logging control:rocket: Quick StartSign up for freeand get an API key.Install the SDKpip install amberflo-metering-pythonCreate a customerimportosfrommetering.customerimportCustomerApiClient,create_customer_payloadclient=CustomerApiClient(os.environ.get("API_KEY"))message=create_customer_payload(customer_id="sample-customer-123",customer_email="[email protected]",customer_name="Sample Customer",traits={"region":"us-east-1",},)customer=client.add_or_update(message)Ingest meter eventsimportosfromtimeimporttimefrommetering.ingestimportcreate_ingest_clientclient=create_ingest_client(api_key=os.environ["API_KEY"])dimensions={"region":"us-east-1"}customer_id="sample-customer-123"client.meter(meter_api_name="sample-meter",meter_value=5,meter_time_in_millis=int(time()*1000),customer_id=customer_id,dimensions=dimensions,)Query usageimportosfromtimeimporttimefrommetering.usageimport(AggregationType,Take,TimeGroupingInterval,TimeRange,UsageApiClient,create_usage_query)client=UsageApiClient(os.environ.get("API_KEY"))since_two_days_ago=TimeRange(int(time())-60*60*24*2)query=create_usage_query(meter_api_name="my_meter",aggregation=AggregationType.SUM,time_grouping_interval=TimeGroupingInterval.DAY,time_range=since_two_days_ago,group_by=["customerId"],usage_filter={"customerId":["some-customer-321","sample-customer-123"]},take=Take(limit=10,is_ascending=False),)report=client.get(query):zap: High throughput ingestionAmberflo.io libraries are built to support high throughput environments. That means you can safely send hundreds of meter records per second. For example, you can chose to deploy it on a web server that is serving hundreds of requests per second.However, every call does not result in a HTTP request, but is queued in memory instead. Messages are batched and flushed in the background, allowing for much faster operation. The size of batch and rate of flush can be customized.Flush on demand:For example, at the end of your program, you'll want to flush to make sure there's nothing left in the queue. Calling this method will block the calling thread until there are no messages left in the queue. 
So, you'll want to use it as part of your cleanup scripts and avoid using it as part of the request lifecycle.Error handling:The SDK allows you to set up aon_errorcallback function for handling errors when trying to send a batch.Here is a complete example, showing the default values of all options:defon_error_callback(error,batch):...client=create_ingest_client(api_key=API_KEY,max_queue_size=100000,# max number of items in the queue before rejecting new itemsthreads=2,# number of worker threads doing the sendingretries=2,# max number of retries after failuresbatch_size=100,# max number of meter records in a batchsend_interval_in_secs=0.5,# wait time before sending an incomplete batchsleep_interval_in_secs=0.1,# wait time after failure to send or queue emptyon_error=on_error_callback,# handle failures to send a batch)...client.meter(...)client.flush()# block and make sure all messages are sentWhat happens if there are just too many messages?If the module detects that it can't flush faster than it's receiving messages, it'll simply stop accepting new messages. This allows your program to continually run without ever crashing due to a backed up metering queue.Ingesting through the S3 bucketThe SDK provides ametering.ingest.IngestS3Clientso you can send your meter records to us via the S3 bucket.Use of this feature is enabled if you install the library with thes3option:pip install amberflo-metering-python[s3]Just pass the S3 bucket credentials to the factory function:client=create_ingest_client(bucket_name=os.environ.get("BUCKET_NAME"),access_key=os.environ.get("ACCESS_KEY"),secret_key=os.environ.get("SECRET_KEY"),):book: DocumentationGeneral documentation on how to use Amberflo is available atProduct Walkthrough.The full REST API documentation is available atAPI Reference.:scroll: SamplesCode samples covering different scenarios are available in the./samplesfolder.:construction_worker: ContributingFeel free to open issues and send a pull request.Also, check outCONTRIBUTING.md.:bookmark_tabs: ReferenceAPI ClientsIngestfrommetering.ingestimport(create_ingest_payload,create_ingest_client,)Customerfrommetering.customerimport(CustomerApiClient,create_customer_payload,)Usagefrommetering.usageimport(AggregationType,Take,TimeGroupingInterval,TimeRange,UsageApiClient,create_usage_query,create_all_usage_query,)Customer Portal Sessionfrommetering.customer_portal_sessionimport(CustomerPortalSessionApiClient,create_customer_portal_session_payload,)Customer Prepaid Orderfrommetering.customer_prepaid_orderimport(BillingPeriod,BillingPeriodUnit,CustomerPrepaidOrderApiClient,create_customer_prepaid_order_payload,)Customer Product Invoicefrommetering.customer_product_invoiceimport(CustomerProductInvoiceApiClient,create_all_invoices_query,create_latest_invoice_query,create_invoice_query,)Customer Product Planfrommetering.customer_product_planimport(CustomerProductPlanApiClient,create_customer_product_plan_payload,)Exceptionsfrommetering.exceptionsimportApiErrorLoggingamberflo-metering-pythonuses the standard Python logging framework. By default, logging is and set at theWARNINGlevel.The following loggers are used:metering.ingest.producermetering.ingest.s3_clientmetering.ingest.consumermetering.session.ingest_sessionmetering.session.api_session
amber-runner
ambhas
ambhasAmbhas python libraryInstalling ambhasInstalling ambhas can be done by downloading source file (ambhas--.tar.gz), and after unpacking issuing the command::python setup.py installThis requires the usual Distutils options available.Or, download the ambhas--.tar.gz file and issue the command::pip install /path/to/ambhas--.tar.gzOr, directly using the pip::pip install ambhasUsageImport required modules::import numpy as np from ambhas.errlib import rmse, correlationGenerate some random numbers::x = np.random.normal(size=100) y = np.random.normal(size=100) rmse(x,y) correlation(x,y)AuthorSat Kumar Tomer satkumartomer at gmail dot comChanges0.1.0Initial version0.1.1Minor corrections in setup.py files0.2.0few packages added0.3.0groundwater, sun_rise_set, soil texture module, etc. added0.3.1manifest.in file edited0.3.2extract_gis_data, richards, risat added0.4.0extract_gis_data, richards, risat added1.0.0Updated for Python3Removed copula from this and is available now only as a seperate libraryAny questions/commentsIf you have any comment/suggestion/question, please feel free to write me [email protected] may go throughhttps://github.com/tomersk/learn-pythonto see the examples.
ambiance
Ambiance is a full implementation of the ICAO standard atmosphere 1993 written in Python.

International Standard Atmosphere (Wikipedia)
International Civil Aviation Organization; Manual Of The ICAO Standard Atmosphere – 3rd Edition 1993 (Doc 7488) – extended to 80 kilometres (262 500 feet)

Basic usage
Atmospheric properties are computed from an Atmosphere object which takes the altitude (geometric height) as input. For instance, to simply retrieve sea level properties, you can write:

>>> from ambiance import Atmosphere
>>> sealevel = Atmosphere(0)
>>> sealevel.temperature
array([288.15])
>>> sealevel.pressure
array([101325.])
>>> sealevel.kinematic_viscosity
array([1.46071857e-05])

List of available atmospheric properties:

Collision frequency (collision_frequency)
Density (density)
Dynamic viscosity (dynamic_viscosity)
Geometric height above MSL (h)
Geopotential height (H)
Gravitational acceleration (grav_accel)
Kinematic viscosity (kinematic_viscosity)
Layer names (layer_name) [string array]
Mean free path (mean_free_path)
Mean particle speed (mean_particle_speed)
Number density (number_density)
Pressure (pressure)
Pressure scale height (pressure_scale_height)
Specific weight (specific_weight)
Speed of sound (speed_of_sound)
Temperature (temperature, temperature_in_celsius)
Thermal conductivity (thermal_conductivity)

Vector and matrix inputs
Ambiance also handles list-like input (lists, tuples, NumPy arrays). The following code demonstrates how to produce a temperature plot with Matplotlib. In the example, NumPy's linspace() function is used to produce an array with altitudes.

import numpy as np
import matplotlib.pyplot as plt
from ambiance import Atmosphere

# Create an atmosphere object
heights = np.linspace(-5e3, 80e3, num=1000)
atmosphere = Atmosphere(heights)

# Make plot
plt.plot(atmosphere.temperature_in_celsius, heights/1000)
plt.ylabel('Height [km]')
plt.xlabel('Temperature [°C]')
plt.grid()
plt.show()

The output is a plot of temperature against height.

Similarly, you can also pass in entire matrices. Example:

>>> import numpy as np
>>> from ambiance import Atmosphere
>>> h = np.array([[0, 11, 12], [20, 21, 35], [0, 80, 50]])*1000
>>> h  # Geometric heights in metres
array([[    0, 11000, 12000],
       [20000, 21000, 35000],
       [    0, 80000, 50000]])
>>> Atmosphere(h).temperature
array([[288.15      , 216.7735127 , 216.65      ],
       [216.65      , 217.58085353, 236.51337209],
       [288.15      , 198.63857625, 270.65      ]])
>>> Atmosphere(h).speed_of_sound
array([[340.29398803, 295.15359145, 295.06949351],
       [295.06949351, 295.70270856, 308.29949587],
       [340.29398803, 282.53793156, 329.798731  ]])
>>> Atmosphere([30000, 0]).layer_name
array(['stratosphere', 'troposphere'], dtype='<U42')

Instantiating from given pressure or density
In some contexts it may be convenient to instantiate an Atmosphere object from a given ambient pressure or density. This can easily be achieved by using the Atmosphere.from_pressure() or Atmosphere.from_density() methods, respectively.
Both methods return Atmosphere objects from which all other properties, like temperature, can be requested.

>>> Atmosphere.from_pressure([80e3, 20e3])  # 80 kPa and 20 kPa
Atmosphere(array([ 1949.58557497, 11805.91571135]))
>>> Atmosphere.from_pressure([80e3, 20e3]).pressure
array([80000., 20000.])
>>> Atmosphere.from_density(1.0)  # 1.0 kg/m^3
Atmosphere(array([2064.96635895]))

Complete user guide
For a comprehensive and detailed user guide, please see the complete documentation.

Installation
Pip (recommended):

pip install ambiance

Conda: The package can be installed via the Conda environment with

conda install -c conda-forge ambiance

Requirements
Using Ambiance requires:

Python 3.6 or higher
NumPy
SciPy

For developers: Recommended packages may be installed with the requirements.txt:

pip install -r requirements.txt

License: Apache-2.0
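This makes pressure-first workflows, such as turning a static-pressure reading into altitude and local flow properties, very short. A small sketch using only the properties listed above (the 54 kPa reading is just an example):

from ambiance import Atmosphere

atm = Atmosphere.from_pressure(54e3)  # e.g. a 54 kPa static-pressure reading

print(atm.h)               # geometric height above MSL [m]
print(atm.temperature)     # ambient temperature [K]
print(atm.speed_of_sound)  # speed of sound [m/s]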
ambiance-client
Ambiance Client for Python
This library implements a Python 3 compatible client for interacting with the Ambiance Server.
ambibulb
AMBIBULB
Ambibulb attempts to provide a similar experience to Ambilight® (Philips TV's feature that projects color onto the wall behind a TV) using Raspberry PI and an IR remote controlled LED light bulb. Please watch the demo.

ambibulb controls the color of RC lights based on the dominant color of an image on your screen. This can enhance your viewing experience or just make your party more colorful 🌈.

HARDWARE
Raspberry PI (tested on 3B+)
HDMI output (TV, projector, display)
RGBW LED light bulb with IR remote control (currently supported model: OSRAM LED STAR+)
IR transmitter (tested on KY-005)
IR receiver (tested on KY-022, optional)
wiring

SOFTWARE DEPENDENCIES
Raspberry Pi OS (10 buster, or any other RPI compatible OS)
LIRC (Linux Infrared Remote Control)

SETUP AND INSTALLATION
Install Raspberry PI OS on your SD card
Install lirc on your RPI: $ apt install lirc
Configure lirc, connect and configure the IR transmitter (guideline)
Install ambibulb. There are two ways: either build, install and configure locally:

$ wget https://github.com/bespsm/ambibulb/archive/main.zip
$ unzip main.zip
$ cd ambibulb-main
$ make install
$ make configure

or using pip (recommended in venv) and configure locally:

$ python3 -m pip install ambibulb
$ wget https://github.com/bespsm/ambibulb/archive/main.zip
$ unzip main.zip
$ cd ambibulb-main
$ make configure

COMMANDS
Start ambibulb service: $ systemctl --user start ambibulb.service
Stop ambibulb service: $ systemctl --user stop ambibulb.service
Check ambibulb service current status, two options:
$ systemctl --user status ambibulb.service
$ journalctl -f
Configure/change the settings of ambibulb service: $ ambibulb-config
ambie
No description available on PyPI.
ambience
Ambience
Ambient soundscape player

This is a Python CLI program that plays audio files, looping them for a specified duration and fading between them.

It reads a directory for .ogg, .wav or .flac files. The program comes with a set of files, but can be used with any files on your computer of the supported types by using the --path parameter when invoking.

Installation with pip

pip install ambience
ambience --fetch-library

Manual Installation
Requires python3
Requires pygame
Clone this repository.
Run pip3 install --user -r requirements.txt to install pygame if not already installed
Recommended: link ambience in a directory on your path. E.g. ln -s ambience.py ~/bin/ambience

Alternate install for pygame (compile pygame from source)
To install pygame from scratch instead of using pip, you can use the following commands (assuming linux):

# Clone source repo
sudo apt install mercurial
hg clone https://bitbucket.org/pygame/pygame
cd pygame
sudo apt install libsdl-dev libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev libsmpeg-dev libportmidi-dev libavformat-dev libswscale-dev
python3 setup.py build
sudo python3 setup.py install

Usage
To run with the default settings, simply run ambience. Use "ctrl-c" to stop.

Hello from the pygame community. https://www.pygame.org/contribute.html
usage: ambience [-h] [-d DURATION] [-f] [-i] [-n] [-p PATH] [-q] [-v] [paths ...]

positional arguments:
  paths                 load given sound file(s) or path(s)

options:
  -h, --help            show this help message and exit
  -d DURATION, --duration DURATION
                        set the duration in minutes each sound will play: default=5
  -f, --fetch-library   fetch the sound library from internet
  -i, --noinit          do not pre-initialize all sounds at start
  -n, --noinput         disable the stdin input capture
  -p PATH, --path PATH  set the path where the sound files are
  -q, --quiet           produce no output
  -v, --version         show version and exit

If invoked without the -n parameter, press 'n' to skip to the next sound and 'q' to quit.

The default sounds used are in the install directory (wherever you cloned/downloaded this repo) in the sub-directory sounds.

Sound credits
Credit goes to the following for the sound files included in this package:
Bruce Baron for alien-contact.ogg
Vann Westfold for ambienttraut.ogg
Ero Kia for ambient-wave-17.ogg
Mynoise.net for b25-bomber.ogg
Mynoise.net for binaural-low-complex.ogg
Sclolex for cave.ogg (Water Dripping in a Large Cave)
Daniel Simion for crackling-fireplace
musicbrain for didgeridu-monk.ogg
Ero Kia for elementary-wave-11.ogg
Blair Ferrier for helicopter-mix.ogg
Chris Zabriskie for long-hallway.ogg (excerpt from "I Am Running Down the Long Hallway of Viewmont Elementary"), Creative Commons 3.0
György Ligeti for lux-aeterna-excerpt.ogg
Sclolex for night-sounds.ogg (Sounds on a quiet night)
Luftrum for ocean-waves.ogg
chzmn for perfect-storm.ogg
Hargisss Sound for spring-birds.ogg
Trekcore.com for warp-core-hum.ogg
NASA/JPL for mars-perseverance.ogg
Mynoise.net for b17-bomber.ogg
juskiddink for bonfire.ogg
AshFox for coffee-shop.ogg
unfa for fan.ogg
inchadney for forest.ogg
juskiddink for leaves.ogg
el mar for library.ogg
juskiddink for seaside.ogg
SDLx for train.ogg
Greim for the machine-planet samples
NASA/JPL for mars-ingenuity.ogg
AdrienPola for amazon-rainforest.ogg
dobroide for rural-spain.ogg
Mynoise.net for the-pilgrim.ogg
anankalisto for resonance-of-the-gods.ogg
Emanuele Correani for train-station.ogg
InspectorJ for machine-factory.ogg
Gladkiy for metro-outdoors.ogg
Zabuhailo for metro.ogg
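For example, to loop your own recordings for ten minutes each with keyboard capture disabled (flags as documented above; the folder path is hypothetical):

ambience --duration 10 --noinput --path ~/Music/ambient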
ambient
ambient is a building physics and simulation Python library.

The aims of ambient are to be:
Simple: Provide intuitive defaults to get started quickly.
Modular: Simulate only the parts you are interested in.
Extensible: Enable users to create and modify their own components.
Open: Provide an open-source building simulation library.

Features
ambient currently offers modules for the analysis of:
Layered constructions (including steady-state and dynamic response)
Solar conditions (including design day calculations)
Psychrometrics of moist air

Simulations can be read from, and written to, JSON format for persistence.

Installation
Install ambient in a virtual environment using:

pip install ambient

License
ambient is licensed under the terms of the Apache License, Version 2.0. Refer to the LICENSE file for details.
ambient-api
No description available on PyPI.
ambient-aprs
No description available on PyPI.
ambient-archiver
ambient-archiver
Download and analyse your data from ambientweather.net

Installation

pip install ambient-archiver

This installs ambient in your PATH.

Usage
ambient takes three required options: --api_key, --application_key and --mac, which you can get from your account page on ambientweather.net. You can omit the options by setting AMBIENT_API_KEY, AMBIENT_APPLICATION_KEY and AMBIENT_MAC in your environment.

See ambient --help for more.

Commands
ambient backfill writes all data from 2020-01-01 to the end of the last UTC day into YYYY-MM-DD.json.gz files in the present working directory (one file per day)
ambient today overwrites the current day's .json.gz with all data since 00:00 UTC
ambient yesterday overwrites yesterday's .json.gz with all data between 00:00 UTC yesterday and 23:59 UTC yesterday

backfill does not overwrite files. You must manually delete them if you want fresh copies for some reason. today and yesterday overwrite.

Shell completion
You can optionally enable shell completion by running the appropriate command for your shell:

eval "$(_AMBIENT_COMPLETE=bash_source ambient)" >> ~/.bashrc   # bash
eval "$(_AMBIENT_COMPLETE=zsh_source ambient)" >> ~/.zshrc     # zsh
_AMBIENT_COMPLETE=fish_source ambient > ~/.config/fish/completions/ambient.fish  # fish

Automation with Github Actions
Create a new repository, run ambient backfill then check everything in
Add these files in .github/workflows/

.github/workflows/ambient.yml (ambient today every five minutes):

name: ambient
on:
  workflow_dispatch:
  # every 5 minutes
  schedule:
    - cron: '*/5 * * * *'

jobs:
  ambient:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repo
        uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install Python dependencies
        run: |
          pip install ambient-archiver
      - name: Overwrite since midnight
        env:
          AMBIENT_MAC: ${{ secrets.AMBIENT_MAC }}
          AMBIENT_API_KEY: ${{ secrets.AMBIENT_API_KEY }}
          AMBIENT_APPLICATION_KEY: ${{ secrets.AMBIENT_APPLICATION_KEY }}
        run: ambient today
      - name: Commit and push if it changed
        run: |-
          git config --global user.name "scraper-bot"
          git config user.email "[email protected]"
          git add -A
          timestamp=$(date -u)
          git commit -m "Scraped at ${timestamp}" || exit 0
          git push

.github/workflows/daily.yml (ambient yesterday every day at 01:00 UTC):

name: daily
on:
  workflow_dispatch:
  # daily, 1am UTC
  schedule:
    - cron: '0 1 * * *'

jobs:
  daily:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repo
        uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install Python dependencies
        run: |
          pip install ambient-archiver
      - name: Overwrite yesterday
        env:
          AMBIENT_MAC: ${{ secrets.AMBIENT_MAC }}
          AMBIENT_API_KEY: ${{ secrets.AMBIENT_API_KEY }}
          AMBIENT_APPLICATION_KEY: ${{ secrets.AMBIENT_APPLICATION_KEY }}
        run: ambient yesterday
      - name: Commit and push if it changed
        run: |-
          git config --global user.name "scraper-bot"
          git config user.email "[email protected]"
          git add -A
          timestamp=$(date -u)
          git commit -m "Downloaded at ${timestamp}" || exit 0
          git push

The daily workflow deals with the fact that the more regular job does not in practice run every five minutes. It ensures the completed file for that day has the last few records for the day.

Push to GitHub
Configure AMBIENT_MAC, AMBIENT_API_KEY and AMBIENT_APPLICATION_KEY as Secrets in the GitHub settings for that repository
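If you prefer a plain machine over GitHub Actions, the same documented subcommands and environment variables work from cron. A hypothetical crontab (repository path and secret values are placeholders):

# scrape every 5 minutes, finalize yesterday's file at 01:00 UTC
*/5 * * * * cd ~/weather-data && AMBIENT_MAC=... AMBIENT_API_KEY=... AMBIENT_APPLICATION_KEY=... ambient today
0 1 * * *   cd ~/weather-data && AMBIENT_MAC=... AMBIENT_API_KEY=... AMBIENT_APPLICATION_KEY=... ambient yesterday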
ambientco2
ambientco2
Python module for CozIR Ambient CO2 sensors

The CozIR Ambient family of sensors all provide CO2 measurements, at different ranges. Some are able to measure temperature and relative humidity as well. The sensors use serial UART and analog voltage output.

This library has been developed using a CozIR Ambient 0-5000 ppm CO2 (only) sensor.

Development

Milestone | Features                                   | Version | Status
Beta      | Basic reading                              | 0.x.x   | :heavy_check_mark:
Launch    | Modes, settings                            | 1.x.x   |
Sensors   | Range, CO2, relative humidity, temperature | 2.x.x   |

Library Documentation
Add the following line to use this library:

from ambientco2 import Sensor

Member functions:

Name     | Parameters          | Returns | Description
Sensor() | str serial_device   | void    | Constructor
setup()  | int mode, int fields | void   | Sensor setup
read()   | str value           | int     | Reads CO2 concentration in PPM

Installation
pip:

$ pip install ambientco2

Usage:

from ambientco2 import Sensor

serial_device = "/dev/ttyUSB0"  # Debian (Ubuntu, Raspberry Pi OS etc.)
sensor = Sensor(serial_device)
co2 = sensor.read()
print(type(co2))
print(co2)

See get_co2.py for a basic example.

Sensor Documentation
Product page
Data sheet
User's Manual
Application Note and sample code
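Building on the documented constructor and read(), a simple polling loop might look like this (the device path and the 60-second interval are assumptions):

import time
from ambientco2 import Sensor

sensor = Sensor("/dev/ttyUSB0")  # adjust the serial device for your system

while True:
    co2 = sensor.read()          # CO2 concentration in ppm
    print(f"CO2: {co2} ppm")
    time.sleep(60)               # poll once a minute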
ambientika
API wrapper for Ambientika devices
A simple API wrapper for the smart home air conditioning device(s) sold as Ambientika. Probably incomplete because it is reverse engineered from a single device.
ambientika-py
API wrapper for Ambientika devices
A simple API wrapper for the smart home air conditioning device(s) sold as Ambientika. Probably incomplete because it is reverse engineered from a single device.
ambient-package-update
Ambient Package Update
This repository will help keep all Python packages following a certain basic structure tidy and up-to-date. It's being maintained by Ambient Digital.

This package will render all required configuration and installation files for your target package.

Typical use-cases:
A new Python or Django version was released
A Python or Django version was deprecated
You want to update the Sphinx documentation builder
You want to update the linter versions
You want to add or update third-party dependencies

Versioning
This project follows the CalVer versioning pattern: YY.MM.[RELEASE]

How to update a package
These steps will tell you how to update a package which was created by using this updater.
Navigate to the main directory of your package
Activate your virtualenv
Run python -m ambient_package_update.cli render-templates
Validate the changes and increment the version accordingly
Release a new version of your target package

How to create a new package
Just follow these steps if you want to create a new package and maintain it using this updater.
Create a new repo at GitHub
Check out the new repository in the same directory this updater lives in (not inside the updater!)
Create a directory ".ambient-package-update" and create a file "metadata.py" inside:

from ambient_package_update.metadata.author import PackageAuthor
from ambient_package_update.metadata.constants import DEV_DEPENDENCIES
from ambient_package_update.metadata.package import PackageMetadata
from ambient_package_update.metadata.readme import ReadmeContent
from ambient_package_update.metadata.ruff_ignored_inspection import RuffIgnoredInspection

METADATA = PackageMetadata(
    package_name='my_package_name',
    authors=[
        PackageAuthor(
            name='Ambient Digital',
            email='[email protected]',
        ),
    ],
    development_status='5 - Production/Stable',
    readme_content=ReadmeContent(
        tagline='A fancy tagline for your new package',
        content="""A multiline string containing specific things you want to have in your package readme.""",
    ),
    dependencies=[
        'my_dependency>=1.0',
    ],
    optional_dependencies={
        'dev': [
            *DEV_DEPENDENCIES,
        ],
        # you might add further extras here
    },
    ruff_ignore_list=[
        RuffIgnoredInspection(key='XYZ', comment="Reason why we need this exception"),
    ],
)

Install the ambient_package_update package:

# ideally in a virtual environment
pip install ambient-package-update

Add docs/index.rst and link your readme and changelog to have a basic documentation (surely, you can add or write more custom docs if you want!)
Enable the readthedocs hook in your GitHub repo to update your documentation on a commit basis
Finally, follow the steps of the section above (How to update a package).

Contribution

Dependency updates
The dependencies of this package are being maintained with pip-tools.

pip install -U pip-tools

To add/update/remove a package, please do so in the main pyproject.toml. Afterward, call the following command to reflect your changes in the requirements.txt:

pip-compile --extra dev -o requirements.txt pyproject.toml --resolver=backtracking

To install the packages, run:

pip-sync

Publish to PyPi
Update documentation about new/changed functionality
Update the Changelog
Increment version in main __init__.py
Increment version of this package in dependencies in ambient_package_update/metadata/constants.py
Create pull request / merge to master
This project uses the flit package to publish to PyPI. Thus publishing should be as easy as running:

flit publish

To publish to TestPyPI, ensure that you have set up your .pypirc as shown here and use the following command:

flit publish --repository testpypi

Changelog
Can be found at GitHub.
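Putting the "How to update a package" steps above together, one complete update run might look like this (the checkout path and virtualenv layout are assumptions):

cd ~/workspace/my_package_name      # hypothetical package checkout
source .venv/bin/activate           # hypothetical virtualenv location
python -m ambient_package_update.cli render-templates
git diff                            # validate the rendered changes before releasing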
ambient-toolbox
Python toolbox of Ambient Digital containing an abundance of useful tools and gadgets.

PyPI | GitHub | Full documentation
Creator & Maintainer: Ambient Digital

Features
Useful classes and mixins for Django admin
Coverage script for GitLab
Extensions for DRF and GraphQL
Mailing backends for Django
Management commands for validating a project's test structure
Object ownership tracking with timestamping
Pattern for improved workflow with Django ORM
Helper and util functions for many different use-cases
Sentry plugins
django-scrubber wrapper class
Mixins and test classes for django (class-based) views

Migration from "ai_django_core"
This package was previously known as ai_django_core. Due to the misleading nature of the name, we chose to rename it with something more meaningful.

The migration is really simple, just:
Install ambient-toolbox and remove the dependency to ai-django-core
Search and replace all usages of from ai_django_core... to from ambient_toolbox...

The class-based mail functionality was moved to a separate package called django-pony-express.

Installation
Install the package via pip:

pip install ambient-toolbox

or via pipenv:

pipenv install ambient-toolbox

Add the module to INSTALLED_APPS within the main django settings.py:

INSTALLED_APPS = (
    ...
    'ambient_toolbox',
)

Apply migrations by running:

python ./manage.py migrate

Contribute

Setup package for development
Create a Python virtualenv and activate it
Install "pip-tools" with pip install -U pip-tools
Compile the requirements with pip-compile --extra dev,drf,graphql,sentry,view-layer -o requirements.txt pyproject.toml --resolver=backtracking
Sync the dependencies with your virtualenv with pip-sync

Add functionality
Create a new branch for your feature
Change the dependency in your requirements.txt to a local (editable) one that points to your local file system: -e /Users/workspace/ambient-toolbox, or via pip: pip install -e /Users/workspace/ambient-toolbox
Ensure the code passes the tests
Create a pull request

Run tests
Run tests:

pytest --ds settings tests

Check coverage:

coverage run -m pytest --ds settings tests
coverage report -m

Git hooks (via pre-commit)
We use pre-push hooks to ensure that only linted code reaches our remote repository and pipelines aren't triggered in vain.

To enable the configured pre-push hooks, you need to install pre-commit and run once:

pre-commit install -t pre-push -t pre-commit --install-hooks

This will permanently install the git hooks for both, frontend and backend, in your local .git/hooks folder.
The hooks are configured in the .pre-commit-config.yaml.

You can check whether hooks work as intended using the run command:

pre-commit run [hook-id] [options]

Example: run single hook

pre-commit run ruff --all-files --hook-stage push

Example: run all hooks of pre-push stage

pre-commit run --all-files --hook-stage push

Update documentation
To build the documentation, run: sphinx-build docs/ docs/_build/html/. Open docs/_build/html/index.html to see the documentation.

Translation files
If you have added custom text, make sure to wrap it in _() where _ is gettext_lazy (from django.utils.translation import gettext_lazy as _).

How to create translation files:
Navigate to ambient-toolbox
python manage.py makemessages -l de
Have a look at the new/changed files within ambient_toolbox/locale

How to compile translation files:
Navigate to ambient-toolbox
python manage.py compilemessages
Have a look at the new/changed files within ambient_toolbox/locale

Publish to ReadTheDocs.io
Fetch the latest changes in the GitHub mirror and push them
Trigger a new build at ReadTheDocs.io (follow instructions in the admin panel at RTD) if the GitHub webhook is not yet set up.

Publish to PyPi
Update documentation about new/changed functionality
Update the Changelog
Increment version in main __init__.py
Create pull request / merge to master
This project uses the flit package to publish to PyPI. Thus publishing should be as easy as running:

flit publish

To publish to TestPyPI, ensure that you have set up your .pypirc as shown here and use the following command:

flit publish --repository testpypi

Maintenance
Please note that this package supports the ambient-package-update. So you don't have to worry about the maintenance of this package. All important configuration and setup files are being rendered by this updater. It works similar to well-known updaters like pyupgrade or django-upgrade.

To run an update, refer to the documentation page of the "ambient-package-update".
ambiguous
ambiguous
because magic is fun

Install

pip install ambiguous

Usage

decorator: because decorators should accept args too

@decorator
def suffix(fn, str_='xyz'):
    '''add a suffix to the result of the wrapped fn'''
    def wrapper(*args, **kwargs):
        return '%s_%s' % (fn(*args, **kwargs), str_)
    return wrapper

@suffix
def abc():
    return 'abc'

abc()
> 'abc_xyz'

@suffix('123')
def count(repeat=1):
    return '0' * repeat

count()
> '0_123'
count(3)
> '000_123'

thing_or_things: merges gets and multigets

@thing_or_things
def itself(args):
    return { x : x for x in args }

itself(1)
> 1
itself([1, 2])
> { 1 : 1, 2 : 2 }

# specified argument
@thing_or_things('args')
def prefix(prefix, args):
    return { x : "%s_%s" % (prefix, x) for x in args }

prefix('abc', [1, 2])
> { 1 : 'abc_1', 2 : 'abc_2' }

# works with default args
@thing_or_things
def multiply(args, factor=1):
    return { x : x * factor for x in args }

multiply(2)
> 2
multiply(2, factor=2)
> 4
multiply([1, 2], factor=3)
> { 1 : 3, 2 : 6 }

what, parentheses optional?! (warning: still highly experimental)

import ambiguous

@ambiguous
def foo():
    return 'foo'

print foo
> 'foo'
print foo()
> 'foo'

foo + 'abc'
> 'fooabc'
ambikesh13491
UNKNOWN
ambikesh1349-1
UNKNOWN
ambio
Ambio
A light-weight bioinformatics library written in Python. It contains a variety of algorithms and functions that deal with strings.

Documentation
The complete documentation can be found on ambio.readthedocs.io

Installation
Run the following command in a terminal to install:

$ pip install ambio

Development
To install ambio, along with the tools you need to develop and run tests, run the following command:

$ pip install -e .