package (string, lengths 1-122)
package-description (string, lengths 0-1.3M)
akData
UNKNOWN
akdigitalpy
No description available on PyPI.
ak-django-activity-stream
Django Activity Stream

What is Django Activity Stream? Django Activity Stream is a way of creating activities generated by the actions on your site. It is designed for generating and displaying streams of interesting actions and can handle following and unfollowing of different activity sources. For example, it could be used to emulate the GitHub dashboard in which a user sees changes to projects they are watching and the actions of users they are following.

Action events are categorized by four main components:

- Actor: the object that performed the activity.
- Verb: the verb phrase that identifies the action of the activity.
- Action Object (optional): the object linked to the action itself.
- Target (optional): the object to which the activity was performed.

Actor, Action Object and Target are GenericForeignKeys to any arbitrary Django object and so can represent any Django model in your project. An action is a description of an action that was performed (Verb) at some instant in time by some Actor on some optional Target that results in an Action Object getting created/updated/deleted.

For example: justquick (actor) closed (verb) issue 2 (object) on django-activity-stream (target) 12 hours ago.

Nomenclature of this specification is based on the Activity Streams Spec: http://activitystrea.ms/

For complete documentation see the Django Activity Stream Documentation.

Contributors: This project exists thanks to all the people who contribute!

Sponsors: Get supported django-activity-stream with the Tidelift Subscription.
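A minimal sketch of recording an action with the four components described above. It assumes the app is registered under its usual `actstream` module name; `user`, `issue` and `project` are hypothetical model instances:

```python
# Sketch: record "justquick closed issue 2 on django-activity-stream".
# Assumes django-activity-stream is installed and exposed as the `actstream` app;
# `user`, `issue` and `project` are hypothetical Django model instances.
from actstream import action

def close_issue(user, issue, project):
    issue.status = "closed"
    issue.save()
    # Actor=user, Verb="closed", Action Object=issue, Target=project
    action.send(user, verb="closed", action_object=issue, target=project)
```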
ak-django-datadog
# Django Datadog

A simple Django middleware for submitting timings and exceptions to Datadog.

## Installation

Download the code into your project and install it.

```bash
git clone git://github.com/conorbranagan/django-datadog.git
cd django-datadog
python setup.py install
```

Add `datadog` to your list of installed apps.

```python
INSTALLED_APPS += ('datadog',)
```

Add the following configuration to your project's `settings.py` file:

```python
DATADOG_API_KEY = 'YOUR_API_KEY'
DATADOG_APP_KEY = 'YOUR_APP_KEY'
DATADOG_APP_NAME = 'my_app'  # Used to namespace metric names
```

The API and app keys can be found at https://app.datadoghq.com/account/settings#api

Add the Datadog request handler to your middleware in `settings.py`.

```python
MIDDLEWARE_CLASSES += ('datadog.middleware.DatadogMiddleware',)
```

## Usage

Once the middleware is installed, you'll start receiving events in your Datadog stream in the case of an app exception. Here's an example: ![example django exception](https://dl.dropbox.com/u/126553/django-datadog.png)

You will also have new timing metrics available:

- my_app.request_time.{avg,max,min}
- my_app.errors

Metrics are tagged with path:/path/to/view

Note: my_app will be replaced by whatever value you give for DATADOG_APP_NAME.
ak-django-minio-backend
django-minio-backendThedjango-minio-backendprovides a wrapper around theMinIO Python SDK. Seeminio/minio-pyfor the source.IntegrationGet and install the package:pipinstalldjango-minio-backendAdddjango_minio_backendtoINSTALLED_APPS:INSTALLED_APPS=[# '...''django_minio_backend',# https://github.com/theriverman/django-minio-backend]If you would like to enable on-start consistency check, install viaDjangoMinioBackendConfig:INSTALLED_APPS=[# '...''django_minio_backend.apps.DjangoMinioBackendConfig',# https://github.com/theriverman/django-minio-backend]Then add the following parameter to your settings file:MINIO_CONSISTENCY_CHECK_ON_START=TrueNote:The on-start consistency check equals to manually callingpython manage.py initialize_buckets.It is recommended to turnoffthis feature during development by settingMINIO_CONSISTENCY_CHECK_ON_STARTtoFalse, because this operation can noticeably slow down Django's boot time when many buckets are configured.Add the following parameters to yoursettings.py:fromdatetimeimporttimedeltafromtypingimportList,TupleMINIO_ENDPOINT='minio.your-company.co.uk'MINIO_EXTERNAL_ENDPOINT="external-minio.your-company.co.uk"# Default is same as MINIO_ENDPOINTMINIO_EXTERNAL_ENDPOINT_USE_HTTPS=True# Default is same as MINIO_USE_HTTPSMINIO_REGION='us-east-1'# Default is set to NoneMINIO_ACCESS_KEY='yourMinioAccessKey'MINIO_SECRET_KEY='yourVeryS3cr3tP4ssw0rd'MINIO_USE_HTTPS=TrueMINIO_URL_EXPIRY_HOURS=timedelta(days=1)# Default is 7 days (longest) if not definedMINIO_CONSISTENCY_CHECK_ON_START=TrueMINIO_PRIVATE_BUCKETS=['django-backend-dev-private',]MINIO_PUBLIC_BUCKETS=['django-backend-dev-public',]MINIO_POLICY_HOOKS:List[Tuple[str,dict]]=[]# MINIO_MEDIA_FILES_BUCKET = 'my-media-files-bucket' # replacement for MEDIA_ROOT# MINIO_STATIC_FILES_BUCKET = 'my-static-files-bucket' # replacement for STATIC_ROOTMINIO_BUCKET_CHECK_ON_SAVE=True# Default: True // Creates bucket if missing, then save# Custom HTTP Client (OPTIONAL)importosimportcertifiimporturllib3timeout=timedelta(minutes=5).secondsca_certs=os.environ.get('SSL_CERT_FILE')orcertifi.where()MINIO_HTTP_CLIENT:urllib3.poolmanager.PoolManager=urllib3.PoolManager(timeout=urllib3.util.Timeout(connect=timeout,read=timeout),maxsize=10,cert_reqs='CERT_REQUIRED',ca_certs=ca_certs,retries=urllib3.Retry(total=5,backoff_factor=0.2,status_forcelist=[500,502,503,504]))Implement your own Attachment handler and integratedjango-minio-backend:fromdjango.dbimportmodelsfromdjango_minio_backendimportMinioBackend,iso_date_prefixclassPrivateAttachment(models.Model):file=models.FileField(verbose_name="Object Upload",storage=MinioBackend(bucket_name='django-backend-dev-private'),upload_to=iso_date_prefix)Initialize the buckets & set their public policy (OPTIONAL):Thisdjango-admincommand creates both the private and public buckets in case one of them does not exists, and sets thepublicbucket's privacy policy fromprivate(default) topublic.pythonmanage.pyinitialize_bucketsCode reference:initialize_buckets.py.Static Files Supportdjango-minio-backendallows serving static files from MinIO. To learn more about Django static files, seeManaging static files, andSTATICFILES_STORAGE.To enable static files support, update yoursettings.py:STATICFILES_STORAGE='django_minio_backend.models.MinioBackendStatic'MINIO_STATIC_FILES_BUCKET='my-static-files-bucket'# replacement for STATIC_ROOT# Add the value of MINIO_STATIC_FILES_BUCKET to one of the pre-configured bucket lists. 
eg.:# MINIO_PRIVATE_BUCKETS.append(MINIO_STATIC_FILES_BUCKET)# MINIO_PUBLIC_BUCKETS.append(MINIO_STATIC_FILES_BUCKET)The value ofSTATIC_URLis ignored, but it must be defined otherwise Django will throw an error.IMPORTANTThe value set inMINIO_STATIC_FILES_BUCKETmust be added either toMINIO_PRIVATE_BUCKETSorMINIO_PUBLIC_BUCKETS, otherwisedjango-minio-backendwill raise an exception. This setting determines the privacy of generated file URLs which can be unsigned public or signed private.Note:IfMINIO_STATIC_FILES_BUCKETis not set, the default value (auto-generated-bucket-static-files) will be used. Policy setting for default buckets isprivate.Default File Storage Supportdjango-minio-backendcan be configured as a default file storage. To learn more, seeDEFAULT_FILE_STORAGE.To configuredjango-minio-backendas the default file storage, update yoursettings.py:DEFAULT_FILE_STORAGE='django_minio_backend.models.MinioBackend'MINIO_MEDIA_FILES_BUCKET='my-media-files-bucket'# replacement for MEDIA_ROOT# Add the value of MINIO_STATIC_FILES_BUCKET to one of the pre-configured bucket lists. eg.:# MINIO_PRIVATE_BUCKETS.append(MINIO_STATIC_FILES_BUCKET)# MINIO_PUBLIC_BUCKETS.append(MINIO_STATIC_FILES_BUCKET)The value ofMEDIA_URLis ignored, but it must be defined otherwise Django will throw an error.IMPORTANTThe value set inMINIO_MEDIA_FILES_BUCKETmust be added either toMINIO_PRIVATE_BUCKETSorMINIO_PUBLIC_BUCKETS, otherwisedjango-minio-backendwill raise an exception. This setting determines the privacy of generated file URLs which can be unsigned public or signed private.Note:IfMINIO_MEDIA_FILES_BUCKETis not set, the default value (auto-generated-bucket-media-files) will be used. Policy setting for default buckets isprivate.Health CheckTo check the connection link between Django and MinIO, use the providedMinioBackend.is_minio_available()method.It returns aMinioServerStatusinstance which can be quickly evaluated as boolean.Example:fromdjango_minio_backendimportMinioBackendminio_available=MinioBackend().is_minio_available()# An empty string is fine this timeifminio_available:print("OK")else:print("NOK")print(minio_available.details)Policy HooksYou can configuredjango-minio-backendto automatically execute a set of pre-defined policy hooks.Policy hooks can be defined insettings.pyby addingMINIO_POLICY_HOOKSwhich must be a list of tuples.Policy hooks are automatically picked up by theinitialize_bucketsmanagement command.For an exemplary policy, see the implementation ofdef set_bucket_to_public(self)indjango_minio_backend/models.pyor the contents ofexamples/policy_hook.example.py.Consistency Check On StartWhen enabled, theinitialize_bucketsmanagement command gets called automatically when Django starts.This command connects to the configured minIO server and checks if all buckets defined insettings.py.In case a bucket is missing or its configuration differs, it gets created and corrected.Reference ImplementationFor a reference implementation, seeExamples.BehaviourThe following list summarises the key characteristics ofdjango-minio-backend:Bucket existence isnotchecked on a save by default. To enable this guard, setMINIO_BUCKET_CHECK_ON_SAVE = Truein yoursettings.py.Bucket existences arenotchecked on Django start by default. To enable this guard, setMINIO_CONSISTENCY_CHECK_ON_START = Truein yoursettings.py.Many configuration errors are validated throughAppConfigbut not every error can be captured there.Files with the same name in the same bucket arenotreplaced on save by default. 
Django will store the newer file with an altered file name To allow replacing existing files, pass thereplace_existing=Truekwarg toMinioBackend. For example:image = models.ImageField(storage=MinioBackend(bucket_name='images-public', replace_existing=True))Depending on your configuration,django-minio-backendmay communicate over two kind of interfaces: internal and external. If yoursettings.pydefines a different value forMINIO_ENDPOINTandMINIO_EXTERNAL_ENDPOINT, then the former will be used for internal communication between Django and MinIO, and the latter for generating URLs for users. This behaviour optimises the network communication. SeeNetworkingbelow for a thorough explanationThe uploaded object's content-type is guessed during save. Ifmimetypes.guess_typefails to determine the correct content-type, then it falls back toapplication/octet-stream.Networking and DockerIf your Django application is running on a shared host with your MinIO instance, you should consider using theMINIO_EXTERNAL_ENDPOINTandMINIO_EXTERNAL_ENDPOINT_USE_HTTPSparameters. This way most traffic will happen internally between Django and MinIO. The external endpoint parameters are required for external pre-signed URL generation.If your Django application and MinIO instance are running on different hosts, you can omit theMINIO_EXTERNAL_ENDPOINTandMINIO_EXTERNAL_ENDPOINT_USE_HTTPSparameters, anddjango-minio-backendwill default to the value ofMINIO_ENDPOINT.Setting up and configuring custom networks in Docker is not in the scope of this document.To learn more about Docker networking, seeNetworking overviewandNetworking in Compose.SeeREADME.Docker.mdfor a real-life Docker Compose demonstration.CompatibilityDjango 2.2 or laterPython 3.6.0 or laterMinIO SDK 7.0.2 or laterNote:This library relies heavily onPEP 484 -- Type Hintswhich was introduced inPython 3.5.0.ContributionPlease find the details inCONTRIBUTE.mdCopyrighttheriverman/django-minio-backend licensed under the MIT Licenseminio/minio-py is licensed under the Apache License 2.0
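As a brief usage sketch for the PrivateAttachment model from the integration example above; the file name and payload are illustrative, and `myapp` is a hypothetical app containing the model:

```python
# Sketch: store and read back an object through the MinIO-backed storage,
# using the PrivateAttachment model from the integration example above.
# The payload and file name are illustrative; `myapp` is hypothetical.
from django.core.files.base import ContentFile
from myapp.models import PrivateAttachment

attachment = PrivateAttachment()
attachment.file.save("report.txt", ContentFile(b"hello minio"), save=True)

# The storage backend generates the (signed private or unsigned public) URL
# for the stored object, depending on the bucket's configured privacy.
print(attachment.file.url)
```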
ak-django-oauth-toolkit
UNKNOWN
ak-djangorestframework-jsonapi
Overview

JSON API support for Django REST Framework

Documentation: http://django-rest-framework-json-api.readthedocs.org/
Format specification: http://jsonapi.org/format/

By default, Django REST Framework will produce a response like:

```json
{
    "count": 20,
    "next": "http://example.com/api/1.0/identities/?page=3",
    "previous": "http://example.com/api/1.0/identities/?page=1",
    "results": [{
        "id": 3,
        "username": "john",
        "full_name": "John Coltrane"
    }]
}
```

However, for an identity model in JSON API format the response should look like the following:

```json
{
    "links": {
        "prev": "http://example.com/api/1.0/identities",
        "self": "http://example.com/api/1.0/identities?page=2",
        "next": "http://example.com/api/1.0/identities?page=3"
    },
    "data": [{
        "type": "identities",
        "id": 3,
        "attributes": {
            "username": "john",
            "full-name": "John Coltrane"
        }
    }],
    "meta": {
        "pagination": {
            "count": 20
        }
    }
}
```

Requirements

- Python (2.7, 3.4, 3.5, 3.6)
- Django (1.11, 2.0)
- Django REST Framework (3.6, 3.7)

Installation

From PyPI:

```
$ pip install djangorestframework-jsonapi
```

From Source:

```
$ git clone https://github.com/django-json-api/django-rest-framework-json-api.git
$ cd django-rest-framework-json-api
$ pip install -e .
```

Running the example app:

```
$ git clone https://github.com/django-json-api/django-rest-framework-json-api.git
$ cd django-rest-framework-json-api
$ pip install -e .
$ django-admin.py runserver --settings=example.settings
```

Browse to http://localhost:8000

Running Tests

It is recommended to create a virtualenv for testing. Assuming it is already installed and activated:

```
$ pip install -e .
$ pip install -r requirements-development.txt
$ py.test
```

Usage

rest_framework_json_api assumes you are using class-based views in Django REST Framework.

Settings

One can either add rest_framework_json_api.parsers.JSONParser and rest_framework_json_api.renderers.JSONRenderer to each ViewSet class, or override settings.REST_FRAMEWORK:

```python
REST_FRAMEWORK = {
    'PAGE_SIZE': 10,
    'EXCEPTION_HANDLER': 'rest_framework_json_api.exceptions.exception_handler',
    'DEFAULT_PAGINATION_CLASS': 'rest_framework_json_api.pagination.PageNumberPagination',
    'DEFAULT_PARSER_CLASSES': (
        'rest_framework_json_api.parsers.JSONParser',
        'rest_framework.parsers.FormParser',
        'rest_framework.parsers.MultiPartParser'
    ),
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework_json_api.renderers.JSONRenderer',
        'rest_framework.renderers.BrowsableAPIRenderer',
    ),
    'DEFAULT_METADATA_CLASS': 'rest_framework_json_api.metadata.JSONAPIMetadata',
}
```

If PAGINATE_BY is set the renderer will return a meta object with record count and a links object with the next and previous links. Pages can be specified with the page GET parameter.

This package provides much more, including automatic inflection of JSON keys, extra top level data (using nested serializers), relationships, links, and handy shortcuts like MultipleIDMixin. Read more at http://django-rest-framework-json-api.readthedocs.org/
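A brief sketch of the per-ViewSet alternative mentioned above (instead of overriding settings.REST_FRAMEWORK); the Identity model and serializer are hypothetical:

```python
# Sketch: attaching the JSON API parser/renderer to a single ViewSet instead of
# configuring them globally in settings.REST_FRAMEWORK.
from rest_framework import viewsets
from rest_framework_json_api.parsers import JSONParser
from rest_framework_json_api.renderers import JSONRenderer

from myapp.models import Identity                  # hypothetical model
from myapp.serializers import IdentitySerializer   # hypothetical serializer


class IdentityViewSet(viewsets.ModelViewSet):
    queryset = Identity.objects.all()
    serializer_class = IdentitySerializer
    parser_classes = (JSONParser,)
    renderer_classes = (JSONRenderer,)
```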
ak_docx
ak-docx

A base module to manipulate docx files

View Demo · Documentation · Report Bug · Request Feature

Table of Contents: 1. About the Project (1.1. Features), 2. Getting Started (2.1. Prerequisites, 2.2. Dependencies, 2.3. Installation), 3. Usage, 4. License, 5. Contact, 6. Acknowledgements

1. About the Project

A base project to simplify docx file manipulation

1.1. Features

Feature 1, Feature 2, Feature 3

2. Getting Started

2.1. Prerequisites

2.2. Dependencies

The repo comes pre-compiled with all dependencies.

2.3. Installation

Install from pypi:

```
python -m venv .venv
.venv\Scripts\activate
pip install ak_docx
```

3. Usage

4. License

See LICENSE.txt for more information.

5. Contact

Arun Kishore - @rpakishore

Project Link: https://github.com/rpakishore/ak-docx

6. Acknowledgements

Awesome README Template, Banner Maker, Shields.io, Carbon
ake
compo is a package for PyScript components.
akefpdf
This is the homepage of our projects.
akello
Setup DynamoDB on your local environment

Warning: For local development you will need a free AWS account with Cognito pools created. Please reference the AWS docs.

Setup NoSQL Workbench

Download and install NoSQL Workbench from: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.settingup.html

Once you have installed NoSQL Workbench, run DynamoDB locally.

Set your environment variables

```
export AWS_REGION=##
export AWS_SECRET_NAME=##
export AWS_ACCESS_KEY_ID=##
export AWS_SECRET_ACCESS_KEY=##
export DYNAMODB_TABLE=##
export AWS_COGNITO_USERPOOL_ID=##
export AWS_COGNITO_APP_CLIENT_ID=##
```

Run the FastAPI server

```
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
uvicorn akello.main:app --reload
```

Call akello services

Create a new registry:

```python
from akello import registry

moderate_depression = registry.create_registry('Moderate Depression')
```

Refer a patient:

```python
from akello import registry
from akello.dynamodb.models.registry import PatientRegistry

# build a patient object using the PatientRegistry model
patient_registry = PatientRegistry(
    id='registry id',
    # .. other attributes
)
registry.refer_patient(patient_registry)
```

Add a patient encounter:

```python
from akello import registry
from akello.dynamodb.models.registry import TreatmentLog

treatment_log = TreatmentLog(
    patient_mrn='<patients mrn>',
    phq9_score=16,
    gad7_score=12,
    minutes=4,
    # .. other required attributes
)
registry.add_treatment_log('<registry_id>', '<patient_id>', treatment_log)
```

Publish a package

```
python3 -m build
twine upload dist/*
```

Run tests

```
python3 -m unittest
```
akellogpt
Akello GPT helps ensure deterministic results for healthcare applications
akello-publisher
Failed to fetch description. HTTP Status Code: 404
akeneo
A Python wrapper for the Akeneo REST API. Easily interact with the Akeneo REST API using this library.

Installation

```
pip install akeneo
```

Getting started

Generate API credentials (Consumer Key & Consumer Secret) following these instructions: https://api.akeneo.com/getting-started-admin.html. Check out the Akeneo API endpoints and the data that can be manipulated at https://api.akeneo.com/api-reference-index.html.

Basic setup

Basic setup for the Akeneo REST API:

```python
from akeneo import AkeneoAPI

akeneo = AkeneoAPI(
    url="AKENEO_INSTANCE_URL",
    client_id="YOUR_CLIENT_ID",
    secret="YOUR_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
)
```

Response

All methods will directly return the JSON response from the API.

Changelog

0.0.1 - Initial version: every endpoint should be OK, except the ones for 'Media files'.
akeneo-api-client
Akeneo API Python ClientA simple Python client for the Akeneo PIM APIInstallationpip install akeneo-api-clientUsageInitialise the clientfromakeneo_api_client.client_builderimportClientBuilderfromakeneo_api_client.client.akeneo_api_errorimportAkeneoApiErrorcb=ClientBuilder(uri)api=cb.build_authenticated_by_password(username,password,client_id,secret)Or if you already have a cached version of the token you can use:api=cb.build_authenticated_by_token(client_id,secret,token,refresh_token)Fetch a producttry:response=api.product_uuid_api.get(uuid)print(response)exceptAkeneoApiErrorase:print(e.response.status_code)print(e.response_body)Iterate over a list of productsfromakeneo_api_client.search.search_builderimportSearchBuildersb=SearchBuilder()sb.add_filter('updated','>','2023-11-29 00:00:00')sb.add_filter('completeness','=',100,{"scope":"ecommerce"})sb.add_filter('enabled','=',True)search=sb.get_filters()try:forpageinapi.product_uuid_api.all(query_params={"search":search}):foriteminpage:print(item["uuid"])exceptAkeneoApiErrorase:print(e.message)Create a producttry:response=api.product_uuid_api.create(data={"family":"my_family"})print(response.headers.get("location"))exceptAkeneoApiErrorase:print(e.response_body)Upsert a productThis call will create a product if it doesn't exist or update it if it doesdata={"values":{"Product_name":[{"scope":None,"locale":"en_GB","data":"My product"}]}}try:api.product_uuid_api.upsert(uuid,data)exceptAkeneoApiErrorase:print(e.message)Upsert a list of productsproducts=[{"uuid":str(uuid.uuid4()),"values":{"Product_name":[{"scope":None,"locale":"en_GB","data":"Product 1"}]}},{"values":{"Product_name":[{"scope":None,"locale":"en_GB","data":"Product 2"}]}}]try:response=api.product_uuid_api.upsert_list(products)foriteminresponse:ifitem['status_code']>=400:print(item)exceptAkeneoApiErrorase:print(e.response.reason)Delete a producttry:api.product_uuid_api.delete(uuid)exceptAkeneoApiErrorase:print(e.response_body)
akeneo-api-client-globus
akeneo
akeneo-cli
Akeneo CLI

You'll need to get app credentials for Akeneo as explained here.

This package uses generic calls to the Akeneo API. To know the list of available endpoints and how the API works, please refer to the official documentation.

CLI

The CLI itself is a work in progress. Currently only the product can be retrieved, with a command like:

```
source .env  # Create your own .env from env.example
akeneo get product
```

Code

Examples of usage from code:

```python
import os

from akeneo_cli.client import AkeneoClient

akeneo_client = AkeneoClient(
    os.getenv("AKENEO_URL"),
    os.getenv("AKENEO_CLIENT_ID"),
    os.getenv("AKENEO_CLIENT_SECRET"),
)

with akeneo_client.login(os.getenv("AKENEO_USERNAME"), os.getenv("AKENEO_PASSWORD")) as session:
    product_list = session.get("products")
    product = session.get("products", code="my-product")
    product_model = session.get("product-models", code="some-model")

    response = session.patch("products", code="my-product", data=product_data)
    response = session.post("products", code="my-product", data=product_data)
    response = session.bulk("products", data=[product_data1, product_data2, product_data3])

    response = session.put_product_file("my-product", "my-attribute", "my-filepath", is_model=False, locale=None, scope=None)
    response = session.put_asset_file("my-asset-filepath")

    response = session.delete("products", code="my-product")
```
aker
No description available on PyPI.
akera-distribution
No description available on PyPI.
akerbp-mlops
MLOps FrameworkThis is a framework for MLOps that deploys models as functions in Cognite Data FusionUser GuideReference guideThis assumes you are already familiar with the framework, and acts as a quick reference guide for deploying models using the prediction service, i.e. when model training is performed outside of the MLOps framework.Train model to generate model artifactsManually upload artifacts to your test environmentThis includes model artifacts generated during training, mapping- and settings-file for the model, scaler object etc. Basically everything that is needed to preprocess the data and make predictions using the trained model.Deploy prediction service to testThis is handled by the CI/CD pipeline on GitHubManually promote model artifacts from test to productionManually trigger deployment of model to productionTrigger in the CI/CD pipelineCall deployed modelSee section "Calling a deployed model prediction service hosted in CDF" belowGetting Started:Follow these steps (in the context of your virtual environment):Install package:pip install akerbp-mlops[cdf](On some OSes you may need to escape the brackets by doing sopip install "akerbp-mlops[cdf]")Set up pipeline files.github/workflows/main.ymland config filemlops_settings.yamlby running this command from your repo's root folder:python-makerbp.mlops.deployment.setupFill in user settings and then validate them by running this (from repo root):python-c"from akerbp.mlops.core.config import validate_user_settings; validate_user_settings()"alternatively, run the setup again:python-makerbp.mlops.deployment.setupCommit the pipeline and settings files to your repoBecome familiar with the model template (see foldermodel_code) and make sure your model follows the same interface and file structure (seeFiles and Folders Structure)A this point every git push in master branch will trigger a deployment in the test environment. More information about the deployments pipelines is provided later.Updating MLOpsFollow these steps:Install a new version using pip, e.g.pip install akerbp-mlops[cdf]==x, or upgrade your existing version to the latest release by runningpip install --upgrade akerbp-mlops[cdf]Run this command from your repo's root folder:python-makerbp.mlops.deployment.setupThis will update the GitHub pipeline with the newest release of akerbp.mlops and validate your settings. Once the settings are validated, commit changes and you're ready to go!General GuidelinesUsers should consider the following general guidelines:Model artifacts shouldnotbe committed to the repo. Foldermodel_artifactdoes store model artifacts for the model defined inmodel_code, but it is just to help users understand the framework (see this sectionon how to handle model artifacts)Follow the recommended file and folder structure (see this section)There can be several models in your repo: they need to be registered in the settings, and then they need to have their own model and test filesFollow the import guidelines (see this section)Make sure the prediction service gets access to model artifacts (see this section)ConfigurationMLOps configuration is stored inmlops_settings.yaml. 
Example for a project with a single model:model_name:model1human_friendly_model_name:'MyFirstModel'model_file:model_code/model1.pyreq_file:model_code/requirements.modelartifact_folder:model_artifactartifact_version:1# Optionaltest_file:model_code/test_model1.pyplatform:cdfdataset:mlopspython_version:py39helper_models:-my_helper_modelinfo:prediction:&descdescription:'Descriptionpredictionservice,model1'metadata:required_input:-ACS-RDEP-DENtraining_wells:-3/14-2/7-18input_types:-int-float-stringunits:-s/ft-1-kg/m3output_curves:-ACoutput_units:-s/ftpetrel_exposure:Falseimputed:Truenum_filler:-999.15cat_filler:UNKNOWNowner:[email protected]:<<:*descdescription:'Descriptiontrainingservice,model1'metadata:required_input:-ACS-RDEP-DENoutput_curves:-AChyperparameters:learning_rate:1e-3batch_size:100epochs:10FieldDescriptionmodel_namea suitable name for your model. No spaces or dashes are allowedhuman_friendly_model_nameName of function (in CDF)model_filemodel file path relative to the repo's root folder. All required model code should be under the top folder in that path (model_codein the example above).req_filemodel requirement file. Do not use.txtextension!artifact_foldermodel artifact folder. It can be the name of an existing local folder (note that it should not be committed to the repo). In that case it will be used in local deployment. It still needs to be uploaded/promoted with the model manager so that it can be used in Test or Prod. If the folder does not exist locally, the framework will try to create that folder and download the artifacts there. Set tonullif there is no model artifact.artifact_version (optional)artifact version number to use during deployment. Defaults to the latest version if not specifiedtest_filetest file to use. Set tonullfor no testing before deployment (not recommended).platformdeployment platforms, eithercdf(Cognite) orlocalfor local testing.python_versionIfplatformis set tocdf, thepython_versionrequired by the model to be deployed needs to be specified. Available versions can be foundherehelper_modelsArray of helper models using for feature engineering during preprocessing. During deployment, iterate through this list and check that helper model requirements are the same as the main model. For now we only check for akerbp-mlpetdatasetCDF Dataset to use to read/write model artifacts (seeModel Manager). Set tonullis there is no dataset (not recommended).infodescription, metadata and owner information for the prediction and training services. Training field can be discarded if there's no such service.Note: allpathsshould beunix style, regardless of the platform.Notes on metadata: We need to specify the metadata under info as a dictionary with strings as keys and values, as CDF only allows strings for now. We are also limited to the followingKeys can contain at most 16 charactersValues can contain at most 512 charactersAt most 16 key-value pairsMaximum size of entire metadata field is 512 bytesIf there are multiple models, model configuration should be separated using---. 
Example:model_name:model1human_friendly_model_name:'MyFirstModel'model_file:model_code/model1.py(...)---# <- this separates model1 and model2 :)model_name:model2human_friendly_model:'MySecondModel'model_file:model_code/model2.py(...)Files and Folders StructureAll the model code and files should be under a single folder, e.g.model_code.Requiredfiles in this folder:model.py: implements the standard model interfacetest_model.py: tests to verify that the model code is correct and to verify correct deploymentrequirements.model: libraries needed (with specificversion numbers), can't be calledrequirements.txt. Add the MLOps framework like this:# requirements.model(...)# your other reqsakerbp-mlops==MLOPS_VERSIONDuring deployment,MLOPS_VERSIONwill be automatically replaced by the specific versionthat you have installed locally. Make sure you have the latest release on your local machine prior to model deployment.For the prediction service we require the model interface to have the following class and functioninitialization(), with required argumentspath to artifact foldersecretsthese arguments can safely be set to None, and the framework will handle everything under the hood.only set path to artifact folder as None if not using any artifactspredict(), with required argumentsdatainit_object (output from initialization() function)secretsYou can safely put the secrets argument to None, and the framework will handle the secrets under the hood.ModelException class with inheritance from an Exception base classFor the training service we require the model interface to have the following class and functiontrain(), with required argumentsfolder_pathpath to store model artifacts to be consumed by the prediction serviceModelException class with inheritance from an Exception base classThe following structure is recommended for projects with multiple models:model_code/model1/model_code/model2/model_code/common_code/This is because when deploying a model, e.g.model1, the top folder in the path (model_codein the example above) is copied and deployed, i.e.common_codefolder (assumed to be needed bymodel1) is included. Note thatmodel2folder would also be deployed (this is assumed to be unnecessary but harmless).Import GuidelinesThe repo's root folder is the base folder when importing. For example, assume you have these files in the folder with model code:model_code/model.pymodel_code/helper.pymodel_code/data.csvIfmodel.pyneeds to importhelper.py, use:import model_code.helper. Ifmodel.pyneeds to readdata.csv, the right path isos.path.join('model_code', 'data.csv').It's of course possible to import from the Mlops package, e.g. its logger:fromakerbp.mlops.coreimportloggerlogging=logger.get_logger("logger_name")logging.debug("This is a debug log")ServicesWe consider two types of services: prediction and training.Deployed services can be called withfromakerbp.mlops.xx.helpersimportcall_functionoutput=call_function(external_id,data)Wherexxis either'cdf'or'gc', andexternal_idfollows the structuremodel-service-model_env:model: model name given by the user (settings file)service: eithertrainingorpredictionmodel_env: eitherdev,testorprod(depending on the deployment environment)The output has a status field (okorerror). If they are 'ok', they have also apredictionandprediction_fileortrainingfield (depending on the type of service). The former is determined by thepredictmethod of the model, while the latter combines artifact metadata and model metadata produced by thetrainfunction. 
Prediction services have also amodel_idfield to keep track of which model was used to predict.See below for more details on how to call prediction services hosted in CDF.Deployment PlatformModel services (described below) can be deployed to CDF, i.e. Cognite Data Fusion or Google Cloud Run. The deployment platform is specified in the settings file.CDF Functions include metadata when they are called. This information can be used to redeploy a function (specifically, thefile_idfield). Example:importakerbp.mlops.cdf.helpersascdfhuman_readable_name="My model"external_id="my_model-prediction-test"cdf.set_up_cdf_client('deploy')cdf.redeploy_function(human_readable_nameexternal_id,file_id,'Description','[email protected]')Note that the external-id of a function needs to be unique, as this is used to distinguish functions between services and hosting environment.It's possible to query available functions (can be filtered by environment and/or tags). Example:importakerbp.mlops.cdf.helpersascdfcdf.set_up_cdf_client('deploy')all_functions=cdf.list_functions()test_functions=cdf.list_functions(model_env="test")tag_functions=cdf.list_functions(tags=["well_interpretation"])Functions can be deleted. Example:importakerbp.mlops.cdf.helpersascdfcdf.set_up_cdf_client('deploy')cdf.delete_service("my_model-prediction-test")Functions can be called in parallel. Example:fromakerbp.mlops.cdf.helpersimportcall_function_parallelfunction_name='my_function-prediction-prod'data=[dict(data='data_call_1'),dict(data='data_call_2')]response1,response2=call_function_parallel(function_name,data)#TODO - Document common use cases for GCRModel ManagerModel Manager is the module dedicated to managing the model artifacts used by prediction services (and generated by training services). This module uses CDF Files as backend.Model artifacts are versioned and stored together with user-defined metadata. Uploading a new model increases the version count by 1 for that model and environment. When deploying a prediction service, the latest model version is chosen. It would be possible to extend the framework to allow deploying specific versions or filtering by metadata.Model artifacts are segregated by environment (e.g. only production artifacts can be deployed to production). Model artifacts have to be uploaded manually to test (or dev) environment before deployment. Code example:importakerbp.mlops.model_managerasmmmetadata=train(model_dir,secrets)# or define it directlymm.setup()folder_info=mm.upload_new_model_version(model_name,model_env,folder_path,metadata)If there are multiple models, you need to do this one at at time. Note thatmodel_namecorresponds to one of the elements inmodel_namesdefined inmlops_settings.py,model_envis the target environment (where the model should be available),folder_pathis the local model artifact folder andmetadatais a dictionary with artifact metadata, e.g. performance, git commit, etc.Model artifacts needs to be promoted to the production environment (i.e. after they have been deployed successfully to test environment) so that a prediction service can be deployed in production.# After a model's version has been successfully deployed to testimportakerbp.mlops.model_managerasmmmm.setup()mm.promote_model('model','version')VersioningEach model artifact upload/promotion increments a version number (environment dependent) available in Model Manager. However, this doesn't modify the model artifacts used in existing prediction services (i.e. nothing changes in CDF Functions). 
To reflect the newly uploaded/promoted model artifacts in the existing services one need to deploy the services again. Note that we dont have to specify the artifact version explicitly if we want to deploy using the latest artifacts, as this is done by default.Recommended process to update a model artifact and prediction service:New model features implemented in a feature branchNew artifact generated and uploaded to test environmentFeature branch merged with masterTest deployment is triggered automatically: prediction service is deployed to test environment with the latest artifact version (in test)Prediction service in test is verifiedArtifact version is promoted manually from command line whenever suitableProduction deployment is triggered manually from GitHub: prediction service is deployed to production with the latest artifact version (in prod)It's possible to get an overview of the model artifacts managed by Model Manager. Some examples (seeget_model_version_overviewdocumentation for other possible queries):importakerbp.mlops.model_managerasmmmm.setup()# all artifactsfolder_info=mm.get_model_version_overview()# all artifacts for a given modelfolder_info=mm.get_model_version_overview(model_name='xx')If the overview shows model artifacts that are not needed, it is possible to remove them. For example if artifact "my_model/dev/5" is not needed:model_to_remove="my_model/dev/5"mm.delete_model_version(model_to_remove)Model Manager will by default show information on the artifact to delete and ask for user confirmation before proceeding. It's possible (but not recommended) to disable this check. There's no identity check, so it's possible to delete any model artifact (from other data scientist). Be careful!It's possible to download a model artifact (e.g. to verify its content). For example:mm.download_model_version('model_name','test','artifact_folder',version=5)If no version is specified, the latest one is downloaded by default.By default, Model Manager assumes artifacts are stored in themlopsdataset. If your project uses a different one, you need to specify during setup (seesetupfunction).Further information:Model Manager requires specific environmental variables (see next section) or a suitable secrets to be passed to thesetupfunction.In projects with a training service, you can rely on it to upload a first version of the model. The first prediction service deployment will fail, but you can deploy again after the training service has produced a model.When you deploy from the development environment (covered later in this document), the model artifacts in the settings file can point to existing local folders. These will then be used for the deployment. Version is then fixed tomodel_name/dev/1. Note that these artifacts are not uploaded to CDF Files.Prediction services are deployed with model artifacts (i.e. the artifact is copied to the project file used to create the CDF Function) so that they are available at prediction time. Downloading artifacts at run time would require waiting time, and files written during run time consume ram memory).Model versioningTo allow for model versioning and rolling back to previous model deployments, the external id of the functions (in CDF) includes a version number that is reflected by the latest artifact version number when deploying the function (see above). 
Everytime we upload/promote new model artifacts and deploy our services, the version number of the external id of the functions representing the services are incremented (just as the version number for the artifacts).To distinguish the latest model from the remaining model versions, we redeploy the latest model version using a predictable external id that does not contain the version number. By doing so we relieve the clients need of dealing with version numbers, and they will call the latest model by default. For every new deployment, we will thus have two model deployments - one with the version number, and one without the version number in the external id. However, the predictable external id is persisted across new model versions, so when deploying a new version the latest one, with the predictable external id, is simply overwritten.We are thus concerned with two structures for the external id<model_name>-<service>-<model_env>-<version>for rolling back to previous versions, and<model_name>-<service>-<model_env>for the latest deployed modelFor the latest model with a predictable external id, we tag the description of the model to specify that the model is in fact the latest version, and add the version number to the function metadata.We can now list out multiple models with the same model name and external id prefix, and choose to make predictions and do inference with a specific model version. An example is shown below.# List all prediction services (i.e. models) with name "My Model" hosted in the test environment, and model corresponding to the first element of the listfromakerbp.mlops.cdf.helpersimportget_clientclient=get_client(client_id=<client_id>,client_secret=<client_secret>)my_models=client.functions.list(name="My Model",external_id_prefix="mymodel-prediction-test")my_model_specific_version=my_models[0]Calling a deployed model prediction service hosted in CDFThis section describes how you can call deployed models and obtain predictions for doing inference. We have two options for calling a function in CDF, either using the MLOps framework directly or by using the Cognite SDK. Independent of how you call your model, you have to pass the data as a dictionary with a key "data" containing a dictionary with your data, where the keys of the inner dictionary specifies the columns, and the values are list of samples for the corresponding columns.First, load your data and transform it to a dictionary as assumed by the framework. Note that the data dictionary you pass to the function might vary based on your model interface. Make sure to align with what you specified in yourmodel.pyinterface.importpandasaspddata=pd.read_csv("path_to_data")input_data=data.drop(columns=[target_variables])data_dict={"data":input_data.to_dict(orient=list),"to_file":True}The "to_file" key of the input data dictionary specifies how the predictions can be extracted downstream. More details are provided belowCalling deployed model using MLOps:Set up a cognite client with sufficient access rightsExtract the response directly by specifying the external id of the model and passing your data as a dictionaryNote that the external id is on the form"<model_name>-<service>-<model_env>-<version>", and"<model_name>-<service>-<model_env>"Use the latter external id if you want to call the latest model. 
The former external id can be used if you want to call a previous version of your model.fromakerbp.mlops.cdf.helpersimportset_up_cdf_client,call_functionset_up_cdf_client(context="deploy")#access CDF data, files and functions with deploy contextresponse=call_function(function_name="<model_name>-prediction-<model_env>",data=data_dict)Calling deployed model using the Cognite SDK:set up cognite client with sufficient access rightsRetreive model from CDF by specifying the external-id of the modelCall the functionExtract the function call response from the function callfromakerbp.mlops.cdf.helpersimportget_clientclient=get_client(client_id=<client_id>,client_secret=<client_secret>)client=CogniteClient(config=cnf)function=client.functions.retrieve(external_id="<model_name>-prediction-<model_env>")function_call=function.call(data=data_dict)response=function_call.get_response()Depending on how you specified the input dictionary, the predictions are available directly from the response or needs to be extracted from Cognite Files. If the input data dictionary contains a key "to_file" with value True, the predictions are uploaded to cognite Files, and the 'prediction_file' field in the response will contain a reference to the file containing the predictions. If "to_file" is set to False, or if the input dictionary does not contain such a key-value pair, the predictions are directly available through the function call response.If "to_file" = True, we can extract the predictions using the following code-snippetfile_id=response["prediction_file"]bytes_data=client.files.download_bytes(external_id=file_id)predictions_df=pd.DataFrame.from_dict(json.loads(bytes_data))Otherwise, the predictions are directly accessible from the response as follows.predictions=response["predictions"]Extracting metadata from deployed model in CDFOnce a model is deployed, a user can extract potentially valuable metadata as follows.my_function=client.functions.retrieve(external_id="my_model-prediction-test")metadata=my_function.metadataWhere the metadata corresponds to whatever you specified in the mlops_settings.yaml file. For this example we get the following metadata{'cat_filler': 'UNKNOWN', 'imputed': 'True', 'input_types': '[int, float, string]', 'num_filler': '-999.15', 'output_curves': '[AC]', 'output_unit': '[s/ft]', 'petrel_exposure': 'False', 'required_input': '[ACS, RDEP, DEN]', 'training_wells': '[3/1-4]', 'units': '[s/ft, 1, kg/m3]'}Local Testing and DeploymentIt's possible to tests the functions locally, which can help you debug errors quickly. This is recommended before a deployment.Define the following environmental variables (e.g. in.bashrc):exportMODEL_ENV=devexportCOGNITE_OIDC_BASE_URL=https://api.cognitedata.comexportCOGNITE_TENANT_ID=<tenantid>exportCOGNITE_CLIENT_ID_WRITE=<writeaccessclientid>exportCOGNITE_CLIENT_SECRET_WRITE=<writeaccessclientsecret>exportCOGNITE_CLIENT_ID_READ=<readaccessclientid>exportCOGNITE_CLIENT_SECRET_READ=<readaccessclientsecret>From your repo's root folder:python -m pytest model_code(replacemodel_codeby your model code folder name)deploy_prediction_servicedeploy_training_service(if there's a training service)The first one will run your model tests. The last two run model tests but also the service tests implemented in the framework and simulate deployment.If you want to run tests only you need to setTESTING_ONLY=Truebefore calling the deployment script.Automated Deployments from BitbucketDeployments to the test environment are triggered by commits (you need to push them). 
Deployments to the production environment are enabled manually from the Bitbucket pipeline dashboard. Branches that match 'deploy/*' behave as master. Branches that matchfeature/*run tests only (i.e. do not deploy).It is assumed that most projects won't include a training service. A branch that matches 'mlops/*' deploys both prediction and training services. If a project includes both services, the pipeline file could instead be edited so that master deployed both services.It is possible to schedule the training service in CDF, and then it can make sense to schedule the deployment pipeline of the model service (as often as new models are trained)NOTE: Previous version of akerbp-mlops assumes that callingLOCAL_DEPLOYMENT=True deploy_prediction_servicewill not deploy models and run tests. The package is now refactored to only trigger tests when the environment variableTESTING_ONLYis set toTrue. Make sure to update the pipeline definition for branches with prefixfeature/to callTESTING_ONLY=True deploy_prediction_serviceinstead.GitHub SetupThe following environments need to be defined inrepository settings > deployments:dev: where two environment variables are definedMODEL_ENV=devSERVICE_NAME=predictiontest: where two environment variables are definedMODEL_ENV=testSERVICE_NAME=predictionprod: where two environment variables are definedMODEL_ENV=prodSERVICE_NAME=predictioneThe following secrets need to be defined inrepository settings > Secrets and variables > Actions > Repository secrets:COGNITE_CLIENT_ID_WRITECOGNITE_CLIENT_SECRET_WRITECOGNITE_CLIENT_ID_READCOGNITE_CLIENT_SECRET_READCOGNITE_OIDC_BASE_URLCOGNITE_TENANT_ID(these should be CDF client id and secrets for respective read and write access).GitHub Actions need to be enabled on the repo.Developer/Admin GuideThis package is managed usingpoetry. Please refer to thepoetry documentationfor more information on how to use poetry and install itInstallationTo install the package, run the following command from the root folder of the repopoetryinstall-Ecdf--with=dev,pre-commit,version,testPoetry usesgroupsto manage dependencies. The above command installs the package with all the defined groups in the toml file.Package versioningThe versioning of the package followsSemVer, using theMAJOR.MINOR.PATCHstructure. We are thus updating the package version using the following conventionIncrement MAJOR when making incompatible API changesIncrement MINOR when adding backwards compatible functionalityIncrement PATCH when making backwards compatible bug-fixesThe version is updated based on the latest commit to the repo, and we are currently using the following rules.The MAJOR version is incremented if the commit message includes the wordmajorThe MINOR version is incremented if the commit message includes the wordminorThe PATCH number is incremented if neithermajornorminorif found in the commit messageIf the commit message includes the phraseprerelease, the package version is extended witha, thus taking the formMAJOR.MINOR.PATCHa.Note that the above keywords arenotcase sensitive. 
Moreover,majortakes precedence overminor, so if both keywords are found in the commit message, the MAJOR version is incremented and the MINOR version is kept unchanged.In dev and test environment, we release the package using the pre-release tag, and the package takes the following version numberMAJOR.MINOR.PATCH-alpha.PRERELEASE.The version number is automatically generated by combiningpoetry-dynamic-versioningwith theincrement_package_version.pyscript and is based off git tagging and the incremental version numbering system mentioned above.MLOps Files and FoldersThese are the files and folders in the MLOps repo:srccontains the MLOps framework packagemlops_settings.yamlcontains the user settings for the dummy modelmodel_codeis a model template included to show the model interface. It is not needed by the framework, but it is recommended to become familiar with it.model_artifactstores the artifacts for the model shown inmodel_code. This is to help to test the model and learn the framework..github/*describes all the relevant configurations for the CI/CD pipeline run by GitHub Actionsbuild.shis the script to build and upload the packagepyproject.tomlis the project's configuration fileLICENSEis the package's licenseCDF DatasetsIn order to control access to the artifacts:Set up a CDF Dataset withwrite_protected=Trueand aexternal_id, which by default is expected to bemlops.Create a group of owners (CDF Dashboard), i.e. those that should have write accessLocal Testing (only implemented for the prediction service)To perform local testing of before pushing to GITHUB, you can run the following commands:poetryrunpython-mpytest(assuming you have first runpoetry install -E cdf --with=dev,pre-commit,version,test"in the same environment)Build and Upload PackageCreate an account in pypi, then create a token and a$HOME/.pypircfile if you want to deploy from local. Editpyproject.tomlfile and note the following:Dependencies need to be registeredBash scripts will be installed in abinfolder in thePATH.The pipeline is setup to build the library, but it's possible to build and upload the library from the development environment as well (as long as you have thePYPI_TOKENenvironment variable set). To do so, run:bashbuild.shIn order to authenticate to GitHub to deploy to pypi you need to setup a token. Copy its content and add that to the secured repository secretPYPI_TOKEN.Notes on the codeService testing happens in an independent process (subprocess library) to avoid setup problems:When deploying multiple models the service had to be reloaded before testing it, otherwise it would be the first model's service. Model initialization in the prediction service is designed to load artifacts only once in the processIf the model and the MLOps framework rely on different versions of the same library, the version would be changed during runtime, but the upgraded/downgraded version would not be available for the current process
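For orientation, here is a minimal sketch of what a prediction-service model_code/model.py could look like, following the interface described under "Files and Folders Structure" above (an initialization() and predict() function plus a ModelException class). The artifact file name and the prediction logic are placeholder assumptions; the bundled model_code template remains the authoritative reference.

```python
# model_code/model.py -- minimal sketch of the prediction-service interface
# described above. The artifact file name ("model.pkl") and the prediction
# logic are placeholder assumptions, not the framework's actual template.
import os
import pickle


class ModelException(Exception):
    """Raised when the model cannot initialize or predict."""


def initialization(artifact_folder, secrets):
    # Both arguments may be None; only load artifacts when a folder is given.
    if artifact_folder is None:
        return None
    try:
        with open(os.path.join(artifact_folder, "model.pkl"), "rb") as f:
            return pickle.load(f)
    except (OSError, pickle.PickleError) as e:
        raise ModelException(f"Could not load model artifact: {e}") from e


def predict(data, init_object, secrets):
    # `data` is the dictionary passed to the deployed service; the exact
    # return format should mirror the bundled model_code template.
    try:
        return {"predictions": list(init_object.predict(data["data"]))}
    except Exception as e:
        raise ModelException(f"Prediction failed: {e}") from e
```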
akerbp-mlpet
akerbp.mlpetPreprocessing tools for Petrophysics ML projects at EurekaInstallationInstall the package by running the following (requires python 3.8 or later)pip install akerbp.mlpetQuick startFor a short example of how to use the mlpet Dataset class for pre-processing data see below. Please refer to the tests folder of this repository for more examples as well as some examples of thesettings.yamlfile:import os from akerbp.mlpet import Dataset from akerbp.mlpet import utilities # Instantiate an empty dataset object using the example settings and mappings provided ds = Dataset( settings=os.path.abspath("settings.yaml"), # Absolute file paths are required folder_path=os.path.abspath(r"./"), # Absolute file paths are required ) # Populate the dataset with data from a file (support for multiple file formats and direct cdf data collection exists) ds.load_from_pickle(r"data.pkl") # Absolute file paths are preferred # The original data will be kept in ds.df_original and will remain unchanged print(ds.df_original.head()) # Split the data into train-validation sets df_train, df_test = utilities.train_test_split( df=ds.df_original, target_column=ds.label_column, id_column=ds.id_column, test_size=0.3, ) # Preprocess the data for training according to default workflow # print(ds.default_preprocessing_workflow) <- Uncomment to see what the workflow does df_preprocessed = ds.preprocess(df_train)The procedure will be exactly the same for any other dataset class. The only difference will be in the "settings". For a full list of possible settings keys see either thebuilt documentationor the akerbp.mlpet.Dataset class docstring. Make sure that the curve names are consistent with those in the dataset.The loaded data is NOT mapped at load time but rather at preprocessing time (i.e. when preprocess is called).Recommended workflow for preprocessingDue to the operations performed by certain preprocessing methods in akerbp.mlpet, the order in which the different preprocessing steps can sometimes be important for achieving the desired results. Below is a simple guide that should be followed for most use cases:Misrepresented missing data should always be handled first (usingset_as_nan)This should then be followed by data cleaning methods (e.g.remove_outliers,remove_noise,remove_small_negative_values)Depending on your use case, once the data is clean you can then impute missing values (seeimputers.py). Note however that some features depend on the presence of missing values to provide better estimates (e.g.calculate_VSH)Add new features (using methods fromfeature_engineering.py) or usingprocess_wellsfrompreprocessors.pyif the features should be well specific.Fill missing values if any still exist or were created during step 4. (usingfillna_with_fillers)Scale whichever features you want (usingscale_curvesfrompreprocessors.py). In some use cases this step could also come before step 5.Encode the GROUP & FORMATION column if you want to use it for training. (usingencode_columnsfrompreprocessors.py)Select or drop the specific features you want to keep for model training. (usingselect_columnsordrop_columnsfrompreprocessors.py)NOTE:The dataset classdropsall input columns that are not explicitly named in your settings.yaml or settings dictionary passed to the Dataset class at instantiation. This is to ensure that the data is not polluted with features that are not used. 
Therefore, if you have features that are being loaded into the Dataset class but are not being preprocessed, these need to be explicitly defined in your settings.yaml or settings dictionary under the keyword argument keep_columns.

API Documentation

Full API documentation of the package can be found under the docs folder once you have run the make html command.

For developers

To make the API documentation, from the root directory of the project run (assuming you have installed all development dependencies):

```
cd docs/
make html
```

To install mlpet in editable mode for use in another project, there are two possible solutions depending on the tools being used:

- If the other package uses poetry, please refer to this guide
- If you are not using poetry (using conda, pyenv or something else), just revert to using pip install -e . from within the root directory (Note: you need to have pip version >= 21.3).

License

akerbp.mlpet Copyright 2021 AkerBP ASA

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
akerbp.models
AkerBP.modelsMachine Learning Models for Petrophysics.Classification ModelsWrapper for XGBoost classifierHierarchical and nested models for lithology (outdated)Regression ModelsWrapper for XGBoost regressorRule-based models - several methods for badlog detection, including:Crossplots outlier detection (supported: OCSVM, Elliptic envelope and Isolation Forest)Logtrend, outliers, DENC and washout (based on the crossplots above)FlatlineResistivitiesCasingUMAP 3D segmentationcrosscorrelationHow to useExample of how to use the badlog model class.import akerbp.models.rule_based_models as models # instantiate a badlog model object model = models.BadlogModel() # define which methods to run for badlog detection and run prediction # on data from one well methods = ['casing', 'flatline', 'dencorr', 'logtrend'] model_predictions = model.predict( df_well, methods=methods, settings=None, mappings=None, folder_path=None )Example of how to use the regression model class (wrapper of XGBoost).import akerbp.models.regression_models as models # instantiate an XGBoost regression model object with parameters as model_settings reg_model = models.XGBoostRegressionModel( settings=model_settings, model_path=folder_path ) results = reg_model.predict(df_well) reg_model.save() # it saves the model to specified folder pathThis library is closely related and advised to be used together withakerbp.mlpet, also developed by AkerBP.Rule-based modelsThe dataframe returned from running predictions on data from one well will contain new columns named in the following format "TYPE_METHOD_VAR", where:TYPE: either "flag" or "agg". Flag can be 0 or 1 for regular or badlog samples respectively. Agg is the aggregation type, or score. It indicates how anomalous is the sample (used as a way for the user to set thresholds per method).METHOD: method for the column flag. It should be as in the methods given to the predictions. An exception is the crossplots method, that will instead have [vpden, vpvs, aivpvs] as output method column names.VAR: variable or curve that the column flags. It should be one of the following: den, ac, acs, rmed, rdep, rmic, calib_bs (one column only).LicenseAkerBP.models Copyright 2021 AkerBP ASALicensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
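As a small follow-up to the rule-based example above, a sketch of filtering the returned dataframe on one of the generated flag columns; "flag_flatline_den" is a hypothetical name that simply follows the TYPE_METHOD_VAR convention described above, and `df_well` is the same single-well dataframe assumed by the example:

```python
# Sketch: run the badlog model on one well and inspect flagged samples.
# "flag_flatline_den" is a hypothetical column name following the
# TYPE_METHOD_VAR convention (TYPE="flag", METHOD="flatline", VAR="den").
# `df_well` is assumed to hold the curves for a single well.
import akerbp.models.rule_based_models as models

model = models.BadlogModel()
predictions = model.predict(
    df_well, methods=["flatline"], settings=None, mappings=None, folder_path=None
)

flagged = predictions[predictions["flag_flatline_den"] == 1]
print(f"{len(flagged)} samples flagged as badlog by the flatline method")
```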
akernel
akernelA Python Jupyter kernel, with different flavors:concurrent cell execution,reactive programming,cell execution caching,multi-kernel emulation.Installpipinstallakernel# pip install akernel[react] # if you want to be able to use reactive programming# pip install akernel[cache] # if you want to be able to use cell execution cachingYou can parameterize akernel's execution mode:akernelinstall# default (chained cell execution mode)akernelinstallconcurrent# concurrent cell execution modeakernelinstallreact# reactive programming modeakernelinstallcache# cell execution caching modeakernelinstallmulti# multi-kernel emulation modeakernelinstallcache-multi-react-concurrent# you can combine several modesMotivationipykerneloffers the ability torun asynchronous code from the REPL. This means you canawaitat the top-level, outside of an async function. Unfortunately, this will still block the kernel.akernel changes this behavior by launching each cell in a task. By default, cell tasks are chained, which means that a cell will start executing after the previous one is done. You might wonder, is it not the same as ipykernel then? Well, not quite. In ipykernel, when an async cell is executing, it also blocks the processing ofComm messages, which prevents the kernel from interacting with e.g. JupyterLab widgets (seehereandthere). In akernel, it will not be the case.If you want to go all the way and have cells execute concurrently, you can also do so (see below).FeaturesAsynchronous executionFirst, set the concurrent execution mode in order to have async cells execute concurrently (you could also do that at install-time withakernel install concurrent):__unchain_execution__()# __chain_execution__()akernel allows for asynchronous code execution. What this means is that when used in a Jupyter notebook, you can run cells concurrently if the code is cooperative. For instance, you can run a cell with the following code:# cell 1foriinrange(10):print("cell 1:",i)awaitasyncio.sleep(1)Since this cell isasync(it has anawait), it will not block the execution of other cells. So you can run another cell concurrently, provided that this cell is also cooperative:# cell 2forjinrange(10):print("cell 2:",j)awaitasyncio.sleep(1)If cell 2 was blocking, cell 1 would pause until cell 2 was finished. You can see that by changingawait asyncio.sleep(1)intotime.sleep(1)in cell 2.You can make a cell wait for the previous one to be finished with:# cell 3await__task__()# wait for cell 2 to be finishedprint("cell 2 has run")Reactive programmingOne feature other notebooks offer is the ability to have variables react to other variables' changes.Observable notebooksare a good example of this, and it can give a whole new user experience. For instance, you can run cells out of order:# cell 1a=b+1# "b" is not defined yetaExecuting cell 1 won't result in an "undefined variable" error. Instead, theresultof the operation is undefined, and the output of cell 1 isNone. You can then continue with the definition ofb:# cell 2b=2# triggers the computation of "a" in cell 1Nowa, which depends onb, is automatically updated, and the output of cell 1 is3.You can of course define much more complex data flows, by defining variables on top of other ones.Cell execution cachingWith this mode, cell execution is cached so that the next time a cell is run, its outputs are retrieved from cache (if its inputs didn't change). Inputs and outputs are inferred from the cell code.Multi-kernel emulation modeThis mode emulates multiple kernels inside the same kernel. 
Kernel isolation is achieved by using the session ID of execution requests. You can thus connect multiple notebooks to the same kernel, and they won't share execution state. This is particularly useful if cells are async, because they won't block the kernel. The same kernel can thus be "shared" and used by potentially a lot of notebooks, greatly reducing resource usage. Limitations: it is still a work in progress. In particular, stdout/stderr redirection to the cell output is only supported through the print function, and there is no rich representation for now, only the standard __repr__ is supported. This means no matplotlib figures yet :-( But since ipywidgets work, why not use ipympl? :-)
akeru-cloud-access
Akeru Cloud AccessAbout AkeruAkeru the two faced lion was an egyptian god that protected gods and kings during his time in Egypt and will protect your access to the AWS cloud!There are two main functions of this package:Create IAM roles / users with policies attached for users to log in as or as service rolesFacilitate access to these IAM roles / users based on django user / group status.UsageCredentialsAkeru currently assumes that the credentials will be available via the environment through standard mechanisms offered byboto3.Credentials are used in 3 key actions within the package:Assume a target role to generate temporary credentials.Assume an account role used to create user keysCreate read template policy objects to create users/rolesSuch a system is not needed to support it's current setup, but will allow for expanding into a multi cloud environment where users and roles are created / assumed in accounts outside of the account this app runs in. This is not a feature that Akeru is optimized for and is not yet enabled.Policy TemplatesPolicies are mapped to users / roles on a 1-1 basis. Features like multiple policies or permission boundaries are not supported by Akeru. Policies are stored in an S3 bucket and can be pointed to by specifying POLICY_BUCKET and POLICY_PREFIX in your django settings file. There is no current support to modify the framework to allow for storing templates in other locations (ie local file system or as IAM policies).User and Role accessA policy template can be used create an 'AWSRole' object which specifies a number of parameters including but not limited to whether it's a user or a role, role trust policy, if it's an EC2 or lambda service role.Once you have created an 'AWSRole', you are now able to create an 'AccessRole' that provisions access to the underlying 'AWSRole'. This can be tied to a django user / group and users are then able to log in via the /access/ page.SettingsRequired SettingsACCOUNT_ID (The account ID that this application is operating in / for)POLICY_BUCKET (The bucket that IAM policies are stored in)POLICY_PREFIX (The prefix that policies are stored under)DEFAULT_TRUST_POLICY (The default trust policy that is added to roles)Optional SettingsREMOTE_ACCESS_ROLE (akeru-cloud-access)ASSUMED_ROLE_TIMEOUT (60 * 60)FEDERATED_USER_TIMEOUT (60 * 60)Recommended not to changeEC2_TRUST_POLICY (policy provided when checking 'ec2' on AWSRole)LAMBDA_TRUST_POLICY (policy provided when checking 'lambda' on AWSRole)Required SetupAkeru Application Policycreate this IAM role and assign credentials to Akeru{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "sts:AssumeRole", "s3:Get*" "s3:List*" ] "Resource": "*" } ] }####Akeru Remote Policy create this IAM role and allow the previous role to assume it{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "iam:*", ] "Resource": "*" } ] }Default Trust Policyspecify this in your settingsDEFAULT_TRUST_POLICY = """{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": arn:aws:iam::<ACCOUNT_ID>:role/<name_of_local_akeru_role> }, "Action": "sts:AssumeRole", "Condition": {} } ] } """
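As a convenience, the required and optional settings listed above can be collected in a Django settings.py fragment; the values below are placeholders, and the defaults are the ones quoted in the description:

# Placeholder Django settings for akeru-cloud-access (values are examples only)
ACCOUNT_ID = "123456789012"              # the AWS account this app operates in / for
POLICY_BUCKET = "my-akeru-policies"      # bucket where IAM policy templates live
POLICY_PREFIX = "policies/"              # prefix the templates are stored under
DEFAULT_TRUST_POLICY = """{ ...trust policy JSON, as shown above... }"""
# Optional settings with their documented defaults
REMOTE_ACCESS_ROLE = "akeru-cloud-access"
ASSUMED_ROLE_TIMEOUT = 60 * 60
FEDERATED_USER_TIMEOUT = 60 * 60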
akeru-distributions
No description available on PyPI.
akerun-sum
Entry/exit aggregation program. A program that tallies working days and working hours from the entry/exit records of Akerun, an NFC-card door key. Operation has been verified in the following environments: Windows 10 Home with Python 3.4.3, and Ubuntu 16.04.2 LTS with Python 3.5.2. Usage: akerun-sum.py -i inputfile -o outputfile -d yyyymm [-f n]. Arguments: -i input file name; -o output file name; -d aggregation period, specified in the form yyyymm; -f output type, default 0 (0 = output pattern 1, 1 = output pattern 2). Examples: akerun-sum.py -i input-euc.csv -o output-euc.csv -d 201610 and akerun-sum.py -i input-anotherformat.csv -o output-anotherformat.csv -d 201610 -f 1. The number of employees and records is unlimited, since both are managed as lists. Expected input file: a CSV file with the columns DATE, AKERUN, USER, LOCK, CLIENT. DATE: date data; the two formats yyyy/mm/dd hh:mm and yyyy-mm-dd hh:mm:ss are supported, and the records are assumed to be sorted in ascending order. AKERUN: not used by this program. USER: employee name data. LOCK: 入室 (entered the office), 退室 (left the office), 解錠 (unlocked; either an entry or an exit), 施錠 (locked; not used by this program). CLIENT: type of key (not used by this program). Output file: there are two output patterns, switchable via the -f argument, and the character encoding matches the input file. Output pattern 1 (intended to be opened in Excel) has the columns name, working days, working hours, then per-date entry time / exit time / hours, e.g. 山田太郎 | 2 | 13.5 | 8:47 | 10:12 | 1.25 | … and 山田次郎 | 2 | 20.5 | 8:47 | 20:12 | 11.25 | …. Output pattern 2 is a plain CSV file with one block per employee: name, aggregation period (yyyymm), working days, working hours, followed by rows of month/day, entry time, exit time and working hours, e.g. 山田太郎, period yyyymm, 2 days, 13.5 hours, then yyyy/mm/dd 8:47 10:12 1.25; and 山田次郎, period yyyymm, 2 days, 13.5 hours, then yyyy/mm/dd 8:47 20:12 11.25.
akeuroo-deck
DESCRIPTION: A model of a card pack (deck) with good features. GitHub: https://github.com/akeuroo/card-deck. Change Log: 2.0.1 (2021-12-29), First Release
akeva
No description available on PyPI.
akey
akeyDeveloper GuideSetup# create conda environment$mambaenvcreate-fenv.yml# update conda environment$mambaenvupdate-nakey--fileenv.ymlInstallpipinstall-e.# install from pypipipinstallakeynbdev# activate conda environment$condaactivateakey# make sure the akey package is installed in development mode$pipinstall-e.# make changes under nbs/ directory# ...# compile to have changes apply to the akey package$nbdev_preparePublishing# publish to pypi$nbdev_pypi# publish to conda$nbdev_conda--build_args'-c conda-forge'$nbdev_conda--mambabuild--build_args'-c conda-forge -c dsm-72'UsageInstallationInstall latest from the GitHubrepository:$pipinstallgit+https://github.com/dsm-72/akey.gitor fromconda$condainstall-cdsm-72akeyor frompypi$pipinstallakeyDocumentationDocumentation can be found hosted on GitHubrepositorypages. Additionally you can find package manager specific guidelines oncondaandpypirespectively.
akeyless
The purpose of this application is to provide access to Akeyless API. # noqa: E501
akeyless-api-gateway
Akeyless Api Gateway - the Python library for the AKEYLESS Vault API. RESTFull API for interacting with AKEYLESS Vault API. Minimum requirements: Python 3.4+, certifi>=2017.4.17, python-dateutil>=2.1, six>=1.10, urllib3>=1.23. Installation: $ pip install akeyless_api_gateway. Usage: from __future__ import print_function import time import akeyless_api_gateway from akeyless_api_gateway.rest import ApiException from pprint import pprint # Defining the host is optional and defaults to https://127.0.0.1:8080 # See configuration.py for a list of all supported configuration parameters. configuration = akeyless_api_gateway.Configuration() configuration.host = "https://127.0.0.1:8080" # create an instance of the API class api_instance = akeyless_api_gateway.DefaultApi(akeyless_api_gateway.ApiClient(configuration)) role_name = 'role_name_example' # str | The role name to associate am_name = 'am_name_example' # str | The auth method name to associate token = 'token_example' # str | Access token sub_claims = 'sub_claims_example' # str | key/val of sub claims, ex. group=admins,developers (optional) try: # Create an association between role and auth method api_response = api_instance.assoc_role_am(role_name, am_name, token, sub_claims=sub_claims) pprint(api_response) except ApiException as e: print("Exception when calling DefaultApi->assoc_role_am: %s\n" % e) License: This SDK is distributed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0); see LICENSE.txt for more information.
akeyless-auth-api
Auth manages access for services that need accesses management for their clients. Auth also issues temporary credentials for the services' clients and validates them for the services # noqa: E501
akeyless-cloud-id
Akeyless Python Cloud Id. Retrieves cloud identity. Currently, AWS, GCP and Azure clouds are supported. In order to get cloud identity you should import this package and call the relevant method per your chosen CSP: AWS: "generate" (if no AWS access id/key and token are provided they will be retrieved automatically from the default session); GCP: "generateGcp"; Azure: "generateAzure". Minimum requirements: Python 3.5+, urllib3 >= 1.15, requests. Optional dependencies: boto3, google-auth. Installation: pip install akeyless-python-cloud-id. To install with AWS: pip install akeyless-python-cloud-id[aws] (the additional package boto3 will be installed). To install with GCP: pip install akeyless-python-cloud-id[gcp] (the additional package google-auth will be installed). Usage: such code can be used, for example, in order to retrieve secrets from Akeyless as part of AWS CodePipeline: pip install git+https://github.com/akeylesslabs/akeyless-python-sdk import akeyless_api_gateway from akeyless_cloud_id import CloudId configuration = akeyless_api_gateway.Configuration() configuration.host="http://<api-gateway-host>:<port>" api_instance = akeyless_api_gateway.DefaultApi(akeyless_api_gateway.ApiClient(configuration)) cloud_id = CloudId() # for AWS use: id = cloud_id.generate() # For Azure use: id = cloud_id.generateAzure() # For GCP use: id = cloud_id.generateGcp() access_id = event['CodePipeline.job']['data']['actionConfiguration']['configuration']['UserParameters'] auth_response = api_instance.auth(access_id, access_type="aws_iam", cloud_id=id) token = auth_response.token postgresPassword = api_instance.get_secret_value("PostgresPassword", token) License: This SDK is distributed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0); see LICENSE.txt for more information.
akeyless-kfm-api
KFM manages and stores key fragments. The core operations of each KFM instance are as follows: Creating secure random encryption keys which will be used as an encryption key fragment. Managing data storage for key fragments. Performing a key fragment derivation function, which generates a derived fragment from the original key fragment. # noqa: E501
akeyless-proxy-api
RESTFull API for interacting with AKEYLESS Proxy Vault # noqa: E501
akeyless-uam-api
UAM manages client accounts and allows each client to define items, roles and auth methods. The core operations of UAM are as follows: Creating new accounts. For each account: Creating items. Adding new auth methods Adding new roles creating roles - auth methods association. Returning the key's metadata together with temporary access credentials in order to access the key fragments. # noqa: E501
akeyra
Agent forSakeyraWhat is it?Akeyra is the client-side of Sakeyra.It serves the purpose of creating/updating~/.ssh/authorized_keysIt also create users that don’t exist on your server but that are in the key-bundle.How to install ?Use Pippip install akeyraHow to use it?You have to fill the configuration file (see below) to connect to your SAKman Server.Then you just have to runakeyraas root.Make sure you have a Cron somewhere to update as frequently as possible.Optionsusage: akeyra [-h] [-H HOST] [-E ENV] [-K KEY] [-P PROXY] [-F FILE] [-D]You can provide all informations in CLI, use the basic configfile (/etc/akeyra.cfg), or an alternative one. If nothing is passed by CLI, then the basic configfile will be used.CLI > CLI-File > base fileoptional arguments: * -h, –help show this help message and exit * -H HOST, –host HOST Key Server * -E ENV, –env ENV Environment * -K KEY, –key KEY Secret key * -P PROXY, –proxy PROXY Proxy * -F FILE, –cnf FILE Alt Conffile * -D, –dry Dry runIf you need to use a proxy, you either set environment variable like http_proxy or use proxy in the configfile.Configuration file/etc/akeyra.cfg[agent] host = key = environment = proxy =Format between Akeyra and Sakeyra (decode){"environment":"rec","users":[{"user1":{"email":"[email protected]","name":"userkey1","pubkey":"laclepubliquedeuserkey1"}},{"user2":{"email":"[email protected]","name":"userkey2","pubkey":"laclepubliquedeuserkey2"}}],"pub_date":"2017-10-18T17:15:46.799689"}
ak-fancywallet
No description available on PyPI.
ak_file
ak-fileA base module to manipulate files and foldersView Demo·Documentation·Report Bug·Request FeatureTable of Contents1. About the Project1.1. Features2. Getting Started2.1. Dependencies2.2. Installation3. Usage4. Roadmap4. License5. Contact6. Acknowledgements1. About the ProjectA base project to simplify file manipulation1.1. FeaturesCan sanitize filename based on windows limitaionsCan search for and return files with specified filenames2. Getting Started2.1. DependenciesThe repo comes pre-compiled with all dependencies. Needs Python 3.11+2.2. InstallationInstall from pypipipinstallak_file3. Usageimportak_fileimportFilefile=File("<path/to/file>")file.exists()# Returns boolfile.properties()# Returns dictfile.encrypt(password='Some Fancy Password')# Returns bytes datafile._DEFAULT_SALT=b'SuperSecureSaltForEncryption'# Change default encryption saltfile.decrypt(password='Some Fancy Password')# Returns bytes data# To sanitize filenamefromak_fileimportsanitizersanitizer.sanitize("Dirty_windows_file_name.ext",strict=False)# Obfuscate/Unobfuscate filename with simple char shiftsanitizer.obfuscate('Filename to obfuscate')# Returns 'WzCvErDvqKFqFswLJtrKv'sanitizer.unobfuscate('WzCvErDvqKFqFswLJtrKv')# Returns 'Filename to obfuscate'# Searchfromak_fileimportSearchFoldersearch=SearchFolder(folder_path="Folder\path",recurse=True)search.extension(extension_str='py')# by extensionsearch.size(min_size=1024,max_size=2048)# Bytes; by sizesearch.regex(pattern=r'[A-Z]{3}\.py',case_sensitive=False)# By regexsearch.modification_date(start_date=datetime(2023,01,01),end_date=datetime(2023,10,10))## search by generic functionsearch.search(condition=lambdafile:'Alpha'infile.parent)4. RoadmapException HandlingFile OperationsAdding methods to write content to files.Methods for appending content to files.Support for working with directories: create, remove, list contents, move, etc.Checksum AlgorithmsMetadata ExtractionSupport for More Encryption OptionsContext ManagersFile Comparison MethodsLogging and DebuggingAsynchronous I/O SupportSerialization and DeserializationSupport Different File Types4. LicenseSee LICENSE for more information.5. ContactArun Kishore -@rpakishoreProject Link:https://github.com/rpakishore/ak-file6. AcknowledgementsAwesome README TemplateShields.io
akflask
Automatically generates a Flask project. The generated project includes logging management, the blueprint pattern, and more, making it easy to get started quickly.
akfraction
akfraction is a Python library for dealing with arithmetic operation.InstallationUse the package managerpipto install akfraction.pipinstallakfractionUsagefromakfractionimportFractiona,b,c,d=[int(e)foreininput().split()]fraction1=Fraction(a,b)fraction2=Fraction(c,d)print(f"fraction1 ={fraction1}")print(f"fraction2 ={fraction2}")print(f"fraction1+fraction2 ={fraction1+fraction2}")print(f"fraction1-fraction2 ={fraction1-fraction2}")print(f"fraction1*fraction2 ={fraction1*fraction2}")print(f"fraction1/fraction2 ={fraction1/fraction2}")ContributingPull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.Please make sure to update tests as appropriate.AuthorAdisak KarnbanjongLicenseMIT
ak-frame-extractor
# AKFruitData: AK_FRAEX - Azure Kinect Frame ExtractorPython-based GUI tool to extract frames from video files produced with Azure Kinect cameras. Visit the project site athttps://pypi.org/project/ak-frame-extractor/and check the source code athttps://github.com/GRAP-UdL-AT/ak_frame_extractor/ContentsPre-requisites.Functionalities.Install and run.Files and folder description.Development tools, environment, build executables.1. Pre-requisitesSDK Azure Kinectinstalled.pyk4a libraryinstalled. If the operating system is Windows, follow thissteps. You can find test basic examples with pyk4ahere.In Ubuntu 20.04, we provide a script to install the camera drivers following the instructions inazure_kinect_notes.Videos recorded with the Azure Kinect camera, optional video samples are available atAK_FRAEX - Azure Kinect Frame Extractor demo videos2. FunctionalitiesThe functionalities of the software are briefly described. Supplementary material can be found inUSER's Manual.[Dataset creation]This option creates a hierarchy of metadata. This hierarchy contains sub-folders that will be used to store the extracted data.[Data Extraction]The user can configure the parameters for extracting data frames from videos, such as: output folder, number of frames to extract. The extraction can be done from one video or by processing a whole folder in batch mode.[Data Migration]In this tab, we offer a tool for data migration in object labelling tasks. It is used to convert files from .CSV format (generated withPychet Labeller) toPASCALVOCformat.Data extracted and 3D cloud points can be retrieved from *"your dataset metadata folder"**.3. Install and run3.1 PIP quick install packageCreate your Python virtual environment.** For Linux systems ** python3 -m venv ./ak_frame_extractor_venv source ./ak_frame_extractor_venv/bin/activate pip install --upgrade pip pip install python -m ak-frame-extractor ** execute package ** python -m ak_frame_extractor ** For Windows 10 systems ** python -m venv ./ak_frame_extractor_venv .\ak_frame_extractor_venv\Scripts\activate.bat python.exe -m pip install --upgrade pip pip install ak-frame-extractor ** execute package ** python -m ak_frame_extractor3.2 Install and run virtual environments using scripts provided[Linux] Enter to the folder"ak_frame_extractor/"Create virtual environment(only first time)./creating_env_ak_frame_extractor.shRun script../ak_frame_extractor_start.sh[Windows] Enter to the folder "ak_frame_extractor/"Create virtual environment(only first time)TODO_HERERun script from CMD../ak_frame_extractor_start.bat4.3 Files and folder descriptionFolder description:FoldersDescriptiondocs/Documentationsrc/Source codewin_exe_conf/Specifications for building .exe files withPyinstaller..tools/Examples of code to use data migrated. 
We offer scripts in MATLAB, Python, R.data/Examples of output produced by AK_FRAEX, data extracted from recorded video...Python environment files:FilesDescriptionOSactivate_env.batActivate environments in WindowsWINclean_files.batClean files under CMD.WINcreating_env_ak_frame_extractor.shAutomatically creates Python environmentsLinuxak_sm_recorder_main.batExecuting main scriptWINak_frame_extractor_start.shExecuting main scriptLinux/ak_frame_extractor_main.pyPython main functionSupported by PythonPyinstaller files:FilesDescriptionOSbuild_win.batBuild .EXE for distributionWIN/src/ak_frame_extractor/main.pyMain function used in package compilationSupported by Python/ak_frame_extractor_main.pyPython main functionSupported by PythonPypi.org PIP packages files:FilesDescriptionOSbuild_pip.batBuild PIP package to distributionWIN/src/ak_frame_extractor/main.pyMain function used in package compilationSupported by Pythonsetup.cfgPackage configuration PIPSupported by Pythonpyproject.tomlPackage description PIPSupported by Python5. Development tools, environment, build executablesSome development tools are needed with this package, listed below:Pyinstaller.Opencv.Curses for Pythonpip install windows-curses.7zip.5.1 Notes for developersYou can use themain.py for execute as first time in src/ak_frame_extractor/_ _ main _ _.py Configure the path of the project, if you use Pycharm, put your folder root like this:5.2 Creating virtual environment Linuxpython3 -m venv ./venv source ./venv/bin/activate pip install --upgrade pip pip install -r requirements_linux.txt5.3 Creating virtual environment Windows%userprofile%"\AppData\Local\Programs\Python\Python38\python.exe" -m venv ./venv venv\Scripts\activate.bat pip install --upgrade pip pip install -r requirements_windows.txt** If there are some problems in Windows, followthis**pip install pyk4a --no-use-pep517 --global-option=build_ext --global-option="-IC:\Program Files\Azure Kinect SDK v1.4.1\sdk\include" --global-option="-LC:\Program Files\Azure Kinect SDK v1.4.1\sdk\windows-desktop\amd64\release\lib"5.4 Building PIP packageWe are working to offer Pypi support for this package. At this time this software can be built by scripts automatically.5.4.1 Build packagespy -m pip install --upgrade build build_pip.bat5.4.2 Download PIP packagepip install package.whl5.4.3 Run ak_frame_extractorpython -m ak_frame_extractor.py5.4 Building .EXE for Windows 10build_win.batAfter the execution of the script, a new folder will be generated inside the project"/dist". You can copy ** ak_frame_extracted_f/** or a compressed file"ak_frame_Extractor_f.zip"to distribute.5.6 Package distribution formatExplain about packages distribution.Package typePackageUrlDescriptionWindows.EXE.EXEExecutables are stored under build/Linux.deb.debNOT IMPLEMENTED YETPIP.whl.whlPIP packages are stored in build/AuthorshipThis project is contributed byGRAP-UdL-AT. 
Please contact authors to report [email protected] you find this code useful, please consider citing:@article{MIRANDA2022101231, title = {AKFruitData: A dual software application for Azure Kinect cameras to acquire and extract informative data in yield tests performed in fruit orchard environments}, journal = {SoftwareX}, volume = {20}, pages = {101231}, year = {2022}, issn = {2352-7110}, doi = {https://doi.org/10.1016/j.softx.2022.101231}, url = {https://www.sciencedirect.com/science/article/pii/S2352711022001492}, author = {Juan Carlos Miranda and Jordi Gené-Mola and Jaume Arnó and Eduard Gregorio}, keywords = {RGB-D camera, Data acquisition, Data extraction, Fruit yield trials, Precision fructiculture}, abstract = {The emergence of low-cost 3D sensors, and particularly RGB-D cameras, together with recent advances in artificial intelligence, is currently driving the development of in-field methods for fruit detection, size measurement and yield estimation. However, as the performance of these methods depends on the availability of quality fruit datasets, the development of ad-hoc software to use RGB-D cameras in agricultural environments is essential. The AKFruitData software introduced in this work aims to facilitate use of the Azure Kinect RGB-D camera for testing in field trials. This software presents a dual structure that addresses both the data acquisition and the data creation stages. The acquisition software (AK_ACQS) allows different sensors to be activated simultaneously in addition to the Azure Kinect. Then, the extraction software (AK_FRAEX) allows videos generated with the Azure Kinect camera to be processed to create the datasets, making available colour, depth, IR and point cloud metadata. AKFruitData has been used by the authors to acquire and extract data from apple fruit trees for subsequent fruit yield estimation. Moreover, this software can also be applied to many other areas in the framework of precision agriculture, thus making it a very useful tool for all researchers working in fruit growing.} }AcknowledgementsThis work is a result of the RTI2018-094222-B-I00 project(PAgFRUIT)granted by MCIN/AEI and by the European Regional Development Fund (ERDF). This work was also supported by the Secretaria d’Universitats i Recerca del Departament d’Empresa i Coneixement de la Generalitat de Catalunya under Grant 2017-SGR-646. The Secretariat of Universities and Research of the Department of Business and Knowledge of theGeneralitat de Catalunyaand Fons Social Europeu (FSE) are also thanked for financing Juan Carlos Miranda’s pre-doctoral fellowship(2020 FI_B 00586). The work of Jordi Gené-Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU. The authors would also like to thank the Institut de Recerca i Tecnologia Agroalimentàries(IRTA)for allowing the use of their experimental fields, and in particular Dr. Luís Asín and Dr. Jaume Lordán who have contributed to the success of this work.
ak-gchartwrapper
################################################################################# GChartWrapper - v0.9# Copyright (C) 2009 Justin Quick <[email protected]>## This program is free software. See attached LICENSE.txt for more info################################################################################GChartWrapper - Google Chart API WrapperThe wrapper can render the URL of the Google chart based on your parameters.With the chart you can render an HTML img tag to insert into webpages on the fly,show it directly in a webbrowser, or save the chart PNG to disk.################################################################################Changelog:-- 0.9 --Switched to New BSD License-- 0.8 --Reverse functionality>>> G = GChart.fromurl('http://chart.apis.google.com/chart?ch...')<GChartWrapper.GChart instance at...>Chaining fixesRestuctured Axes functionsCentralized and added unittestsEnhanced unicode supportDemos pages w/ source code-- 0.7 --Full py3k complianceColor name lookup from the css names: http://www.w3schools.com/css/css_colornames.asp>>> G = Pie3D(range(1,5))>>> G.color('green')New charts Note,Text,Pin,BubbleUpdated Django templatetags to allow context inclusion and new chartsAdded some more templating examples-- 0.6 --The wrapper now supports chainingThe old way:>>> G = Pie3D(range(1,5))>>> G.label('A','B','C','D')>>> G.color('00dd00')>>> print GThe new way with chaining>>> print Pie3D(range(1,5)).label('A','B','C','D').color('00dd00')New chart PieC for concentric pie charts################################################################################Doc TOC:1.1 General1.2 Constructing1.3 Rendering and Viewing2.1 Django extension2.2 Static data2.3 Dynamic data3.1 Other Templating Langs4.1 Test framework5.1 API documentation1.1 GeneralCustomizable charts can be generated using the Google Chart API availableat http://code.google.com/apis/chart/. The GChart Wrapper allows Pythonic accessto the parameters of constructing the charts and displaying the URLs generated.1.2 Constructingclass GChart(Dict):"""Main chart classChart type must be valid for cht parameterDataset can be any python iterable and be multi dimensionalKwargs will be put into chart API params if valid"""def __init__(self, ctype=None, dataset=[], **kwargs):The chart takes any iterable python data type (now including numpy arrays)and does the encoding for you# Datasets>>> dataset = (1, 2, 3)# Also 2 dimensional>>> dataset = [[3,4], [5,6], [7,8]]Initialize the chart with a valid type (see API reference) and dataset# 3D Piechart>>> GChart('p3', dataset)<GChart p3 (1, 2, 3)># Encoding (simple/text/extended)>>> G = GChart('p3', dataset, encoding='text')# maxValue (for encoding values)>>> G = GChart('p3', dataset, maxValue=100)# Size>>> G = GChart('p3', dataset, size=(300,150))# OR directly pass in API parameters>>> G = GChart('p3', dataset, chtt='My Cool Chart', chl='A|B|C')1.3 Rendering and ViewingThe wrapper has many useful ways to take the URL of your chart and output itinto different formats like...# As the chart URL itself using __str__>>> str(G)'http://chart.apis.google.com/chart?...'# As an HTML <img> tag, kw arguments can be valid tag attributes>>> G.img(height=500,id="chart")'<img alt="" title="" src="http://chart.apis.google.com/chart?..." 
id="chart" height="500" >'# Save chart to a file as PNG image, returns filename>>> G.save('my-cool-chart')'my-cool-chart.png'# Now fetch the PngImageFile using the PIL module for manipulation>>> G.image()<PngImagePlugin.PngImageFile instance at 0xb795ee4c># Now that you have the image instance, the world is your oyster# Try saving image as JPEG,GIF,etc.>>> G.image().save('my-cool-chart.jpg','JPEG')# Show URL directly in default web browser>>> G.show()2.1 Django ExtensionNewer versions of the wrapper contain templatetags for generating charts inDjango templates. This allows for dynamic insertion of data for viewing on anyweb application. Install the module first using `python setup.py install` thenplace 'GChartWrapper.charts' in your INSTALLED_APPS and then you are ready to go.Just include the '{% load charts %}' tag in your templates before making charts.In the templating folder there is a folder called djangoproj which is an exampleDjango project to get you started.2.2 Static dataThen try out some static data in your templates{% chart Line GurMrabsClgubaolGvzCrgrefOrnhgvshyvforggregunahtyl %}{% title 'The Zen of Python' 00cc00 36 %}{% color 00cc00 %}{% endchart %}Or try a bubble{% bubble icon_text_big snack bb $2.99 ffbb00 black as img %}2.3 Dynamic dataThe module supports dynamic insertion of any variable within the context like so# View codedef example(request):return render_to_response('example.html',{'dataset':range(50)})# example.html template code{% chart Line dataset %}{% color 00cc00 %}{% endchart %}Look to example.html in the djangoproj for more detailed examples3.1 Other Templating LanguagesOther examples of using the chartwrapper in templating languagesCurrently under developmentCheetah - doneMako - doneJinja2 - working, gonna b roughGenshi?Airspeed?More to come...4.1 Test frameworkThe module also comes with a test framework with sample charts available inGChartWrapper/testing.py. The tests are executed through GChartWrapper/tests.pyUsage$ python tests.py [<mode>]Where mode is one of the following:unit - Runs unit test cases for all charts to see if checksums matchsave - Saves images of all charts in 'tests' folderdemo - Creates html demo pages (needs pygments)url - Prints urls of all charts [default]5.1 API DocumentationThe Epydoc API information is generated in HTML format and available in thedocs folder under index.html
akgoodreads
Goodreads Scraper. A python wrapper for the goodreads API. View Demo · Documentation · Report Bug · Request Feature. Table of Contents: 1. Getting Started; 1.1. Dependencies; 1.2. Installation; 2. Usage; 2.1. Initialize the client; 2.2. Search for books; 2.3. Search for authors; 3. License; 4. Contact; 5. Acknowledgements. 1. Getting Started: You will need a Goodreads API key for this to work. Unfortunately, Goodreads is not giving out new keys at this time, so unless you signed up a while ago, this won't work for you. On launch of the script, you will be prompted for the API key. If you are using Windows, you will be prompted to save this key locally for ease (using the keypass library). 1.1. Dependencies: All the dependencies should automatically be installed when installing the script. This project heavily relies on the requests library to make the API calls. 1.2. Installation: Install with pip: pip install akgoodreads. 2. Usage. 2.1. Initialize the client: import akgoodreads; client = goodreads.Goodreads("<your email>"). 2.2. Search for books: with title, client.book("Ender's Game", limit=5), or with goodreads ID, client._book_from_id(50). 2.3. Search for authors: with name, client.author("Rowling"), or with goodreads ID, client._author_from_id(7995). 3. License: See LICENSE.txt. 4. Contact: Arun Kishore - @rpakishore, Github Link: https://github.com/rpakishore/. 5. Acknowledgements: Awesome README Template, Banner Maker, Shields.io, Carbon
ak-gpapi
No description available on PyPI.
akhaleel338package
No description available on PyPI.
akhdefo-functions
AkhdefoClick on the Logo to Navigate to the Main PageComputer Vision for Slope Stability: Land Deformation MonitoringBackground of Akh-DefoAkh-Defois derived from two distinct words:'Akh' in the Kurdish language, representing land, earth, or soil (originating from the Kurdish Badini dialect).'Defo', a shorthand for the English term 'deformation'.Recommended CitationMuhammad M, Williams-Jones G, Stead D, Tortini R, Falorni G, and Donati D (2022) Applications of Image-Based Computer Vision for Remote Surveillance of Slope Instability.Front. Earth Sci.10:909078. doi:10.3389/feart.2022.909078UpdatesDeprecated:Akhdefo version one.Current recommendation:Use Akhdefo version 2.New Feature:Cloud-based real-time processing capabilities.Expansion:Over 20 modules for end-to-end Python-based GIS and Image Processing, and Customized Figure generation.Integration:Access, filter, and download capabilities for Planet Labs data using the Planet Lab API.Enhancement:Orthorectification feature for satellite images.Installation of Akhdefo SoftwareFollow these steps to install the Akhdefo software:Create a new Python Anaconda environment using the command:condacreate--nameakhdefo_envCreate Anaconda environment and install the following libraries with Anacondadependencies:-python=3.8# Assuming Python 3.8, can be changed as needed-cmocean-pip-opencv-earthpy-flask-geopandas-glob2-gstools-hyp3_sdk-ipywidgets-json5-matplotlib-numpy-gdal-pandas-recommonmark-sphinx-nbsphinx-sphinx-book-theme-myst-parser-plotly-pykrige-rasterio-requests-rioxarray-scipy-seaborn-shapely-scikit-image# skimage-scikit-learn# sklearn-statsmodels-tensorflow-tqdm-xmltodictDownload the Python package requirement file:pip_req.txt.Install required Python packages with the command:pipinstall-rpip_req.txtInstall Akhdefo using the following command:pipinstallakhdefo-functions
akhellosetup
Test Package for PyPI Learning
akhenaten-py
Akhenaten-pyClient to use the plotly hosting service at akhenaten.euThis library can be used to upload Plotly plots to the akhenaten plotly hosting service!To use this service first get an account by contacting frank (at) amunanalytics.euIf you would like to use a GUI, simply use your client Id and key to login tohttps://console.akhenaten.euInstallationpython3 -m pip install akhenaten-pyUsageimportos# authentication can be done with environment variables or directly# this example shows both, this is just to show the possibilities!os.environ['AKHENATEN_ID']='<your client ID>'os.environ['AKHENATEN_KEY']='<your client key'fromakhenatenimportAkhenatenClient,MetadataClassclient_hoster=AkhenatenClient(# not needed when using environment variables!akhenaten_id='<your client ID>',akhenaten_key='<your client key>',bucket_name='<bucketname>'# only applicable if you are using custom access key, otherwise deduced from client id)# get all current uploaded figsprint(client_hoster.list_figs())# create some figfig=get_some_plotly_fig()# upload it and display the urls# if no slug is specified then a random uuid4 will be generatedresult=client_hoster.upload_fig(fig,slug='<optional slug>',meta_obj=MetadataClass(title='some plot title',author='some author'))print(result['json_url'])# the url to use in your own embeddingprint(result['fig_url'])# direct html access# to get back a fig to plotlyfig,meta_obj=client_hoster.download_fig('<slug>')Alternative usageThis service is backed by minio which is fully AWS S3 compatible. Thus if you would like more extensive features you can use theboto3package.The hosting url will then behttps://s3.akheaten.eu/BUCKET_NAME/ITEM_NAME.json
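For the boto3 route mentioned above, a hedged sketch might look like the following; the endpoint URL is taken from the hosting URL quoted in the README, and reusing the client id/key as S3 credentials is an assumption, so check your account details before relying on this:

# Assumption-laden sketch: talk to the S3-compatible backend with boto3
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.akheaten.eu",      # as quoted in the README above
    aws_access_key_id="<your client ID>",       # assumed to double as S3 credentials
    aws_secret_access_key="<your client key>",
)
# List the uploaded figure objects in your bucket
for obj in s3.list_objects_v2(Bucket="<bucketname>").get("Contents", []):
    print(obj["Key"])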
akhilesh
UNKNOWN
akhilnester
No description available on PyPI.
akhmadulin
No description available on PyPI.
aki
No description available on PyPI.
akiban-automation
No description available on PyPI.
akid
akid is a python package written for doing research in Neural Networks (NN). It also aims to be production ready by taking care of concurrency and communication in distributed computing (relying on the utilities provided by PyTorch and Tensorflow). It could be seen as a user-friendly front-end to torch or tensorflow, like Keras. It grows out of the motivation to reuse my old code and, in the long run, to explore alternative frameworks for building NNs. It supports two backends, i.e., Tensorflow and PyTorch. If combined with GlusterFS, Docker and Kubernetes, it is able to provide dynamic and elastic scheduling, auto fault recovery and scalability (not to brag about the capability of akid, since these are not features of akid itself but features made possible by open source (and libre) software, but to mention the possibility that they can be combined). See http://akid.readthedocs.io/en/latest/index.html for documentation. The documentation is dated and has not been updated to include new changes, e.g., the PyTorch backend, but the backbone design is the same and the main features are there. NOTE: the PyTorch backend support is now way ahead of the Tensorflow support ...
akida
Akida Execution EngineThe Akida Execution Engine is an interface to the Brainchip Akida Neural Processor. To allow the development of Akida models without an actual Akida hardware, it includes a software backend that simulates the Akida Neural Processor.
akida-models
Akida modelsThis package contains a zoo of TensorFlow/Keras defined models that can be quantized and that are compatible for Akida conversion.
akiFlagger
AKIFlaggerIntroductionAcute Kidney Injury (AKI) is a sudden onset of kidney failure and damage marked by an increase in the serum creatinine levels (amongst other biomarkers) of the patient. Kidney Disease Improving Global Outcomes (KDIGO) has a set of guidelines and standard definitions of AKI:Stage 1: 50% increase in creatinine in < 7 days or 0.3 increase in creatinine in < 48 hoursStage 2: 100% increase in (or doubling of) creatinine in < 48 hoursStage 3: 200% increase in (or tripling of) creatinine in < 48 hoursThis package contains a flagger to determine if a patient has developed AKI based on longitudinal data of serum creatinine measurements. More information about the specific data input format can be found in the documentation under theGetting Startedsection.InstallationYou can install the flagger withpip. Simply type the following into command line and the package should install properly.pipinstallakiFlaggerTo ensure that it is working properly, you can open a Python session and test it with.importakiFlaggerprint(akiFlagger.__version__)>>'1.0.0'Alternatively, you can download the source and wheel files to build manually fromhttps://pypi.org/project/akiFlagger/.Getting startedThere is awalk-through notebookavailable on Github to introduce the necessary components and parameters of the flagger. The notebook can be accessed via Google Colab notebooks. The notebook has also been adapted in thedocumentation.Change LogVersion 0.1.x- Function-based implementation of flagger.Version 0.2.x- Switched to class-based implementation (OOP approach).Version 0.3.x- Switched to single-column output for AKI column.Version 0.4.x- Removed encounter and admission as optional columns.
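The staging thresholds listed above can be illustrated with a few lines of plain Python; this is not the akiFlagger API (which operates on longitudinal creatinine dataframes), only the arithmetic behind the stage definitions:

# Illustration of the KDIGO-style thresholds quoted above (not the package API)
def aki_stage(baseline_cr, current_cr, hours_elapsed):
    ratio = current_cr / baseline_cr
    if ratio >= 3.0 and hours_elapsed <= 48:      # 200% increase (tripling) -> stage 3
        return 3
    if ratio >= 2.0 and hours_elapsed <= 48:      # 100% increase (doubling) -> stage 2
        return 2
    if (ratio >= 1.5 and hours_elapsed <= 7 * 24) or (
        current_cr - baseline_cr >= 0.3 and hours_elapsed <= 48
    ):                                            # 50% in < 7 days or 0.3 in < 48 h -> stage 1
        return 1
    return 0

print(aki_stage(1.0, 1.4, 36))  # 0.4 rise within 48 hours -> 1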
akikm
No description available on PyPI.
akilan
AKILAN :Image Classification EngineAkilan is a Python library for quick image classification usingTensorFlow. It is a wrapper aroundTensorFlow'stf.keraslibrary. It is designed to be easy to use and easy to understand. The library is currently in development and is not yet ready for use. The library usesnumpy,pandas,matplotlibandseabornfor data manipulation and visualization. The library is written in Python 3.6. It is tested on windows and linux.InstallationUse the package managerpipto install the library.pipinstallakilanUsageimportICE# Convert directory into dataframedf=ICE.dir_to_df(path)# Create train and test dataframestrain_df,test_df=ICE.split_df(df,test_size=0.2)
akilib
This library is a hardware library for parts you can buy in Akihabara, Japan (Akihabara Library => akilib). https://github.com/nonNoise/akilib. 1. Install. Edison: > opkg install python-pip > pip install pip --upgrade > pip install akilib. RaspberryPi: > sudo apt-get install python-pip > sudo pip install pip --upgrade > sudo pip install akilib. 1. Library. Edison: HDC1000 [ AKI_I2C_HDC1000 ], L3GD20 [ AKI_I2C_L3GD20 ], LPS25H [ AKI_I2C_LPS25H ], LIS3DH [ AKI_I2C_LIS3DH ], MCP23017 [ AKI_I2C_MCP23017 ], SO1602AWYB [ AKI_I2C_SO1602AWYB ], S11059 [ AKI_I2C_S11059 ], AQM0802A [ AKI_I2C_AQM0802A ], AQM1248A [ AKI_SPI_AQM1248A ], SG12864ASLB [ AKI_GPIO_SG12864ASLB ]. RaspberryPi A+, B+, 2B, 3B, Zero: AQM0802A [ AKI_I2C_AQM0802A ], AQM1602A [ AKI_I2C_AQM1602A ], HDC1000 [ AKI_I2C_HDC1000 ], LPS25H [ AKI_I2C_LPS25H ], ADT7410 [ AKI_I2C_ADT7410 ], S11059 [ AKI_I2C_S11059 ], SHT31 [ AKI_I2C_SHT31 ]. 2. License: The MIT License (MIT) Copyright (c) 2015 Yuta Kitagami ([email protected], @nonnoise)
akima
Akima is a Python library that implements Akima’s interpolation method described in:A new method of interpolation and smooth curve fitting based on local procedures. Hiroshi Akima, J. ACM, October 1970, 17(4), 589-602.A continuously differentiable sub-spline is built from piecewise cubic polynomials. It passes through the given data points and will appear smooth and natural.This module is no longer being actively developed. Consider usingscipy.interpolate.Akima1DInterpolatorinstead.Author:Christoph GohlkeLicense:BSD 3-ClauseVersion:2024.1.6QuickstartInstall the akima package and all dependencies from thePython Package Index:python -m pip install -U akimaSeeExamplesfor using the programming interface.Source code, examples, and support are available onGitHub.RequirementsThis revision was tested with the following requirements and dependencies (other versions may work):CPython3.9.13, 3.10.11, 3.11.7, 3.12.1NumPy1.26.3Revisions2024.1.6Add type hints.Remove support for Python 3.8 and 1.22 (NEP 29).2022.9.12Remove support for Python 3.7 (NEP 29).Update metadata.Examples>>> import numpy >>> from matplotlib import pyplot >>> from scipy.interpolate import Akima1DInterpolator >>> def example(): ... '''Plot interpolated Gaussian noise.''' ... x = numpy.sort(numpy.random.random(10) * 100) ... y = numpy.random.normal(0.0, 0.1, size=len(x)) ... x2 = numpy.arange(x[0], x[-1], 0.05) ... y2 = interpolate(x, y, x2) ... y3 = Akima1DInterpolator(x, y)(x2) ... pyplot.title('Akima interpolation of Gaussian noise') ... pyplot.plot(x2, y2, 'r-', label='akima') ... pyplot.plot(x2, y3, 'b:', label='scipy', linewidth=2.5) ... pyplot.plot(x, y, 'go', label='data') ... pyplot.legend() ... pyplot.show() >>> example()
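A minimal usage sketch based on the example above; the assumption here is that the interpolate() function shown in the doctest is importable directly from the akima module:

# Minimal sketch (import path assumed from the example above)
import numpy
from akima import interpolate

x = numpy.array([0.0, 1.0, 2.5, 4.0, 7.0])
y = numpy.array([0.0, 1.0, 0.5, 2.0, 1.5])
x_new = numpy.linspace(x[0], x[-1], 50)
y_new = interpolate(x, y, x_new)   # Akima-interpolated values at x_new
print(y_new[:5])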
akimous
AkimousAkimous is a Python IDE with unique features boosting developers' productivity.FeaturesMachine-learning-assisted/NLP-assisted context-aware auto completionBeautifully rendered function documentationLayered keyboard control (a more intuitive key binding than vim and Emacs)Real-time code formatterInteractive console (integration with IPython kernel)For more information and documentation, visit the official website.InstallationPrerequisitePython 3.7 or 3.8 (with pip)Git (for version control integration)C/C++ compiler (may be required by some dependencies during installation)A modern browserInstalling From PyPIThe recommended way for installing Akimous is through PyPI.pipinstall-UakimousStarting ApplicationStart it in the terminal. The browser should be automatically opened.akimousTo see available arguments, doakimous --help.Using Docker ImageIf you have difficulty installing, or you are running in a cloud environment, try the prebuilt docker image.dockerrun--mounttype=bind,source=$HOME,target=/home/user-p127.0.0.1:3179:3179-itred8012/akimousakimousCommandsStart the app by typing in the terminal (the browser will automatically open if available):akimousOptions--help: show help message and exit.--host HOST: specify the host for Akimous server to listen on. (default to 0.0.0.0 if inside docker, otherwise 127.0.0.1)--port PORT: The port number for Akimous server to listen on. (default=3179)--no-browser: Do not open the IDE in a browser after startup.--verbose: Print extra debug messages.DevelopmentMake sure you have recent version of the following build dependencies installed.Node (12+)Python (3.7+)PoetryYarnMakeZopfliParallelRun the following commands according to your need.make# build everythingmaketest# run testsmakelint# run lintersmakeinstall# (re)install the packageRunningmakewill install all Python and Javascript dependencies listed inpyproject.tomlandui/package.jsonautomatically.ContributingThis program is at pre-alpha stage. Please do report issues if you run into some problems. Contributions of any kind are welcome, including feature requests or pull requests (can be as small as correcting spelling errors) .LicenseBSD-3-ClauseLinksOfficial websitePyPIDocker Hub
akimzemar-api
akim_zemar_api. A library for MIREA.
akin
AkinPython library for detecting near duplicate texts in a corpus at scale using Locality Sensitive Hashing, as described in chapter three ofMining Massive Datasets. This algorithm identifies similar texts in a corpus efficiently by estimating their Jaccard similarity with sub-linear time complexity. This can be used to detect near duplicate texts at scale or locate different versions of a document.Example UsagefromakinimportMinHash,LSHcontent=['Jupiter is primarily composed of hydrogen with a quarter of its mass being helium','Jupiter moving out of the inner Solar System would have allowed the formation of inner planets.','A helium atom has about four times as much mass as a hydrogen atom, so the composition changes ''when described as the proportion of mass contributed by different atoms.','Jupiter is primarily composed of hydrogen and a quarter of its mass being helium','A helium atom has about four times as much mass as a hydrogen atom and the composition changes ''when described as a proportion of mass contributed by different atoms.','Theoretical models indicate that if Jupiter had much more mass than it does at present, it ''would shrink.','This process causes Jupiter to shrink by about 2 cm each year.','Jupiter is mostly composed of hydrogen with a quarter of its mass being helium','The Great Red Spot is large enough to accommodate Earth within its boundaries.']# Labels for each text in content.content_labels=[1,2,3,4,5,6,7,8,9]# Create MinHash object.minhash=MinHash(content,n_gram=9,permutations=100,hash_bits=64,seed=3)# Create LSH model.lsh=LSH(minhash,content_labels,no_of_bands=50)# Query to find near duplicates for text 1.print(lsh.query(1,min_jaccard=0.5))>>>[8,4]# Generate minhash signature and add new texts to LSH model.new_text=['Jupiter is primarily composed of hydrogen with a quarter of its mass being helium','Jupiter moving out of the inner Solar System would have allowed the formation of ''inner planets.']new_labels=['doc1','doc2']new_minhash=MinHash(new_text,n_gram=9,permutations=100,hash_bits=64,seed=3)lsh.update(new_minhash,new_labels)# Check contents of documents.print(lsh.contains())>>>[1,2,3,4,5,6,7,8,9,'doc1','doc2']# Remove text and label from model.lsh.remove(5)print(lsh.contains())>>>[1,2,3,4,6,7,8,9,'doc1','doc2']# Return adjacency list for all similar texts.adjacency_list=lsh.adjacency_list(min_jaccard=0.55)print(adjacency_list)>>>{1:['doc1',4],2:['doc2'],3:[],4:[1,'doc1'],6:[],7:[],8:[],9:[],'doc1':[1,4],'doc2':[2]}API GuideMinHashCreates a MinHash object that contains matrix of Minhash Signatures for each text.MinHash ParametersMinHash(text,n_gram=9,n_gram_type='char',permutations=100,hash_bits=64,seed=None)text{list or ndarray}Iterable containing strings of text for each text in a corpus.n_gramint, optional, default: 9Size of each overlapping text shingle to break text into prior to hashing. Shingle size should be carefully selected dependent on average text length as too low a shingle size will yield false similarities, whereas too high a shingle size will fail to return similar documents.n_gram_typestr, optional, default: 'char'Type of n gram to use for shingles, must be 'char' to split text into character shingles or 'term' to split text into overlapping sequences of words.permutationsint, optional, default: 100Number of randomly sampled hash values to use for generating each texts minhash signature. 
Intuitively the larger the number of permutations, the more accurate the estimated Jaccard similarity between the texts but longer the algorithm will take to run.hash_bitsint, optional, default: 64Hash value size to be used to generate minhash signatures from shingles, must be 32, 64 or 128 bit. Hash value size should be chosen based on text length and a trade off between performance and accuracy. Lower hash values risk false hash collisions leading to false similarities between documents for larger corpora of texts.methodstr, optional, default: 'multi_hash'Method for random sampling via hashing, must be 'multi_hash' or 'k_smallest_values'.If multi_hash selected texts are hashed once per permutation and the minimum hash value selected each time to construct a signature.If k_smallest_values selected each text is hashed once and k smallest values selected for k permutations. This method is much faster than multi_hash but far less stable.seedint, optional, default: NoneSeed from which to generate random hash function, necessary for reproducibility or to allow updating of the LSH model with new minhash values later.MinHash Propertiesn_gram:int.n_gramReturns size of each overlapping text shingle used to create minhash signatures.n_gram_type:int.n_gram_typeReturns type of n-gram used for text shingling.permutations:int.permutationsReturns number of permutations used to create signatures.hash_bits:int.hash_bitsReturns hash value size used to create signatures.method:str.methodReturns hashing method used in minhash function.seed:int.seedReturns seed value used to generate random hashes in minhash function.signatures:numpy.array.signaturesReturns matrix of text signatures generated by minhash function.n = text row, m = selected permutations.LSHCreates an LSH model of text similarity that can be used to return similar texts based on estimated Jaccard similarity.LSH ParametersLSH(minhash=None,labels=None,no_of_bands=None)minhashoptional, default: NoneMinhash object containing minhash signatures returned by MinHash class.labels{list or ndarray}, optional, default: NoneList, array or Pandas series containing unique labels for each text in minhash object signature. This should be provided in the same order as texts passed to the MinHash class. Example labels include filepaths and database ids.no_of_bandsoptional, default: permutations // 2Number of bands to break minhash signature into before hashing into buckets. 
A smaller number of bands will result in a stricter algorithm, requiring a larger portion of each pair of signatures to match before two texts share a bucket, possibly leading to false negatives (missing some similar texts), whereas a higher number may lead to false similarities. LSH Methods. update: updates the model from a MinHash object containing signatures generated from new texts and their corresponding labels. .update(minhash, new_labels), where minhash is a MinHash object containing signatures of new texts (parameters must match any previous MinHash objects) and new_labels is a list, array or Pandas series containing text labels. query: takes a label and returns the labels of any similar texts. .query(label, min_jaccard=None, sensitivity=1), where label is the label of the text to return a list of similar texts for, min_jaccard is the Jaccard similarity threshold texts have to exceed to be returned as similar, and sensitivity is the number of buckets texts must share to be returned as similar. remove: removes a file label and minhash signature from the model. .remove(label), where label is the label of the text to remove from the LSH model. contains: returns the list of labels contained in the model. .contains(). adjacency_list: returns an adjacency list that can be used to create a text similarity graph. .adjacency_list(min_jaccard=None, sensitivity=1), where min_jaccard is the Jaccard similarity threshold texts have to exceed to be returned as similar and sensitivity is the number of buckets texts must share to be returned as similar.
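The adjacency list returned above is plain label-to-list-of-labels data, so turning it into an actual similarity graph is straightforward; networkx is not an akin dependency, just one convenient choice for the sketch below:

# Build a text similarity graph from lsh.adjacency_list() (networkx assumed installed)
import networkx as nx

adjacency = lsh.adjacency_list(min_jaccard=0.55)
graph = nx.Graph()
graph.add_nodes_from(adjacency)                      # every label becomes a node
for label, similar in adjacency.items():
    graph.add_edges_from((label, other) for other in similar)
print(graph.number_of_nodes(), graph.number_of_edges())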
akinaka
AkinakaThis is a general all-purpose tool for managing things in AWS that Terraform is not responsible for -- you can think of it as an extension to theawsCLI.At the moment it only does three things; blue/green deploys for plugging into Gitlab, AMI cleanups, and RDS copies to other accounts.AkinakaInstallationRequirements and PresumptionsA Note on Role AssumptionDeploysCleanupsAMIsEBS VolumesRDS SnapshotsRDSCopyDisaster RecoveryTransferContainerBillingContributingInstallationpip3 install akinakaRequirements and PresumptionsFormat of ASG names: "whatever-you-like*-blue/green*" — the part in bold is necessary, i.e. you must have two ASGs, one ending with "-blue" and one ending with "-green".The following permissions are necessary for the IAM role / user that will be running Akinaka:sts:AssumeRoleNOTE: Going forward, IAM policies will be listed separately for their respective subcommands (as is already the case forTransfer). For now however, the following single catch-all policy can be used, attach it to the IAM profile that Akinaka will be assuming:{ "Version": "2012-10-17", "Statement": [ { "Sid": "2018121701", "Effect": "Allow", "Action": [ "ec2:AuthorizeSecurityGroupIngress", "ec2:DescribeInstances", "ec2:CreateKeyPair", "ec2:CreateImage", "ec2:CopyImage", "ec2:DescribeSnapshots", "elasticloadbalancing:DescribeLoadBalancers", "ec2:DeleteVolume", "ec2:ModifySnapshotAttribute", "autoscaling:DescribeAutoScalingGroups", "ec2:DescribeVolumes", "ec2:DetachVolume", "ec2:DescribeLaunchTemplates", "ec2:CreateTags", "ec2:RegisterImage", "autoscaling:DetachLoadBalancerTargetGroups", "ec2:RunInstances", "ec2:StopInstances", "ec2:CreateVolume", "autoscaling:AttachLoadBalancerTargetGroups", "elasticloadbalancing:DescribeLoadBalancerAttributes", "ec2:GetPasswordData", "elasticloadbalancing:DescribeTargetGroupAttributes", "elasticloadbalancing:DescribeAccountLimits", "ec2:DescribeImageAttribute", "elasticloadbalancing:DescribeRules", "ec2:DescribeSubnets", "ec2:DeleteKeyPair", "ec2:AttachVolume", "autoscaling:DescribeAutoScalingInstances", "ec2:DeregisterImage", "ec2:DeleteSnapshot", "ec2:DescribeRegions", "ec2:ModifyImageAttribute", "elasticloadbalancing:DescribeListeners", "ec2:CreateSecurityGroup", "ec2:CreateSnapshot", "elasticloadbalancing:DescribeListenerCertificates", "ec2:ModifyInstanceAttribute", "elasticloadbalancing:DescribeSSLPolicies", "ec2:TerminateInstances", "elasticloadbalancing:DescribeTags", "ec2:DescribeTags", "ec2:DescribeLaunchTemplateVersions", "ec2:DescribeSecurityGroups", "ec2:DescribeImages", "ec2:DeleteSecurityGroup", "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:DescribeTargetGroups" ], "Resource": "*" }, { "Sid": "2018121702", "Effect": "Allow", "Action": [ "ssm:PutParameter", "ssm:GetParameter", "autoscaling:UpdateAutoScalingGroup", "ec2:ModifyLaunchTemplate", "ec2:CreateLaunchTemplateVersion", "autoscaling:AttachLoadBalancerTargetGroups" ], "Resource": [ "arn:aws:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/*", "arn:aws:ssm:eu-west-1:[YOUR_ACCOUNT]:parameter/deploying-status-*", "arn:aws:ec2:*:*:launch-template/*" ] } ] }A Note on Role AssumptionAkinaka uses IAM roles to gain access into multiple accounts. Most commands require you to specify a list of roles you wish to perform a task for, and that role must have thests:AssumeRolepermission. 
This is not only good security, it's helpful for ensuring you're doing things to the accounts you think you're doing things for ;)DeploysDone with theupdateparent command, and then theasgandtargetgroupsubcommands (update targetgroupis only needed for blue/green deploys).Example:# For standalone ASGs (not blue/green) akinaka update \ --region eu-west-1 \ --role-arn arn:aws:iam::123456789100:role/management_assumable \ asg \ --asg workers \ --ami ami-000000 # For blue/green ASGs akinaka update \ --region eu-west-1 \ --role-arn arn:aws:iam::123456789100:role/management_assumable \ asg \ --lb lb-asg-ext \ --ami ami-000000 # For blue/green ASGs with multiple Target Groups behind the same ALB akinaka update \ --region eu-west-1 \ --role-arn arn:aws:iam::123456789100:role/management_assumable \ asg \ --target-group application-1a \ --ami ami-000000For blue/green deploys, the next step is to check the health of your new ASG. For the purposes of Gitlab CI/CD pipelines, this will be printed out as the only output, so that it can be used in the next job.Once the new ASG is confirmed to be working as expected:akinaka update --region eu-west-1 --role-arn arn:aws:iam::123456789100:role/management_assumable asg --new blueThe value of--role-arnis used to assume a role in the target account with enough permissions to perform the actions of modifying ASGs and Target Groups. As such,akinakais able to do cross-account deploys. It will deliberately error if you do not supply an IAM Role ARN, in order to ensure you are deploying to the account you think you are.CleanupsCurrently AMI, EBS, and RDS snapshot cleanups are supported.Common option:--role-arnsis a space separated list of IAM ARNs that can be assumed by the token you are using to run this command. The AMIs for the running instances found in these accounts will not be deleted. Not to be confused with--role-arn, accepted for theupdateparent command, for deploys.AMIsCleans up AMIs and their snapshots based on a specified retention period, and deduced AMI usage (will not delete AMIs that are currently in use). You can optionally specify an AMI name pattern, and it will keep the latest version of all the AMIs it finds for it.Usage:akinaka cleanup \ --region eu-west-1 \ --role-arns "arn:aws:iam::198765432100:role/management_assumable arn:aws:iam::123456789100:role/management_assumable" \ ami \ --exceptional-amis cib-base-image-* --retention 7The above will delete all AMIs and their snapshots,except for those which:Are younger than 7 days ANDAre not in use by AWS accounts "123456789100" or "198765432100" ANDWHERE the AMI name matches the pattern "cib-base-image-*", there is more than one match AND it is the oldest one--exceptional-amisis a space seperated list of exact names or patterns for which to keep the latest version of an AMI for. For example, the pattern "cib-base-image-*" will match with normal globbing, and if there is more than one match, only the latest one will not be deleted (else there is no effect).--retentionis the retention period you want to exclude from deletion. 
For example;--retention 7will keep all AMIs found within 7 days, if they are not in the--exceptional-amislist.EBS VolumesDelete all EBS volumes that are not attached to an instance (stopped or not):akinaka cleanup \ --region eu-west-1 \ --role-arns "arn:aws:iam::198765432100:role/management_assumable arn:aws:iam::123456789100:role/management_assumable" \ ebsRDS SnapshotsThis will delete all snapshots tagged "akinaka-made": akinaka cleanup \ --not-dry-run \ --region eu-west-1 \ --role-arns "arn:aws:iam::876521782800:role/OlinDataAssumedAdministrator" \ rds \ --tags "akinaka-made"RDSPerform often necessary but complex tasks with RDS.CopyCopy encrypted RDS instances between accounts:akinaka copy --region eu-west-1 \ rds \ --source-role-arn arn:aws:iam::198765432100:role/management_assumable \ --target-role-arn arn:aws:iam::123456789100:role/management_assumable \ --snapshot-style running_instance \ --source-instance-name DB_FROM_ACCOUNT_198765432100 \ --target-instance-name DB_FROM_ACCOUNT_123456789100 \ --target-security-group SECURITY_GROUP_OF_TARGET_RDS \ --target-db-subnet SUBNET_OF_TARGET_RDS \--regionis optional because it will default to the environment variableAWS_DEFAULT_REGION.Disaster RecoveryAkinaka has limited functionality for backing up and restoring data for use in disaster recovery.TransferTransfer data from S3, RDS, and RDS Aurora into a backup account:akinaka dr \ --region eu-west-1 \ --source-role-arn arn:aws:iam::[LIVE_ACCOUNT_ID]:role/[ROLE_NAME] \ --destination-role-arn arn:aws:iam::[BACKUP_ACCOUNT_ID]:role/[ROLE_NAME] \ transfer \ --service s3Omitting--servicewill include all supported services.You can optionally specify the name of the instance to transfer with--namesin a comma separated list, e.g.--names 'database-1, database-2. This can be for either RDS instances, or S3 buckets, but not both at the same time. Future versions may remove--serviceand replace it with a subcommand instead, i.e.akinaka dr transfer rds, so that those service can have--namesto themselves.A further limitation is that only a single region can be handled at a time for S3 buckets. If you wish to backup all S3 buckets in an account, and they are in different regions, you will have to specify them per run, using the appropriate region each time. Future versions will work the bucket regions out automatically, and remove this limitation.Akinaka must be run from either an account or instance profile which can use sts:assume to assume both thesource-role-arnanddestination-role-arn. This is true even if you are running on the account thatdestination-role-arnis on. You will therefore need this policy attached to the user/role that's doing the assuming:{ "Version": "2012-10-17", "Statement": [ { "Sid": "akinakaassume", "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": [ "arn:aws:iam::[DESTINATION_ACCOUNT]:role/[ROLE_TO_ASSUME]", "arn:aws:iam::[SOURCE_ACCOUNT]:role/[ROLE_TO_ASSUME]" ] } ] }Note:A period of 4 hours (469822 seconds) is hardcoded into the sts:assume call made in the RDS snapshot class, since snapshot creation can take a very long time. 
This must therefore be the minimum value for the role'smax-session-duration.The following policy is needed for usage of this subcommand, attach it to the role you'll be assuming:{ "Version": "2012-10-17", "Statement": [ { "Sid": "KMSEncrypt", "Effect": "Allow", "Action": [ "kms:GetPublicKey", "kms:ImportKeyMaterial", "kms:Decrypt", "kms:UntagResource", "kms:PutKeyPolicy", "kms:GenerateDataKeyWithoutPlaintext", "kms:Verify", "kms:ListResourceTags", "kms:GenerateDataKeyPair", "kms:GetParametersForImport", "kms:TagResource", "kms:Encrypt", "kms:GetKeyRotationStatus", "kms:ReEncryptTo", "kms:DescribeKey", "kms:Sign", "kms:CreateGrant", "kms:ListKeyPolicies", "kms:UpdateKeyDescription", "kms:ListRetirableGrants", "kms:GetKeyPolicy", "kms:GenerateDataKeyPairWithoutPlaintext", "kms:ReEncryptFrom", "kms:RetireGrant", "kms:ListGrants", "kms:UpdateAlias", "kms:RevokeGrant", "kms:GenerateDataKey", "kms:CreateAlias" ], "Resource": [ "arn:aws:kms:*:*:alias/*", "arn:aws:kms:*:*:key/*" ] }, { "Sid": "KMSCreate", "Effect": "Allow", "Action": [ "kms:DescribeCustomKeyStores", "kms:ListKeys", "kms:GenerateRandom", "kms:UpdateCustomKeyStore", "kms:ListAliases", "kms:CreateKey", "kms:ConnectCustomKeyStore", "kms:CreateCustomKeyStore" ], "Resource": "*" } ] }The following further policies need to be attached to the assume roles to backup each service:RDS / RDS Aurora{ "Version": "2012-10-17", "Statement": [ { "Sid": "RDSBackup", "Effect": "Allow", "Action": [ "rds:DescribeDBClusterSnapshotAttributes", "rds:AddTagsToResource", "rds:RestoreDBClusterFromSnapshot", "rds:DescribeDBSnapshots", "rds:DescribeGlobalClusters", "rds:CopyDBSnapshot", "rds:CopyDBClusterSnapshot", "rds:DescribeDBSnapshotAttributes", "rds:ModifyDBSnapshot", "rds:ListTagsForResource", "rds:CreateDBSnapshot", "rds:DescribeDBClusterSnapshots", "rds:DescribeDBInstances", "rds:CreateDBClusterSnapshot", "rds:ModifyDBClusterSnapshotAttribute", "rds:ModifyDBSnapshotAttribute", "rds:DescribeDBClusters", "rds:DeleteDBSnapshot", "rds:DeleteDBClusterSnapshot" ], "Resource": "*" } ] }S3{ "Version": "2012-10-17", "Statement": [ { "Sid": "S3RW", "Effect": "Allow", "Action": [ "s3:ListBucketMultipartUploads", "s3:GetObjectRetention", "s3:GetObjectVersionTagging", "s3:ListBucketVersions", "s3:CreateBucket", "s3:ListBucket", "s3:GetBucketVersioning", "s3:GetBucketAcl", "s3:GetObjectAcl", "s3:GetObject", "s3:GetEncryptionConfiguration", "s3:ListAllMyBuckets", "s3:PutLifecycleConfiguration", "s3:GetObjectVersionAcl", "s3:GetObjectTagging", "s3:GetObjectVersionForReplication", "s3:HeadBucket", "s3:GetBucketLocation", "s3:PutBucketVersioning", "s3:GetObjectVersion", "s3:PutObject", "s3:PutObjectAcl", "s3:PutEncryptionConfiguration", "s3:PutBucketPolicy" ], "Resource": "*" } ] }ContainerLimited functionality for interactive with EKS and ECR. At the moment it's just getting a docker login via an assumed role to another assumed role:akinaka container --region eu-west-1 --role-arn arn:aws:iam::0123456789:role/registry-rw get-ecr-login --registry 0123456789The above will assume the rolearn:aws:iam::0123456789:role/registry-rwin the account with the registry, and spit out adocker loginline for you to use — exactly likeaws ecr get-login, but working for assumed roles.BillingGet a view of your daily AWS estimated bill for the x number of days. 
Defaults to today's estimated bill.
akinaka reporting --region us-east-1 \
    --role-arn arn:aws:iam::1234567890:role/billing_assumerole \
    bill-estimates --from-days-ago 1
Example output:
Today's estimated bill
+------------+-----------+
| Date       | Total     |
|------------+-----------|
| 2019-03-14 | USD 13.93 |
+------------+-----------+
You can specify any integer value for the --from-days-ago flag. It's optional; the default is today (the current day). You can specify any region with the --region flag.
Contributing
Modules can be added easily by simply dropping them in and adding an entry into akinaka to include them, plus some click code in their __init__ (or elsewhere that's loaded, but this is the cleanest way).
For example, given a module called akinaka_moo and a single command and file called moo, add these two lines in the appropriate places of akinaka:
from akinaka_moo.commands import moo as moo_commands
cli.add_command(moo_commands)
and the following in the module's commands.py:
@click.group()
@click.option("--make-awesome", help="The way in which to make moo awesome")
def moo(make_awesome):
    from . import moo  # YOUR CODE USING THE MOO MODULE
Adding commands that need subcommands isn't too different, but you might want to take a look at the already present examples of update and cleanup.
akinator
InstallingTo install the regular library without asynchronous support, just run the following command:# Unix / macOSpython3-mpipinstall"akinator"# Windowspy-mpipinstall"akinator"Otherwise, to get asynchronous support, do:# Unix / macOSpython3-mpipinstall"akinator[async]"# Windowspy-mpipinstall"akinator[async]"To get async support plus faster performance (via theaiodnsandcchardetlibraries), do:# Unix / macOSpython3-mpipinstall"akinator[fast_async]"# Windowspy-mpipinstall"akinator[fast_async]"To install the development version, do the following:gitclonehttps://github.com/Infiniticity/akinator.pyRequirementsPython ≥ 3.8.0requestsaiohttp(Optional, for async)aiodnsandcchardet(Optional, for faster performance with async)Usuallypipwill handle these for you.LinksAkinatorDocumentation
akinator.py
Akinator-py
Python bindings for akinator-rs, a wrapper around the undocumented akinator API, made using pyo3. It is designed for easy implementation of an akinator game in code, providing a simple and easy-to-use API.
Installation
Prebuilt wheels are uploaded onto PyPI; if your platform is supported, you can install with:
$ py -m pip install akinator.py
You can also build from source yourself if you have rust installed.
Examples
Refer to the tests for full examples on usage.
Refer to the documentation here for more information.
akind
No description available on PyPI.
akindofmagic
This module uses ctypes to access the libmagic file type identification library. It makes use of the local magic database and supports both textual and MIME-type output.
akinoncli
Akinon Cloud Commerce CLIAkinon CLI is an application designed to manage projects and applications in Akinon Cloud Commerce through a command line interface.InstallationAfter installing python 3.8+, run the following command. Then the Akinon CLI would be ready to be used.$ pip install --user akinoncliUsageEvery cloud commerce user is registered to anAccount. For a fresh user anAccountwould be created automatically. The authorized users who are registered toAccountcan create users for the sameAccountusing the CLI or UI. (Registration is not currently available on the CLI)-hcommand argument can be used in order to know more about the command. (Command's required parameters, information about what it does, etc.)Example:akinoncli user -hCommandsAuthenticationIn order to use Akinon CLI, the user must authenticate first.akinoncli loginAuthentication is achieved by submitting e-mail and password.akinoncli logoutBy logging out the previous credentials can be deleted.Public KeysIn order to access theProjectApp's repositories, public keys are required.akinoncli publickey create {key}Creates public key.ParameterDescriptionRequiredkeythe text inside .ssh/id_rsa.pubYesExampleakinoncli publickey create "ssh-rsa AAAAB..."akinoncli publickey listLists the public keys.akinoncli publickey delete {ID}Deletes the public key.ParameterDescriptionRequiredIDPublic Key IDYesApplicationsUsers can create their own application and publish it so other users can set up for their project and use it.The application which you'd like to publish needs to be managed by GIT version control system. In order to have read/write access to application's repository, the application needed to be created in the Akinon Cloud Commerce and a public key must be created.For an application to be distributed and compiled by the Akinon Cloud Commerce, it needed to have a file namedakinon.jsonin its home directory.TODO: add the link to documentation of akinon.jsonakinoncli applicationtype listLists the application types.akinoncli application create {name} {slug} {application_type_id}Creates an application.ParameterDescriptionRequirednameApplication NameYesslugApplication Slug (must be unique)Yesapplication_type_idApplication Type ID (akinoncli applicationtype list)YesTo see the git address you should run thelistafter creation of the application. It can be seen asClone URLcolumn.akinoncli application listLists the applications.akinoncli application get {app_id}Prints the details of an application.In order to upload the source code of application run the following commands:$ git remote add akinon {CLONE_URL} $ git push akinon {branch_name}In order to deploy an application for a project, it needs to be built by Akinon Cloud Commerce. A stable version is required for such an action. One can create a tag usinggit tagcommands. 
After that, the tag needs to be pushed to the remote repository in the Akinon Cloud Commerce system.Example$ git tag 1.0 $ git push akinon --tagsThe building process can be started by CLI when the tag is pushed.akinoncli application build {app_id} {tag} {--note}Starts the building process for the given tag.ParameterDescriptionRequiredapp_idApplication IDYestagTagYes--noteNoteNoExample$ akinoncli application build 1 1.0 --note="test note for the build"akinoncli application versions {app_id}Lists the built versions of the application.ParameterDescriptionRequiredapp_idApplication IDYesThe status of version beingcompletedindicates that the version is ready for deployment.akinoncli application version-logs <app_id> <version_id>Lists application version logs.ParameterDescriptionRequiredidApplication IDYesversion_idVersion IDYesExample$ akinoncli application version-logs 27 518ProjectsThe projects are the ecosystem of applications working together and integrated. When a project is createdOmnitronapplication is automatically being added to project and starts running.akinoncli project listLists the projects.akinoncli project create {name} {slug}Creates project.ParameterDescriptionRequirednameProject NameYesslugProject Slug (Must be unique.)YesProject ApplicationsThe applications cannot be run without a project in the Akinon Cloud Commerce. They should be related to a project and that relation can be assured with the creation of aProjectApp.akinoncli projectapp add {project_id} {app_id}Adds the application to project by creatingProjectApp.ParameterDescriptionRequiredproject_idProject IDYesapp_idApp IDYesakinoncli projectapp list {project_id}Lists the applications of the relevant project.ParameterDescriptionRequiredproject_idProject IDYesEnvironment ParametersThe applications are able to run with different configurations on the various projects. The same application running with different default language on two different projects can be given as example. For this kind of requirements, Environment Parameters can be used. These parameters could be seen in theENV Variablescolumn when listing the applications.akinoncli projectapp add-env {project_id} {project_app_id} {ENV_KEY}={ENV_VALUE} {ANOTHER_ENV_KEY}={ANOTHER_ENV_VALUE}Adds the environment parameter on the application.ParameterDescriptionRequiredproject_idProject IDYesproject_app_idApplication IDYesENV_KEYThe key of relevant environment parameterYesENV_VALUEThe value of relevant environment parameterYes--deployRedeploy the current version to activate environment variable changes.NoThe same command can also be used for updating.Example$ akinoncli projectapp add-env 1 32 LANGUAGE_CODE=en-usIt's also possible to use complex (i.e. non-string) values by encoding them as JSON. The value must be quoted properly to function correctly.$ akinoncli projectapp add-env 1 32 MIDDLEWARE='["my.custom.MiddlewareClass", "django.middleware.security.SecurityMiddleware", "whitenoise.middleware.WhiteNoiseMiddleware", "django.contrib.sessions.middleware.SessionMiddleware"]' $ akinoncli projectapp add-env 1 32 THUMBNAIL_OPTIONS='{"product-list": {"width": 273, "height": 210}, "product-detail__slider_zoom": {"quality": 90}}'For larger or dynamic payloads you can useEOFoperator insh-based terminals. 
This also allows string interpolation without having to escape double quotes.img_quality=90opts=$(cat<<EOF{"product-list": {"width": 273,"height": 210,"quality": $img_quality},"product-detail__slider_zoom": {"quality": $img_quality}}EOF)akinoncliprojectappadd-env132THUMBNAIL_OPTIONS="$opts"This environment variable can then be deserialized usingdjango-environpackage, orjson.loads:fromenvironimportEnvenv=Env()DEFAULT_MIDDLEWARE=["django.middleware.security.SecurityMiddleware","whitenoise.middleware.WhiteNoiseMiddleware","django.middleware.common.CommonMiddleware",]MIDDLEWARE=env.json('MIDDLEWARE',default=DEFAULT_MIDDLEWARE)# omit `default` to throw an error if it's not setTHUMBNAIL_OPTIONS=env.json('THUMBNAIL_OPTIONS')# throws error if THUMBNAIL_OPTIONS is not set# print(MIDDLEWARE[0]) # prints "my.custom.MiddlewareClass"# print(list(THUMBNAIL_OPTIONS)) # prints ["product-list", "product-detail__slider_zoom"]Refer todjango-environdocumentationfor further information.akinoncli projectapp remove-env {project_id} {app_id} {ENV_KEY} {ANOTHER_ENV_KEY}Deletes the environment parameter from the application.ParameterDescriptionRequiredproject_idProject IDYesproject_app_idApplication IDYesENV_KEYThe key of relevant environment parameterYes--deployRedeploy the current version to activate environment variable changes.NoDeploying the ApplicationIn order to deploy an application it needed to be built firstly. (Those steps are explained in Applications section.)akinoncli projectapp deploy {project_id} {project_app_id} {tag}Deploys the relevant tag to the relevant project application. (That process might take some time.)ParameterDescriptionRequiredproject_idProject IDYesproject_app_idApplication IDYestagTag of the versionYesakinoncli projectapp deploy-multiple {app_id} {tag}Deploys multiple project applications with given tagParameterDescriptionRequiredapp_idApplication IDYestagTag of the versionYes--project-appsProject App IDsIf--deploy-allis not used Yes otherwise No--deploy-allDeploys all project apps with given tagIf--project-appsis not used Yes otherwise Noakinoncli projectapp deployments list {project_id} {app_id}Lists the deployments of the relevant application. The status of deployment can be seen in thestatuscolumn.ParameterDescriptionRequiredproject_idProject IDYesproject_app_idApplication IDYesakinoncli projectapp deployment-logs project_id project_app_id deployment_idLists deployment logs.ParameterDescriptionRequiredproject_idProject IDYesproject_app_idProject App IDYesdeployment_idDeployment IDYesExample$ akinoncli projectapp deployment-logs 1 1 1Application LogThe applications logs can be seen following commands. process type parameter is optional.akinoncli projectapp logs {project_id} {project_app_id}ParameterDescriptionRequiredproject_idProject IDYesproject_app_idApplication IDYes-pProcess type (If that parameter is passed the relevant process typed logs would be returned) (By default returns logs with any process types )NoExampleThe following command returns a hundred logs created in the last minute.akinoncli projectapp logs 1 1The following command returns a hundred logs created in the last minute.akinoncli projectapp logs 1 1 -p webExporting Application LogsThe application logs can be exported with following commands.ParameterDescriptionRequiredproject_idProject IDYesapp_idApplication IDYes-dFilters logs that were created on the specified dates. Dates must be separated by commas. Date format must be YYYY-MM-DD.No-sFilters logs that were created after the given specified date. 
Date format must be YYYY-MM-DD.No-eFilters logs that were created before the given specified date. Date format must be YYYY-MM-DD.No-pProcess type (If that parameter is passed, only logs with the relevant process type are returned) (By default returns logs with any process type)NoExampleThe following command exports all the logs of the given application.akinoncli projectapp export-logs 1 1The following command exports all the logs of the given application which were created on 2021-09-23 and 2021-09-24.akinoncli projectapp export-logs 1 1 -d 2021-09-23,2021-09-24The following command exports all the logs of the given application which were created with web and beat process types.akinoncli projectapp export-logs 1 1 -p web,beatThe following command exports all the logs of the given application which were created between 2021-09-23 and 2021-09-28.akinoncli projectapp export-logs 1 1 -s 2021-09-23 -e 2021-09-28Attaching certificate to Project ApplicationIn order to deploy an application with a certain domain and SSL certificate, the following command can be used. One certificate can only be attached to one Project Application.Attaches certificate to Project Application.akinoncli projectapp attach-certificate {project_id} {project_app_id} {certificate_id}ParameterDescriptionRequiredproject_idProject IDYesproject_app_idApplication IDYescertificate_idCertificate IDYesExample$ akinoncli projectapp attach-certificate 1 1 1Domain and CertificateWhen deploying an application to the Akinon Cloud Commerce system, if we want the application to run on a specific domain name with an SSL certificate, we can create a domain and a certificate and then attach the created certificate to the project application.DomainLists the domains.akinoncli domain listCreates domain.akinoncli domain create {hostname} {is_managed}ParameterDescriptionRequiredhostnameHostnameYesis_managedIndicates whether the domain is managed by Akinon Cloud. (Must be either true or false)YesExample$ akinoncli domain create akinoncloud.net trueCertificateLists the certificates of the relevant domain.akinoncli certificate list {domain_id}Creates a certificate.akinoncli certificate create {domain_id} {fqdn}ParameterDescriptionRequireddomain_idDomain IDYesfqdnFully Qualified Domain NameYesExample$ akinoncli certificate create 1 test.akinoncloud.netAddonsAddons are third-party technologies such as Redis, Sentry, PostgreSQL etc.Lists the addons.akinoncli addon list {project_id} {project_app_id}ParameterDescriptionRequiredproject_idProject IDYesproject_app_idProject Application IDYesExample$ akinoncli addon list 1 1Kube Metric MonitorCPU and memory metrics of the applications running in the relevant Kubernetes cluster and namespace can be seen with the following command.Lists the metrics.akinoncli metrics list {cluster} {namespace}ParameterDescriptionRequiredclusterCluster NameYesnamespaceNamespaceYesExample$ akinoncli metrics list cluster1 namespace1
akiokio-django-geoposition
A model field that can hold a geoposition (latitude/longitude), and corresponding admin/form widget.PrerequisitesStarting with version 0.2, django-geoposition requires Django 1.4.10 or greater. If you need to support Django versions prior to 1.4.10, please use django-geoposition 0.1.5.InstallationUse your favorite Python packaging tool to installgeopositionfromPyPI, e.g.:pip install django-geopositionAdd"geoposition"to yourINSTALLED_APPSsetting:INSTALLED_APPS = ( # … "geoposition", )If you are still using Django <1.3, you are advised to installdjango-staticfilesfor static file serving.Usagedjango-geopositioncomes with a model field that makes it pretty easy to add a geoposition field to one of your models. To make use of it:In yourmyapp/models.py:from django.db import models from geoposition.fields import GeopositionField class PointOfInterest(models.Model): name = models.CharField(max_length=100) position = GeopositionField()This enables the following simple API:>>> from myapp.models import PointOfInterest >>> poi = PointOfInterest.objects.get(id=1) >>> poi.position Geoposition(52.522906,13.41156) >>> poi.position.latitude 52.522906 >>> poi.position.longitude 13.41156Form field and widgetAdminIf you use aGeopositionFieldin the admin it will automatically show aGoogle Mapswidget with a marker at the currently stored position. You can drag and drop the marker with the mouse and the corresponding latitude and longitude fields will be updated accordingly.It looks like this:Regular FormsUsing the map widget on a regular form outside of the admin requires just a little more work. In your template make sure thatjQueryis includedthe static files (JS, CSS) of the map widget are included (just use{{ form.media }})Example:<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8/jquery.min.js"></script> <form method="POST" action="">{% csrf_token %} {{ form.media }} {{ form.as_p }} </form>SettingsYou can customize theMapOptionsandMarkerOptionsused to initialize the map and marker in JavaScript by definingGEOPOSITION_MAP_OPTIONSorGEOPOSITION_MARKER_OPTIONSin yoursettings.py.Example:GEOPOSITION_MAP_OPTIONS = { 'minZoom': 3, 'maxZoom': 15, } GEOPOSITION_MARKER_OPTIONS = { 'cursor': 'move' }Please note that you cannot use a value likenew google.maps.LatLng(52.5,13.4)for a setting likecenterorpositionbecause that would end up as a string in the JavaScript code and not be evaluated. Please useLat/Lng Object Literalsfor that purpose, e.g.{'lat': 52.5, 'lng': 13.4}.You can also customize the height of the displayed map widget by settingGEOPOSITION_MAP_WIDGET_HEIGHTto an integer value (default is 480).LicenseMIT
ak-ipatool-py
No description available on PyPI.
akips
akips
This akips module provides a simple way for python scripts to interact with the AKiPS Network Monitoring Software API interface.
Installation
To install akips, simply use pip:
$ pip install akips
AKiPS Setup
AKiPS includes a way to extend the server through custom perl scripts. They publish a list from their Support - Site scripts page, along with install instructions.
This module can use additional routines included in the akips_setup directory of this repository, site_scripting.pl.
Usage Example
from akips import AKIPS

api = AKIPS('akips.example.com', username='api-ro', password='something')
devices = api.get_devices()
for name, fields in devices.items():
    print("Device: {} {}".format(name, fields))
Bugs/Requests
Please use the GitHub issue tracker to submit bugs or request features.
akira
library_publish_test
akira3d
No description available on PyPI.
akiraa
No description available on PyPI.
akiraPyYaml
Library for interacting with files and data in yaml format
By Marc Jose Rubio
Prerequisites
Python 3
Example of use
from akiraPyYaml import read_yaml_from_file

read_yaml_from_file('file.yaml')
Donate
ak-isign
No description available on PyPI.
akismet
A Python interface tothe Akismet spam-filtering service.Two API clients are available from this library:akismet.SyncClientis an Akismet API client which performs synchronous (blocking) HTTP requests to the Akismet web service.akismet.AsyncClientis an Akismet API client which performs asynchronous (async/await/non-blocking) HTTP requests to the Akismet web service.Aside from one being sync and the other async, the two clients expose identical APIs, and implement all methods ofthe Akismet web API, including the v1.2 key and API usage metrics.To use this library, you will need to obtain an Akismet API key and register a site for use with the Akismet web service; you can do this at <https://akismet.com>. Once you have a key and corresponding registered site URL to use with it, place them in the environment variablesPYTHON_AKISMET_API_KEYandPYTHON_AKISMET_BLOG_URL, and they will be automatically detected and used.You can then construct a client instance and call its methods. For example, to check a submitted forum post for spam:importakismetakismet_client=akismet.SyncClient.validated_client()ifakismet_client.comment_check(user_ip=submitter_ip,comment_content=submitted_content,comment_type="forum-post",comment_author=submitter_name):# This piece of content was classified as spam; handle it appropriately.Or using the asynchronous client:importakismetakismet_client=awaitakismet.AsyncClient.validated_client()ifawaitakismet_client.comment_check(user_ip=submitter_ip,comment_content=submitted_content,comment_type="forum-post",comment_author=submitter_name):# This piece of content was classified as spam; handle it appropriately.Note that in both cases the client instance is created via the alternate constructorvalidated_client(). This is recommended instead of using the default constructor (i.e., directly callingakismet.SyncClient()orakismet.AsyncClient()); thevalidated_client()constructor will perform automatic discovery of the environment-variable configuration and validate the configuration with the Akismet web service before returning the client, while directly constructing an instance will not (so if you do directly construct an instance, you must manually provide and validate its configuration).Seethe documentationfor full details.The original version of this library was written by Michael Foord.
akismet-async
akismet-async
An asynchronous Python 3 Akismet client library.
Installation
pip install akismet-async
API key verification
Get your Akismet API key here.
from akismet import Akismet, Comment

akismet_client = Akismet(
    api_key="YOUR_AKISMET_API_KEY",
    blog="http://your.blog/",
    user_agent="My App/1.0.0"
)

await akismet_client.verify_key()
Example usage
You can check a comment's spam score by creating a dictionary or a Comment() object for greater type safety:
from akismet import Akismet, Comment

akismet_client = Akismet(
    api_key="YOUR_AKISMET_API_KEY",
    blog="http://your.blog/",
    user_agent="My App/1.0.0"
)

comment = Comment(
    comment_content="This is the body of the comment",
    user_ip="127.0.0.1",
    user_agent="some-user-agent",
    referrer="unknown"
)

first_spam_status = await akismet_client.check(comment)
second_spam_status = await akismet_client.check({
    "user_ip": "127.0.0.2",
    "user_agent": "another-user-agent",
    "referrer": "unknown",
    "comment_content": "This is the body of another comment",
    "comment_author": 'John Doe',
    "is_test": True,
})
check() returns one of the following strings:
ham
probable_spam
definite_spam
unknown
Submit Ham
If you have determined that a reported comment is not spam, you can report the false positive to Akismet:
await akismet_client.submit_ham(comment)
Submit Spam
If a spam comment passes the Akismet check, report it to Akismet:
await akismet_client.submit_spam(comment)
akita
No description available on PyPI.
akita-django
Akita Django IntegrationThis package extendsdjango.test.Clientin order to instrument Django integration tests, capturing requests and responses to the service under test. You can drop inakita_django.test.Clienteverywhere you use Django'sClient, and Akita will use your integration tests to build a spec for your service.Why build specs? A spec shows your service's APIs. Using Akita to build specs from your integration tests makes it clear what APIs your code implements -- and you can diff specs, showing what impact a code change will have on your customers. For more info, seeCatching Breaking Changes Fasterin the Akita docs.See it in ActionTake a look at theAkibox Django Tutorial, which implements a toy Dropbox-like file server and tests it using the Akita Django Integration.MiddlewareThis package also provides a Django Middleware class that sends requests and responses to the Akita CLI, running in Daemon mode. SeeDjango on Herokuin the Akita docs for more information.
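A minimal sketch of what the drop-in looks like in practice, assuming akita_django.test.Client accepts the same arguments as django.test.Client (the /health/ endpoint and test case below are hypothetical, and any HAR-output configuration the client may take is omitted):

# Hypothetical Django integration test using the Akita drop-in client.
# Assumes akita_django.test.Client behaves like django.test.Client;
# the URL under test is made up for illustration.
from django.test import TestCase

from akita_django.test import Client  # swapped in for django.test.Client


class HealthCheckTests(TestCase):
    def test_health_endpoint(self):
        client = Client()
        response = client.get("/health/")
        self.assertEqual(response.status_code, 200)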
akita-fastapi
Akita FastAPI IntegrationThis package extendsfastapi.testclient.TestClientin order to instrument FastAPI integration tests, capturing requests and responses to the service under test. You can drop inakita_fastapi.testclient.HarClienteverywhere you use FastAPI'sTestClient, and Akita will use your integration tests to build a spec for your service.Why build specs? A spec shows your service's APIs. Using Akita to build specs from your integration tests makes it clear what APIs your code implements -- and you can diff specs, showing what impact a code change will have on your customers. For more info, seeCatching Breaking Changes Fasterin the Akita docs.See it in ActionTake a look at theAkibox FastAPI Tutorial, which implements a toy Dropbox-like file server and tests it using the Akita FastAPI Integration.
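As a rough sketch of the drop-in, assuming HarClient is constructed the same way as fastapi.testclient.TestClient (the toy app, the /ping route, and the omission of any HAR-output options are all assumptions for illustration):

# Hypothetical pytest-style test; HarClient is assumed to take the app
# just like fastapi.testclient.TestClient does. The /ping route is made up.
from fastapi import FastAPI

from akita_fastapi.testclient import HarClient

app = FastAPI()


@app.get("/ping")
def ping():
    return {"pong": True}


def test_ping():
    client = HarClient(app)
    response = client.get("/ping")
    assert response.status_code == 200
    assert response.json() == {"pong": True}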
akita-flask
Akita Flask Integration
This package extends flask.testing.FlaskClient in order to instrument Flask integration tests, capturing requests and responses to the service under test. You can drop in akita_flask.testing.HarClient everywhere you use Flask's FlaskClient, and Akita will use your integration tests to build a spec for your service.
Why build specs? A spec shows your service's APIs. Using Akita to build specs from your integration tests makes it clear what APIs your code implements -- and you can diff specs, showing what impact a code change will have on your customers. For more info, see Catching Breaking Changes Faster in the Akita docs.
See it in Action
Take a look at the Akibox Flask Tutorial, which implements a toy Dropbox-like file server and tests it using the Akita Flask Integration.
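One possible wiring, assuming HarClient can be installed the way custom Flask test clients usually are (via app.test_client_class); the exact constructor options and HAR-output settings may differ, so treat this as a sketch rather than the package's documented usage:

# Hypothetical Flask test using the Akita client in place of FlaskClient.
# Assumes HarClient can stand in for flask.testing.FlaskClient; the /ping
# route is made up for illustration.
from flask import Flask

from akita_flask.testing import HarClient

app = Flask(__name__)


@app.route("/ping")
def ping():
    return {"pong": True}


def test_ping():
    app.test_client_class = HarClient  # swap in the HAR-recording client
    with app.test_client() as client:
        response = client.get("/ping")
        assert response.status_code == 200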
akita-har
Akita HTTP Archive (HAR) UtilityThis package provides Pydantic models for HTTP Archive (HAR) files, along with a thread-safe file-writer for creating and concurrently writing HAR entries.See it in ActionTake a look at theAkita Django Integration, which extends thedjango.test.Clientto produce a HAR file containing requests and responses from Django integration tests.
akitchensyncio
akitchensyncio
Utility functions for asyncio which Stephan wished were in the stdlib but aren't.
Requires a Python version which supports the async syntax (Python 3.5 or higher).
Installation
To install akitchensyncio, simply:
$ pip install akitchensyncio
Function wrap_future(f)
Takes a callable f which returns an awaitable, and returns a callable which wraps the awaitable in asyncio.ensure_future.
Can also be used as a decorator, especially with coroutine functions:
@wrap_future
async def foo(arg1, arg2):
    ...
This is especially useful in combination with functools.lru_cache. Suppose you have a coroutine function which does an asynchronous query, and you decide you want to introduce some caching. Just add two decorators as follows:
@functools.lru_cache(100)
@wrap_future
async def do_some_query(arg1, arg2):
    ...
Function transform_future(f, awaitable)
Apply a function to the result of an awaitable, return a future which delivers the result.
As an example, suppose you have a way to query addresses given names. The API takes a bunch of names rather than a single one to reduce overhead. However, to your callers you would like to hand out futures representing results for individual names.
Essentially you want to turn a "future resulting in a dict" into a "dict containing futures". Kind of the opposite of asyncio.gather.
from operator import itemgetter

def query_addresses(names):
    fut = do_bunched_address_query(names)
    # fut is a single future which resolves
    # into a dict mapping names to addresses.
    return {name: transform_future(itemgetter(name), fut)
            for name in names}
Function iawait(awaitable)
"Interactive await" – Run the default event loop until the awaitable has completed. Mainly useful for interactive experimentation.
Then remove the "i" from iawait to get code which you can use in an async def function.
An alternative is to put this in your ~/.pythonrc.py:
def iawait(x):
    import asyncio
    loop = asyncio.get_event_loop()
    return loop.run_until_complete(x)
This will only import asyncio on first use of iawait, so it won't slow down your startup in general.
akithon
No description available on PyPI.
akitools
Installpip install akitoolsExample>>>fromakitoolsimportftime>>>ftime()'20191109'>>>fromakitoolsimportctime>>>ctime('2017-01-01')1483200000
akivymd
Awesome KivyMDAwesome KivyMD is a package containing customized and non-material widgets for KivyMD.InstallationUse the package managerpipto install package:# Latest versionpipinstallakivymd# Latest changespipinstallgit+https://github.com/quitegreensky/akivymd.gitIn this case you must add the following to yourbuildozer.spec# Latest version requirements = kivy,kivymd, akivymd # Latest changes requirements = kivy,kivymd, git+https://github.com/quitegreensky/akivymd.gitUsageYou can find usage examples in the demo app.ExamplesContributingLicenseMIT
akiwi
No description available on PyPI.
akkadian
AkkademiaAkkademia is a tool for automatically transliterating Unicode cuneiform glyphs. It is written in python script and uses HMM, MEMM and BiLSTM neural networks to determine appropriate sign-readings and segmentation.We trained these algorithms on the RINAP corpora (Royal Inscriptions of the Neo-Assyrian Period), which are available in JSON and XML/TEI formats thanks to the efforts of the Official Inscriptions of the Middle East in Antiquity (OIMEA) Munich Project of Karen Radner and Jamie Novotny, funded by the Alexander von Humboldt Foundation, availablehere. We achieve accuracy rates of 89.5% with HMM, 94% with MEMM, and 96.7% with BiLSTM on the trained corpora. Our model can also be used on texts from other periods and genres, with varying levels of success.Getting StartedAkkademia can be accessed in three different ways:WebsitePython packageGithub cloneThe website and python package are meant to be accessible to people without advanced programming knowledge.WebsiteGo to theBabylonian Engine website(under development)Go to the "Akkademia" tab and follow the instructions there for transliterating your signs.Python PackageOur python package "akkadian" will enable you to use Akkademia on your local machine.PrerequisitesYou will need a Python 3.7.x installed. Our package currently does not work with other versions of python. You can follow the installation instructionshereor go straight ahead topython's downloads pageand pick an appropriate version.Mac comes preinstalled with python 2.7, which may remain the default python version even after installing 3.7.x. To check, typepython --versioninto terminal. If the running version is python 2.7, the simplest short-term solution is to typepython3orpip3in Terminal throughout instead ofpythonandpipas in the instructions below.Package InstallationYou can install the package using the pip install function. If you do not have pip installed on your computer, or you are not sure whether it is installed or not, you can follow the instructionshereBefore installing the package akkadian, you will need to install the torch package. For Windows, copy the following into Command Prompt (CMD):pip install torch==1.0.0 torchvision==0.2.1 -f https://download.pytorch.org/whl/torch_stable.htmlFor Mac and Linux copy the following into Terminal:pip install torch torchvisionThen, type the following in Command Prompt (Windows), or Terminal (Mac and Linux):pip install akkadianyour installation should be executed. This will take several minutes.RunningOpen a python IDE (Integrated development environment) where a python code can be run. There are many possible IDEs, seerealpython's guideorwiki python's list. For beginners, we recommend using Jupyter Notebook: see downloading instructionshere, or see downloading instructions and beginners' tutorialhere.First, importakkadian.transliterateinto your coding environment:import akkadian.transliterate as akkThen, you can use HMM, MEMM, or BiLSTM to transliterate the signs. The functions are:akk.transliterate_hmm("Unicode_signs_here") akk.transliterate_memm("Unicode_signs_here") akk.transliterate_bilstm("Unicode_signs_here") akk.transliterate_bilstm_top3("Unicode_signs_here")akk.transliterate_bilstm_top3gives the top three BiLSTM options, whileakk.transliterate_bilstmgives only the top one.For an immediate output of the results, put theakk.transliterate()function inside theprint()function. 
Here are some examples with their output:print(akk.transliterate_hmm("𒃻𒅘𒁀𒄿𒈬𒊒𒅖𒁲𒈠𒀀𒋾")) ša₂ nak-ba-i-mu-ru iš-di-ma-a-tiprint(akk.transliterate_memm("𒃻𒅘𒁀𒄿𒈬𒊒𒅖𒁲𒈠𒀀𒋾")) ša₂ SILIM ba-i-mu-ru-iš-di-ma-a-tiprint(akk.transliterate_bilstm("𒃻𒅘𒁀𒄿𒈬𒊒𒅖𒁲𒈠𒀀𒋾")) ša₂ nak-ba-i-mu-ru iš-di-ma-a-tiprint(akk.transliterate_bilstm_top3("𒃻𒅘𒁀𒄿𒈬𒊒𒅖𒁲𒈠𒀀𒋾")) ('ša₂ nak-ba-i-mu-ru iš-di-ma-a-ti ', 'ša₂-di-ba i mu ru-iš di ma tukul-tu ', 'MUN kis BA še-MU-šub-šah-ṭi-nab-nu-ti-')This line was taken from the first line of the Epic of Gilgamesh:ša₂ naq-ba i-mu-ru iš-di ma-a-ti; "He who saw the Deep, the foundation of the country" (George, A.R. 2003.The Babylonian Gilgamesh Epic: Introduction, Critical Edition and Cuneiform Texts. 2 vols. Oxford: Oxford University Press). Although the algorithms were not trained on this text genre, they show promising, useful results.GithubThese instructions will get you a copy of the project up and running on your local machine for development and testing purposes.PrerequisitesYou will need a Python 3.7.x installed. Our package currently does not work with other versions of python. Go topython's downloads pageand pick an appropriate version.If you don't have git installed, install githere(Choose the appropriate operating system).If you don't have a Github user, create onehere.Installing the python dependenciesIn order to run the code, you will need the torch and allennlp libraries. If you have already installed the package akkadian, these were installed on your computer and you can skip to the next step.Install torch: For Windows, copy the following to Command Promptpip install torch===1.3.1 torchvision===0.4.2 -f https://download.pytorch.org/whl/torch_stable.htmlfor Mac and Linux, copy the following to Terminalpip install torch torchvisionInstall allennlp: copy the following to Command Prompt (with windows) or Terminal (with mac):pip install allennlp==0.8.5Cloning the projectCopy the following into Command Prompt (with windows) or Terminal (with mac) to clone the project:git clone https://github.com/gaigutherz/Akkademia.gitRunningNow you can develop the Akkademia repository and add your improvements!TrainingUse the file train.py in order to train the models using the datasets. There is a function for each model that trains, stores the pickle and tests its performance on a specific corpora.The functions are as follows:hmm_train_and_test(corpora) memm_train_and_test(corpora) biLSTM_train_and_test(corpora)TransliteratingUse the file transliterate.py in order to transliterate using the models. There is a function for each model that takes Unicode cuneiform signs as parameter and returns its transliteration.Example of usage:cuneiform_signs = "𒃻𒅘𒁀𒄿𒈬𒊒𒅖𒁲𒈠𒀀𒋾" print(transliterate(cuneiform_signs)) print(transliterate_bilstm(cuneiform_signs)) print(transliterate_bilstm_top3(cuneiform_signs)) print(transliterate_hmm(cuneiform_signs)) print(transliterate_memm(cuneiform_signs))DatasetsFor training the algorithms, we used the RINAP corpora (Royal Inscriptions of the Neo-Assyrian Period), which are available in JSON and XML/TEI formats thanks to the efforts of the Humboldt Foundation-funded Official Inscriptions of the Middle East in Antiquity (OIMEA) Munich Project led by Karen Radner and Jamie Novotny, availablehere. 
The current output in our website, package and code is based on training done on these corpora alone.For additional future training, we added the following corpora (in JSON file format) to the repository:RIAO-Royal Inscriptions of Assyria onlineRIBO-Royal Inscriptions of Babylonia onlineSAAO-State Archives of Assyria onlineSUHU-The Inscriptions of Suhu online ProjectThese corpora were all prepared by the Munich Open-access Cuneiform Corpus Initiative (MOCCI) and OIMEA project teams, both led by Karen Radner and Jamie Novotny, and are fully accessible for download in JSON or XML/TEI format in their respective project webpages (see left side-panel on project webpages and look for project-name downloads).We also included a separate dataset which includes all the corpora in XML/TEI format.Datasets deploymentAll the dataset are taken from their respective project webpages (see left side-panel on project webpages and look for project_name downloads) and are fully accessible from there.In our repository the datasets are located in the "raw_data" directory. They can also be downloaded from the Github repository using git clone or zip download.Project structureBiLSTM_input:Contains dictionaries used for transliteration by BiLSTM.NMT_input:Contains dictionaries used for natural machine translation.akkadian.egg-info:Information and settings for akkadian python package.akkadian:Sources and train's output. output: Train's output for HMM, MEMM and BiLSTM - mostly pickles. __init__.py: Init script for akkadian python package. Initializes global variables. bilstm.py: Class for BiLSTM train and prediction using AllenNLP implementation. build_data.py: Code for organizing the data in dictionaries. check_translation.py: Code for translation accuracy checking. combine_algorithms.py: Code for prediction using both HMM, MEMM and BiLSTM. data.py: Utils for accuracy checks and dictionaries interpretations. full_translation_build_data.py: Code for organizing the data for full translation task. get_texts_details.py: Util for getting more information about the text. hmm.py: Implementation of HMM for train and prediction. memm.py: Implementation of MEMM for train and prediction. parse_json: Json parsing used for data organizing. parse_xml.py: XML parsing used for data organizing. train.py: API for training all 3 algorithms and store the output. translation_tokenize.py: Code for tokenization of translation task. transliterate.py: API for transliterating using all 3 algorithms.build/lib/akkadian:Information and settings for akkadian python package.dist:Akkadian python package - wheel and tar.raw_data:Databases used for training the models: RINAP 1, 3-5 Additional databases for future training: RIAO RIBO SAAO SUHU Miscellanea: tei - the same databases (RINAP, RIAO, RIBO, SAAO, SUHU) in XML/TEI format. random - 4 texts used for testing texts outside of the training corpora. They were randomly selected from RIAO and RIBO.LicensingThis repository is made freely available under the Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license. 
This means you are free to share and adapt the code and datasets, under the conditions that you cite the project appropriately, note any changes you have made to the original code and datasets, and if you are redistributing the project or a part thereof, you must release it under the same license or a similar one.For more information about the license, seehere.Issues and BugsIf you are experiencing any issues with the website, the python package akkadian or the git repository, please contact us [email protected], and we would gladly assist you. We would also much appreciate feedback about using the code via the website or the python package, or about the repository itself, so please send us any comments or suggestions.AuthorsGai GutherzAriel ElazaryAvital RomachShai Gordin
akkaserverless
No description available on PyPI.
akkdict
akkdict is a simple command-line python program that takes an akkadian word as input and opens pdfs of several akkadian dictionaries to the right page (or somewhere thereabouts). It works well for some of the illicit pdfs of The Concise Dictionary of Akkadian (CDA) and Das Akkadisches Handwörterbuch (AHw) floating around, if you happen to have them. It also will get you within 100 pages in the Assyrian Dictionary of the Oriental Institute of the University of Chicago (CAD), which can be freely downloaded from http://oi.uchicago.edu/research/publications/assyrian-dictionary-oriental-institute-university-chicago-cad (naturally, akkdict also has a helper script for this; see below). If you would like to contribute to expanding and improving the CAD index or any of the other indices, we would be happy to pull from you! If you are a humanities major and using git is too hard, contact me!
Installation
Install akkdict from PyPI with pip3!
pip install akkdict
If you don't have pip and are unmotivated to get it on your platform, you can clone the source from github, enter the project directory and run python3 setup.py install, which will probably require root, and you should just not do it. Just get pip.
One thing to know is you must have a way to open a PDF to a specific page number from the command line. This is no problem on Linux. Just look at the man page for your favorite PDF reader. I hear talk that this is possible on OS X with AppleScript. There are also command line options for this in Acrobat Reader. I frankly don't even know if this package works at all on Windows, but I have my doubts. If someone who knows something about Windows wants to contribute, please do!
akkdict requires some configuration. It will let you know about it. The default config file has comments which explain things.
Usage
Open a word in the configured dictionaries:
akkdict šarru
Print the page number (and volume):
akkdict -p šarru
Download the CAD into a local folder:
akkdict --download-cad
ak-keydetector
Ak-KeyDetectorThis package contains setup files to installapi_key_detectorto be used as a pip package.Installationpipinstallak-keydetectorUsageExample imports post installation:fromapi_key_detectorimportstring_classifierfromapi_key_detector.classifier_singletonimportclassifierDetailed UsagePlease refer to theREADMEof the module for detailed usage.
akki-distributions
No description available on PyPI.
akkio
akkio-pythonConvenient access to theAkkioAPI from pythonInstallationpipinstallakkioUsageimportakkioakkio.api_key='YOUR-API-KEY-HERE'# get your API key at https://app.akk.io/team-settingsmodels=akkio.get_models()['models']formodelinmodels:print(model)datasets=akkio.get_datasets()['datasets']fordatasetindatasets:print(dataset)new_dataset=akkio.create_dataset('python api test')print(new_dataset)# create a toy datasetimportrandomrows=[]foriinrange(1000):rows.append({'x':random.random()})rows[-1]['y']=rows[-1]['x']>0.5akkio.add_rows_to_dataset(new_dataset['dataset_id'],rows)new_model=akkio.create_model(new_dataset['dataset_id'],['y'],[],{'duration':1})print(new_model)prediction=akkio.make_prediction(new_model['model_id'],[{'x':0.1},{'x':0.7}],explain=True)print(prediction)
akkits
AkiraKan ITS SDKAuthors: Jia Huei TAN, Tun Jian TAN, Pin Siang TANDocumentation LinkCHANGELOGCIItemStatusPython Lint and Formatting ChecksCI (Python)Documentation Build
akkpredict
akkpredict evx ML model
This is a simplified version of the evxpredictor package, used to generate buy and sell signals for crypto and conventional stock markets based on the excess volume indicator (EVX). EVX is a concept where the bid-ask spread is estimated inherently from current market prices. You can read more about EVX in the free whitepaper here.
Installation
Install akkpredict with
python3 -m pip install akkpredict
Usage
In your python script simply import the module and use it as follows:
from akkpredict.moonapi import Moon, signal
print(signal(20, 65, utcdatetime))
The method above takes an asset's open and close prices for the time interval you have chosen (OHLCV data), together with a UTC datetime. A classification output of zero instructs the user to sell, while an output of one means don't sell, or buy if the asset is not already present in the orders.
Warning
This is not financial advice. akkpredict is still in its preliminary stages. Use it at your own risk.
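A slightly fuller sketch of the call above, assuming signal() accepts an open price, a close price, and a UTC datetime as in the example (the argument types are an assumption based on that example, not a documented signature):

# Hypothetical usage; argument order and types follow the example above.
from datetime import datetime, timezone

from akkpredict.moonapi import signal

utcdatetime = datetime.now(timezone.utc)
decision = signal(20, 65, utcdatetime)  # 0 -> sell, 1 -> don't sell (or buy if not already held)
print(decision)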
akl-cameras
AKL Cameras
Tool that allows operating cameras owned by AKL.
Installation
Install required package
Manjaro:
$ sudo pamac install gphoto2
Fedora:
$ sudo dnf install gphoto2
Raspberry Pi:
$ sudo apt install gphoto2
Setup repository:
$ git clone [email protected]:academic-aviation-club/droniada-2024/akl-cameras.git
$ cd akl-cameras
$ poetry install --no-root
$ poetry shell
Setup repository on Raspberry Pi:
$ git clone [email protected]:academic-aviation-club/droniada-2024/akl-cameras.git
$ cd akl-cameras
Hardware preparation
Connect camera to the on-board computer,