package | package-description |
---|---|
advana | No description available on PyPI. |
advance-common | Protobuf messages in messages_pb2 are tracked by git, so when importing this repo as a submodule there is no need to compile protobuf. If you are reading this outside of PyPI, this project is published: pip install advance-common |
advanced-alchemy | Advanced AlchemyProjectStatusCI/CDQualityCommunityMetaCheck out theproject documentation📚 for more information.AboutA carefully crafted, thoroughly tested, optimized companion library for SQLAlchemy,
offering features such as:Sync and async repositories, featuring common CRUD and highly optimized bulk operationsIntegration with major web frameworks including Litestar, Starlette, FastAPI, Sanic.Custom-built alembic configuration and CLI with optional framework integrationUtility base classes with audit columns, primary keys and utility functionsOptimized JSON types including a custom JSON type for Oracle.Integrated support for UUID6 and UUID7 usinguuid-utils(install with theuuidextra)Pre-configured base classes with audit columns UUID or Big Integer primary keys and
asentinel column.Synchronous and asynchronous repositories featuring:Common CRUD operations for SQLAlchemy modelsBulk inserts, updates, upserts, and deletes with dialect-specific enhancementslambda_stmtwhen possible
for improved query building performanceIntegrated counts, pagination, sorting, filtering withLIKE,IN, and dates before and/or after.Tested support for multiple database backends including:SQLite viaaiosqliteorsqlitePostgres viaasyncpgorpsycopg3 (async or sync)MySQL viaasyncmyOracle viaoracledb (async or sync)(tested on 18c and 23c)Google Spanner viaspanner-sqlalchemyDuckDB viaduckdb_engineMicrosoft SQL Server viapyodbcoraioodbcCockroachDB viasqlalchemy-cockroachdb (async or sync)UsageInstallationpipinstalladvanced-alchemy[!IMPORTANT]Check outthe installation guidein our official documentation!RepositoriesAdvanced Alchemy includes a set of asynchronous and synchronous repository classes for easy CRUD operations on your SQLAlchemy models.fromadvanced_alchemy.baseimportUUIDBasefromadvanced_alchemy.filtersimportLimitOffsetfromadvanced_alchemy.repositoryimportSQLAlchemySyncRepositoryfromsqlalchemyimportcreate_enginefromsqlalchemy.ormimportMapped,sessionmakerclassUser(UUIDBase):# you can optionally override the generated table name by manually setting it.__tablename__="user_account"# type: ignore[assignment]email:Mapped[str]name:Mapped[str]classUserRepository(SQLAlchemySyncRepository[User]):"""User repository."""model_type=User# use any compatible sqlalchemy engine.engine=create_engine("duckdb:///:memory:")session_factory=sessionmaker(engine,expire_on_commit=False)# Initializes the database.withengine.begin()asconn:User.metadata.create_all(conn)withsession_factory()asdb_session:repo=UserRepository(session=db_session)# 1) Create multiple users with `add_many`bulk_users=[{"email":'[email protected]','name':'Cody'},{"email":'[email protected]','name':'Janek'},{"email":'[email protected]','name':'Peter'},{"email":'[email protected]','name':'Jacob'}]objs=repo.add_many([User(**raw_user)forraw_userinbulk_users])db_session.commit()print(f"Created{len(objs)}new objects.")# 2) Select paginated data and total row count. 
Pass additional filters as kwargscreated_objs,total_objs=repo.list_and_count(LimitOffset(limit=10,offset=0),name="Cody")print(f"Selected{len(created_objs)}records out of a total of{total_objs}.")# 3) Let's remove the batch of records selected.deleted_objs=repo.delete_many([new_obj.idfornew_objincreated_objs])print(f"Removed{len(deleted_objs)}records out of a total of{total_objs}.")# 4) Let's count the remaining rowsremaining_count=repo.count()print(f"Found{remaining_count}remaining records after delete.")For a full standalone example, see the samplehereServicesAdvanced Alchemy includes an additional service class to make working with a repository easier. This class is designed to accept data as a dictionary or SQLAlchemy model and it will handle the type conversions for you.Here's the same example from above but using a service to create the data:fromadvanced_alchemy.baseimportUUIDBasefromadvanced_alchemy.filtersimportLimitOffsetfromadvanced_alchemyimportSQLAlchemySyncRepository,SQLAlchemySyncRepositoryServicefromsqlalchemyimportcreate_enginefromsqlalchemy.ormimportMapped,sessionmakerclassUser(UUIDBase):# you can optionally override the generated table name by manually setting it.__tablename__="user_account"# type: ignore[assignment]email:Mapped[str]name:Mapped[str]classUserRepository(SQLAlchemySyncRepository[User]):"""User repository."""model_type=UserclassUserService(SQLAlchemySyncRepositoryService[User]):"""User repository."""repository_type=UserRepository# use any compatible sqlalchemy engine.engine=create_engine("duckdb:///:memory:")session_factory=sessionmaker(engine,expire_on_commit=False)# Initializes the database.withengine.begin()asconn:User.metadata.create_all(conn)withsession_factory()asdb_session:service=UserService(session=db_session)# 1) Create multiple users with `add_many`objs=service.create_many([{"email":'[email protected]','name':'Cody'},{"email":'[email protected]','name':'Janek'},{"email":'[email protected]','name':'Peter'},{"email":'[email 
protected]','name':'Jacob'}])print(objs)print(f"Created{len(objs)}new objects.")# 2) Select paginated data and total row count. Pass additional filters as kwargscreated_objs,total_objs=service.list_and_count(LimitOffset(limit=10,offset=0),name="Cody")print(f"Selected{len(created_objs)}records out of a total of{total_objs}.")# 3) Let's remove the batch of records selected.deleted_objs=service.delete_many([new_obj.idfornew_objincreated_objs])print(f"Removed{len(deleted_objs)}records out of a total of{total_objs}.")# 4) Let's count the remaining rowsremaining_count=service.count()print(f"Found{remaining_count}remaining records after delete.")Web Frameworks: Advanced Alchemy works with nearly all Python web frameworks. Several helpers for popular libraries are included, and additional PRs to support others are welcomed. Litestar: Advanced Alchemy is the official SQLAlchemy integration for Litestar. In addition to being installed with pip install advanced-alchemy, it can also be installed as a Litestar extra with pip install litestar[sqlalchemy].fromlitestarimportLitestarfromlitestar.plugins.sqlalchemyimportSQLAlchemyPlugin,SQLAlchemyAsyncConfig# alternately...# from advanced_alchemy.extensions.litestar.plugins import SQLAlchemyPlugin# from advanced_alchemy.extensions.litestar.plugins.init.config import SQLAlchemyAsyncConfigalchemy=SQLAlchemyPlugin(config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"),)app=Litestar(plugins=[alchemy])For a full Litestar example, check here. FastAPI:fromfastapiimportFastAPIfromadvanced_alchemy.configimportSQLAlchemyAsyncConfigfromadvanced_alchemy.extensions.starletteimportStarletteAdvancedAlchemyapp=FastAPI()alchemy=StarletteAdvancedAlchemy(config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"),app=app,)For a full CRUD example,
see here. Starlette:fromstarlette.applicationsimportStarlettefromadvanced_alchemy.configimportSQLAlchemyAsyncConfigfromadvanced_alchemy.extensions.starletteimportStarletteAdvancedAlchemyapp=Starlette()alchemy=StarletteAdvancedAlchemy(config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"),app=app,)Sanic:fromsanicimportSanicfromsanic_extimportExtendfromadvanced_alchemy.configimportSQLAlchemyAsyncConfigfromadvanced_alchemy.extensions.sanicimportSanicAdvancedAlchemyapp=Sanic("AlchemySanicApp")alchemy=SanicAdvancedAlchemy(sqlalchemy_config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"),)Extend.register(alchemy)Contributing: All Jolt projects will always be community-centered and available for contributions of any size. Before contributing, please review the contribution guide. If you have any questions, reach out to us on Discord, our org-wide GitHub discussions page,
or the project-specific GitHub discussions page. A Jolt Organization Project |
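The repository pattern described in this row can be made concrete with a purely illustrative, in-memory stand-in (this is not Advanced Alchemy's implementation and requires no SQLAlchemy; the method names add_many, list_and_count, delete_many and count only mirror the README's examples):

```python
import uuid

class InMemoryRepository:
    """Illustrative stand-in for a repository: stores dict rows keyed by id."""

    def __init__(self):
        self._rows = {}

    def add_many(self, objs):
        # Assign a primary key to each row and store it.
        for obj in objs:
            obj["id"] = uuid.uuid4()
            self._rows[obj["id"]] = obj
        return objs

    def list_and_count(self, limit=10, offset=0, **filters):
        # Apply equality filters, then paginate; return (page, total matches).
        matches = [r for r in self._rows.values()
                   if all(r.get(k) == v for k, v in filters.items())]
        return matches[offset:offset + limit], len(matches)

    def delete_many(self, ids):
        return [self._rows.pop(i) for i in ids if i in self._rows]

    def count(self):
        return len(self._rows)

repo = InMemoryRepository()
repo.add_many([{"email": "cody@example.com", "name": "Cody"},
               {"email": "janek@example.com", "name": "Janek"}])
page, total = repo.list_and_count(limit=10, offset=0, name="Cody")
repo.delete_many([row["id"] for row in page])
```

The real library performs the same four steps against a database session, with dialect-specific bulk operations instead of a dict.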
advanced-analysis-package | # advanced_analysis_package
This is a Python package to perform analysis, outlier treatment and variable reduction on pandas dataframes. List of functions:
analyze: a. edd - replica of the SAS edd macro with new features of correlation and p-values when a dependent variable is given; returns a pandas dataframe. b. graphical_analysis - gives bivariate analysis of a variable with respect to the dependent variable; displays and saves plots and tables at the required path. c. numerical_categorical_division - returns lists of numeric and categorical variables in a pandas dataframe
variable_treatment: a. exponential_smoothning - outlier treatment for a variable b. capping_and_flooring - outlier treatment for a variable c. make_dummies - creates dummies for a categorical variable (one-hot encoding) d. make_dummies_binary - creates binary dummies for a variable
variable_reduction: a. inter_correlation_clusters - returns clusters of variables based on a cutoff correlation b. varclus - returns a list with one column from each cluster c. vif_reduction - returns a list of columns to be dropped by the VIF reduction method d. backward_selection - returns a list of columns to be dropped by backward selection |
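A capping-and-flooring treatment like the one listed above is commonly implemented by winsorizing: values below a low percentile are floored and values above a high percentile are capped. The package's actual signature is not documented here, so the following is only an illustrative sketch with assumed parameter names:

```python
def capping_and_flooring(values, lower_pct=0.05, upper_pct=0.95):
    """Clip extreme values to the given lower/upper percentiles (illustrative)."""
    ordered = sorted(values)
    n = len(ordered)
    # Nearest-rank percentile lookup; a real implementation may interpolate.
    lo = ordered[int(lower_pct * (n - 1))]
    hi = ordered[int(upper_pct * (n - 1))]
    return [min(max(v, lo), hi) for v in values]

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 1000]
treated = capping_and_flooring(data)  # the outlier 1000 is capped to 9
```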
advanced-analytics-coke | Medium multiplyA small demo library for a Medium publication about publishing libraries.Installationpip install medium-multiplyGet startedHow to multiply one number by another with this lib:frommedium_multiplyimportMultiplication# Instantiate a Multiplication objectmultiplication=Multiplication(2)# Call the multiply methodresult=multiplication.multiply(5) |
advancedcalc | AdvancedCalc: An advanced calculator to calculate advanced methods like average, trigonometry, HCF, LCM easily.
Installation: AdvancedCalc requires an installation of Python 3.6 or greater, as well as pip. Pip is typically bundled with Python installations, and you can find options for how to install Python at https://python.org. To install from PyPI with pip: pip install advancedcalc. Sometimes the PyPI release becomes slightly outdated; to install from the source with pip: pip install git+https://github.com/programmerayush7/advancedcalc/
Quick Start.
Adding numbers: import AdvancedCalc; calculator = AdvancedCalc.Calculator; sum = calculator.add(5, 5); print(sum)  # returns 10
Subtracting numbers: answer = calculator.subtract(10, 5); print(answer)  # returns 5
Using the auto_arrange parameter: auto_arrange is usually used to prevent negative answers (e.g. -1, -6, etc.). Without auto_arrange: answer = calculator.subtract(4, 5); print(answer)  # returns -1. With auto_arrange: answer = calculator.subtract(4, 5, auto_arrange=True); print(answer)  # returns 1
Multiplying numbers (multiply 2 numbers using this function): product = calculator.multiply(5, 5); print(product)  # returns 25
Dividing numbers (divide 2 numbers using this function): quotient = calculator.divide(10, 5); print(quotient)  # returns 2
Finding remainders (find the remainder from the division of 2 numbers): remainder = calculator.remainder(10, 4); print(remainder)  # returns 2
Advanced functions.
Average of numbers (find the average of numbers in a list): from AdvancedCalc import Advanced; numbers = [1, 2, 3, 4, 5]; average = Advanced.average(numbers); print(average)  # returns 3
Cube (find the cube of a number): cube = Advanced.cube(2); print(cube)  # returns 8
Percentage (let's try finding 10 percent of 200; note: this function returns a float, so if you want a whole number you can convert it into an int): percent = Advanced.percent(10, 200); print(percent)  # returns 20.0
Factors (find all factors of a specific number): factors = Advanced.factors(10); print(factors)  # returns a list of factors
HCF (find the highest common factor of 2 numbers): hfc = Advanced.hfc(10, 20); print(hfc)  # returns 10
LCM (find the LCM of 2 numbers): lcm = Advanced.lcm(5, 6); print(lcm)  # returns 30
Changelog 0.0.3: Added advanced functions; modified simple functions; modified README.md; fixed installing-with-pip error |
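The HCF and LCM operations above can be reproduced with the standard library; note the identity lcm(a, b) = a * b / gcd(a, b), which is why the LCM of 5 and 6 is 30 (the following is a self-contained sketch, not the package's code):

```python
import math

def hcf(a, b):
    # Highest common factor, i.e. the GCD.
    return math.gcd(a, b)

def lcm(a, b):
    # Lowest common multiple via the gcd identity (math.lcm exists in 3.9+).
    return a * b // math.gcd(a, b)
```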
advanced-calculator | This is an advanced calculator that can do square roots, factorials, Fibonacci numbers, and more. Change Log: 0.0.1 (Nov 29, 2021) First release. 0.0.2 (Nov 30, 2021) Added calculator functionality; moved init. 0.0.3 (Nov 30, 2021) Removed hyphen from name |
advanced-collections | Extends the builtin collections with advanced collections, including database-sized lists, better queues, and more. Written in Python 3, this library also includes annotations/type-hints to make usage with an IDE easier, and abstract base classes for easily creating custom implementations. |
advanced-config-manager | # advanced_config_manager
A configuration manager for python that handles multiple config sources and sections.Documentation for this can be found at:http://advanced-config-manager.readthedocs.org/en/latest/ |
advanced-database-replace | Advanced Database ReplaceShort descriptionA utility management library which runs an advanced
search&replace action against a specified database's records
(even if they are serialized). Long description: This project aims to take search&replace to the next level by applying
search&replace even to serialized records in the database. This type of project
is especially effective against e.g. WordPress databases since they may
contain serialized PHP array records.PrerequisitesA MySql database.This project installed with:pipinstalladvanced_database_replaceor:./install.shUsageReplacing all occurrences on all tablesfromadvanced_database_replace.database_replaceimportDatabaseReplacefromadvanced_database_replace.database_credentialsimportDatabaseCredentialsdb_credentials=DatabaseCredentials()db_replacer=DatabaseReplace(credentials=db_credentials)db_replacer.replace_all('my-old-record','my-new-record')Replacing all occurrences on a specific tablefromadvanced_database_replace.database_replaceimportDatabaseReplacefromadvanced_database_replace.database_credentialsimportDatabaseCredentialsdb_credentials=DatabaseCredentials()db_replacer=DatabaseReplace(credentials=db_credentials)db_replacer.replace('my-old-record','my-new-record','my-table')Using custom serializerSince (as mentioned in the description) this find&replace project handles serialized
data, by default it assumes PHP serialization, however, you can provide a custom
serializer.fromadvanced_database_replace.database_replaceimportDatabaseReplacefromadvanced_database_replace.database_credentialsimportDatabaseCredentialsclassMyCustomSerializer:@staticmethoddefdumps(*args,**kwargs):pass@staticmethoddefloads(*args,**kwargs):passdb_credentials=DatabaseCredentials()db_replacer=DatabaseReplace(credentials=db_credentials,serializer=MyCustomSerializer())db_replacer.replace_all('my-old-record','my-new-record')Release history1.0.0Initial. |
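A custom serializer only needs dumps and loads. To make the mechanism concrete, here is a self-contained sketch of how a replace over serialized records can work, using json as the stand-in serializer (the library itself defaults to PHP serialization, and this helper is invented for illustration):

```python
import json

def replace_in_serialized(record, old, new, serializer=json):
    """Deserialize a record, replace substrings recursively, re-serialize."""
    def walk(value):
        if isinstance(value, str):
            return value.replace(old, new)
        if isinstance(value, list):
            return [walk(v) for v in value]
        if isinstance(value, dict):
            return {k: walk(v) for k, v in value.items()}
        return value
    return serializer.dumps(walk(serializer.loads(record)))

row = json.dumps({"options": ["my-old-record", {"nested": "my-old-record"}]})
fixed = replace_in_serialized(row, "my-old-record", "my-new-record")
```

A plain textual replace on the serialized blob would corrupt length-prefixed formats such as PHP serialization, which is why the round-trip through the serializer matters.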
advanced-databases | advanced-databasesA collection of pure python database implementations. |
advanced-descriptors | Advanced descriptors. This package includes helpers for special cases: SeparateClassMethod - allows having a classmethod and a normal method with the same name. AdvancedProperty - a property with the possibility to set a class-wide getter. LogOnAccess - a property with logging on successful get/set/delete or on failure. SeparateClassMethod: This descriptor can be set using standard decorator syntax.
Create an instance with arguments:
def imeth(instance): return instance.value
def cmeth(owner): return owner.value
class Target(object):
    value = 1
    def __init__(self): self.value = 2
    getval = advanced_descriptors.SeparateClassMethod(imeth, cmeth)
Create an instance wrapping as a decorator:
class Target(object):
    value = 1
    def __init__(self): self.value = 2
    @advanced_descriptors.SeparateClassMethod
    def getval(self):
        return self.value
    @getval.class_method
    def getval(cls):
        return cls.value
Cases with method only or classmethod only are useless: the method as-is and @classmethod should be used in the corresponding cases. Note: classmethod receives the class as argument. IDEs don't know about custom descriptors and substitute self by default. AdvancedProperty: This descriptor should be used in cases when, in addition to the normal property API, a class getter is required.
If a class-wide setter and deleter are also required, you should use a standard property in a metaclass. Usage examples. In addition to the normal property API:
class Target(object):
    _value = 777
    def __init__(self): self._value = 42
    @advanced_descriptors.AdvancedProperty
    def val(self):
        return self._value
    @val.setter
    def val(self, value):
        self._value = value
    @val.deleter
    def val(self):
        del self._value
    @val.cgetter
    def val(cls):
        return cls._value
Use the class-wide getter for instances too:
class Target(object):
    _value = 1
    val = advanced_descriptors.AdvancedProperty()
    @val.cgetter
    def val(cls):
        return cls._value
Note: the class-wide getter receives the class as argument. IDEs don't know about custom descriptors and substitute self by default. LogOnAccess: This special case of property is useful when a lot of properties should be logged in a similar way without writing a lot of code. The basic API conforms with property, but in addition it is possible to customize the logger, log levels and log conditions. Usage examples: Simple usage.
All by default.
the logger is re-used from the instance if available under the name logger or log, else the internal advanced_descriptors.log_on_access logger is used:
import logging
class Target(object):
    def __init__(self, val='ok'):
        self.val = val
        self.logger = logging.getLogger(self.__class__.__name__)  # Single for class, follows subclassing
    def __repr__(self):
        return "{cls}(val={self.val})".format(cls=self.__class__.__name__, self=self)
    @advanced_descriptors.LogOnAccess
    def ok(self):
        return self.val
    @ok.setter
    def ok(self, val):
        self.val = val
    @ok.deleter
    def ok(self):
        self.val = ""
Use with a global logger for the class:
class Target(object):
    def __init__(self, val='ok'):
        self.val = val
    def __repr__(self):
        return "{cls}(val={self.val})".format(cls=self.__class__.__name__, self=self)
    @advanced_descriptors.LogOnAccess
    def ok(self):
        return self.val
    @ok.setter
    def ok(self, val):
        self.val = val
    @ok.deleter
    def ok(self):
        self.val = ""
    ok.logger = 'test_logger'
    ok.log_level = logging.INFO
    ok.exc_level = logging.ERROR
    ok.log_object_repr = True  # As by default
    ok.log_success = True  # As by default
    ok.log_failure = True  # As by default
    ok.log_traceback = True  # As by default
    ok.override_name = None  # As by default: use original name
Testing: The main test mechanism for the package advanced-descriptors is using tox.
Available environments can be collected via tox -l. CI systems: For CI/CD, GitHub Actions is used for checking PEP8, pylint, bandit, installation possibility and unit tests. |
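The dispatch behind a descriptor like SeparateClassMethod can be sketched in a few lines: __get__ binds the instance-level callable when accessed on an instance, and the class-level callable otherwise. This is an illustrative reimplementation of the idea, not the package's actual source:

```python
class SeparateClassMethodSketch:
    """Descriptor dispatching between an instance method and a class method."""

    def __init__(self, imeth=None, cmeth=None):
        self._imeth = imeth
        self._cmeth = cmeth

    def __get__(self, instance, owner):
        if instance is None:
            # Accessed on the class: bind the class-level implementation.
            return lambda *a, **kw: self._cmeth(owner, *a, **kw)
        return lambda *a, **kw: self._imeth(instance, *a, **kw)

    def class_method(self, cmeth):
        # Decorator hook, mirroring the README's @getval.class_method usage.
        self._cmeth = cmeth
        return self

class Target:
    value = 1

    def __init__(self):
        self.value = 2

    getval = SeparateClassMethodSketch(lambda self: self.value,
                                       lambda cls: cls.value)
```

Accessing Target.getval() returns the class attribute, while Target().getval() returns the instance attribute, which is exactly the behavior the package describes.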
advanced-dicts | advanced-dictsAdvanced dicts (python package) |
advanced-dtypes | advanced-dtypesAdvanced DataTypes is a package for Python that provides access to faster, feature-rich data types not provided in the standard library, motivated by my thoughts of "why is that not a thing?"
The aim of the project is to fill out the holes in the standard library data types' feature-set while improving upon the performance of the existing feature-set.Constant StoreConstant Store is a class that provides a namespace within which to store constants on a class attribute basis. Class definition is very simple:fromadvanced_dtypesimportConstStoreclassExample(ConstStore):NAME_1="some_value"NAME_2=23The class is built to populate a set and dictionary on import, which drives many of the features of the class, as well as the __slots__ attribute, which helps boost performance.
Since the class attributes are maintained, names can be directly dot-referenced.>>Example.NAME_1=="some_value"True>>type(Example.NAME_1)==strTrueConstStorealso provides an interface for the following functionality:>>len(Example)2>>23inExampleTrue>>Example(23)NAME_2>>Example["NAME_2"]23>>[valueforvalueinExample]["some_value",23]You should be aware, however, that non-hashable objects will have slower performance in lookups such asin,Class(value)andClass[name].Fast EnumFastEnumis simply a faster implementation of the existing standard libraryEnum.Definition is exactly the same as a standard enum:fromadvanced_dtypesimportFastEnumclassExample(FastEnum):NAME_1="some_value"NAME_2=23Similarly, standard enum functionality remains:>>Example.NAME_1<Example.NAME_1:some_value>>>Example.NAME_1.nameNAME_1>>Example.NAME_1.valuesome_value>>Example("some_value")<Example.NAME_1:some_value>>>Example["NAME_1"]<Example.NAME_1:some_value>>>len(Example)2>>[itemforiteminExample][<Example.NAME_1:some_value>,<Example.NAME_2:23>]>>Example.NAME_1==Example.NAME_1TrueThe only functional addition in this instance is equality checking of members against values directly:>>Example.NAME_1=="some_value"TrueHow To Installpipinstalladvanced-dtypes |
advancedEXASOL | advancedEXASOLAdvancedEXASOL is a Python library that extends the functionality ofpyEXASOLand allows for faster data manipulation using Dask. It includes features such as table management, data importing/exporting, and data merging. It also includes various decorators to ensure safe and efficient execution of SQL queries.The AdvancedEXASOL library contains all the methods and has "decorators" which includes following: ensure_table_exists, ensure_connection, handle_transactions, sql_injection_safe, and enforce_resource_limits. These decorators are used to add additional functionality and safety measures to the methods in the AdvancedEXASOL class.One of the main features of AdvancedEXASOL is the ability to create a new table from a external source like a Dask DataFrame or an existing EXASOL table using the create_table_from_df and create_table_from_table methods, respectively. It also has the ability to convert an EXASOL table to various formats, such as JSON, CSV, XML, and Excel, using the to_json, to_csv, to_xml, and to_excel methods.In addition, AdvancedEXASOL allows for data to be imported and exported to and from Dask Dataframes using the import_from_dask and export_to_dask methods. This can be useful for interacting with larger datasets that may not fit in memory.Finally, AdvancedEXASOL has the ability to merge two tables using the merge_tables method or merge two tables from external sources using the merge_from_external method. 
This allows for easy data consolidation and management.Overall, AdvancedEXASOL aims to streamline the interaction between Python and the EXASOL database by providing a wide range of useful features and safety measures.FeaturesAdvancedEXASOL includes the following features:create_table_from_df: Create a new table from a Pandas Dataframecreate_table_from_table: Create a new table from an existing EXASOL tableexport_to_dask: Export data from EXASOL to Dask Dataframeimport_from_dask: Import data from Dask Dataframe to EXASOLto_pandas: Convert an EXASOL table to Pandas Dataframeto_json: Convert an EXASOL table to JSON formatto_csv: Convert an EXASOL table to CSV formatcsv_to_excel: Convert an EXASOL CSV to an Excel fileto_xml: Convert an EXASOL table to XML formatto_excel: Convert an EXASOL table to an Excel filemerge_tables: Merge two EXASOL tablesmerge_from_external: Merge two tables from external sourcesRequirementsAdvancedEXASOL requires the following modules:pyEXASOLdask.dataframePandasxlwtUsageAdvancedEXASOL has a class namedFeatureswhich contains all the methods. It also has a scriptdecoratorswhich contains the following decorators:ensure_table_exists: This decorator is used to check if a table exists in the EXASOL database before performing any operation on it. If the table does not exist, an error is raised. This is useful for ensuring that the desired table is available for use before trying to perform any actions on it.ensure_connection: This decorator is used to ensure that a connection to the EXASOL database has been established before running any operation. If a connection has not been established, an error is raised. This is useful for preventing operations from being run without a valid connection to the database.handle_transactions: This decorator is used to automatically handle transactions when performing any operation on the EXASOL database. 
Transactions allow multiple related SQL statements to be executed as a single unit of work, either all succeeding or all being rolled back if any of the statements fail. This decorator ensures that transactions are properly started and ended for each operation, helping to ensure the integrity of the data in the database.sql_injection_safe: This decorator is used to sanitize input strings before running any operation on the EXASOL database. SQL injection is a type of security vulnerability that occurs when an attacker is able to send malicious code to a database through user input. This decorator helps to prevent such attacks by ensuring that input strings are properly escaped and validated before being used in an operation.enforce_resource_limits: This decorator is used to enforce resource limits when performing any operation on the EXASOL database. Resource limits can be used to prevent certain operations from consuming too many resources, such as memory or CPU time, which could negatively impact the performance of the database. 
This decorator helps to ensure that resource limits are properly enforced for each operation, helping to maintain the performance and stability of the database.ExampleHere is an example of how to use advancedEXASOL:Connect to the Exasol databasefromadvanced_exasolimportFeatures# Connect to EXASOLconn=Features.connect('hostname','username','password')Export to a dask dataframe and import from a dask dataframe via HTTP transport# Export data from EXASOL to Dask Dataframedf=conn.export_to_dask('existing_table')# Import data from Dask Dataframe to EXASOLconn.import_from_dask(df,'new_table')# Disconnect from EXASOLconn.close()Different methods to convert data from Exasol to other formats.# Convert EXASOL table to Pandas Dataframedf=conn.to_pandas('existing_table')# Convert EXASOL table to JSONjson_string=conn.to_json('existing_table')# Convert EXASOL table to CSVconn.to_csv('existing_table','existing_table.csv')# Convert EXASOL CSV to Excel fileconn.csv_to_excel('existing_table.csv','existing_table.xls')# Convert EXASOL table to XMLxml_string=conn.to_xml('existing_table')# Convert EXASOL table to Excel fileconn.to_excel('existing_table','existing_table.xls')# Disconnect from EXASOLconn.close()Merge data from one table to another. It also supports simple select statements and CTEs.# Connect to EXASOLconn=Features.connect('hostname','username','password')# Merge two EXASOL tablesconn.merge_tables('table1','table2','merged_table','column1, column2')# Disconnect from EXASOLconn.close()Merge data from different data sources into the database..# Connect to EXASOLconn=Features.connect('hostname','username','password')# Merge two tables from external sourcesconn.merge_from_external('/path/to/table1.csv','/path/to/table2.xls','merged_table','column1, column2')# Disconnect from EXASOLconn.close()DocumentationThe complete documentation for pyEXASOL can be foundhere.LicenseAdvancedEXASOL is released under theMIT License.ContributionsContributions are welcome! 
Please open an issue or submit a pull request for any bugs or feature requests.ContactIf you have any questions or feedback, please feel free to contact me via email [email protected] you for using AdvancedEXASOL! |
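A decorator like sql_injection_safe described above can be sketched as a wrapper that validates every string argument against an allow-list pattern before the wrapped call runs. The pattern and behavior below are illustrative assumptions, not AdvancedEXASOL's actual rules:

```python
import re
from functools import wraps

# Allow only identifier-like strings: letters, digits, underscores, dots.
_SAFE = re.compile(r"^[A-Za-z0-9_.]+$")

def sql_injection_safe(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and not _SAFE.match(value):
                raise ValueError(f"unsafe value: {value!r}")
        return func(*args, **kwargs)
    return wrapper

@sql_injection_safe
def drop_table(name):
    # Stand-in for a method that interpolates an identifier into SQL.
    return f"DROP TABLE {name}"
```

Real drivers should still prefer parameterized queries for data values; allow-list validation like this mainly protects identifiers that cannot be bound as parameters.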
advancedfirebase | Advanced Firebase: Advanced Firebase is a library that adds more functionality to the standard Firebase SDK. Installation: Use the package manager pip to install Advanced Firebase: pip install advancedfirebase. MODULES. DB: This module is only for use with the Realtime Database. All functions: Get, Getkey, Update, Set, Parse. Example database*{'fruits:{⠀⠀⠀⠀'apple':{⠀⠀⠀⠀⠀⠀'color': 'red'⠀⠀⠀⠀⠀⠀}⠀⠀⠀⠀'banana':{⠀⠀⠀⠀⠀⠀'color': 'yellow'⠀⠀⠀⠀⠀⠀}⠀⠀⠀⠀'kiwi':{⠀⠀⠀⠀⠀⠀'color': 'green'⠀⠀⠀⠀⠀⠀}⠀⠀⠀⠀}}*This is the database used in the examples. Project initialize: from advancedfirebase import db; db.init('url', 'credentials.json'). Get (Args: path): out = db.get('/fruits/apple'); print(out). OUTPUT: {'color': 'red'}. Getkey (Args: path, key): out = db.getkey('/fruits/apple', 'color'); print(out). OUTPUT:
redUpdateArgs: path, objobject={'apple':{'color':'green','size':'large'}}db.update('/fruits/',object)Result:{'fruits:{⠀⠀⠀⠀'apple':{⠀⠀⠀⠀⠀⠀'color': 'green',⠀⠀⠀⠀⠀⠀'size': 'large'⠀⠀⠀⠀⠀⠀}⠀⠀⠀⠀'banana':{⠀⠀⠀⠀⠀⠀'color': 'yellow'⠀⠀⠀⠀⠀⠀}⠀⠀⠀⠀'kiwi':{⠀⠀⠀⠀⠀⠀'color': 'green'⠀⠀⠀⠀⠀⠀}⠀⠀⠀⠀}}Setobject={'apple':{'color':'green','size':'large'}}db.set('/fruits/',object)Result:{'fruits:{⠀⠀⠀⠀'apple':{⠀⠀⠀⠀⠀⠀'color': 'green',⠀⠀⠀⠀⠀⠀'size': 'large'⠀⠀⠀⠀⠀⠀}⠀⠀⠀}}Parsereq=db.get('/fruits/apple')print(req)print(db.parse(req,'color'))OUTPUT:{'color':'red'}redMade by Artem Lukashenkov1.1.4 |
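The slash-path lookup that db.get performs over the nested structure above boils down to splitting the path and walking a tree. The helper below is invented for illustration; the real module resolves paths against a Firebase Realtime Database rather than a local dict:

```python
def get(tree, path):
    """Resolve a '/a/b/c' style path against a nested dict (illustrative)."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

fruits = {"fruits": {"apple": {"color": "red"},
                     "banana": {"color": "yellow"},
                     "kiwi": {"color": "green"}}}
```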
advanced-geometry-utils | geometry-utils. Introduction: A Python module for working on 2D and 3D geometries. |
advanced-global-optimizers | Advanced Global Optimizers - AGOThis is a package that integrates many advanced global optimizers. |
advancediscord | No description available on PyPI. |
advanced_jabberclient | UNKNOWN |
advanced-led-control | AboutA package to ease the control of theblinkstick nanoSource code is athttps://github.com/weshouman/advanced_led_controlExampleSave this example astest.pythen run it as in theUsagesection belowimportadvanced_led_control.models.colorsascfromadvanced_led_control.models.ledsimport*fromadvanced_led_control.models.ValueIndicationimport*fromadvanced_led_control.models.Indicatorimport*fromadvanced_led_control.models.Procedureimport*defindic1_func():# Indication values are best shown between 1 and 10value_indication=ValueIndication(c.RED,10)returnvalue_indicationdefindic2_func():value_indication=ValueIndication(c.BLUE,NO_FLICKER)returnvalue_indicationindic1=Indicator(m_col=c.GREEN,func=indic1_func,i_time=4)indic2=Indicator(m_col=c.GREEN,func=indic2_func,i_time=4)stick=blinkstick.find_first()procedure=Procedure(stick=stick,mode_led=LED_2,quiet=10,sync_duration=10)# Append all indicators to be runprocedure.indicators.append(indic1)procedure.indicators.append(indic2)# Run the specified procedureprocedure.run()UsageThis utility needssudoto execute, as required by theblinkstickpackage.
If you are using a virtualenv, run: virtualenv -p python3 venv
source venv/bin/activate
pip install advanced-led-control
# Follow this [answer](https://stackoverflow.com/a/50335946/2730737) for why we need to use fully qualified path with sudo
sudo venv/bin/python test.pyIf you are using sudo natively, runsudo -E python test.pyConfigurationIndicator Paramsm_col: the indication mode color, based on any of the color types defined belowfunc: a callback that will be called to evaluate the color and flickering for the value led.brightness: a value in range [0, 1] that gets multiplied by both the mode and value indication.i_time: indication time, is how much time is allocated for this mode.Indicator CallbackReturnscolor: the indication value color, based on any of the color types defined belowNOTE: CSS names don't work with flickering, follow thisreportflickering: an integer that shows the speed of flickering of this color,
the higher the value the faster the flickering.
Recommended Values are in range [1, 20].
UseNO_FLICKERor1001to set the led on without flickering.Color TypeAn indication color is either one of the 3 typesRGB: a list of 3 vals in RGB ie. [0, 10, 0].HEX: a string ie. '#00ffff'.CSS_NAME: a string ie. 'aliceblue'.Procedure Paramsmode_led: Choose betweenLED_1andLED_2, currentlyLED_1isn't supported
as the blinkstick doesn't allow flickering the second led
while the first is set constantly.Follow thisreport.quiet: A percentage to be taken from the indication time for the stick to shutdown.Settingquiet=0will disable this feature,
and force all the indications to run consecutively.Settingquiet=100is useless, as it will take all the indication time as a break!sync_duration: Allows starting the procedure at a specific second,
as this package is made to run for multiple machines/RPis that are working separately,
and to avoid using communication to control the led,
we allow sync based on real time.Use async_durationof at least 1.2× the total indication time to account for the
delay the pulses make, and if it's less than the total indication time,
it will be almost useless.NOTE: Setsync_duration=0to disable this feature.NOTE: Settingsync_durationto say 8 will start at every eighth second,
In example, [0, 8, 16, 24, 32 ...]NotesThis project was inspired by thePi Dramble |
advancedlogging | Builds further on Python's logging by adding loggers for classes.Free software: MIT licenseInstallationpip install advancedloggingYou can also install the in-development version with:pip install https://github.com/fonganthonym/python-advancedlogging/archive/main.zipDocumentationhttps://python-advancedlogging.readthedocs.io/DevelopmentTo run all the tests run:toxNote, to combine the coverage data from all the tox environments run:Windowsset PYTEST_ADDOPTS=--cov-append
toxOtherPYTEST_ADDOPTS=--cov-append toxChangelog1.0.0 (2021-05-11)First release on PyPI. |
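The idea of "loggers for classes" can be sketched with the standard library alone; this is a concept illustration under assumed semantics, not advancedlogging's actual API (the mixin name and property are hypothetical):

```python
import logging

# Concept sketch: give every class its own named logger, similar in spirit
# to what a class-logger library automates. `ClassLoggerMixin` is a
# hypothetical name, not part of advancedlogging's API.
class ClassLoggerMixin:
    @property
    def logger(self):
        # One logger per concrete class, named "<module>.<ClassName>"
        return logging.getLogger(f"{type(self).__module__}.{type(self).__qualname__}")

class Worker(ClassLoggerMixin):
    def run(self):
        self.logger.info("running")

print(Worker().logger.name)  # e.g. "__main__.Worker"
```

Because logger names form a dot-separated hierarchy, handlers and levels set on the module-level logger automatically apply to every per-class logger beneath it.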
advanced-markdown-tools | No description available on PyPI. |
advanced-math | advanced_math module to handle advanced math operationsDocumentationNAME
advanced_math - advanced_math module for handling some advanced mathematical operationsCLASSES
builtins.object
Point
Point3D
Vector2
Vector3class Point(builtins.object)
| Point(x, y, formatting='default')
|
| Creates a 2D point object
|
| Methods defined here:
|
| __add__(self, another_point)
|
| __eq__(self, another_point)
| Return self==value.
|
| __init__(self, x, y, formatting='default')
| Initialize self. See help(type(self)) for accurate signature.
|
| __repr__(self)
| Return repr(self).
|
| __str__(self)
| Return str(self).
|
| __sub__(self, another_point)
|
| distance(self, another_point)
| Calculates the distance between two points
| :param another_point: Point
| :return: float
|
| form_vector2(self, another_point)
| Returns a new Vector2 object having x and y of subtraction of first point and second point.
| :param another_point: Point
| :return: Vector2
|
| format(self, formatting)
| Changes the format in which the point is printed (converted into str).
| Supported formats are: default, standard, expanded.
| Although, it doesn't throw errors if any invalid formatting is presented, it's a bad practice doing that!
| :param formatting: str
| :return: None
|
| to_tuple(self)
| Returns the point object in a tuple form
| :return: Tuple
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| from_seq(sequence)
|
| make_point_from_seq(sequence)
| Creates a new point using a sequence
| :param sequence: Sequence
| :return: Point
|
| origin()
| Returns the origin point
| :return: Point
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
class Point3D(builtins.object)
| Point3D(x, y, z, formatting='default')
|
| Creates a 3D point object
|
| Methods defined here:
|
| __add__(self, another_point)
|
| __eq__(self, another_point)
| Return self==value.
|
| __init__(self, x, y, z, formatting='default')
| Initialize self. See help(type(self)) for accurate signature.
|
| __repr__(self)
| Return repr(self).
|
| __str__(self)
| Return str(self).
|
| __sub__(self, another_point)
|
| distance(self, another_point)
| Calculates the distance between two points
| :param another_point: Point3D
| :return: float
|
| form_vector3(self, another_point)
| TODO: IMPLEMENT AFTER VECTOR3
| Returns a new Vector3 object whose components are the difference of the first and second point.
| :param another_point: Point3D
| :return: Vector3
|
| format(self, formatting)
| Changes the format in which the point3D is printed (converted into str).
| Supported formats are: default, standard, expanded.
| Although, it doesn't throw errors if any invalid formatting is presented, it's a bad practice doing that!
| :param formatting: str
| :return: None
|
| to_tuple(self)
| Returns the point3D object in a tuple form
| :return: Tuple
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| from_seq(sequence)
| Creates a new point using a sequence
| :param sequence: Sequence
| :return: Point3D
|
| origin()
| Returns the origin point
| :return: Point3D
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
class Vector2(builtins.object)
| Vector2(x, y, formatting='default')
|
| Creates a new 2D vector object
|
| Methods defined here:
|
| __add__(self, another_vector)
|
| __eq__(self, another_vector)
| Return self==value.
|
| __floordiv__(self, value)
|
| __init__(self, x, y, formatting='default')
| Initialize self. See help(type(self)) for accurate signature.
|
| __mul__(self, value)
|
| __repr__(self)
| Return repr(self).
|
| __str__(self)
| Return str(self).
|
| __sub__(self, another_vector)
|
| __truediv__(self, value)
|
| direction(self)
| Calculates the direction of a vector
| :return: float
|
| format(self, formatting)
| Changes the format in which the vector is printed (converted into str).
| Supported formats are: default, standard, expanded.
| Although, it doesn't throw errors if any invalid formatting is presented, it's a bad practice doing that!
| :param formatting: str
| :return: None
|
| from_seq(sequence)
|
| magnitude(self)
| Calculates the magnitude of a 2D vector
| :return: float
|
| to_tuple(self)
| Returns the vector object in a tuple form
| :return: Tuple
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| down()
| Returns a unit vector in positive y direction
|
| left()
| Returns a unit vector in negative x direction
|
| make_vector_from_seq(sequence)
| Creates a new vector using a sequence
| :param sequence: Sequence
| :return: Vector2
|
| right()
| Returns a unit vector in positive x direction
|
| up()
| Returns a unit vector in negative y direction
|
| zero()
| Returns the null or zero vector
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None
class Vector3(builtins.object)
| Vector3(x, y, z, formatting='default')
|
| Creates a new 3D vector object
|
| Methods defined here:
|
| __add__(self, another_vector)
|
| __eq__(self, another_vector)
| Return self==value.
|
| __floordiv__(self, value)
|
| __init__(self, x, y, z, formatting='default')
| Initialize self. See help(type(self)) for accurate signature.
|
| __mul__(self, value)
|
| __repr__(self)
| Return repr(self).
|
| __str__(self)
| Return str(self).
|
| __sub__(self, another_vector)
|
| __truediv__(self, value)
|
| direction(self)
| TODO: IMPLEMENT LATER
| Calculates the direction of a vector
| :return: float
|
| format(self, formatting)
| Changes the format in which the vector is printed (converted into str).
| Supported formats are: default, standard, expanded.
| Although, it doesn't throw errors if any invalid formatting is presented, it's a bad practice doing that!
| :param formatting: str
| :return: None
|
| from_seq(sequence)
| Creates a new vector using a sequence
| :param sequence: Sequence
| :return: Vector3
|
| magnitude(self)
| Calculates the magnitude of a 3D vector
| :return: float
|
| to_tuple(self)
| Returns the vector object in a tuple form
| :return: Tuple
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| down()
| Returns a unit vector in positive y direction
|
| inward()
| Returns a unit vector in positive z direction
|
| left()
| Returns a unit vector in negative x direction
|
| outward()
| Returns a unit vector in negative z direction
|
| right()
| Returns a unit vector in positive x direction
|
| up()
| Returns a unit vector in negative y direction
|
| zero()
| Returns the null or zero vector
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __hash__ = None |
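A short worked example of the `distance` behaviour documented above; this is a self-contained re-implementation for illustration (assuming the standard Euclidean formula), not the package's own code:

```python
import math

# Illustrative re-implementation of the documented Point.distance;
# the real advanced_math.Point also offers formatting and vector helpers.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def distance(self, another_point):
        # Euclidean distance between two 2D points
        return math.hypot(self.x - another_point.x, self.y - another_point.y)

print(Point(0, 0).distance(Point(3, 4)))  # 5.0
```

The documented `Vector2.magnitude` is the same computation applied to a vector's own components, i.e. its distance from the origin.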
advancedmaths | Advanced Math is a library to calculate math problems.InstallStable Version:pipinstall-Uadvanced-mathBeta Version:pipinstall--pre-Uadvanced-mathTwin PrimesA twin prime is a prime number that is either 2 less or 2 more than another prime number.Example: Find all twin primes below 1000.fromfind_primesimportfind_twinsprint(find_twins(1000)) |
advanced-number-game | Advanced Number GameThis package is an alternative to other non-advanced number game packages, containing functions to continue play and select difficultyInstallationThis package can be installed from PyPIpip install advanced_number_gameHow to useTo play this number game, call the programnumber_game |
advanced-pca | Advanced Principal Component AnalysisTable of ContentsProject MotivationUsageInstallationFile DescriptionsLicensing, Authors, and AcknowledgementsProject MotivationResearchers use Principal Component Analysis (PCA) intending to summarize features, identify structure in data or reduce the number of features. The interpretation of principal components is challenging in most cases due to the high amount of cross-loadings (one feature having significant weight across many principal components). Different types of matrix rotations are used to minimize cross-loadings and make factor interpretation easier.Thecustom_PCAclass is the child ofsklearn.decomposition.PCAand uses varimax rotation and enables dimensionality reduction in complex pipelines with the modifiedtransformmethod.custom_PCAclass implements:varimax rotationfor better interpretation of principal componentsdimensionality reduction based on significantfeature communalities> 0.5dimensionality reduction based onfeature weights significancecalculated based on sample sizesurrogate feature selection- only features with maximum loading are selected instead of principal componentsUsageExample of using varimax rotation:# 3rd party imports
import numpy as np
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from advanced_pca import CustomPCA
# load dataset
dataset = datasets.load_diabetes()
X_std = StandardScaler().fit_transform(dataset.data)
# fit pca objects with and without rotation with 5 principal components
standard_pca5 = CustomPCA(n_components=5).fit(X_std)
varimax_pca5 = CustomPCA(n_components=5, rotation='varimax').fit(X_std)
# display factor matrices and number of cross loadings
print('Factor matrix:\n', standard_pca5.components_.round(1))
print(' Number of cross-loadings:', standard_pca5.count_cross_loadings())
print('\nRotated factor matrix:\n', varimax_pca5.components_.round(1))
print(' Number of cross_loadings:', varimax_pca5.count_cross_loadings())Factor matrix:
[[ 0.2 0.2 0.3 0.3 0.3 0.4 -0.3 0.4 0.4 0.3]
[ 0. -0.4 -0.2 -0.1 0.6 0.5 0.5 -0.1 -0. -0.1]
[ 0.5 -0.1 0.2 0.5 -0.1 -0.3 0.4 -0.4 0.1 0.3]
[-0.4 -0.7 0.5 -0. -0.1 -0.2 -0.1 0. 0.3 0.1]
[-0.7 0.4 0.1 0.5 0.1 0.1 0.2 -0.1 -0.2 0. ]]
Number of cross-loadings: 20
Rotated factor matrix:
[[ 0.1 0. 0.1 0.1 0.6 0.6 0. 0.4 0.2 0.1]
[ 0.1 0.1 0.5 0.6 0.2 0.1 -0.1 0.2 0.4 0.4]
[ 0. 0.2 0.3 -0.1 -0. 0.1 -0.7 0.5 0.3 0.2]
[-0.1 -0.9 0.1 -0.3 0.1 -0.1 0.2 -0.2 0.1 -0.1]
[-0.9 -0.1 0.1 -0.1 -0.1 -0.1 -0. -0.1 -0.2 -0.2]]
Number of cross_loadings: 13Example of dimensionality reduction based on features' weights and communalities significance:# fit pca objects with option selecting only significant features
significant_pca5 = (CustomPCA(n_components=5, feature_selection='significant')
.fit(X_std))
# print selected features based on weights and communalities significance
print('Communalities:\n', significant_pca5.communalities_)
print('\nSelected Features:\n',
np.asarray(dataset.feature_names)[significant_pca5.get_support()])
# execute dimensionality reduction and print dataset shapes
print('\nOriginal dataset shape:', X_std.shape)
print('Reduced dataset shape:', significant_pca5.transform(X_std).shape)Communalities:
[0.93669362 0.79747464 0.4109572 0.59415803 0.47225155 0.44619639
0.55086939 0.35416151 0.24100886 0.1962288 ]
Selected Features:
['age' 'sex' 'bp' 's3']
Original dataset shape: (442, 10)
Reduced dataset shape: (442, 4)Example of the surrogate feature selection method:# fit pca objects with option selecting only surrogate features
surrogate_pca = (CustomPCA(rotation='varimax', feature_selection='surrogate')
.fit(X_std))
# print factor matrix
print('Factor matrix:\n', surrogate_pca.components_.round(1))
print('\nSelected Features:\n',
np.asarray(dataset.feature_names)[surrogate_pca.get_support()])
# execute dimensionality reduction and print dataset shapes
print('\nOriginal dataset shape:', X_std.shape)
print('Reduced dataset shape:', surrogate_pca.transform(X_std).shape)Factor matrix:
[[ 0.1 0. 0.1 0.1 0.6 0.7 0. 0.3 0.2 0.1]
[ 0. 0.2 0.2 0. -0.1 0.2 -0.7 0.6 0.2 0.1]
[ 0.1 0. 0.2 0.1 0.3 -0. -0.1 0.3 0.9 0.2]
[-0.1 -1. -0. -0.1 0. -0.1 0.2 -0.1 -0. -0.1]
[-1. -0.1 -0.1 -0.2 -0.1 -0.1 0. -0.1 -0.1 -0.1]
[ 0.1 0.1 0.2 0.9 0.1 0. -0. 0.1 0.2 0.2]
[ 0.1 0. 0.9 0.2 0.1 0.1 -0.2 0.1 0.2 0.2]
[ 0.1 0.1 0.1 0.2 0.1 0.1 -0.1 0.2 0.2 0.9]
[ 0. 0. 0. 0. 0.1 -0.1 0.2 1. 0. 0. ]
[ 0. -0. 0. 0. 0.8 -0.7 0. 0. 0. 0. ]]
Selected Features:
['bmi' 'bp' 's1' 's2' 's3' 's4' 's5' 's6']
Original dataset shape: (442, 10)
Reduced dataset shape: (442, 8)InstallationThere are several necessary 3rd party libraries beyond the Anaconda distribution of Python which needs to be installed and imported to run code. These are:rpy2Python interface to the R language used to calculate the varimax rotationpip install advanced-pcaFile DescriptionsThere are additional files:custom_pca.pyadvanced principle component analysis class definitionlicence.txtsee MIT lincence to followsetup.cfgandsetup.pyused for creating PyPi packageLicensing, Authors, AcknowledgementsMust give credit toJoseph F. Hair Jr, William C. Black, Barry J. Babin, Rolph E. Anderson.
Those using this project shall follow theMIT licence |
advanced-pid | advanced-pidAn advanced PID controller in Python. The derivative term can also be used in
practice thanks to built-in first-order filter. Detailed information can be
foundhere.Usage is very simple:fromadvanced_pidimportPID# Create PID controllerpid=PID(Kp=2.0,Ki=0.1,Kd=1.0,Tf=0.05)# Control loopwhileTrue:# Get current measurement from systemtimestamp,measurement=system.get_measurement()# Calculate control signal by using PID controllerreference=1.0control=pid(timestamp,reference-measurement)# Feed control signal to systemsystem.set_input(control)Complete API documentation can be foundhere.UsageThe biggest advantage of advanced-pid is that the derivative term has a built-in first-order
filter.The advanced-pid package includes a toy mass-spring-damper system model for testing:fromadvanced_pidimportPIDfromadvanced_pid.modelsimportMassSpringDamperfrommatplotlibimportpyplotaspltfromnumpyimportdiff# Create a mass-spring-damper system modelsystem=MassSpringDamper(mass=1.0,spring_const=1.0,damping_const=0.2)system.set_initial_value(initial_position=1.0,initial_velocity=0.0)# Create PID controllerpid=PID(Kp=1.0,Ki=0.0,Kd=2.0,Tf=0.5)# Control looptime,meas,cont=[],[],[]foriinrange(800):# Get current measurement from systemtimestamp,measurement=system.get_measurement()# Calculate control signal by using PID controllercontrol=pid(timestamp,-measurement)# Feed control signal to systemsystem.set_input(control)# Record for plottingtime.append(timestamp)meas.append(measurement)cont.append(control)# Plot resultfig,(ax1,ax2,ax3)=plt.subplots(3,1)fig.suptitle('Mass-Spring-Damper system')ax1.set_ylabel('Measured Position [m]')ax1.plot(time,meas,'b')ax1.grid()ax2.set_ylabel('Force [N]')ax2.plot(time,cont,'g')ax2.grid()ax3.set_xlabel('Time [s]')ax3.set_ylabel('Derivative Term')ax3.plot(time[1:],diff(meas)/diff(time),'r')ax3.grid()plt.show()As can be seen in the figure, the derivative term cannot be used without a filter:InstallationTo install, run:pip3 install advanced-pidTestsTo run tests, run:python -m unittest tests.test_pidLicenseLicensed under theMIT License. |
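For background, a PID controller with a first-order filter on the derivative term (the structure that the Tf parameter above suggests, given here as the textbook form rather than as advanced-pid's exact implementation) has the transfer function:

```latex
C(s) = K_p + \frac{K_i}{s} + \frac{K_d\, s}{T_f\, s + 1}
```

As $T_f \to 0$ this approaches the ideal derivative $K_d s$; a nonzero $T_f$ bounds the derivative term's high-frequency gain at $K_d / T_f$, which is what makes the term usable on noisy measurements.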
advanced-plot | No description available on PyPI. |
advanced-print | Advanced PrintA glorified print that will:Display file from which print occuredDisplay time of printDisplay message on a seperate lineAdd colors for seperationInstallationpipinstalladvanced-printOr using poetrypoetryaddadvanced-printExamplefromadvanced_printimportprintprint("Hello world") |
advanced-pw-gen | pw_genA simple password generator written in Python 3.11UsageOptions:pw_length=16# Length of the password (integer)pw_type="upper,lower,number,symbol,legible"# Type of the password (string)Types:upper=Uppercaseletterslower=Lowercaselettersnumber=Numberssymbol=Symbolslegible=Legiblecharacters(no0,O,1,l,I)Example:frompw_genimportPasswordpw=Password(pw_length=16,pw_type="upper,lower,number,symbol,legible")print(pw.password())Output:Xy4@9#3$7%8&1!23 |
advanced-python | No description available on PyPI. |
advancedpythonmalware | AdvancedPythonMalware is a Python Library for creating simple malware. It can make GDI effects, spam message boxes, or even delete SYSTEM32!List Of Classes And Commands
GDI.tunnel() – Creates a tunnel effect like in the memz virus
GDI.screen_glitch(repeat_time, r, g, b) – Creates screen glitches. Takes in (repeat_time, r, g, b); repeat_time is how many times to glitch, and the r, g, and b values give it the color.
Base.warning(nameofmalware, command) – Creates a malware warning. Takes in nameofmalware and command. nameofmalware is the name of your malware and is used to make a custom messagebox. command is the function that will run if the user presses OK, AKA the malware command.
destructive.DeleteMBR() – Overwrites the Master Boot Record of the PC
destructive.TakeownSystem32() – Takes full ownership of system32 so you can delete files and so on.
destructive.DeleteCriticalFiles(repeat) – Tries to delete critical files in System32. The repeat argument is how many times you want it to try.
destructive.BSOD() – Blue screens the PC.
annoying.repeatingPopup(message, title) - Self-explanatory. Creates a popup with the specified message and title that cannot be closed.
annoying.shutdown(time, message) – Shuts down the computer with specified time and specified message.
annoying.restart(time, message) – Restarts the computer with specified time and specified message.Let's make some malware!!!!DISCLAIMER!!!
1. DO NOT USE THIS FOR MALICIOUS PURPOSES. OTHER THEN TESTING AND FUN.
2. DO NOT USE ANY OF THE COMMANDS IN THE “destructive” CLASS ON YOUR MAIN PC, AS OTHER THEN THE BSOD, THEY WILL CAUSE HARM AND MAYBE EVEN DESTROY IT.
3. DO NOT SPREAD ANY MALWARE MADE WITH THIS LIBRARY.
4. DO NOT DO THIS TO ANYONE WITHOUT THEIR CONSENT.
5. HAVE FUN :)©️ 2022 UnusualMonkey, LLC. All rights reserved.The Change Log1.0.0 (7/26/22)First ReleaseAdded basic malware functions.Check the README for more. |
advanced-quotes | advanced_quotesYour all-access pass to the world's largest quotations database. Random and daily quotes, over 40 thousand authors, various topics. Inside your comfy python development environment!FeaturesRetrieve random quotes.Retrieve daily quotes.View a specific author's full set of quotes and info.View a specific topic's full set of quotes.A topic/author index so you can search for them.Well documented with examples.Note: This is heavily based on classes; you should be familiar with them.Getting startedDownload the module using pip:$ pip install advanced-quotesNote: This may differ depending on your use caseImport the module, use whatever you need. Also, you'll be accessing the module usingadvanced_quotesinstead ofadvanced-quotesDocumentationThis module is fully documented on the dedicatedwebsite.CopyrightCopyright (c) 2020 Copyright Holder All Rights Reserved.Data provided byBrainyQuote |
advanced-radiomics | Advanced Radiomics Functions.Free software: 3-clause BSD licenseDocumentation: (COMING SOON!)https://FelipeAugustoMachado.github.io/advanced-radiomics.FeaturesTODO |
advanced-rpn-calculator | RPN Calculator by Akbar BadShahIt's a full-fledged rpn calculator I made to practice my pythonic skills. Works fine to the best of my knowledge, but you're still welcome to share your thoughts, add features, report and correct bugs.New Features!+: Take 2 numbers from the stack, add them and put the result in the stack-: Take 2 numbers from the stack, subtract them and put the result in the stack*: Take 2 numbers from the stack, multiply them and put the result in the stack/: Take 2 numbers from the stack, divide them and put the result in the stackcla: Clear both stack and variableclr: Empty the stackclv: Clear both stack and variable!: None!=: Return True if last two numbers in stack are not equal, False otherwise%: Take 2 integers from the stack, divide them and put the remainder in the stack++: Increment an integer--: Decrement an integer&: Take 2 numbers from the stack, apply a bitwise "and" and put the result in the stack|: Take 2 numbers from the stack, apply a bitwise "or" and put the result in the stack^: Take 2 numbers from the stack, apply a bitwise "xor" and put the result in the stack~: Take a number from the stack, apply a bitwise "not" and put the result in the stack<<: Take 2 numbers from the stack, apply a left shift and put the result in the stack>>: Take 2 numbers from the stack, apply a right shift and put the result in the stack&&: Perform boolean AND operation on two values and output the result (not added to the stack)||: Perform boolean OR operation on two values and output the result (not added to the stack)^^: Perform boolean XOR operation on two values and output the result (not added to the stack)<: Smaller than operation on two values and output the result (not added to the stack)<=: Smaller than or equal to operation on two values and output the result (not added to the stack)==: Equal to operation on two values and output the result (not added to the stack)>: Greater than operation on two values and output the
result (not added to the stack)>=: Greater than or equal to operation on two values and output the result (not added to the stack)acos: Take arc cosine on a value and output the result (also added to the stack)asin: Take arc sine on a value and output the result (also added to the stack)atan: Take arc tangent on a value and output the result (also added to the stack)cos: Take cosine on a value and output the result (also added to the stack)cosh: Take hyperbolic cosine on a value and output the result (also added to the stack)sin: Take sine on a value and output the result (also added to the stack)sinh: Take hyperbolic sine on a value and output the result (also added to the stack)tanh: Take hyperbolic tangent on a value and output the result (also added to the stack)ceil: Take ceil of an integer and output the result (also added to the stack)floor: Take floor of an integer and output the result (also added to the stack)round: Take round of an integer and output the result (also added to the stack)ip: Separates int from floating part of a decimal number and output the result (also added to the stack)fp: Separates floating part of a decimal number and output the result (also added to the stack)abs: Take absolute value of a number and output the result (also added to the stack)max: Take maximum value of the stack and output the result (also removed from the stack)min: Take minimum value of the stack and output the result (also removed from the stack)hex: Change display mode to hexbin: Change display mode to binarydec: Change display mode to decimale: Nonepi: Nonerand: Noneexp: Apply e**x to the last number of the stackfact: Push factorial of the last number to the stacksqrt: Push square root of the last number to the stackln: Apply natural log to the last number of the stacklog: Apply log10 to the last number of the stackpow: Take 2 numbers from the stack, apply power and put the result in the stackpick: Pick the nth item from the stackrepeat: Repeat an operation n timesdrop: Drop the topmost
item from the stackdropn: Drop the n topmost items from the stackdup: Duplicates the top item from the stackdupn: Duplicates n top items of the stackroll: Roll the stack upwards by nrolld: Roll the stack downwards by nstack: Toggles stack display from horizontal to verticalswap: Swap the top 2 stack itemsx=: Assigns a variable, e.g. '1024 x='help: Print help; Same as pol --listexit: Quit the programAvailable ModesFor a one-time result of a single line statement, do:rpn 3 2 + #(should exit outputting 5)To solve statements from a file before entering shell, do:rpn --file #pathTo simply enter the shell, do:rpnInstallationpip install advanced_rpn_calculatorTodosWrite MORE Tests or just test if everything works fine.LicenseMITFree Software, Hell Yeah! |
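The stack semantics behind invocations like `rpn 3 2 +` can be sketched in a few lines; this is an illustrative evaluator for the four basic operators, not the package's implementation:

```python
# Minimal RPN evaluator illustrating the stack semantics of "3 2 +";
# the real calculator supports many more operators, variables and modes.
def rpn_eval(expression):
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for token in expression.split():
        if token in ops:
            b, a = stack.pop(), stack.pop()  # second operand is popped first
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack[-1]

print(rpn_eval("3 2 +"))  # 5.0
```

Each number is pushed onto the stack; each operator pops its operands and pushes the result, which is why the operands precede the operator.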
advanced-scrapy-proxies | advanced-scrapy-proxiesadvanced-scrapy-proxies is a Python library for dealing with proxies in your Scrapy project.
Starting fromAivarsk's scrapy proxy(no longer updated since 2018), I'm adding more features to manage lists of proxies generated dynamically.InstallationUse the package managerpipto install advanced-scrapy-proxies.pipinstalladvanced-scrapy-proxiesUsagesettings.pyDOWNLOADER_MIDDLEWARES={'scrapy.downloadermiddlewares.retry.RetryMiddleware':90,'advanced-scrapy-proxies.RandomProxy':100,'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware':110}## Proxy mode# -1: NO_PROXY, middleware is configured but does nothing. Useful when needed to automate the selection of the mode# 0: RANDOMIZE_PROXY_EVERY_REQUESTS, every request uses a different proxy# 1: RANDOMIZE_PROXY_ONCE, selects one proxy for the whole execution from the input list# 2: SET_CUSTOM_PROXY, use the proxy specified with option CUSTOM_PROXY# 3: REMOTE_PROXY_LIST, use the proxy list at the specified URLPROXY_MODE=3PROXY_LIST='https://yourproxylisturl/list.txt'As in every scrapy project, you can override the settings in settings.py when calling the scraper.##PROXY_MODE=-1, the spider does not use the proxy list provided.scrapycrawlmyspider-sPROXY_MODE=-1-sPROXY_LIST='myproxylist.txt'##PROXY_MODE=0, the spider uses the proxy list provided, choosing a different proxy for every request.scrapycrawlmyspider-sPROXY_MODE=0-sPROXY_LIST='myproxylist.txt'##PROXY_MODE=1, the spider uses the proxy list provided, choosing only one proxy for the whole execution.scrapycrawlmyspider-sPROXY_MODE=1-sPROXY_LIST='myproxylist.txt'##PROXY_MODE=2, the spider uses the proxy provided.scrapycrawlmyspider-sPROXY_MODE=2-sPROXY_LIST='http://myproxy.com:80'##PROXY_MODE=3, the spider uses the proxy list at the url provided.
The list is read at every request made by the spider, so it can be updated during the execution.scrapycrawlmyspider-sPROXY_MODE=3-sPROXY_LIST='https://yourproxylisturl/list.txt'Planned new features and updatesMinor updatesadding more tests on the format of the input variablesrewriting error messagesNew featuresAdding a cooldown list: instead of deleting a proxy after a failed attempt to get data, use a cooldown list where proxies are not used for a limited time in the scraper but are ready to be reused when the cooldown finishes.Adding support for reading urls of the lists behind user and password authenticationUpdating the proxy list at every request even for PROXY_MODE=0ContributingPull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.Please make sure to update tests as appropriate.LicenseGNU GPLv2 |
advancedselector | Advanced SelectorHas the ability to use multiple selection types:Single selectMultiple selectionCross platform implementationThe library makes use of thegetchlib on Linux andmsvcrton WindowsTry it!WindowsMake sure to havePythonandGitinstalled.Open a powershell and run:# Clone the repositorygitclonehttps://github.com/kougen/py-advanced-selector# Change dircdpy-advanced-selector# Create a virtual envpy-mvenvvenv# Enter the venv./venv/Scripts/activate# Install the dependenciespipinstall-rrequirements.txt# Start the sample.pypytest\sample.py |
advanced-sqlalchemy-manager | Managers for SQLAlchemy.Manager for models; methods are added to the query at runtime.Installation$ [sudo] pip install advanced-sqlalchemy-managerDocumentationManagedQueryManaged query that replaces the sqlalchemy Query classExample:fromsqlalchemy.ormimportsessionmakerfromsqlalchemyimportcreate_enginefromsqlalchemy.ormimportdeclarative_basefromsqlalchemyimportColumn,Integer,StringfromalchmanagerimportManagedQuery,BaseQueryManagerengine=create_engine('sqlite:///:memory:')session_factory=sessionmaker(query_cls=ManagedQuery,bind=engine)session=session_factory()Base=declarative_base()classPerson(Base):__tablename__='persons'id=Column(Integer,primary_key=True)name=Column(String(50),nullable=False)age=Column(Integer,nullable=False)classPersonQueryManager(BaseQueryManager):__model__=Person@staticmethoddefolder_than(query:ManagedQuery,age:int)->ManagedQuery:returnquery.filter(Person.age>age)@staticmethoddefyounger_than(query:ManagedQuery,age:int):returnquery.filter(Person.age<age)@staticmethoddeffirst_of_exact_age(query:ManagedQuery,age:int):returnquery.filter(Person.age==age).first()filtered_persons=session.query(Person).older_than(30).filter(Person.name.contains('_')).younger_than(60).all()person_25_years_old=session.query(Person).first_of_exact_age(25)ManagedSessionManaged session.
Use the decoratorload_manager()to register query managers into the session.
Query managers registered that way will be usable on any model.Example:fromsqlalchemy.ormimportsessionmakerfromsqlalchemyimportcreate_enginefromsqlalchemy.ormimportdeclarative_basefromsqlalchemyimportColumn,Integer,String,BooleanfromalchmanagerimportManagedQuery,ManagedSession,BaseQueryManagerengine=create_engine('sqlite:///:memory:')session_factory=sessionmaker(class_=ManagedSession,bind=engine)session=session_factory()Base=declarative_base()classPerson(Base):__tablename__='persons'id=Column(Integer,primary_key=True)name=Column(String(50),nullable=False)age=Column(Integer,nullable=False)classBook(Base):__tablename__='books'id=Column(Integer,primary_key=True)title=Column(String)is_public=Column(Boolean,nullable=False,default=False)@session.load_manager()classBookQueryManager(BaseQueryManager):@staticmethoddefis_book_public(query:ManagedQuery)->ManagedQuery:returnquery.filter(Book.is_public.is_(True))count_of_filtered_books=session.query(Book).is_book_public().count()# This will produce a broken query because is_public does not exist in the Person modelpersons=session.query(Person).is_book_public().count() |
advanced-ssh-config | Advanced SSH config===================|Travis| |PyPI version| |PyPI downloads| |License| |Requires.io||Gitter||ASSH logo - Advanced SSH Config logo|Enhances ``ssh_config`` file capabilities**NOTE**: This program is called by`ProxyCommand <http://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_Jump_Hosts#ProxyCommand_with_Netcat>`__from `lib-ssh <https://www.libssh.org>`__.--------------It works *transparently* with :- ssh- scp- rsync- git- and even desktop applications depending on ``lib-ssh`` (for instance`Tower <http://www.git-tower.com>`__, `Atom.io <https://atom.io>`__,`SSH Tunnel Manager <http://projects.tynsoe.org/fr/stm/>`__)--------------The ``.ssh/config`` file is automatically generated, you need to update``.ssh/config.advanced`` file instead; with new features and a betterregex engine for the hostnames.Usage-----.. code:: console$ assh --helpUsage: assh [OPTIONS] COMMAND [arg...]Commands:build Build .ssh/config based on .ssh/config.advancedconnect <host> Open a connection to <host>info <host> Print connection informationsinit Build a .ssh/config.advanced file based on .ssh/configgenerate-etc-hosts Print a /etc/hosts file of .ssh/config.advancedstats Print statisticsOptions:--version show program's version number and exit-h, --help show this help message and exit-p PORT, --port=PORT SSH port-c CONFIG_FILE, --config=CONFIG_FILEssh_config file-f, --force-v, --verbose-l LOG_LEVEL, --log_level=LOG_LEVEL--dry-runCommand line features----------------------**Gateway chaining**.. code:: bashssh foo.com/bar.comConnect to ``bar.com`` using ssh and create a proxy on ``bar.com`` to
code:: bash

    ssh foo.com/bar.com/baz.com

Connect to ``foo.com`` using ``bar.com/baz.com``, which itself uses ``baz.com``.

Configuration features
----------------------

- **regex for hostnames**: ``gw.school-.*.domain.net``
- **aliases**: ``gate`` -> ``gate.domain.tld``
- **gateways**: transparent ssh connections chaining
- **includes**: split configuration into multiple files, support globbing
- **local command execution**: finally a way to execute a command locally on connection
- **inheritance**: ``inherits = gate.domain.tld``
- **variable expansion**: ``User = $USER`` (take $USER from environment)
- **smart proxycommand**: connect using ``netcat``, ``socat`` or custom handler

Config example
--------------

``~/.ssh/config.advanced``

.. code:: ini

    # Simple example
    [foo.com]
    user = pacman
    port = 2222

    [bar]
    hostname = 1.2.3.4
    gateways = foo.com  # `ssh bar` will use `foo.com` as gateway

    [^vm-[0-9]*\.joe\.com$]
    gateways = bar  # `ssh vm-42.joe.com` will use `bar` as gateway which
                    # itself will use `foo.com` as gateway

    [default]
    ProxyCommand = assh --port=%p connect %h

--------------

.. code:: ini

    # Complete example
    [foo]
    user = pacman
    port = 2222
    hostname = foo.com

    [bar]
    hostname = 1.2.3.4
    gateways = foo
    # By running `ssh bar`, you will ssh to `bar` through a `ssh foo`

    [^vm-[0-9]*\.joe\.com$]
    IdentityFile = ~/.ssh/root-joe
    gateways = direct joe.com joe.com/bar
    # Will try to ssh without proxy, then fallback to joe.com proxy, then
    # fallback to joe.com through bar
    DynamicForward = 43217
    LocalForward = 1723 localhost:1723
    ForwardX11 = yes

    [default]
    Includes = ~/.ssh/config.advanced2 ~/.ssh/config.advanced3 ~/.ssh/configs/*/host.config
    # The `Includes` directive must be in the `[default]` section
    Port = 22
    User = root
    IdentityFile = ~/.ssh/id_rsa
    ProxyCommand = assh connect %h --port=%p
    Gateways = direct
    PubkeyAuthentication = yes
    VisualHostKey = yes
    ControlMaster = auto
    ControlPath = ~/.ssh/controlmaster/%h-%p-%r.sock
    EscapeChar = ~

Installation
------------

Download the latest build

..
code:: console

    $ curl -L https://github.com/moul/advanced-ssh-config/releases/download/v1.1.0/assh-`uname -s`-`uname -m` > /usr/local/bin/assh
    $ chmod +x /usr/local/bin/assh

Using Pypi

.. code:: console

    $ pip install advanced-ssh-config

Or by cloning

.. code:: console

    $ git clone https://github.com/moul/advanced-ssh-config
    $ cd advanced-ssh-config
    $ make install

First run
---------

Automatically generate a new ``.ssh/config.advanced`` based on your current ``.ssh/config`` file:

.. code:: console

    $ assh init > ~/.ssh/config.advanced
    $ assh build -f

Tests
-----

.. code:: console

    $ make test

Docker
------

Build

.. code:: console

    $ docker build -t moul/advanced-ssh-config .

Run

.. code:: console

    $ docker run --rm -i -t moul/advanced-ssh-config

    or

    $ docker run --rm -i -t -v $(pwd)/:/advanced_ssh_config moul/advanced-ssh-config

    or

    $ docker run --rm -i -t moul/advanced-ssh-config python setup.py test

Contributors
------------

- `Christo DeLange <https://github.com/dldinternet>`__

--

© 2009-2015 Manfred Touron - `MIT License <https://github.com/moul/advanced-ssh-config/blob/master/License.txt>`__

.. |Travis| image:: https://img.shields.io/travis/moul/advanced-ssh-config.svg
   :target: https://travis-ci.org/moul/advanced-ssh-config
.. |PyPI version| image:: https://img.shields.io/pypi/v/advanced-ssh-config.svg
   :target: https://pypi.python.org/pypi/advanced-ssh-config/
.. |PyPI downloads| image:: https://img.shields.io/pypi/dm/advanced-ssh-config.svg
   :target:
.. |License| image:: https://img.shields.io/pypi/l/advanced-ssh-config.svg?style=flat
   :target: https://github.com/moul/advanced-ssh-config/blob/develop/LICENSE.md
.. |Requires.io| image:: https://img.shields.io/requires/github/moul/advanced-ssh-config.svg
   :target: https://requires.io/github/moul/advanced-ssh-config/requirements/
.. |Gitter| image:: https://img.shields.io/badge/chat-gitter-ff69b4.svg
   :target: https://gitter.im/moul/advanced-ssh-config
..
|ASSH logo - Advanced SSH Config logo| image:: https://raw.githubusercontent.com/moul/advanced-ssh-config/develop/assets/assh.jpg
   :target: https://github.com/moul/advanced-ssh-config
advanced-ta | This module is a python implementation of the Lorentzian Classification algorithm developed by @jdehorty in Pine Script. The original work can be found here - https://www.tradingview.com/script/WhBzgfDu-Machine-Learning-Lorentzian-Classification/

Usage

At its simplest, you can just do this:

from advanced_ta import LorentzianClassification

..

# df here is the dataframe containing stock data as [['open', 'high', 'low', 'close', 'volume']]. Notice that the column names are in lower case.
lc = LorentzianClassification(df)
lc.dump('output/result.csv')
lc.plot('output/result.jpg')

..

For advanced use, you can do:

from advanced_ta import LorentzianClassification
from ta.volume import money_flow_index as MFI

..

# df here is the dataframe containing stock data as [['open', 'high', 'low', 'close', 'volume']]. Notice that the column names are in lower case.
lc = LorentzianClassification(
    df,
    features=[
        LorentzianClassification.Feature("RSI", 14, 2),  # f1
        LorentzianClassification.Feature("WT", 10, 11),  # f2
        LorentzianClassification.Feature("CCI", 20, 2),  # f3
        LorentzianClassification.Feature("ADX", 20, 2),  # f4
        LorentzianClassification.Feature("RSI", 9, 2),   # f5
        MFI(df['open'], df['high'], df['low'], df['close'], df['volume'], 14)  # f6
    ],
    settings=LorentzianClassification.Settings(
        source='close',
        neighborsCount=8,
        maxBarsBack=2000,
        useDynamicExits=False
    ),
    filterSettings=LorentzianClassification.FilterSettings(
        useVolatilityFilter=True,
        useRegimeFilter=True,
        useAdxFilter=False,
        regimeThreshold=-0.1,
        adxThreshold=20,
        kernelFilter=LorentzianClassification.KernelFilter(
            useKernelSmoothing=False,
            lookbackWindow=8,
            relativeWeight=8.0,
            regressionLevel=25,
            crossoverLag=2
        )
    )
)
lc.dump('output/result.csv')
lc.plot('output/result.jpg')

..

Sample Plot

Generated
Reference From TradingView

Version History

0.1.8: Replaced the dependency on TA-Lib with ta to simplify setup
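The classifier's distinguishing idea is the Lorentzian distance metric, which sums log(1 + |xi - yi|) over the feature dimensions instead of squared differences. A minimal, package-independent sketch of that metric (the exact feature scaling the package applies may differ; this only illustrates the distance itself):

```python
import math

def lorentzian_distance(x, y):
    """Lorentzian distance: sum of log(1 + |xi - yi|) over feature dimensions.

    Compared to Euclidean distance, the logarithm dampens the influence
    of outliers, which is the property the classifier relies on.
    """
    return sum(math.log1p(abs(a - b)) for a, b in zip(x, y))

# Identical feature vectors have distance 0
print(lorentzian_distance([1.0, 2.0], [1.0, 2.0]))  # 0.0

# A large difference in one dimension is dampened: log(1 + 100) = log(101)
print(lorentzian_distance([0.0], [100.0]))
```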
advanced-telegram-bot | Advanced Telegram Bot

Python library containing utils for telegram bots

Authors

Made by SDAL: minish144 & usual-one

Available utils

- Incoming/outcoming messages service
- Locale-dependent data storage
- Logger
- Role system
- State system
- User meta data storage

Dependencies

All the dependencies can be found in the requirements.txt file.

Installation

pip install advanced-telegram-bot

Usage

Just import the class and you're good to go

from advancedbot import TelegramBot
advanced-timedate | Timedate

Advanced date and time management library.

Author: Axelle (LassaInora) VIANDIER
License: GNU General Public License v3.0
Version: 3.0.5

Summary

- Links
- Supported languages
- Timedate functions and variables (set_language)
- For Time and Date (Methods)
- Class Time (Time initialization, Formats)
- Class Date (Date initialization, Methods, Formats)

Links

- Personal GitHub
- GitHub project
- Website project
- Pypi project

Supported languages:

- English (en)
- Mandarin Chinese (ma)
- Hindi (hi)
- Spanish (sp)
- Bengali (be)
- French (fr)
- Russian (ru)
- Portuguese (po)

Timedate functions:

- set_language(): Change the default language of the library. [en, ma, hi, sp, be, fr, ru, po] are accepted.

For Time and Date:

Methods:

- (Property) recommended_format: Return a recommended format for time or date to use with format().
- (Property) copy_time: Return a copy of the current value as a Time instance.
- (Property) copy_date: Return a copy of the current value as a Date instance.
- < / > / <= / >= / == / != comparators: Return the result of the comparison with each comparator.
- int(value): Return the number of seconds since year 0.
- float(value): Return the number of seconds since year 0 with a precision of 24 decimal places.
- str(value): Return the current value in the recommended format.
- repr(value): Return the current value in "YYYY MM DD - hh:mm:ss.mls mcs nns pcs fms ats zps yts" format.
- iter(value) / list(value): Return each sub-value of the current value.
- current value - other value: Subtract the other value from the current value.
- current value + other value: Add the other value to the current value.

Class Time:

Time initialization.

- year: The number of years.
- month: The number of months.
- day: The number of days.
- hour: The number of hours.
- minute: The number of minutes.
- second: The number of seconds.
- milli: The number of milliseconds.
- micro: The number of microseconds.
- nano: The number of nanoseconds.
- pico: The number of picoseconds.
- femto: The number of femtoseconds.
- atto: The number of attoseconds.
- zepto: The number of zeptoseconds.
- yocto: The number of yoctoseconds.

For each value, the default value is 0.

Formats:

- _YYYY_: The years in 4 digits.
- _YY_: The years in 2 digits.
- _Y_: The years.
- _MM_: The months in 2 digits.
- _M_: The months.
- _DD_: The days in 2 digits.
- _D_: The day.
- _hh_: The hours in 2 digits.
- _h_: The hour.
- _mm_: The minutes in 2 digits.
- _m_: The minute.
- _ss_: The seconds in 2 digits.
- _s_: The second.
- _mls_: The milliseconds in 3 digits.
- _mcs_: The microseconds in 3 digits.
- _nns_: The nanoseconds in 3 digits.
- _pcs_: The picoseconds in 3 digits.
- _fms_: The femtoseconds in 3 digits.
- _ats_: The attoseconds in 3 digits.
- _zps_: The zeptoseconds in 3 digits.
- _yts_: The yoctoseconds in 3 digits.
- _en-time_: The time in English format.
- _ma-time_: The time in Mandarin format.
- _hi-time_: The time in Hindi format.
- _sp-time_: The time in Spanish format.
- _be-time_: The time in Bengali format.
- _fr-time_: The time in French format.
- _ru-time_: The time in Russian format.
- _po-time_: The time in Portuguese format.

Class Date:

Date initialization.

- year: The number of years. Default is 400.
- month: The number of months. Default is 1.
- day: The number of days. Default is 1.
- hour: The number of hours.
- minute: The number of minutes.
- second: The number of seconds.
- millisecond: The number of milliseconds.
- microsecond: The number of microseconds.
- nanosecond: The number of nanoseconds.
- pico: The number of picoseconds.
- femto: The number of femtoseconds.
- atto: The number of attoseconds.
- zepto: The number of zeptoseconds.
- yocto: The number of yoctoseconds.
- timestamp: You can ignore all previous values and use a timestamp to initialize the Date.

For each unspecified value, the default value is 0. Year cannot be less than 400.

Methods:

- (Static Method) from_datetime(datetime_): Return a Date created from a datetime value.
- (Class Method) NOW(): Return the current Date.
- (Class Method) its_a_leap_year(year): Return whether year is a leap year.
- (Property) name_month: Return the name of the month in the library language (English by default).
- (Property) name_day: Return the name of the day in the library language (English by default).
- (Property) datetime: Return a datetime with the current value.
- (Property) timestamp: Return a timestamp with the current value.
- (Property) is_a_leap_year: Return whether the current year is a leap year.
- (Property) countdown: Return the remaining time until the date.
- (Property) chrono: Return the time passed since the date.

Formats:

All format codes from Class Time are available, plus:

- _NM_: The name of the month.
- _ND_: The name of the day.
advanced-utils | No description available on PyPI. |
advanced-value-counts | Welcome to advanced-value-counts

advanced-value-counts is a Python package containing the AdvancedValueCounts class that makes use of pandas' .value_counts(), .groupby() and seaborn to easily get a lot of info about the counts of a (categorical) column in a pandas DataFrame. The potential of this package is at its peak when wanting info on the counts of a column after a grouping: df.groupby(groupby_col)[column].value_counts(). Click here to read how to use AdvancedValueCounts. Read this medium article or consult this notebook for an explanation of the added value of this package.

Git repository.

Table of contents:

- Installation for users using PyPi
- Installation for users without PyPi
- Usage
- Installation for contributors

Installation for users using PyPi

pip install advanced-value-counts

If errors surface, please upgrade pip and setuptools

python3 -m pip install --upgrade pip
python3 -m pip install --upgrade setuptools

Installation for users without PyPi

git clone https://github.com/sTomerG/advanced-value-counts.git
cd advanced-value-counts
pip install -e .
# optional but potentially crucial
pip install -r requirements/requirements.txt

To test whether the installation was successful, run in the advanced-value-counts directory (DeprecationWarnings are expected)

pytest

Usage

Please consult here.

Installation for contributors

git clone https://github.com/sTomerG/advanced-value-counts.git
cd advanced-value-counts
python3 -m venv .venv

Activate the virtual environment

Windows: .\.venv\Scripts\activate
Linux / MacOS: source .venv/bin/activate

Install requirements

python -m pip install --upgrade pip
pip install -r requirements/requirements.txt

Test if everything works properly (DeprecationWarnings are expected)

With tox:

tox

Without tox:

pip install -e .
pytest |
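The core pandas pattern the package wraps — counting a categorical column within groups — can be seen with a toy DataFrame and plain pandas (no advanced-value-counts required; the column and group names below are made up for illustration):

```python
import pandas as pd

# Toy data: a categorical column observed within two groups
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b"],
    "color": ["red", "red", "blue", "blue", "blue"],
})

# Counts of `color` per `group` — the df.groupby(...)[col].value_counts()
# pattern that AdvancedValueCounts builds on
counts = df.groupby("group")["color"].value_counts()
print(counts)
```

The result is a Series with a (group, color) MultiIndex, which is exactly the shape that becomes unwieldy to inspect by hand and that the package aims to summarize and plot.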
advancepackage | This is long Description |
advance-touch | No description available on PyPI. |
advantage-air | Advantage Air API Wrapper

Get

Returns the current state of all components.

- async_get()

Set

Change attributes by sending the updated values to the relevant endpoint.

- aircon.async_update_ac(ac: str, data: dict): Update values on the AC system
- aircon.async_update_zone(ac: str, zone: str, data: dict): Update values on a specific zone
- lights.async_update_state(id: str, state: str|bool): Update the state of a light. off|false = off, on|true = on
- lights.async_update_value(id: str, value: int): Update the brightness of a light; assumes 0 is off.
- things.async_update_value(id: str, value: int|bool): Update the value of a thing. 0|false = off, 100|true = on
- *.async_update(data: dict): Directly update with data to the endpoint.

Example

import asyncio
import aiohttp
from advantage_air import advantage_air
async def main():
    async with aiohttp.ClientSession() as session:
        aa = advantage_air("192.168.1.24", port=2025, session=session, retry=5)
        if data := await aa.async_get(1):
            print(data)
        await aa.aircon.async_update_ac("ac1", {"state": "on"})
        await asyncio.gather(
            aa.aircon.async_update_zone("ac1", "z01", {"value": 25}),
            aa.aircon.async_update_zone("ac1", "z02", {"value": 50}),
        )

asyncio.run(main())
advantech-daq-python | advantech_daq_python |
advarchs | Overview

Advarchs is a simple tool for retrieving data from web archives.
It is especially useful if you are working with remote data stored in compressed
spreadsheets or similar formats.

Getting Started

Say you need to perform some data analytics on an Excel spreadsheet that gets refreshed every month and stored in RAR format. You can target that file and convert it to a pandas dataframe with the following procedure:

import pandas as pd
import os
import tempfile
from advarchs import webfilename, extract_web_archive

TEMP_DIR = tempfile.gettempdir()
url = "http://www.site.com/archive.rar"
arch_file_name = webfilename(url)
arch_path = os.path.join(TEMP_DIR, arch_file_name)
xlsx_files = extract_web_archive(url, arch_path, ffilter=['xlsx'])
for xlsx_f in xlsx_files:
    xlsx = pd.ExcelFile(xlsx_f)
    ...

Requirements

Python 3.5+
p7zip

Special note

On CentOS and Ubuntu <= 16.04, the following packages are needed: unrar

Installation

pip install advarchs

Contributing

See CONTRIBUTING

Code of Conduct

This project adheres to the Contributor Covenant 1.2.
By participating, you are advised to adhere to this Code of Conduct in all your
interactions with this project.

License

Apache-2.0
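The webfilename helper used in the example above derives a local file name from the URL. Conceptually it is close to this stdlib-only sketch (the package's real implementation may differ, e.g. by consulting HTTP headers — treat this as an assumption, not advarchs code):

```python
from urllib.parse import urlparse
from posixpath import basename

def webfilename_sketch(url: str) -> str:
    """Guess an archive's local file name from the URL path (illustrative only)."""
    # urlparse splits off query strings and fragments; basename keeps
    # only the last path component, e.g. "archive.rar"
    return basename(urlparse(url).path)

print(webfilename_sketch("http://www.site.com/archive.rar"))  # archive.rar
```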
advbox | AdvBox is an AI model security toolbox developed by Baidu Security Lab and used extensively inside Baidu. It currently has native support for PaddlePaddle, PyTorch, Caffe2, MxNet, Keras and TensorFlow, so developers and security engineers can work with the framework they are already familiar with. AdvBox also supports GraphPipe, which abstracts away the underlying deep learning platform: with just a few commands, users can run black-box attacks against model files produced by PaddlePaddle, PyTorch, Caffe2, MxNet, CNTK, ScikitLearn and TensorFlow.
advbumpversion | ==============ADVbumpversion==============This Fork=========This is a fork (**ADVbumpversion**) of a fork (**bump2version**).The excellent original project that can be found here: https://github.com/peritus/bumpversion.Unfortunately it seems like development has been stuck for some time and no activity has been seen from theauthor.Christian Verkerk has made some Pull Request merges and this project (renamed **bump2version**) can be found here: https://github.com/c4urself/bump2version.I have merged other Push Requests, in particular the ability to have more than one rule for a file,in a new project **ADVbumpversion**. The major differences are:- It is possible to have more than one set of rules (``parse``, ``serialize``, ``search`` and ``replace``) for thesame file- Some parts of the version can be independent of the others: They are not incremented with the other parts. It is inparticular useful when you have a build number that is incremented independently of the version- Several examples of usage- More testing casesLook at ``CHANGELOG.rst`` to see all the changes.**Note**: For compatibility, this project declares ``advbumpversion``, ``bump2version`` and ``bumpversion``. They areidentical. The remaining of this document uses ``bumpversion`` in command-line examples.Introduction============Version-bump your software with a single command!A small command line tool to simplify releasing software by updating allversion strings in your source code by the correct increment. 
Also createscommits and tags:- version formats are highly configurable- works without any VCS, but happily reads tag information from and writescommits and tags to Git and Mercurial if available- just handles text files, so it's not specific to any programming languageInstallation============You can download and install the latest version of this software from the Python package index (PyPI) as follows::pip install --upgrade advbumpversionUsage=====**Note**: I have compiled several usage examples in ``EXAMPLES.rst``.There are two modes of operation: On the command line for single-file operationand using a `configuration file <#configuration>`_ for more complex multi-fileoperations.::bumpversion [options] part [file]``part`` (required)The part of the version to increase, e.g. ``minor``.Valid values include those given in the ``--serialize`` / ``--parse`` option.Example bumping ``0.5.1`` to ``0.6.0``::bumpversion --current-version 0.5.1 minor src/VERSION``[file]``**default: none** (optional)The file that will be modified.If not given, the list of ``[bumpversion:file:…]`` sections from theconfiguration file will be used. If no files are mentioned on theconfiguration file either, are no files will be modified.Example bumping ``1.1.9`` to ``2.0.0``::bumpversion --current-version 1.1.9 major setup.pyConfiguration+++++++++++++All options can optionally be specified in a config file called``.bumpversion.cfg`` so that once you know how ``bumpversion`` needs to beconfigured for one particular software package, you can run it withoutspecifying options later. 
You should add that file to VCS so others can alsobump versions.Options on the command line take precedence over those from the config file,which take precedence over those derived from the environment and then from thedefaults.Example ``.bumpversion.cfg``::[bumpversion]current_version = 0.2.9commit = Truetag = True[bumpversion:file:setup.py]If no ``.bumpversion.cfg`` exists, ``bumpversion`` will also look into``setup.cfg`` for configuration.Global configuration--------------------General configuration is grouped in a ``[bumpversion]`` section.``current_version =``**no default value** (required)The current version of the software package before bumping.Also available as ``--current-version`` (e.g. ``bumpversion --current-version 0.5.1 patch setup.py``)``new_version =``**no default value** (optional)The version of the software package after the increment. If not given will beautomatically determined.Also available as ``--new-version`` (e.g. to go from 0.5.1 directly to0.6.1: ``bumpversion --current-version 0.5.1 --new-version 0.6.1 patchsetup.py``).``tag = (True | False)``**default:** False (`Don't create a tag`)Whether to create a tag, that is the new version, prefixed with the character"``v``". If you are using git, don't forget to ``git-push`` with the``--tags`` flag.Also available on the command line as ``(--tag | --no-tag)``.``sign_tags = (True | False)``**default:** False (`Don't sign tags`)Whether to sign tags.Also available on the command line as ``(--sign-tags | --no-sign-tags)``.``tag_name =``**default:** ``v{new_version}``The name of the tag that will be created. Only valid when using ``--tag`` / ``tag = True``.This is templated using the `Python Format String Syntax<http://docs.python.org/2/library/string.html#format-string-syntax>`_.Available in the template context are ``current_version`` and ``new_version``as well as all environment variables (prefixed with ``$``). You can also usethe variables ``now`` or ``utcnow`` to get a current timestamp. 
Both acceptdatetime formatting (when used like as in ``{now:%d.%m.%Y}``).Also available as ``--tag-name`` (e.g. ``bumpversion --message 'Jenkins Build{$BUILD_NUMBER}: {new_version}' patch``).``tag_message =``**default:** ``Bump version: {current_version} -> {new_version}``The annotation of the tag that will be created. Only valid when using ``--tag`` / ``tag = True``.This is templated using the `Python Format String Syntax<http://docs.python.org/2/library/string.html#format-string-syntax>`_.Available in the template context are ``current_version`` and ``new_version``as well as all environment variables (prefixed with ``$``). You can also usethe variables ``now`` or ``utcnow`` to get a current timestamp. Both acceptdatetime formatting (when used like as in ``{now:%d.%m.%Y}``).Also available as ``--tag-message``.``commit = (True | False)``**default:** ``False`` (`Don't create a commit`)Whether to create a commit using git or Mercurial.Also available as ``(--commit | --no-commit)``.``message =``**default:** ``Bump version: {current_version} -> {new_version}``The commit message to use when creating a commit. Only valid when using ``--commit`` / ``commit = True``.This is templated using the `Python Format String Syntax<http://docs.python.org/2/library/string.html#format-string-syntax>`_.Available in the template context are ``current_version`` and ``new_version``as well as all environment variables (prefixed with ``$``). You can also usethe variables ``now`` or ``utcnow`` to get a current timestamp. Both acceptdatetime formatting (when used like as in ``{now:%d.%m.%Y}``).Also available as ``--message`` (e.g.: ``bumpversion --message'[{now:%Y-%m-%d}] Jenkins Build {$BUILD_NUMBER}: {new_version}' patch``)Part specific configuration---------------------------A version string consists of one or more parts, e.g. the version ``1.0.2``has three parts, separated by a dot (``.``) character. 
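The templated options above (``tag_name =``, ``tag_message =``, ``message =``) are ordinary Python format strings, so you can preview what a template will expand to with nothing but the standard library. Note that ``bumpversion`` itself — not plain ``str.format`` — resolves the ``{$ENVVAR}`` references; this sketch only covers the ``{current_version}``, ``{new_version}`` and ``{now}`` fields documented above:

```python
from datetime import datetime

# Sample template context; bumpversion fills these in for real runs
context = {
    "current_version": "0.5.1",
    "new_version": "0.6.0",
    "now": datetime(2015, 3, 1),
}

# The default commit message template
message = "Bump version: {current_version} -> {new_version}".format(**context)
print(message)  # Bump version: 0.5.1 -> 0.6.0

# `now`/`utcnow` accept datetime format specs, e.g. {now:%d.%m.%Y}
stamp = "[{now:%d.%m.%Y}] release {new_version}".format(**context)
print(stamp)  # [01.03.2015] release 0.6.0
```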
In the defaultconfiguration these parts are named `major`, `minor`, `patch`, however you cancustomize that using the ``parse``/``serialize`` option.By default all parts considered numeric, that is their initial value is ``0``and they are increased as integers. Also, the value ``0`` is considered to beoptional if it's not needed for serialization, i.e. the version ``1.4.0`` isequal to ``1.4`` if ``{major}.{minor}`` is given as a ``serialize`` value.For advanced versioning schemes, non-numeric parts may be desirable (e.g. toidentify `alpha or beta versions<http://en.wikipedia.org/wiki/Software_release_life_cycle#Stages_of_development>`_,to indicate the stage of development, the flavor of the software package ora release name). To do so, you can use a ``[bumpversion:part:…]`` sectioncontaining the part's name (e.g. a part named ``release_name`` is configured ina section called ``[bumpversion:part:release_name]``.The following options are valid inside a part configuration:``values =``**default**: numeric (i.e. ``0``, ``1``, ``2``, …)Explicit list of all values that will be iterated when bumping that specificpart.Example::[bumpversion:part:release_name]values =witty-warthogridiculous-ratmarvelous-mantis``optional_value =``**default**: The first entry in ``values =``.If the value of the part matches this value it is considered optional, i.e.it's representation in a ``--serialize`` possibility is not required.Example::[bumpversion]current_version = 1.alphaparse = (?P<num>\d+)(\.(?P<release>.*))?serialize ={num}.{release}{num}[bumpversion:part:release]optional_value = gammavalues =alphabetagammaHere, ``bumpversion release`` would bump ``1.alpha`` to ``1.beta``. 
Executing``bumpversion release`` again would bump ``1.beta`` to ``1``, because`release` being ``gamma`` is configured optional.``first_value =``**default**: The first entry in ``values =``.When the part is reset, the value will be set to the value specified here.``independent = ``**default**: FalseWhen this value is set to True, the part is not reset when other parts are incremented. Its incrementation isindependent of the other parts. It is in particular useful when you have a build number in your version that isincremented independently of the actual version.File specific configuration---------------------------``[bumpversion:file:…:…]``**Note**: If you want to specify different options (``parse``, ...) for the same file, you can have several sections for the same file.To distinguish these sections, append ``:`` and an identifier (its value has no importance) after the file name::[bumpversion:file.txt:0]parse = ...[bumpversion:file.txt:1]parse = ...``parse =``**default:** ``(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)``Regular expression (using `Python regular expression syntax<http://docs.python.org/2/library/re.html#regular-expression-syntax>`_) onhow to find and parse the version string.Is required to parse all strings produced by ``serialize =``. 
Named matchinggroups ("``(?P<name>...)``") provide values to as the ``part`` argument.Also available as ``--parse````serialize =``**default:** ``{major}.{minor}.{patch}``Template specifying how to serialize the version parts back to a versionstring.This is templated using the `Python Format String Syntax<http://docs.python.org/2/library/string.html#format-string-syntax>`_.Available in the template context are parsed values of the named groupsspecified in ``parse =`` as well as all environment variables (prefixed with``$``).Can be specified multiple times, bumpversion will try the serializationformats beginning with the first and choose the last one where all values canbe represented like this::serialize ={major}.{minor}{major}Given the example above, the new version *1.9* it will be serialized as``1.9``, but the version *2.0* will be serialized as ``2``.Also available as ``--serialize``. Multiple values on the command line aregiven like ``--serialize {major}.{minor} --serialize {major}````search =``**default:** ``{current_version}``Template string how to search for the string to be replaced in the file.Useful if the remotest possibility exists that the current version numbermight be multiple times in the file and you mean to only bump one of theoccurrences. 
Can be multiple lines, templated using `Python Format String Syntax<http://docs.python.org/2/library/string.html#format-string-syntax>`_.``replace =``**default:** ``{new_version}``Template to create the string that will replace the current version number inthe file.Given this ``requirements.txt``::Django>=1.5.6,<1.6MyProject==1.5.6using this ``.bumpversion.cfg`` will ensure only the line containing``MyProject`` will be changed::[bumpversion]current_version = 1.5.6[bumpversion:file:requirements.txt]search = MyProject=={current_version}replace = MyProject=={new_version}Can be multiple lines, templated using `Python Format String Syntax<http://docs.python.org/2/library/string.html#format-string-syntax>`_.Options=======Most of the configuration values above can also be given as an option.Additionally, the following options are available:``--dry-run, -n``Don't touch any files, just pretend. Best used with ``--verbose``.``--allow-dirty``Normally, bumpversion will abort if the working directory is dirty to protectyourself from releasing unversioned files and/or overwriting unsaved changes.Use this option to override this check.``--verbose``Print useful information to stderr``--list``List machine readable information to stdout for consumption by otherprograms.Example output::current_version=0.0.18new_version=0.0.19``-h, --help``Print help and exitUsing bumpversion in a script=============================If you need to use the version generated by bumpversion in a script you can make use ofthe `--list` option, combined with `grep` and `sed`.Say for example that you are using git-flow to manage your project and want to automaticallycreate a release. When you issue `git flow release start` you already need to know thenew version, before applying the change.The standard way to get it in a bash script isbumpversion --dry-run --list <part> | grep <field name> | sed -r s,"^.*=",,where <part> is as usual the part of the version number you are updating. 
You need to specify`--dry-run` to avoid bumpversion actually bumping the version number.For example, if you are updating the minor number and looking for the new version number this becomesbumpversion --dry-run --list minor | grep new_version | sed -r s,"^.*=",,Development===========Development of this happens on GitHub, patches including tests, documentationare very welcome, as well as bug reports! Also please open an issue if thistool does not support every aspect of bumping versions in your developmentworkflow, as it is intended to be very versatile.How to release bumpversion itself+++++++++++++++++++++++++++++++++Execute the following commands::git checkout mastergit pullmake testbumpversion releasepython setup.py sdist bdist_wheel uploadbumpversion --no-tag patchgit push origin master --tagsLicense=======**ADVbumpversion** is licensed under the MIT License - see the LICENSE.rst file for details.. _changelog:Changes=======**v1.2.0**- Add ``independent`` flag for version parts. This part is not reset when other parts are incremented. For example, forbuild numbers- Add EXAMPLES.rst with several test cases- Add new test cases: update version and build date, build number, annotated tags, test cases for almost all cases inEXAMPLES.rst**v1.1.1**- Fix a bug with PR#117: allow multiple config sections per file. Add a test case.**v1.1.0**- Compatibility with Travis CI- Publish on PyPi**v1.0.0**Fork of fork. 
The project is renamed **advbumpversion** to avoid confusion with other forks.

The following pull requests are merged in this project:

- `PR#8 <https://github.com/c4urself/bump2version/pull/8>`_ from @ekoh: Add Python 3.5 and 3.6 to the supported versions
- `PR#117 <https://github.com/peritus/bumpversion/pull/117>`_ from @chadawagner: allow multiple config sections per file
- `PR#136 <https://github.com/peritus/bumpversion/pull/136>`_ from @vadeg: Fix documentation example with 'optional_value'
- `PR#138 <https://github.com/peritus/bumpversion/pull/138>`_ from @smsearcy: Fixes TypeError in Python 3 on Windows
- `PR#157 <https://github.com/peritus/bumpversion/pull/157>`_ from @todd-dembrey: Fix verbose tags

I consider this project stable enough to raise the version to 1.0.0.

**v0.5.7**

- Added support for signing tags (git tag -s), thanks: @Californian (`#6 <https://github.com/c4urself/bump2version/pull/6>`_)

**v0.5.6**

- Added compatibility with ``bumpversion`` by making the script install as ``bumpversion`` as well, thanks: @the-allanc (`#2 <https://github.com/c4urself/bump2version/pull/2>`_)

**v0.5.5**

- Added support for annotated tags, thanks: @ekohl @gvangool (`#58 <https://github.com/peritus/bumpversion/pull/58>`_)

**v0.5.4**

- Renamed to bump2version to ensure no conflicts with the original package

**v0.5.3**

- Fix bug where ``--new-version`` value was not used when config was present (thanks @cscetbon @ecordell, `#60 <https://github.com/peritus/bumpversion/pull/60>`_)
- Preserve case of keys in the config file (thanks @theskumar, `#75 <https://github.com/peritus/bumpversion/pull/75>`_)
- Windows CRLF improvements (thanks @thebjorn)

**v0.5.1**

- Document file specific options ``search =`` and ``replace =`` (introduced in 0.5.0)
- Fix parsing individual labels from ``serialize =`` config even if there are characters after the last label (thanks @mskrajnowski, `#56 <https://github.com/peritus/bumpversion/pull/56>`_)
- Fix: Don't crash in git repositories that have tags that contain hyphens (`#51 <https://github.com/peritus/bumpversion/pull/51>`_) (`#52 <https://github.com/peritus/bumpversion/pull/52>`_)
- Fix: Log the actual content of the config file, not what ConfigParser prints after reading it
- Fix: Support multiline values in ``search =``
- Also load configuration from ``setup.cfg`` (thanks @t-8ch, `#57 <https://github.com/peritus/bumpversion/pull/57>`_)

**v0.5.0**

This is a major one, containing two larger features that require some changes in the configuration format. This release is fully backwards compatible with *v0.4.1*, however it deprecates two uses that will be removed in a future version.

- New feature: `Part specific configuration <#part-specific-configuration>`_
- New feature: `File specific configuration <#file-specific-configuration>`_
- New feature: the parse option can now span multiple lines (allows commenting complex regular expressions; see `re.VERBOSE in the Python documentation <https://docs.python.org/library/re.html#re.VERBOSE>`_ for details, and `this testcase <https://github.com/peritus/bumpversion/blob/165e5d8bd308e9b7a1a6d17dba8aec9603f2d063/tests.py#L1202-L1211>`_ as an example)
- New feature: ``--allow-dirty`` (`#42 <https://github.com/peritus/bumpversion/pull/42>`_)
- Fix: Save the files in binary mode to avoid mutating newlines (thanks @jaraco, `#45 <https://github.com/peritus/bumpversion/pull/45>`_)
- License: bumpversion is now licensed under the MIT License (`#47 <https://github.com/peritus/bumpversion/issues/47>`_)
- Deprecate multiple files on the command line (use a `configuration file <#configuration>`_ instead, or invoke ``bumpversion`` multiple times)
- Deprecate ``files =`` configuration (use `file specific configuration <#file-specific-configuration>`_ instead)

**v0.4.1**

- Add ``--list`` option (`#39 <https://github.com/peritus/bumpversion/issues/39>`_)
- Use temporary files for handing over commit/tag messages to git/hg (`#36 <https://github.com/peritus/bumpversion/issues/36>`_)
- Fix: don't encode stdout as utf-8 on py3 (`#40 <https://github.com/peritus/bumpversion/issues/40>`_)
- Fix: logging of content of config file was wrong

**v0.4.0**

- Add ``--verbose`` option (`#21 <https://github.com/peritus/bumpversion/issues/21>`_ `#30 <https://github.com/peritus/bumpversion/issues/30>`_)
- Allow option ``--serialize`` multiple times

**v0.3.8**

- Fix: ``--parse``/``--serialize`` didn't work from cfg (`#34 <https://github.com/peritus/bumpversion/issues/34>`_)

**v0.3.7**

- Don't fail if git or hg is not installed (thanks @keimlink)
- "files" option is now optional (`#16 <https://github.com/peritus/bumpversion/issues/16>`_)
- Fix bug related to dirty work dir (`#28 <https://github.com/peritus/bumpversion/issues/28>`_)

**v0.3.6**

- Fix ``--tag`` default (thanks @keimlink)

**v0.3.5**

- Add {now} and {utcnow} to context
- Use correct file encoding when writing to the config file. NOTE: If you are using Python 2 and want to use UTF-8 encoded characters in your config file, you need to update ConfigParser, e.g. using 'pip install -U configparser'
- Leave current_version in config even if available from vcs tags (was confusing)
- Print own version number in usage
- Allow bumping parts that contain non-numerics
- Various fixes regarding file encoding

**v0.3.4**

- Bugfix: tag_name and message in .bumpversion.cfg didn't have an effect (`#9 <https://github.com/peritus/bumpversion/issues/9>`_)

**v0.3.3**

- Add ``--tag-name`` option
- Now works on Python 3.2, 3.3 and PyPy

**v0.3.2**

- Bugfix: Read only tags from ``git describe`` that look like versions

**v0.3.1**

- Bugfix: ``--help`` in git workdir raising AssertionError
- Bugfix: fail earlier if one of the files does not exist
- Bugfix: ``commit = True`` / ``tag = True`` in .bumpversion.cfg had no effect

**v0.3.0**

- **BREAKING CHANGE** The ``--bump`` argument was removed; this is now the first positional argument. If you used ``bumpversion --bump major`` before, you can use ``bumpversion major`` now. If you used ``bumpversion`` without arguments before, you now need to specify the part (previous default was ``patch``), as in ``bumpversion patch``.

**v0.2.2**

- Add ``--no-commit``, ``--no-tag``

**v0.2.1**

- If available, use git to learn about current version

**v0.2.0**

- Mercurial support

**v0.1.1**

- Only create a tag when it's requested (thanks @gvangool)

**v0.1.0**

- Initial public version
advCounter | Multi-feature Counter with features like sorting and nested counting
advdef01 | long description |
advego-antiplagiat-api | advego-antiplagiat-api

Description

A library for working with the anti-plagiarism service from advego.ru. The library is not official; errors may appear when the advego services are updated.

Documentation

- API documentation
- How text uniqueness is calculated

Requirements

Python 3.8+

Installation

$ pip install advego-antiplagiat-api
$ pipenv install advego-antiplagiat-api

Example usage

from antiplagiat import Antiplagiat
import os
import time

TOKEN = os.getenv('ADVEGO_TOKEN')
api = Antiplagiat(TOKEN)

with open('example.txt', 'r') as fp:
    text = fp.read()

result = api.unique_text_add(text)
key = result['key']

while True:
    # give the check some time to run
    time.sleep(200)
    result = api.unique_check(key)
    if result['status'] == 'done':
        print('Done!')
        # do something with the report
        break
    elif result['status'] == 'error':
        print(f'Error: {result}')
        break
    elif result['status'] == 'not found':
        print('Not found!')
        break
    else:
        print('In progress...')

Implemented methods

unique_text_add(text, title=None, ignore_rules=None)

Adds a text for a uniqueness check.

Parameters:

- text - the text to check. For correct operation the text must be UTF-8 encoded.
- title - (optional) the name of the check.
- ignore_rules - (optional) a list of rules by which sites will be ignored during the check.

Available rules:

- "u:<url>" - the uniqueness check will ignore the given url;
- "b:<domain>" - the uniqueness check will ignore all urls starting with domain;
- "r:<regex>" - the uniqueness check will ignore all urls matching the given regular expression. If the regular expression contains a backslash \ or double quotes "", they must be escaped.

The helper functions from the helpers.py module can also be used to define the rules.

from antiplagiat import Antiplagiat
from antiplagiat.helpers import url_rule, domain_rule, regex_rule
TOKEN = 'token'  # your token
api = Antiplagiat(TOKEN)
text = """
Python — высокоуровневый язык программирования общего назначения,
ориентированный на повышение производительности разработчика и читаемости кода.
Синтаксис ядра Python минималистичен.
В то же время стандартная библиотека включает большой объём полезных функций.
"""
ignore_rules = [
domain_rule('ru.wikipedia.org'),
url_rule('https://ru.wikipedia.org/wiki/Python'),
regex_rule('.*wikipedia\\.org')
]
result = api.unique_text_add(text, ignore_rules=ignore_rules)
key = result['key']

If the text was successfully added for checking, the method returns the dictionary {'key': NNN}, where NNN is the number of the created check. In case of an error an exception is raised, see standard exceptions.

unique_check(key, agent=None, report_json=1, get_text=False)

Returns the state of the check and the report, if the check is finished.

Parameters:

- key - the check identifier received when the text was added.
- get_text - if set, the checked text is returned along with the report.
- agent - the check type; specify it to get the result of checking a work or an article. For a plain text check there is no need to specify agent.
- report_json - report format; the value 1 is recommended.

The following responses are possible:

- {"msg": "", "status": "in progress"} - the check is running.
- {"report": {...}, "status": "done", "text": "..."} - the check is finished.
- {"msg": "Error message", "status": "error_code"} - the check finished with an error, where "error_code" is the error code.
- {"msg": "", "status": "not found"} - no check with the given key was found.

unique_recheck(key)

Starts a new check of a previously added text, removing the previous checks from the queue.

Parameters:

- key - the check identifier received when the text was added.

On success returns 1. In case of an error an exception is raised, see standard exceptions.

unique_get_text(key)

Returns the text being checked.

Parameters:

- key - the check identifier received when the text was added.

On a successful request returns a dictionary containing the checked text: {"text": "..."}. In case of an error an exception is raised, see standard exceptions.

Report

The format of the returned report:

{
"status": "done",
"report": {
"layers_by_domain": [
{
"rewrite": 33,
"equality": 19,
"layers": [
{
"equality": 19,
"rewrite": 33,
"uri": "https://site/",
"words": [
7,
30,
31,
32
],
"shingles": [
31,
32,
33,
34,
35,
36,
37,
38
]
},
],
}
]
"len": 1050,
"bad_words": [],
"equal_words": [
0,
1,
3,
5,
7,
15,
19,
20,
22,
24,
25,
27,
28,
30
],
"word_count": 154,
"lang": "russian",
"error_pages": 0,
"rewrite": 82,
"progress": 100,
"text_fragments": [
"",
"Слово1",
" ",
"Слово2",
" ",
"Слово3",
" ",
"Слово4",
" ",
"Слово5",
". "
],
"captchas": 0,
"found_pages": 11,
"checked_pages": 48,
"equal_shingles": [
31,
32,
36,
37,
38,
40
],
"checked_phrases": 8,
},
}

Key:

- layers_by_domain - found pages with matches, grouped by domain (if several pages were found on one site),
- layers - found pages as a flat list,
- equality - the number of phrase matches found in the given source (uri), percent,
- rewrite - the number of word matches found in the given source (uri), percent,
- uri - the address of the page where matches were found,
- words - words included in the found word matches (see text_fragments),
- shingles - words included in the found phrase matches (see text_fragments),
- len - text length in characters including spaces,
- bad_words - words with substituted characters,
- equal_words - same as words, but for the whole text,
- equal_shingles - same as shingles, but for the whole text,
- word_count - the number of words in the checked text,
- text_fragments - text fragments for reconstructing word and phrase matches. The ordinal number of a fragment is computed by the formula 2n + 1, where n is the number given in the corresponding words, shingles, equal_words or equal_shingles section.

For convenient work with the report you can use the helper class AdvanceReport. Its attributes correspond to the keys of the report dictionary, but unlike the report received from the anti-plagiarism service, values such as words, shingles, equal_words etc. contain the words themselves rather than the word numbers in the text. Exception: the len key corresponds to the length attribute. AdvanceReport also provides the uniqueness and originality attributes, corresponding to the uniqueness and originality values of the text; for details see how text uniqueness is calculated.

AdvanceReport methods

words_by_numbers(numbers)

Returns words by their numbers in the text.

Parameters:

- numbers - a list of word numbers.

save_as_json(file_path, indent=4)

Saves the report as JSON. The dictionary passed at initialization will be saved.

Parameters:

- file_path - path to the file.
- indent - indent size.

Example:

from antiplagiat import Antiplagiat, AdvanceReport
import os

TOKEN = os.getenv('ADVEGO_TOKEN')
api = Antiplagiat(TOKEN)

text = """some text"""
# ... send the text for checking and get the key

result = api.unique_check(key)
adv_report = AdvanceReport(result.get('report'), text)

print(f'Text uniqueness {adv_report.uniqueness}/{adv_report.originality}')
print('Found sources:')
for domain in adv_report.layers_by_domain:
    for layer in domain.layers:
        print(layer.uri)

Standard exceptions

- APIException - the common exception for errors when querying the anti-plagiarism service. All other exceptions inherit from it.
- CharAccountError - not enough characters in the account. Error code -1.
- AccountError - not enough funds in the account. Error code -2.
- DatabaseError - database connection error. Error code -5.
- TextKeyError - an invalid key was received. Error code -10.
- TokenError - token authorization error. Error code -11.
- TextError - error while validating the text field. Error code -13.
- TitleError - error while validating the title field. Error code -14.
- AddCheckError - error adding a work. Error code -17.
- TextNotFoundError - text not found. Error code -21.
- NotEnoughSymbolsError - not enough characters in the account, the minimum is 100,000. Error code -67.
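The 2n + 1 fragment-index scheme described above can be sketched in plain Python (a minimal illustration of the idea, not the library's actual implementation; the sample fragments are made up):

```python
def words_by_numbers(text_fragments, numbers):
    # Word number n lives at fragment index 2*n + 1; the even indices
    # hold the separators (spaces, punctuation) between words.
    return [text_fragments[2 * n + 1] for n in numbers]

fragments = ["", "Word1", " ", "Word2", " ", "Word3", ". "]
print(words_by_numbers(fragments, [0, 2]))  # ['Word1', 'Word3']
```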
advenced-requests | Failed to fetch description. HTTP Status Code: 404 |
advene | Annotate DVds, Exchange on the NEt

The Advene (Annotate DVd, Exchange on the NEt) project is aimed
towards communities exchanging discourses (analysis, studies) about
audiovisual documents (e.g. movies) in DVD format. This requires that
audiovisual content and hypertext facilities be integrated, thanks to
annotations providing explicit structures on audiovisual streams, upon
which hypervideo documents can be engineered.
.
The cross-platform Advene application allows users to easily
create comments and analyses of video documents, through the
definition of time-aligned annotations and their mobilisation
into automatically-generated or user-written comment views (HTML
documents). Annotations can also be used to modify the rendition
of the audiovisual document, thus providing virtual montage,
captioning, navigation… capabilities. Users can exchange their
comments/analyses in the form of Advene packages, independently from
the video itself.
.
The Advene framework provides models and tools allowing to design and reuse
annotations schemas; annotate video streams according to these schemas;
generate and create Stream-Time Based (mainly video-centred) or User-Time
Based (mainly text-centred) visualisations of the annotations. Schemas
(annotation- and relation-types), annotations and relations, queries and
views can be clustered and shared in units called packages. Hypervideo
documents are generated when needed, both from packages (for annotation and
view description) and DVDs (audiovisual streams). |
advent | Advent

Advent is a Python package that contains useful functions for Advent of Code, e.g. getting puzzle input or getting all integers present in a string.

Installation

Use the package manager pip to install advent.

pip install advent

Setup

To use this tool you need to create a .env file in your project directory and place there your session ID.

# .env file
SESSION_ID="your-session-id"

It's stored in a cookie called session. You can find it by entering Inspect Mode in your browser, Application -> Cookies. Check out the picture below:

Usage

from advent.functions import get_puzzle_input, get_ints

# returns "B X\nB Y\nA Y\nB Y\n..."
get_puzzle_input(2022, 22)

# returns [2, 3]
list(get_ints("Point x=2, y=3"))

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

License

MIT
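A helper like get_ints is easy to sketch with the standard library (this is an illustrative reimplementation under stated assumptions, not the advent package's actual code):

```python
import re


def get_ints(text):
    # Yield every (optionally signed) integer that appears in the string.
    for match in re.finditer(r"-?\d+", text):
        yield int(match.group())


print(list(get_ints("Point x=2, y=3")))  # [2, 3]
```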
advent-cli | 🎄 advent-cli

advent-cli is a command-line tool for interacting with Advent of Code, specifically geared toward writing solutions in Python. It can be used to view puzzle prompts, download input, submit solutions, and view personal or private stats and leaderboards.

Installation

pip install advent-cli

Setup

Before you do anything, you'll need to provide advent-cli with a session cookie so it can authenticate as you. To do this, log in to the Advent of Code website and grab the cookie named session from your browser's inspect element tool. Store it in an environment variable on your machine named ADVENT_SESSION_COOKIE. A fresh session cookie is good for about a month, after which you'll need to repeat these steps.

A full list of configuration options can be found here.

Usage

advent-cli can be invoked using the advent command, or python -m advent_cli.

Download a question

$ advent get YYYY/DD

This will create the directory YYYY/DD (e.g. 2021/01) inside the current working directory. Inside, you'll find part 1 of the puzzle prompt in prompt.md, your puzzle input in input.txt, and a generated solution template in solution.py. More about that here.

Test a solution

$ advent test YYYY/DD

This will run the solution file in the directory YYYY/DD and print the output without actually submitting. Use this to debug or check for correctness. Optional flags:

- -e, --example: Test the solution using example_input.txt. This is an empty file that gets created when you run advent get where you can manually store the example input from the puzzle prompt. Useful for checking solutions for correctness before submitting.
- -f, --solution-file: Test a solution file other than solution.py (e.g. -f solution2 to run solution2.py). This will assume you already have a working solution in solution.py and check the new file's output against it.
Useful for testing alternate solutions after you've already submitted since you cannot re-submit.

Submit answers

$ advent submit YYYY/DD

This will run the solution file in the directory YYYY/DD and automatically attempt to submit the computed answers for that day. After implementing part 1, run this command to submit part 1 and (if correct) append the prompt for part 2 to prompt.md. Run again after implementing part 2 to submit part 2. Optional flags:

- -f, --solution-file: Submit using a solution file other than solution.py (e.g. -f solution2 to run solution2.py). This can only be done if a correct answer hasn't already been submitted.

Check personal stats

$ advent stats [YYYY]

This will print out your progress for the year YYYY and output the table found on adventofcode.com/{YYYY}/leaderboard/self with your time, rank, and score for each day and part. If year is not specified, defaults to the current year.

Check private leaderboards

$ advent stats [YYYY] --private

This will print out each of the private leaderboards given in ADVENT_PRIV_BOARDS. Also works with -p.

Countdown to puzzle unlock

$ advent countdown YYYY/DD

Displays a countdown until the given puzzle unlocks. Can be chained with get to auto-download files once the countdown finishes.

Solution structure

advent-cli expects the following directory structure (example):

2020/
└─ 01/
└─ example_input.txt
└─ input.txt
└─ prompt.md
└─ solution.py
└─ [alternate solution files]
└─ 02/
└─ ...
└─ ...
2021/
└─ 01/
└─ ...
└─ ...

The solution.py file will look like this when first generated:

## advent of code {year}
## https://adventofcode.com/{year}
## day {day}

def parse_input(lines):
    pass

def part1(data):
    pass

def part2(data):
    pass

When the solution is run, the input will be read from input.txt and automatically passed to parse_input as lines, an array of strings where each string is a line from the input with newline characters removed. You should implement parse_input to return your parsed input or inputs, which will then be passed to part1 and part2. If parse_input returns a tuple, part1 and part2 will be expecting multiple parameters that map to those returned values. The parameter names can be changed to your liking. The only constraint is that part1 and part2 must have the same number of parameters.

If part2 is left unmodified or otherwise returns None, it will be considered unsolved and part1 will be run and submitted. If both functions are implemented, part2 will be submitted.

Configuration

The following environment variables can be set to change the default config:

- ADVENT_SESSION_COOKIE: Advent of Code session cookie for authentication. (required)
- ADVENT_PRIV_BOARDS: Comma-separated list of private leaderboard IDs.
- ADVENT_DISABLE_TERMCOLOR: Set to 1 to permanently disable coloring terminal output.
- ADVENT_MARKDOWN_EM: Method for converting <em> tags inside code blocks. See below for context and options.

ADVENT_MARKDOWN_EM options

By default, <em>emphasized text</em> inside code blocks will be converted to markdown format, i.e. *emphasized text*, but with AoC puzzle prompts this can often mess up the formatting. This option can be set to a couple of different things to change this behavior:

- ib: Preserve the <pre><code> tags for code blocks rather than convert them to markdown format and render <em> tags as <i><b>. This can make rendered markdown more readable, although it makes the plaintext less readable if you aren't rendering the markdown.
- mark: Same as above, but replace <em> tags with <mark>.
This makes the emphasis even more clear than <i><b>, but not all markdown renderers support it.

- none: Ignore <em> tags and do nothing with their contents. This will preserve plaintext formatting but also hinder the usefulness of the emphasis.
- default: Default behavior (convert <em> to *).

Changelog

See Releases.

Credits

This started out as a simple script which was inspired by Hazel and haskal.

License

advent-cli is distributed under the GNU GPL-3.0 License.
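As an illustration of the generated template in use, here is a filled-in solution.py for a made-up puzzle (the puzzle itself is hypothetical; the template shape comes from the text above). Because parse_input returns a tuple, part1 and part2 each take two parameters:

```python
## A filled-in solution.py for a hypothetical puzzle: the input is one
## integer per line; part 1 sums them, part 2 counts the even ones.

def parse_input(lines):
    numbers = [int(line) for line in lines]
    return numbers, len(numbers)

def part1(numbers, count):
    return sum(numbers)

def part2(numbers, count):
    return sum(1 for n in numbers if n % 2 == 0)

print(part1(*parse_input(["1", "2", "3"])))  # 6
print(part2(*parse_input(["1", "2", "3"])))  # 1
```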
adventkit | Adventkit -- Advent of Code in PythonThese are my solutions in Python to some of theAdvent of Code
puzzles. My goal is to make the code readable
while also making it reasonably fast.

Installation

You can install Adventkit using pip:

pip install adventkit

Alternatively, after cloning the Adventkit repository, the code can be
run as is.

Requirements

- Python 3.6.1+
- Puzzle input files placed in the right locations: the input for day 1 of Advent of Code 2019 is read from input/year2019/day01.txt (relative to the current working directory).
- Optional, to run src/run.sh after cloning the repository:
  - A POSIX-compliant shell (/bin/sh)
  - A shell command python3 to run Python 3.6.1+ (any implementation)
  - If extra speed is desired on selected puzzles: a shell command pypy3 to invoke PyPy 3.6+

How to run

After installing Adventkit, you can run a single day's solver like this:

adventkit 2019 10

Cloning the repository gives you more options:

- Run a single day's solver: src/run.sh 2019 10 (or python3 src/run.py 2019 10)
- Run all solvers: src/run.sh
- Run all solvers from one year: src/run.sh 2019

When invoked with the optional argument --time, src/run.sh prints
the execution time for each day. If the commandpypy3invokes a
suitable version of PyPy,src/run.shuses it for any solvers that are
expected to run faster under PyPy than CPython.

Testing

To run the tests, first install pytest and pytest-subtests, as well
as your local copy of adventkit. Then run the command pytest. |
adventlibs-SNOLDINATOR | AdventlibsThis package contains some convenience functions for solving Advent of Code challenges.https://adventofcode.com.NOTE: This package is in no way related to, endorsed by, or quite frankly, known by the creator of Advent of Code. |
adventocr | Advent of Code Optical Character Recognition (adventocr)Some Advent of Code puzzles provide visual outputs which must be
converted into text. This package is a helper to automate this task.Installationpip install adventocrUsageAn example puzzle output may look something likeword=['#','#','#','#',' ','#','#','#','#',' ','#',' ',' ',' ',' ','#',' ',' ',' ',' ','#','#','#',' ',' ','#','#','#',' ',' ','#',' ',' ',' ',' ','#',' ',' ',' ',' ','#',' ',' ',' ',' ','#',' ',' ',' ',' ','#','#','#','#',' ','#',' ',' ',' ',' ']When tidied and printed, this appears to be#### ####
# #
### ###
# #
# #
#### #

This process can be automated through:

import adventocr

parsed = adventocr.word(word)
parsed
'EF'
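The "tidying" step above — turning the flat cell list into printable rows — can be done with a short helper. This is a sketch assuming 10 cells per row as in the example; it is not part of adventocr's API:

```python
def render(cells, width=10):
    # Group the flat list of '#'/' ' cells into fixed-width rows
    # and join them into one printable block.
    return "\n".join(
        "".join(cells[i:i + width]) for i in range(0, len(cells), width)
    )

print(render(['#', '#', ' ', ' ', '#', '#'], width=3))
```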
adventofcode | Helper utilities for solving Advent of Code puzzles.

- No copy-pasting puzzle inputs into files.
- No need to use low-level file APIs to read your inputs.
- Performance reports for example inputs and puzzle inputs.
- Submit the answer immediately when your code returns the result 🏎️

Usage

Install the package

Install the package with pip:

pip install adventofcode

Set your session cookie

Add the adventofcode.com session cookie value to your env:

export AOC_SESSION="..."

Alternatively, you can save your AOC_SESSION="******" value in a .env file.

[!NOTE]
Setting AOC_SESSION will allow you to get your personal puzzle output (aoc.get_input()) and submit your answers with aoc.submit_p1() and aoc.submit_p2().

Use a template to solve puzzles

I use the following template to start solving puzzles, see my examples in my repo for 2023.

from adventofcode import AoC

def part1(inp):
    return None

def part2(inp):
    return None

aoc = AoC(part_1=part1, part_2=part2)

inp = """sample input"""

# Run your part1 function with sample input and assert the expected result:
aoc.assert_p1(inp, 42)
# Run your part1 function on puzzle input and submit the answer returned:
aoc.submit_p1()
# Run your part2 function with sample input and assert the expected result:
aoc.assert_p2(inp, 6 * 7)
# Run your part2 function on puzzle input and submit the answer returned:
aoc.submit_p2()

[!NOTE]
All submissions and fetched results are cached locally in the .cache.db file so that we don't spam the AoC servers or resubmit the same answer multiple times.

Or build your workflow using the AoC class

from adventofcode import AoC

aoc = AoC()  # defaults to current year and parses the day from the filename (e.g. 01.py will be day 1)

aoc.print_p1()  # prints the first part of the puzzle
inp = aoc.get_input()  # returns the input as a string
# solve the puzzle here...
aoc.submit_p1('part 1 answer')  # submits the answer to the first part of the puzzle

aoc.print_p2()  # prints the second part of the puzzle
# solve the puzzle here...
aoc.submit_p2('part 2 answer')  # submits the answer to the second part of the puzzle
advent-of-code | Solutions to Advent of Code implemented in Rust and exposed to Python using PyO3.

Usage as a library

Add dependency:

pip install --upgrade advent-of-code

The advent_of_code package exports a single solve function with the following signature:

def solve(year: int, day: int, part: int, input: str) -> str

Examples:

from advent_of_code import solve

assert solve(2019, 1, 1, "14") == "2"
assert solve(2019, 3, 2, "R8,U5,L5,D3\nU7,R6,D4,L4") == "30"

Usage as a command line tool
$echo14|advent-of-code-py2019112 |
advent-of_code_2017_day_1 | No description available on PyPI. |
advent-of-code-data | Speedhackers, get your puzzle data with a single import statement:

from aocd import data

If that sounds too magical, use a simple function call to return your data in a string:

>>> from aocd import get_data
>>> get_data(day=24, year=2015)
'1\n2\n3\n7\n11\n13\n17\n19\n23\n31...

If you'd just like to print or keep your own raw input files, there's a script for that:

aocd > input.txt          # saves today's data
aocd 13 2018 > day13.txt  # save some other day's data

Note that aocd will cache puzzle inputs and answers (including incorrect guesses) clientside, to save unnecessary requests to the server.

New in version 2.0.0: Get the example data (and corresponding answers). From 2022 day 5 there was:

$ aocd 2022 5 --example

--- Day 5: Supply Stacks ---
https://adventofcode.com/2022/day/5

------------------------------- Example data 1/1 -------------------------------
    [D]
[N] [C]
[Z] [M] [P]
 1   2   3

move 1 from 2 to 1
move 3 from 1 to 3
move 2 from 2 to 1
move 1 from 1 to 2
--------------------------------------------------------------------------------
answer_a: CMZ
answer_b: MCD
--------------------------------------------------------------------------------

How does scraping the examples work? Check aocd-example-parser for the gory details.

Quickstart

Install with pip

pip install advent-of-code-data

If you want to use this within a Jupyter notebook, there are some extra deps:

pip install 'advent-of-code-data[nb]'

Puzzle inputs differ by user. So export your session ID, for example:

export AOC_SESSION=cafef00db01dfaceba5eba11deadbeef

Note: Windows users should use set instead of export here.

The session ID is a cookie which is set when you login to AoC. You can find it
with your browser inspector. If you’re hacking on AoC at all you probably already
know these kinds of tricks, but if you need help with that part then you can look here.

Note: If you don't like the env var, you could also keep your token(s) in files.
By default the location is ~/.config/aocd/token. Set the AOCD_DIR environment
variable to some existing directory if you wish to use another location to store token(s).

New in version 0.9.0. There's a utility script aocd-token which attempts to
find session tokens from your browser’s cookie storage. This feature is experimental
and requires you to additionally install the package browser-cookie3. Only Chrome
and Firefox browsers are currently supported. On macOS, you may get an authentication
dialog requesting permission, since Python is attempting to read browser storage files.
This is expected, the script is actually scraping those private files to access AoC
session token(s).

If this utility script was able to locate your token, you can save it to file with:

$ aocd-token > ~/.config/aocd/token

Automated submission

New in version 0.4.0. Basic use:

from aocd import submit
submit(my_answer, part="a", day=25, year=2017)

Note that the same filename introspection of year/day also works for automated
submission. There’s also introspection of the “level”, i.e. part a or part b,
aocd can automatically determine if you have already completed part a or not
and submit your answer for the correct part accordingly. In this case, just use:

from aocd import submit
submit(my_answer)

The response message from AoC will be printed in the terminal. If you gave
the right answer, then the puzzle will be refreshed in your web browser
(so you can read the instructions for the next part, for example).

Proceed with caution! If you submit wrong guesses, your user WILL get rate-limited by Eric, so don't call submit until you're fairly confident
you have a correct answer!

New in version 2.0.0: Prevent submission of an answer when it is certain the value
is incorrect. For example, if the server previously told you that your answer “1234”
was too high, then aocd will remember this info and prevent you from subsequently
submitting an even higher value such as "1300".

Models

New in version 0.8.0. There are classes User and Puzzle found in the submodule aocd.models.
Input data is via regular attribute access. Example usage:

>>> from aocd.models import Puzzle
>>> puzzle = Puzzle(year=2017, day=20)
>>> puzzle
<Puzzle(2017, 20) at 0x107322978 - Particle Swarm>
>>> puzzle.input_data
'p=<-1027,-979,-188>, v=<7,60,66>, a=<9,1,-7>\np=<-1846,-1539,-1147>, v=<88,145,67>, a=<6,-5,2> ...

Submitting answers is also by regular attribute access. Any incorrect answers you submitted are remembered, and aocd will prevent you from attempting to submit the same incorrect value twice:

>>> puzzle.answer_a = 299
That's not the right answer; your answer is too high. If you're stuck, there are some general tips on the about page, or you can ask for hints on the subreddit. Please wait one minute before trying again. (You guessed 299.)
[Return to Day 20]
>>> puzzle.answer_a = 299
aocd will not submit that answer again.
You've previously guessed 299 and the server responded:
That's not the right answer; your answer is too high. If you're stuck, there are some general tips on the about page, or you can ask for hints on the subreddit. Please wait one minute before trying again. (You guessed 299.)
[Return to Day 20]

Your own solutions can be executed by writing and using an entry-point into your code, registered in the group "adventofcode.user". Your entry-point should resolve to a callable, and it will be called with three keyword arguments: year, day, and data. For example, my entry-point is called "wim" and running against my code (after pip install advent-of-code-wim) would be like this:

>>> puzzle = Puzzle(year=2018, day=10)
>>> puzzle.solve_for("wim")
('XLZAKBGZ', '10656')

If you've never written a plugin before, see https://entrypoints.readthedocs.io/ for more info about plugin systems based on Python entry-points.
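A minimal plugin can be sketched from the calling convention described above (only the keyword-argument interface — year, day, data — comes from the docs; the module name and entry-point registration shown here are hypothetical). It solves 2015 day 1, where the answer is the final floor reached by counting '(' up and ')' down:

```python
# my_plugins.py -- a hypothetical module; it would be registered in the
# "adventofcode.user" entry-point group, e.g. in pyproject.toml:
#   [project.entry-points."adventofcode.user"]
#   myname = "my_plugins:solve"

def solve(year, day, data):
    # aocd calls the plugin with keyword arguments year, day and data,
    # and expects the answers for both parts back.
    if (year, day) == (2015, 1):
        part_a = data.count("(") - data.count(")")
        return str(part_a), ""
    raise NotImplementedError(f"no solver for {year} day {day}")

print(solve(year=2015, day=1, data="(()(()("))  # ('3', '')
```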
Using aocd, you can collect a few different auth tokens for each of your accounts (github/google/reddit/twitter) and verify your answers across multiple datasets.To see an example of how to setup the entry-point for your code, look atadvent-of-code-samplefor some inspiration. After dumping a bunch of session tokens into~/.config/aocd/tokens.jsonyou could do something like this by running theaocconsole script:As you can see above, I actually had incorrect code for2017 Day 20: Particle Swarm, but thatbugonly showed up for the google token’s dataset. Whoops. Also, it looks like my algorithm for2017 Day 13: Packet Scannerswas kinda garbage. Too slow. According toAoC FAQ:every problem has a solution that completes in at most 15 seconds on ten-year-old hardwareBy the way, theaocrunner will kill your code if it takes more than 60 seconds, you can increase/decrease this by passing a command-line option, e.g.--timeout=120.New in version 1.1.0:Added option--quietto suppress any output from plugins so it doesn’t mess up theaocrunner’s display.New in version 2.0.0:You can verify your code against the example input data and answers, scraped from puzzle pages where available, usingaoc--example. This will pass the sample input data into your solver instead of passing the full user input data.How does this library work?It will automatically get today’s data at import time, if used within the
interactive interpreter. Otherwise, the date is found by introspection of the
path and file name from whichaocdmodule was imported.This means your filenames should be something sensible. The examples below
should all parse correctly, because they have digits in the path that are
unambiguously recognisable as AoC years (2015+) or days (1-25).q03.py
xmas_problem_2016_25b_dawg.py
~/src/aoc/2015/p8.pyA filename likeproblem_one.pywill not work, so don’t do that. If
you don’t like weird frame hacks, just use theaocd.get_data()function
instead and have a nice day!Cache invalidation?aocdsaves puzzle inputs, answers, prose, and your bad guesses to avoid hitting
the AoC servers any more often than strictly necessary (this also speeds things up).
All data is persisted in plain text files under ~/.config/aocd. To remove any
caches, you may simply delete whatever files you want under that directory tree.
If you'd prefer to use a different path, then export an AOCD_DIR environment
variable with the desired location.

New in version 1.1.0: By default, your token files are also stored under ~/.config/aocd.
If you want the token(s) and cached inputs/answers to exist in separate locations, you can set
the environment variable AOCD_CONFIG_DIR to specify a different location for the token(s). |
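The year/day recognition rule described above ("digits in the path that are unambiguously recognisable as AoC years (2015+) or days (1-25)") can be sketched roughly like this (an illustration of the heuristic only, not aocd's actual implementation):

```python
import re

def year_day_from_path(path):
    """Guess (year, day) from digits in a filename, aocd-style.

    Illustrative sketch: the first number >= 2015 is taken as the year,
    and the first number in the range 1-25 is taken as the day.
    """
    numbers = [int(n) for n in re.findall(r"\d+", path)]
    year = next((n for n in numbers if n >= 2015), None)
    day = next((n for n in numbers if 1 <= n <= 25), None)
    return year, day
```

Both example filenames above parse unambiguously under this rule, while problem_one.py contains no digits at all and resolves nothing.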
advent-of-code-helpers | Advent of Code helper functions

from aoc.helpers import output, read_input_from_file, input_lines
from aoc import template

Setup Guide

Install with pip:

pip install advent-of-code-helpers

Helper Usage

read_input_from_file reads the data as a single line from a file:

from aoc.helpers import read_input_from_file
read_input_from_file('path/to/input_data')

input_lines returns a list of strings from the string input:

from aoc.helpers import input_lines
input_lines('single\nstring\ninput')

output prints the result to console and writes to an output file if
an output directory is provided:

from aoc.helpers import output
output('result', part(int), day(int), year(int), output_dir(str), file_prefix(str))

Template Usage

You can specify data from a file using the data(input) function.

You can specify an output directory for output using the output(output)
function. If left empty, it will still print to screen, but will not write
the result to a file. If given an output directory, the results will be
appended to the file so you can easily go back and look at previous results.

Examples

from aoc import template

class Part1(template.Part1):
    def solve(self):
        # Read input
        lines = input_lines(self.input())
        # Do some work here
        # Sample output
        result = ','.join(lines)
        return result

def main():
    output_dir = '../out'
    test_data = os.path.join(os.path.dirname(__file__), 'resources/test_input.txt')
    Part1(1, 2018).data(test_data).output(output_dir)
    data = os.path.join(os.path.dirname(__file__), 'resources/input.txt')
    Part1(1, 2018).data(data).output(output_dir)

if __name__ == "__main__":
    main()

More usage in the example.

Template Usage with Other Libraries

If you want to use your own input reader or a library like advent-of-code-data,
you can override the input method.

Examples

from aoc import template
from aoc.helpers import input_lines
from aocd import get_data

class Part1(template.Part1):
    def input(self):
        if self.input_file:
            return super().input()
        else:
            return get_data(day=self.day, year=self.year)

    def solve(self):
        # Read input
        lines = input_lines(self.input())
        # Do some work here
        # Sample output
        result = ','.join(lines)
        return result

def main():
    Part1(1, 2018).output('../out')

if __name__ == "__main__":
    main() |
advent-of-code-hhoppe | Module advent_of_code_hhoppe

Python library to process Advent-of-Code puzzles in a Jupyter notebook. See a complete example.

Usage summary:

The preamble optionally specifies reference inputs and answers for the puzzles:

BASE_URL = 'https://github.com/hhoppe/advent_of_code_2021/blob/main/data/google.Hugues_Hoppe.965276/'
INPUT_URL = BASE_URL + '2021_{day:02}_input.txt'
ANSWER_URL = BASE_URL + '2021_{day:02}{part_letter}_answer.txt'
advent = advent_of_code_hhoppe.Advent(
year=2021, input_url=INPUT_URL, answer_url=ANSWER_URL)

For each day (numbered 1..25), the first notebook cell defines a puzzle object:

puzzle = advent.puzzle(day=1)

The puzzle input string is automatically read into the attribute puzzle.input.
This input string is unique to each Advent participant.

For each of the two puzzle parts, a function (e.g. process1) takes an input string and returns a string or integer answer.
Using calls like the following, we time the execution of each function and verify the answers:

puzzle.verify(part=1, func=process1)
puzzle.verify(part=2, func=process2)

At the end of the notebook, a table summarizes timing results.

Alternative ways to specify puzzle inputs/answers

The puzzle inputs and answers can be more efficiently downloaded using a single ZIP file:

PROFILE = 'google.Hugues_Hoppe.965276'
ZIP_URL = f'https://github.com/hhoppe/advent_of_code_2021/raw/main/data/{PROFILE}.zip'
!if [[ ! -d {PROFILE} ]]; then wget -q {ZIP_URL} && unzip -q {PROFILE}; fi
INPUT_URL = f'{PROFILE}/{{year}}_{{day:02d}}_input.txt'
ANSWER_URL = f'{PROFILE}/{{year}}_{{day:02d}}{{part_letter}}_answer.txt'
advent = advent_of_code_hhoppe.Advent(
year=2021, input_url=INPUT_URL, answer_url=ANSWER_URL)

The puzzle inputs and answers can be obtained directly from adventofcode.com using a web-browser session cookie and the advent-of-code-data PyPI package:

!pip install -q advent-of-code-data
import aocd
# Fill-in the session cookie in the following:
mkdir -p ~/.config/aocd && echo 53616... >~/.config/aocd/token
advent = advent_of_code_hhoppe.Advent(year=2021) |
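The INPUT_URL and ANSWER_URL strings above are ordinary str.format templates: {day:02} zero-pads the day number and {part_letter} is 'a' or 'b'. For example (using a hypothetical base URL in place of the GitHub data directory):

```python
# Hypothetical base URL; the real templates point at a GitHub data directory.
INPUT_URL = "https://example.com/data/2021_{day:02}_input.txt"
ANSWER_URL = "https://example.com/data/2021_{day:02}{part_letter}_answer.txt"

print(INPUT_URL.format(day=3))                    # day 3 input URL
print(ANSWER_URL.format(day=3, part_letter="a"))  # day 3 part a answer URL
```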
adventofcode-initializer | Advent of Code Initializer

This utility allows downloading Advent of Code problems as markdown files. It also
downloads the problems' inputs. This tool creates a folder for the required problem and stores the markdown
and the input files.

Usage

The utility has two main options (set-session-cookie and download).

usage: adventofcode_initializer [-h] {download,set-session-cookie} ...
Download Advent of Code problems as markdown files and also its inputs
positional arguments:
{download,set-session-cookie}
download Download files
set-session-cookie Set the necessary cookie to download personal inputs
or problems' part 2
options:
-h, --help show this help message and exit
In order to download inputs or part 2, you have to set the 'session' cookie.

Setting the corresponding cookie, the user will be able to download custom inputs and new problem parts.

usage: adventofcode_initializer set-session-cookie [-h] session-cookie
positional arguments:
session-cookie Cookie required to download inputs or problems' part 2
options:
-h, --help show this help message and exit
You only have to save it once.

By default, the utility downloads the first part of the problem. In addition, part two can be appended to the README file.

The utility can also download previous editions or already completed days.

usage: adventofcode_initializer download [-h] [-a] [-d [1-25]] [-y YEAR]
[--both-parts] [--part-2]
options:
-h, --help show this help message and exit
-a, --all-days Download all problems from a given year
-d [1-25], --day [1-25]
The problem that is going to be downloaded
-y YEAR, --year YEAR Advent of Code edition
--both-parts Download both parts of the problem and its input (if
it is possible)
--part-2 Download part two for the given problem and its input
(if it is possible). It appends to part one's README
if it exists

Installation

Pip:

pip install adventofcode-initializer

Build from source:

git clone https://github.com/Serms1999/advent-initializer.git
cd advent-initializer
pip install . |
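The CLI surface shown in the usage blocks above could be reproduced with argparse roughly as follows (a sketch of the documented interface, not the tool's actual source):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="adventofcode_initializer")
    sub = parser.add_subparsers(dest="command", required=True)

    cookie = sub.add_parser("set-session-cookie")
    cookie.add_argument("session_cookie",
                        help="Cookie required to download inputs or problems' part 2")

    download = sub.add_parser("download")
    download.add_argument("-a", "--all-days", action="store_true")
    download.add_argument("-d", "--day", type=int, choices=range(1, 26),
                          metavar="[1-25]")
    download.add_argument("-y", "--year", type=int)
    download.add_argument("--both-parts", action="store_true")
    download.add_argument("--part-2", action="store_true")
    return parser
```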
adventofcode-library | Failed to fetch description. HTTP Status Code: 404 |
advent-of-code-ocr | Advent of Code® OCR

This Python module helps with converting Advent of Code ASCII art letters into plain characters. At the moment, it only supports 6-pixel-tall characters as seen in 2016 Day 8, 2019 Days 8 and 11, and 2021 Day 13. Support for 10-pixel-tall characters (2018 Day 10) is coming soon.

Put simply, it converts this to ABC:

██ ███ ██
█ █ █ █ █ █
█ █ ███ █
████ █ █ █
█ █ █ █ █ █
█ █ ███ ██

Installation

This module can be installed from PyPI:

$ pip install advent-of-code-ocr

Usage

Using this module is pretty easy. By default, this module recognizes # as a filled pixel and . as an empty pixel. However, you can change this using the fill_pixel and empty_pixel keyword arguments respectively.

from advent_of_code_ocr import convert_6

print(convert_6(".##.\n#..#\n#..#\n####\n#..#\n#..#"))  # A
print(convert_6(" $$ \n$  $\n$  $\n$$$$\n$  $\n$  $", fill_pixel="$", empty_pixel=" "))  # A

You can also convert data that you have in a NumPy array or a nested list:

from advent_of_code_ocr import convert_array_6

array = [
    [0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0],
    [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0],
]
print(convert_array_6(array, fill_pixel=1, empty_pixel=0))  # AOC

Advent of Code is a registered trademark of Eric K Wastl in the United States. |
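Under the hood, this kind of OCR is essentially a lookup table: the grid is sliced into 4-column glyphs (AoC letters are 4 pixels wide with a 1-column gap) and each glyph is matched against known letter shapes. Here is a minimal sketch containing only the three glyphs that appear in the examples above ('A', 'O', 'C'); the real package knows the full alphabet:

```python
# Glyphs are stored as the 24 pixels of a 4x6 letter, concatenated row by row.
GLYPHS = {
    ".##.#..##..######..##..#": "A",
    ".##.#..##..##..##..#.##.": "O",
    ".##.#..##...#...#..#.##.": "C",
}

def ocr_6(text, fill="#", empty="."):
    """Decode a 6-row grid of fill/empty pixels into letters (sketch only)."""
    rows = [row.replace(fill, "#").replace(empty, ".") for row in text.splitlines()]
    letters = []
    for col in range(0, len(rows[0]), 5):  # 4 pixels per letter + 1 gap column
        key = "".join(row[col:col + 4] for row in rows)
        letters.append(GLYPHS.get(key, "?"))
    return "".join(letters)
```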
advent-of-code-py | Advent-of-code-py

Advent of Code helper CLI and library for python projects.

Status & Info: Code style | License | Project Version

Installation

To install advent-of-code-py, run the following command, which installs the advent-of-code-py CLI and the advent_of_code_py library:

pip install advent-of-code-py

OR

poetry add advent-of-code-py

Usage

Initially, for advent-of-code-py to work it needs a session value or session ID, which you can obtain by viewing the cookie while visiting the Advent of Code server.
After collecting the session cookie value, you need to add it to the config using the advent-of-code-py CLI:

advent-of-code-py config add <session-name> <session-value>

Now you can import the library using:

import advent_of_code_py

After importing the library, you can use either of the two decorators present, solve and submit, on a puzzle function. For example:

@advent_of_code_py.submit(2018, 3, 1, session_list="<session-name>")
def puzzle_2018_3_1(input=None):
    # do some calculation with data and return final output
    return final_output

After decorating the function, you can call it like a regular function:

puzzle_2018_3_1()

After calling the function, the final_output value will be submitted by the library to the Advent of Code server for the 2018 year day 3
problem, then it returns whether the submitted answer was correct or not. If no session value is provided, then
the solution will be submitted for all session values present in the config file.

You can also use the advent-of-code-py builtin Initializer and runner to create an appropriate CLI for a problem, so
the problem can be run from the CLI instead of modifying the python file every time to run the appropriate function.
To set up an advent-of-code-py puzzle as a CLI:

@advent_of_code_py.advent_runner()
def main_cli():
    initializer = advent_of_code_py.Initializer()
    initializer.add(<function_alias>=<function>)
    # for example to run above function you can write
    initializer.add(p_3_1=puzzle_2018_3_1)
    # add other functions ...
    return initializer

Now you can set main_cli as an entry point, and it will create a CLI with the appropriate names and functions that were added.
So, for example, to run the function puzzle_2018_3_1() you have to run the command entry-point-name run p_3_1, which
will run the appropriate function, submitting the answer as desired if the function was decorated by the submit decorator, or else
printing its output if the function was decorated by the solve decorator. |
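The decorator flow described above can be illustrated with a simplified stand-in (the real submit decorator also talks to the AoC server and the config file; this sketch only shows how the wrapped puzzle function stays callable like a normal function):

```python
import functools

def solve(year, day, part):
    """Illustrative solve-style decorator: run the puzzle and print the answer."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            answer = func(*args, **kwargs)
            print(f"{year} day {day} part {part}: {answer}")
            return answer
        return wrapper
    return decorator

@solve(2018, 3, 1)
def puzzle_2018_3_1(input=None):
    return 42  # placeholder for the real calculation
```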
advent-of-code-sample | advent-of-code-sample: Provides a working example plugin structure for using the aoc runner script provided by advent-of-code-data.

The aoc runner allows you to easily verify your Advent of Code solutions against multiple datasets, or verify other users' code against your own dataset.

$ cat ~/.config/aocd/tokens.json  # create this file with some auth tokens
{
    "github": "53616c7465645f5f0775...",
    "google": "53616c7465645f5f7238...",
    "reddit": "53616c7465645f5ff7c8...",
    "twitter": "53616c7465645f5fa524..."
}
$ pip install ~/src/advent-of-code-sample  # install the directory which contains your setup.py file
...
$ pip install -q advent-of-code-wim  # can also install some other user's code if you want..?
...
$aoc--years2015--days3411# run it!0.25s2015/3-PerfectlySphericalHousesinaVacuumwim/github✔parta:2565✔partb:26390.11s2015/3-PerfectlySphericalHousesinaVacuumwim/google✔parta:2592✔partb:23600.12s2015/3-PerfectlySphericalHousesinaVacuumwim/reddit✔parta:2592✔partb:23600.12s2015/3-PerfectlySphericalHousesinaVacuumwim/twitter✔parta:2565✔partb:26390.12s2015/3-PerfectlySphericalHousesinaVacuummyusername/github✖parta:1234(expected:2565)✖partb:5678(expected:2639)0.12s2015/3-PerfectlySphericalHousesinaVacuummyusername/google✖parta:1234(expected:2592)✖partb:5678(expected:2360)0.11s2015/3-PerfectlySphericalHousesinaVacuummyusername/reddit✖parta:1234(expected:2592)✖partb:5678(expected:2360)0.11s2015/3-PerfectlySphericalHousesinaVacuummyusername/twitter✖parta:1234(expected:2565)✖partb:5678(expected:2639)9.04s2015/4-TheIdealStockingStufferwim/github✔parta:254575✔partb:103873625.43s2015/4-TheIdealStockingStufferwim/google✔parta:117946✔partb:393803812.20s2015/4-TheIdealStockingStufferwim/reddit✔parta:254575✔partb:103873647.67s2015/4-TheIdealStockingStufferwim/twitter✔parta:282749✔partb:99626240.12s2015/4-TheIdealStockingStuffermyusername/github✖parta:1234(expected:254575)✖partb:5678(expected:1038736)0.12s2015/4-TheIdealStockingStuffermyusername/google✖parta:1234(expected:117946)✖partb:5678(expected:3938038)0.12s2015/4-TheIdealStockingStuffermyusername/reddit✖parta:1234(expected:254575)✖partb:5678(expected:1038736)0.12s2015/4-TheIdealStockingStuffermyusername/twitter✖parta:1234(expected:282749)✖partb:5678(expected:9962624)6.17s2015/11-CorporatePolicywim/github✔parta:vzbxxyzz✔partb:vzcaabcc6.26s2015/11-CorporatePolicywim/google✔parta:cqjxxyzz✔partb:cqkaabcc4.69s2015/11-CorporatePolicywim/reddit✔parta:hxbxxyzz✔partb:hxcaabcc5.75s2015/11-CorporatePolicywim/twitter✔parta:hxbxxyzz✔partb:hxcaabcc0.11s2015/11-CorporatePolicymyusername/github✖parta:1234(expected:vzbxxyzz)✖partb:5678(expected:vzcaabcc)0.12s2015/11-CorporatePolicymyusername/google✖parta:1234(expected:cqjxxyzz)✖partb:5678(expect
ed:cqkaabcc)0.11s2015/11-CorporatePolicymyusername/reddit✖parta:1234(expected:hxbxxyzz)✖partb:5678(expected:hxcaabcc)0.12s2015/11-CorporatePolicymyusername/twitter✖parta:1234(expected:hxbxxyzz)✖partb:5678(expected:hxcaabcc)How to hook into your code:Theaocrunner uses setuptools'dynamic discovery of services and pluginsfeature to locate and run your code.
Define your plugin's entry point in setup.py. The group name to use is "adventofcode.user":

# setup.py
from setuptools import setup

setup(
    ...
    entry_points={"adventofcode.user": ["myusername = mypackage:mysolve"]},
)
The namemysolveshould resolve to a callable in your package's namespace which accepts three named argumentsyear,day,data(any order ok) and returns two values, e.g.:defmysolve(year,day,data):...returnpart_a_answer,part_b_answerInside the entry-point you can do whatever you need in order to delegate to your code. For example, write out data to a scratch file then run a script, or import a function and just pass in the data directly as an argument.
The only requirement is that this entry-point should return a tuple of two values, with the answers for that day's puzzle, the rest is up to you.
You could fork this repo and edit it, or just write your own plugin manually. |
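A minimal callable satisfying that contract might look like the following; the "solver" here is a placeholder that just counts lines and characters, standing in for real puzzle logic:

```python
def mysolve(year, day, data):
    # Must accept year, day, data keyword arguments and return a two-tuple
    # of answers (part a, part b), per the plugin contract described above.
    part_a_answer = str(len(data.splitlines()))  # placeholder computation
    part_b_answer = str(len(data))               # placeholder computation
    return part_a_answer, part_b_answer
```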
adventofcode.utils | Advent of Code Utility Classes

A bunch of utility classes for Advent of Code problems, created over time while working on the puzzles to help solve them:

Point2D - 2D point
Point3D - 3D point
Stack - Simple implementation of a Stack (LIFO) |
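The exact API of these classes is not documented here, so the snippet below is only a sketch of what such helpers typically look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point2D:
    """Hypothetical 2D point supporting vector addition."""
    x: int
    y: int

    def __add__(self, other):
        return Point2D(self.x + other.x, self.y + other.y)

class Stack:
    """Simple LIFO stack backed by a Python list."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)
```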
advent-of-code-wim | *
|
+-|---+
/ | /|
+-----+ |
|:::::| |
+----+ |:::::| |---+ +-----------+
/ / \ |:::::| | /| / \\\\\\ [] /|
/ / / \|:::::| | / | / \\\\\\ [] / |
/ / / / \:::::|/ / | +-----------+ |
+----+ / / / \------+ ------|:::::::::::| |
|-----\ / / / \=====| ------|:::::::::::| |
|------\ / / / \====| | |:::::::::::| |
|-------\ / / / +===| | |:::::::::::| |
|--------\ / / /|===| | |:::::::::::| |
|---------\ / / |===| | /|:::::::::::| |
|----------\ / |===| / //|:::::::::::| /
+-----------+ |===| / //||:::::::::::|/
|:::::::::::| |===|/__//___________________
|:::::::::::| |______//|_____...._________
|:::::::::::| | //| ____/ /_/___
---|:::::::::::| |--------|[][]|_|[][]_\------
----|:::::::::::| |---------------------------
|| |:::::::::::| | //| || / / / || ||
|| |:::::::::::| | //| || / / || ||
|:::::::::::| |//| / / /
|:::::::::::| //| / / ____________
|:::::::::::| //| / / /___/ /#/ /#/#/ /
==============//======+...+====================
- - - - - - -// - - -/ / - - - - - - - - - -
==============//|==============================
//| |
advent-of-python | advent-of-python |
adventure | This is a faithful port of the “Adventure” game to Python 3 from the
original 1977 FORTRAN code by Crowther and Woods (it is driven by the
sameadvent.datfile!) that lets you explore Colossal Cave, where
others have found fortunes in treasure and gold, though it is rumored
that some who enter are never seen again.

This page: http://rickadams.org/adventure/e_downloads.html offers the original source code at this link: http://www.ifarchive.org/if-archive/games/source/advent-original.tar.gz

To encourage the use of Python 3, the game is designed to be played
right at the Python prompt. Single-word commands can be typed by
themselves, but two-word commands should be written as a function call
(since a two-word command would not be valid Python):

>>> import adventure
>>> adventure.play()
WELCOME TO ADVENTURE!! WOULD YOU LIKE INSTRUCTIONS?
>>> no
YOU ARE STANDING AT THE END OF A ROAD BEFORE A SMALL BRICK BUILDING.
AROUND YOU IS A FOREST. A SMALL STREAM FLOWS OUT OF THE BUILDING AND
DOWN A GULLY.
>>> east
YOU ARE INSIDE A BUILDING, A WELL HOUSE FOR A LARGE SPRING.
THERE ARE SOME KEYS ON THE GROUND HERE.
THERE IS A SHINY BRASS LAMP NEARBY.
THERE IS FOOD HERE.
THERE IS A BOTTLE OF WATER HERE.
>>> get(lamp)
OK
>>> leave
YOU'RE AT END OF ROAD AGAIN.
>>> south
YOU ARE IN A VALLEY IN THE FOREST BESIDE A STREAM TUMBLING ALONG A
ROCKY BED.

The original Adventure paid attention to only the first five letters of
each command, so a long command likeinventorycould simply be typed
asinven. This package defines a symbol for both versions of every
long word, so you can type the long or short version as you please.

You can save your game at any time by calling the save() command
with a filename, and then can resume it later:

>>> save('advent.save')
GAME SAVED
>>> adventure.resume('advent.save')
GAME RESTORED
>>> look
SORRY, BUT I AM NOT ALLOWED TO GIVE MORE DETAIL. I WILL REPEAT THE
LONG DESCRIPTION OF YOUR LOCATION.
YOU ARE IN A VALLEY IN THE FOREST BESIDE A STREAM TUMBLING ALONG A
ROCKY BED.

You can find two complete, working walkthroughs of the game in its
tests directory, which you can run using the discover module that
comes built-in with Python 3:

$ python3 -m unittest discover adventure

I wrote most of this package over Christmas vacation 2010, to learn more
about the workings of the game that so enthralled me as a child; the
project also gave me practice writing Python 3. I still forget the
parentheses when writing print() if I am not paying attention.

Traditional Mode

You can also use this package to play Adventure at a traditional prompt
that does not require its input to be valid Python. Use your operating
system command line to run the package:

$ python3 -m adventure
WELCOME TO ADVENTURE!! WOULD YOU LIKE INSTRUCTIONS?
>

At the prompt that will appear, two-word commands can simply be
separated by a space:

> get lamp
OK

For extra authenticity, the output of the Adventure game in this mode is
typed to your screen at 1200 baud. You will note that although this
prints the text faster than you can read it anyway, your experience of
the game will improve considerably, especially when a move results in a
surprise.

Why is the game better at 1200 baud? When a paragraph of text is
allowed to appear on the screen all at once, your eyes scan the entire
paragraph for important information, often ruining any surprises before
you can then settle down and read it from the beginning. But at 1200
baud, you wind up reading the text in order as it appears, which unfolds
the narrative sequentially as the author of Adventure intended.

If you created a file with the in-game save command, you can restore
it later by naming it on the command line:

> save mygame
GAME SAVED
> quit
DO YOU REALLY WANT TO QUIT NOW?
> y
OK
$ python3 -m adventure mygame
GAME RESTORED
>

Notes

Several Adventure commands conflict with standard Python built-in
functions. If you want to run the normal Python function exit(), open(), quit(), or help(), then import the builtins module and run the copy of the function stored there.

The word “break” is a Python keyword, so there was no possibility of
using it in the game. Instead, use one of the two synonyms defined by
the PDP version of Adventure: “shatter” or “smash.”

Copyright

The advent.dat game data file distributed with this Python package,
like the rest of the original source code for Adventure, is a public
domain work. Phrases from the original work that have been copied into
my source code from the FORTRAN source (the famous phrase “You have
gotten yourself killed” and so forth) remain public domain and can be
used without attribution.

My own Python code that re-implements the game engine is:

Copyright 2010–2015 Brandon Rhodes

Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Changelog

1.6 — 2020 August 15 — add support for upper-case commands typed at the terminal; and fix exception if user dies with water in their bottle (see #26).
1.5 — 2020 July 18 — fix for fatal exception when “lamp turn” is entered.
1.4 — 2016 January 31 — readline editing; added license; bug fix; test fix.
1.3 — 2012 April 27 — installs on Windows; fixed undefined commands
1.2 — 2012 April 5 — restoring saves from command line; 5-letter commands
1.1 — 2011 March 12 — traditional mode; more flexible Python syntax
1.0 — 2011 February 15 — 100% test coverage, feature-complete |
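The five-letter rule described above (typing inven for inventory) boils down to truncating both the typed word and the vocabulary entries before comparing. A rough sketch of that idea, not the package's actual parser:

```python
VOCABULARY = {"inventory", "attack", "look", "north"}  # tiny sample vocabulary

def resolve(word, vocabulary=VOCABULARY):
    """Match a typed word against the vocabulary on the first five letters."""
    key = word[:5].lower()
    for known in vocabulary:
        if known[:5] == key:
            return known
    return None
```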
adventure-anywhere | Adventure Anywhere is a port of the 1977 ADVENTURE game that allows for long-running game sessions peristed amongst multiple players. Seeadventure_anywhere.use_adventure.do_commandfor the top-level use case.This repo was originally a fork of Brandon Rhodes'python-adventurewhich itself is a python port of the original 1977 ADVENTURE fortran code. The code design of Adventure Anywhere is also influenced by Brandon Rhodes' PyOhio 2014 talkThe Clean Architecture in Python.Adventure Anywhere can be played:via SMSby texting +17028002877source code:sms_adventurewebsite:www.smsadventure.com |
adventure-cards | You can reach out me at,[email protected] |
adventure-game | No description available on PyPI. |
adventuregeneratorlib | No description available on PyPI. |
adventurelib | adventurelib

adventurelib provides basic functionality for writing text-based adventure
games, with the aim of making it easy enough for young teenagers to do.

The foundation of adventurelib is the ability to define functions that are
called in response to commands. For example, you could write a function to
be called when the user types commands like "take hat":

@when('take THING')
def take(thing):
print(f'You take the {thing}.')
inventory.append(thing)

It also includes the foundations needed to write games involving rooms, items,
characters and more... but users will have to implement these features for
themselves as they explore Python programming concepts.

Installing

adventurelib.py is a single file that can be copied into your project. You
can also install it with pip:

pip install adventurelib

Documentation

Comprehensive documentation is on Read The Docs. |
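The 'take THING' pattern style can be sketched with a small matcher: capitalised words in the pattern become capture groups, and everything else must match literally. This illustrates the idea only; adventurelib's own matching is more featureful:

```python
import re

def make_matcher(pattern):
    parts, names = [], []
    for word in pattern.split():
        if word.isupper():            # placeholder word like THING
            names.append(word.lower())
            parts.append(r"(\S+)")
        else:                         # literal word like take
            parts.append(re.escape(word))
    regex = re.compile("^" + r"\s+".join(parts) + "$", re.IGNORECASE)

    def match(command):
        m = regex.match(command.strip())
        return dict(zip(names, m.groups())) if m else None

    return match
```

A matched command yields a dict of captured words, which is how the take(thing) handler above receives "hat".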
adventurous-gauging-koplik | Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content. |
adverity-json-parser | adverity-json-parser

Example

from adverity_json_parser import parser
DATASTREAM_ID = "1234"
INSTANCE_URL = "https://reprise.datatap.adverity.com/api/columns/"
TOKEN = "qwertyuiopasdfghjklzxcvbnmqwertyu"
JSON_PATH = "/Users/example/Documents/data-streams/table_schemas/general.json"
SCPECIAL_CHARS = ",().- %:!#&*§±+=" #Optional
instance = parser.Parser(DATASTREAM_ID, INSTANCE_URL, TOKEN, JSON_PATH)
instance.transform() |
adver_mng | UNKNOWN |
adversal-embedding | No description available on PyPI. |
adversarial-friend | CS_5523_Final_ProjectThis repo contains code and files of final project in CSCI 5523: Data MiningGetting StartedTo import AdversarialFriend API, use the following pip command:pip install adversarial-friendTeam MembersYangyang [email protected] [email protected] [email protected] AbtractWe will explore parameter interpretation techniques for computer vision tasks, in order to get a deeper understanding of how to deal with real image data. Specifically speaking, the models to be used include generalized linear models like Logistic Regression, as well as more powerful techniques such as convolutional neural networks. What is more, we will document and implement these algorithms to extract the learned concepts that our models detect in input images. Finally, we will use the information, along with automatic input optimization, to programmatically generate adversarial examples using model-agnostic gradient-based methods that trick the learned models into misclassifying input images.For full project report, go to: [add link])()Reference[1] C. Olah, A. Satyanarayan, I, Johnson, S. Carter, L. Schubert, K. Ye, A. Mordvinstev, “The Building Blocks of Interpretability” Distill, 6-Mar-2018.Online[Accessed: 1-Dec-2020][2] J. Johnson, EECS 498-007 / 598-005 Deep Learning for Computer Vision, University of Michigan, 10-Aug-2020.Online[Accessed: 1-Dec-2020][3] A. Kurakin, I. Goodfellow, S. Bengio, “Adversarial Examples in the Physical World” ICLR, 8-Jul-2016.Online[Accessed: 1-Dec-2020]LicenseThis is free and unencumbered software released into the public domain.Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means. |
adversarial-gym | adversarial-gymAdversarial gym hosts a range of adversarial turn based games within the OpenAI gym framework.
The games currently supported are:

Chess
TicTacToe

Installation

Depending on the use case you can install in developer mode or using pypi.

Use the package manager pip to install adversarial_gym:

pip install adversarial-gym

Install from Source

Installation from source can be used to edit the environments. This is useful when developing or if your use case requires changes to the current API.

cd Dir/To/Install/In
git clone [email protected]:OperationBeatMeChess/adversarial-gym.git
cd adversarial-gym
pipinstall-e.Usageimportgymimportadversarial_gym# env = gym.make("Chess-v0", render_mode='human')env=gym.make("TicTacToe-v0",render_mode='human')print('reset')env.reset()terminal=Falsewhilenotterminal:action=env.action_space.sample()observation,reward,terminal,truncated,info=env.step(action)env.close()Adversarial Environment APIEach adversarial api follows the structure of the defined base class. This API has a few small additions to the standard OpenAI gym environment to help with the turn based structure of adversarial games. The basic adversarial API follows the below criteria:classAdversarialEnv(gym.Env):"""Abstract Adversarial Environment"""@abstractpropertydefcurrent_player(self):"""Returns:current_player: Returns identifier for which player currently has their turn."""pass@abstractpropertydefprevious_player(self):"""Returns:previous_player: Returns identifier for which player previously had their turn."""pass@abstractpropertydefstarting_player(self):"""Returns:starting_player: Returns identifier for which player started the game."""pass@abstractmethoddefget_string_representation(self):"""Returns:board_string: Returns string representation of current game state."""pass@abstractmethoddefset_string_representation(self,board_string):"""Input:board_string: sets game state to match the string representation of board_string."""pass@abstractmethoddef_get_canonical_observation(self):"""Returns:canonical_state: returns canonical form of board. The canonical formshould be independent of players turn. For e.g. in chess,the canonical form can be chosen to be from the povof white. When the player is white, we can returnboard as is. 
When the player is black, we can invertthe colors and return the board.current_player: returns indentifier of which player is the current player in the canonicial state.This is used to decode the invariant canonical form."""pass@abstractmethoddef_game_result(self):"""Returns:winner: returns None when game is not finished else returns int valuefor the winning player or draw.reward: Reward value given the game result. Should not consider the player who won."""pass@abstractmethoddef_do_action(self,action):"""Input:action: Execute action from current game state."""pass@abstractmethoddef_reset_game(self):"""Reset the state of the game to the initial state.This includes reseting the current player to the starting player."""@abstractmethoddef_get_frame(self):"""Returns:frame: returns py_game frame for the current state of the game.This will be used by render to render the frame for human visualization"""pass@abstractmethoddef_get_img(self):"""Returns:img: returns rgb_array of the image for the current state of the game."""passdefgame_result(self):returnself._game_result()[0]defskip_next_human_render(self):"""Skips the next automatic human render in step or reset.Used for rollouts or similar non visualized moves."""self.skip_next_render=Truedefstep(self,action):self._do_action(action)observation=self._get_canonical_observation()info=self._get_info()result,reward=self._game_result()terminated=resultisnotNoneifself.render_mode=="human":self.render()returnobservation,reward,terminated,False,infodefreset(self,seed=None,options=None):super().reset(seed=seed)self._reset_game()observation=self._get_canonical_observation()info=self._get_info()ifself.render_mode=="human":self.render()returnobservation,infodefrender(self):ifself.render_mode=="human":ifself.clockisNone:self.clock=pygame.time.Clock()ifself.windowisNone:pygame.init()pygame.display.init()self.window=pygame.display.set_mode((self.render_size,self.render_size))canvas=self._get_frame()# The following line copies our 
drawings from `canvas` to the visible windowself.window.blit(canvas,canvas.get_rect())pygame.display.update()# We need to ensure that human-rendering occurs at the predefined framerate.# The following line will automatically add a delay to keep the framerate stable.self.clock.tick(self.metadata["render_fps"])elifself.render_mode=="rgb_array":returnself._get_img()The major differences between a standard gym environment and the adversarial environment is the adversarial environment keeps track of both the game state and each players state. In other words we must know which player is currently making a move and the state which corresponds with this player. Additionally this must be expressed in the result of the game.Additional features which were added for convenience were the ability to hash the environment state with a string representation (useful for representing the game as an action tree where each hashed state can search some position). Also, there are a few private member functions required for step and reset.Finally, there are two functions used for rendering the pygame window or getting the rgb_array of state.This adversarial environment is then also paired with its corresponding adversarial Action_Space. This is required because most games have a subset of the total moves which are legal dependent on the current state of the game. This means it is non trivial to represent the move space with the vanilla gym spaces. 
To work around this while staying compliant with the OpenAI gym API, we created the following action space.

    class AdversarialActionSpace(gym.spaces.Space):

        def sample(self):
            actions = self.legal_actions
            return actions[np.random.randint(len(actions))]

        def contains(self, action, is_legal=True):
            is_contained = action in range(self.action_space_size)
            and_legal = action in self.legal_actions if is_legal else True
            return is_contained and and_legal

        @abstractproperty
        def legal_actions(self):
            """
            Returns:
                legal_actions: Returns a list of all the legal moves in the current position.
            """
            pass

        @abstractproperty
        def action_space_size(self):
            """
            Returns:
                action_space_size: returns the number of all possible actions.
            """
            pass

The action space is assumed to be the set of integers {0, 1, 2, ..., total_number_actions - 1}, matching the range check in contains above. This means the action space is linear; however, we still have to decode each action into its corresponding move in whichever game is being played. The legal actions are then just a mask of which actions in the total set can be played in a given position, and the action space size is simply total_number_actions.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

License

MIT |
adversarial-insight-ml | Adversarial Insight ML (AIML)

“Why does your machine lie?”

Adversarial Insight ML (AIML) is a Python package that evaluates the robustness of image classification models against adversarial attacks. AIML provides the functionality to automatically test your models against generated adversarial examples and outputs precise, insightful and robust feedback based on the several attack methods we have carefully chosen. Furthermore, AIML aims to be straightforward and beginner-friendly to allow non-technical users to take full advantage of its functionalities.

For more information, you can also visit the PyPI page and the documentation page.

Table of Contents
- Installation
- Usage
- Features
- Contributing
- License

Installation

To install Adversarial Insight ML, you can use pip:

    pip install adversarial-insight-ml

Usage

Here's a simple overview of the usage of our package. You can evaluate your model with the evaluate() function:

    from aiml.evaluation.evaluate import evaluate

    evaluate(model, test_dataset)

The evaluate() function has two required parameters:

- input_model (str or model): A string of the name of the machine learning model or the machine learning model itself.
- input_test_data (str or dataset): A string of the name of the testing dataset or the testing dataset itself.

The evaluate() function has the following optional parameters:

- input_train_data (str or dataset, optional): A string of the name of the training dataset or the training dataset itself (default is None).
- input_shape (tuple, optional): Shape of input data (default is None).
- clip_values (tuple, optional): Range of input data values (default is None).
- nb_classes (int, optional): Number of classes in the dataset (default is None).
- batch_size_attack (int, optional): Batch size for attack testing (default is 64).
- num_threads_attack (int, optional): Number of threads for attack testing (default is 0).
- batch_size_train (int, optional): Batch size for training data (default is 64).
- batch_size_test (int, optional): Batch size for test data (default is 64).
- num_workers (int, optional): Number of workers to use for data loading (default is half of the available CPU cores).
- dry (bool, optional): When True, the code should only test one example.
- attack_para_list (list, optional): List of parameter combinations for the attack.

See the demos in the examples/ directory for usage in action:

- demo_basic
- demo_huggingface
- demo_robustbench

Features

After evaluating your model with the evaluate() function, we provide
the following insights:

- A summary of the adversarial attacks performed, found in a text file named attack_evaluation_result.txt followed by the date.
- Samples of the images, found in a directory named img/ followed by the date.

Contributing

Code Style

Always adhere to the PEP 8 style guide for writing Python code, allowing up to 99 characters per line as the absolute maximum. Alternatively, just use black.

Commit Messages

When making changes to the codebase, please refer to the Documentation/SubmittingPatches in the Git repo:

- Write commit messages in present tense and imperative mood, e.g., "Add feature" instead of "Added feature" or "Adding feature."
- Craft your messages as if you're giving orders to the codebase to change its behaviour.

Branching

We conform to a variation of the "GitHub Flow" convention, but not strictly. For example, see the following types of branches:

- main: This branch is always deployable and reflects the production state.
- bugfix/*: For bug fixes.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

We extend our sincere appreciation to the following individuals who have been instrumental in the success of this project:

Firstly, our client Mr. Luke Chang. His invaluable guidance and insights guided us from the beginning through every phase, ensuring our work remained aligned with practical needs. This project would not have been possible without his efforts.

We'd also like to express our gratitude to Dr. Asma Shakil, who has coordinated and provided an opportunity for us to work together on this project.

Thank you for being part of this journey.

Warm regards,
Team 7

Contacts

- Sungjae [email protected]
- Takuya [email protected]
- Yuming [email protected]
- Terence [email protected]
- Karlen [email protected] |
adversarial-labeller | adversarial_labeller

Adversarial labeller is a sklearn-compatible labeller that scores instances as belonging to the test dataset or not, to help model selection under data drift. Adversarial labeller is distributed under the MIT license.

Installation

Dependencies

Adversarial labeller requires:

- Python (>= 3.7)
- scikit-learn (>= 0.21.0)
- imbalanced-learn (>= 0.5.0)
- pandas (>= 0.25.0)

User installation

The easiest way to install adversarial labeller is using pip:

    pip install adversarial_labeller

Example Usage

    import numpy as np
    import pandas as pd
    from sklearn.datasets.samples_generator import make_blobs
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import RandomForestClassifier
    from adversarial_labeller import AdversarialLabelerFactory, Scorer

    scoring_metric = accuracy_score

    # Our blob data generation parameters for this example
    number_of_samples = 1000
    number_of_test_samples = 300

    # Generate 1d blob data and label a portion as test data
    # ... 1d blob data can be visualized as a rug plot
    variables, labels = \
        make_blobs(n_samples=number_of_samples,
                   centers=2,
                   n_features=1,
                   random_state=0)

    df = pd.DataFrame({
        'independent_variable': variables.flatten(),
        'dependent_variable': labels,
        'label': 0  # default to train data
    })

    test_indices = df.index[-number_of_test_samples:]
    train_indices = df.index[:-number_of_test_samples]
    df.loc[test_indices, 'label'] = 1  # ... now we mark instances that are test data

    # Now perturb the test samples to simulate data drift/different test distribution
    df.loc[test_indices, "independent_variable"] += \
        np.std(df.independent_variable)
    # ... now we have an example of data drift where adversarial labeling
    # can be used to better estimate the actual test accuracy

    features_for_labeller = df.independent_variable
    labels_for_labeller = df.label

    pipeline, flip_binary_predictions = \
        AdversarialLabelerFactory(
            features=features_for_labeller,
            labels=labels_for_labeller,
            run_pipeline=False
        ).fit_with_best_params()

    scorer = Scorer(the_scorer=pipeline,
                    flip_binary_predictions=flip_binary_predictions)

    # Now we evaluate a classifier on training data only, but using
    # our fancy adversarial labeller
    _X = df.loc[train_indices]\
           .independent_variable\
           .values\
           .reshape(-1, 1)
    _X_test = df.loc[test_indices]\
                .independent_variable\
                .values\
                .reshape(-1, 1)
    # ... sklearn wants firmly defined shapes

    clf_adver = RandomForestClassifier(n_estimators=100, random_state=1)
    adversarial_scores = \
        cross_val_score(
            X=_X,
            y=df.loc[train_indices].dependent_variable,
            estimator=clf_adver,
            scoring=scorer.grade,
            cv=10,
            n_jobs=-1,
            verbose=1)
    # ... and we get ~ 0.70 - 0.68

    average_adversarial_score = \
        np.array(adversarial_scores).mean()

    # ... let's see how this compares with normal cross validation
    clf = RandomForestClassifier(n_estimators=100, random_state=1)
    scores = \
        cross_val_score(
            X=_X,
            y=df.loc[train_indices].dependent_variable,
            estimator=clf,
            cv=10,
            n_jobs=-1,
            verbose=1)
    # ... and we get ~ 0.92

    average_score = \
        np.array(scores).mean()

    # now let's see how this compares with the actual test score
    clf_all = RandomForestClassifier(n_estimators=100, random_state=1)
    clf_all.fit(_X, df.loc[train_indices].dependent_variable)

    # ... actual test score is 0.70
    actual_score = \
        accuracy_score(
            clf_all.predict(_X_test),
            df.loc[test_indices].dependent_variable)

    adversarial_result = abs(average_adversarial_score - actual_score)
    print(f"... adversarial labelled cross validation was {adversarial_result:.2f} points different than actual.")
    # ... 0.00 - 0.02 points

    cross_val_result = abs(average_score - actual_score)
    print(f"... regular validation was {cross_val_result:.2f} points different than actual.")
    # ... 0.23 points

    # See tests/ for additional examples, including against the Titanic and stock market trading |
adversarial-lib | No description available on PyPI. |
adversarial-robustness-toolbox | Adversarial Robustness Toolbox (ART) v1.17

For the Chinese README, click here (中文README请按此处).

Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART is hosted by the Linux Foundation AI & Data Foundation (LF AI & Data). ART provides tools that enable
developers and researchers to defend and evaluate Machine Learning models and applications against the
adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks
(TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types
(images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, speech recognition,
generation, certification, etc.).

[Figure: Adversarial Threats]

[Figure: ART for Red and Blue Teams (selection)]

Learn more

- Get Started: Installation - Examples - Notebooks
- Documentation: Attacks - Defences - Estimators - Metrics - Technical Documentation
- Contributing: Slack, Invitation - Contributing - Roadmap - Citing

The library is under continuous development. Feedback, bug reports and contributions are very welcome!

Acknowledgment

This material is partially based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under
Contract No. HR001120C0013. Any opinions, findings and conclusions or recommendations expressed in this material are
those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA). |
adversarial-test | Adversarial test: a simple way to know if your train data and test data are similar

We combine our train and test data, labeling them 0 for the training data and 1 for the test data, mix them up, then see if we are able to correctly re-identify them using a binary classifier. If a classifier can identify whether a sample comes from the train or test data set, we know that at least one feature in your data has shifted; use feature importance methods to point out the shifted feature(s).

Get Started and Documentation

To install from pip:

    pip install adversarial-test

Code example: Using adversarial test with category features

See more usages in the notebooks directory |
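The procedure described above can be sketched with nothing but the standard library. Here a toy threshold "classifier" stands in for the real model, and the function name is illustrative, not part of the package:

```python
import random

def adversarial_separability(train_values, test_values):
    """Label train rows 0 and test rows 1, then check how well a simple
    threshold rule can tell them apart. A score near 0.5 means the two
    sets look alike; a score near 1.0 means at least one feature shifted."""
    data = [(v, 0) for v in train_values] + [(v, 1) for v in test_values]
    best = 0.5
    for threshold, _ in data:
        # Predict "test" whenever the value exceeds the candidate threshold.
        correct = sum((v > threshold) == bool(label) for v, label in data)
        best = max(best, correct / len(data))
    return best

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(500)]
same_dist = [random.gauss(0.0, 1.0) for _ in range(500)]
shifted = [random.gauss(3.0, 1.0) for _ in range(500)]

print(adversarial_separability(train, same_dist))  # close to 0.5: no drift detected
print(adversarial_separability(train, shifted))    # close to 1.0: drift detected
```

A real adversarial test would use a proper classifier (e.g. gradient boosting) and ROC AUC rather than a single-feature threshold, but the labeling-and-separating logic is the same.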
adversarial-vision-challenge | No description available on PyPI. |
adverse-event-app | adverse_event_appSample Adverse Event App |
adverseml | A library implementing adversarial ML algorithms |
advert-cafe | No description available on PyPI. |
advertest | =========advertest=========.. image:: https://img.shields.io/pypi/v/advertest.svg:target: https://pypi.python.org/pypi/advertest.. image:: https://img.shields.io/travis/eliasdabbas/advertest.svg:target: https://travis-ci.org/eliasdabbas/advertest.. image:: https://readthedocs.org/projects/advertest/badge/?version=latest:target: https://advertest.readthedocs.io/en/latest/?badge=latest:alt: Documentation Status.. image:: https://pyup.io/repos/github/eliasdabbas/advertest/shield.svg:target: https://pyup.io/repos/github/eliasdabbas/advertest/:alt: UpdatesProductivity and analysis tools for online marketingadvertools: create, scale, and manage online campaigns======================================================| A digital marketer is a data scientist.| Your job is to manage, manipulate, visualize, communicate, understand,and make decisions based on data.You might be doing basic stuff, like copying and pasting text on spreadsheets, you might be running large scale automated platforms withsophisticated algorithms, or somewhere in between. In any case your jobis all about working with data.| As a data scientist you don’t spend most of your time producing cool visualizations or finding great insights. The majority of your time is spent wrangling with URLs, figuring out how to stitch together two tables, hoping that the dates, won’t break, without you knowing, or trying to generate the next 124,538 keywords for an upcoming campaign, by the end of the week!| advertools is a Python package, that can hopefully make that part of your job a little easier.I have a tutorial on DataCamp that demonstrates a real-life example ofhow to use `Python for creating a Search Engine Marketing campaign`_.I also have an interactive tool based on this package, where you can`generate keyword combinations easily`_... 
image:: app_screen_shot.png
   :width: 600 px
   :align: center

Main Uses:
~~~~~~~~~~

- **Generate keywords:** starting from a list of products, and a list
  of words that might make sense together, you can generate a full
  table of many possible combinations and permutations of relevant
  keywords for that product.
  The output is a ready-to-upload table to get you started with
  keywords.

.. code:: python

    >>> import advertools as adv
    >>> adv.kw_generate(products=['toyota'],
    ...                 words=['buy', 'price'],
    ...                 match_types=['Exact']).head()
           Campaign Ad Group           Keyword Criterion Type
    0  SEM_Campaign   toyota        toyota buy          Exact
    1  SEM_Campaign   toyota      toyota price          Exact
    2  SEM_Campaign   toyota        buy toyota          Exact
    3  SEM_Campaign   toyota      price toyota          Exact
    4  SEM_Campaign   toyota  toyota buy price          Exact

- **Create ads:** Two main ways to create text ads: one is from scratch
  (bottom-up) and the other is top-down (given a set of product names).

1. From scratch: This is the traditional way of writing ads. You have
   a template text, and you want to insert the product name dynamically
   in a certain location. You also want to make sure you are within the
   character limits. For more details, I have a `tutorial on how to
   create multiple text ads from scratch`_.

.. code:: python

    >>> ad_create(template='Let\'s count {}',
    ...           replacements=['one', 'two', 'three'],
    ...           fallback='one',  # in case the total length is greater than max_len
    ...           max_len=20)
    ["Let's count one", "Let's count two", "Let's count three"]

    >>> ad_create('My favorite car is {}', ['Toyota', 'BMW', 'Mercedes', 'Lamborghini'], 'great', 28)
    ['My favorite car is Toyota', 'My favorite car is BMW', 'My favorite car is Mercedes',
     'My favorite car is great']  # 'Lamborghini' was too long, and so was replaced by 'great'

2.
Top-down approach: Sometimes you need to start with a given list of
   product names, which you can easily split into the relevant ad
   slots, taking into consideration the length restrictions imposed by
   the ad platform.

Imagine having the following list of products, and you want to split
each into slots of 30, 30, and 80 characters (based on the AdWords
template):

.. code:: python

    >>> products = ['Samsung Galaxy S8+ Dual Sim 64GB 4G LTE Orchid Gray',
    ...             'Samsung Galaxy J1 Ace Dual Sim 4GB 3G Wifi White',
    ...             'Samsung Galaxy Note 8 Dual SIM 64GB 6GB RAM 4G LTE Midnight Black',
    ...             'Samsung Galaxy Note 8 Dual SIM 64GB 6GB RAM 4G LTE Orchid Grey']
    >>> [adv.ad_from_string(p) for p in products]
    [['Samsung Galaxy S8+ Dual Sim', '64gb 4g Lte Orchid Gray', '', '', '', ''],
     ['Samsung Galaxy J1 Ace Dual Sim', '4gb 3g Wifi White', '', '', '', ''],
     ['Samsung Galaxy Note 8 Dual Sim', '64gb 6gb Ram 4g Lte Midnight', 'Black', '', '', ''],
     ['Samsung Galaxy Note 8 Dual Sim', '64gb 6gb Ram 4g Lte Orchid', 'Grey', '', '', '']]

| Each ad is split into the respective slots, making sure the slots contain
  complete words, and that each slot has at most the specified number of
  characters allowed.
| This can save time when you have thousands of products to create ads
  for.

- **Analyze word frequency:** Calculate the absolute and weighted
  frequency of words in a collection of documents to uncover hidden
  trends in the data. This is basically answering the question, 'What
  did we write about vs. what was actually read?'
  Here is a tutorial on DataCamp on `measuring absolute vs weighted
  frequency of words`_.

| The package is still under heavy development, so expect a lot of
  changes.
| Feedback and suggestions are more than welcome.

Installation
~~~~~~~~~~~~

.. code:: bash

    pip install advertools

Conventions
~~~~~~~~~~~

Function names mostly start with the object you are working on:

| ``kw_``: for keywords-related functions
| ``ad_``: for ad-related functions
| ``url_``: URL tracking and generation

..
_measuring absolute vs weighted frequency of words: https://www.datacamp.com/community/tutorials/absolute-weighted-word-frequency.. _Python for creating a Search Engine Marketing campaign: https://www.datacamp.com/community/tutorials/sem-data-science.. _generate keyword combinations easily: https://www.dashboardom.com/advertools.. _tutorial on how to create multiple text ads from scratch: https://nbviewer.jupyter.org/github/eliasdabbas/ad_create/blob/master/ad_create.ipynb* Free software: MIT license* Documentation: https://advertest.readthedocs.io.=======History=======0.1.0 (2018-07-02)------------------* First release on PyPI. |
advertion | adversarial-validationA tiny framework to perform adversarial validation of your training and test data.What is adversarial validation?A common workflow in machine learning projects (especially in Kaggle competitions) is:train your ML model in a training dataset.tune and validate your ML model in a validation dataset (typically is a discrete fraction of the training dataset).finally, assess the actual generalization ability of your ML model in a “held-out” test dataset.This strategy is widely accepted, but it heavily relies on the assumption that the training and test datasets are drawn
from the same underlying distribution. This is often referred to as the “identically distributed” property in the
literature.This package helps you easily assert whether the "identically distributed" property holds true for your training and
test datasets or equivalently whether your validation dataset is a good proxy for your model's performance on the unseen
test instances.

If you are a person of details, feel free to take a deep dive into the following companion article: adversarial validation: can i trust my validation dataset?

Install

The recommended installation is via pip:

    pip install advertion

(advertion stands for adversarial validation)

Usage

    from advertion import validate

    train = pd.read_csv("...")
    test = pd.read_csv("...")

    validate(
        trainset=train,
        testset=test,
        target="label",
    )

    # // {
    # //     "datasets_follow_same_distribution": True,
    # //     "mean_roc_auc": 0.5021320833333334,
    # //     "adversarial_features": ['id'],
    # // }

How to contribute

If you wish to contribute, this is a great place to start!

License

Distributed under the Apache License 2.0. |
advertools | AnnouncingData Science with Python for SEO course: Cohort based course, interactive, live-coding.advertools: productivity & analysis tools to scale your online marketingA digital marketer is a data scientist.Your job is to manage, manipulate, visualize, communicate, understand,
and make decisions based on data.

You might be doing basic stuff, like copying and pasting text on spreadsheets,
you might be running large-scale automated platforms with
sophisticated algorithms, or somewhere in between. In any case your job
is all about working with data.As a data scientist you don’t spend most of your time producing cool
visualizations or finding great insights. The majority of your time is spent
wrangling with URLs, figuring out how to stitch together two tables, hoping
that the dates won’t break without you knowing, or trying to generate the
next 124,538 keywords for an upcoming campaign, by the end of the week!advertoolsis a Python package that can hopefully make that part of your job a little easier.Installationpython3-mpipinstalladvertoolsPhilosophy/approachIt’s very easy to learn how to use advertools. There are two main reasons for that.First, it is essentially a set of independent functions that you can easily learn and
use. There are no special data structures, or additional learning that you need. With
basic Python, and an understanding of the tasks that these functions help with, you
should be able to pick it up fairly easily. In other words, if you know how to use an
Excel formula, you can easily use any advertools function.The second reason is thatadvertoolsfollows the UNIX philosophy in its design and
approach. Here is one of the various summaries of the UNIX philosophy by Doug McIlroy:Write programs that do one thing and do it well. Write programs to work together.
Write programs to handle text streams, because that is a universal interface.Let’s see how advertools follows that:Do one thing and do it well:Each function in advertools aims for that. There is a
function that just extracts hashtags from a text list, another one to crawl websites,
one to test which URLs are blocked by robots.txt files, and one for downloading XML
sitemaps. Although they are designed to work together as a full pipeline, they can be
run independently in whichever combination or sequence you want.Write programs to work together:Independence does not mean they are unrelated. The
workflows are designed to aid the online marketing practitioner in various steps for
understanding websites, SEO analysis, creating SEM campaigns and others.Programs to handle text streams because that is a universal interface:In Data
Science the most used data structure that can be considered “universal” is the
DataFrame. So, most functions return either a DataFrame or a file that can be read into
one. Once you have it, you have the full power of all other tools like pandas for
further manipulating the data, Plotly for visualization, or any machine learning
library that can more easily handle tabular data.This way it is kept modular as well as flexible and integrated.
As a next step most of these functions are being converted to no-codeinteractive appsfor non-coders, and taking them to the next
level.SEM CampaignsThe most important thing to achieve in SEM is a proper mapping between the
three main elements of a search campaign: Keywords (the intention) -> Ads (your promise) -> Landing Pages (your delivery of the promise)
Once you have this done, you can focus on management and analysis. More importantly,
once you know that you can set this up in an easy way, you know you can focus
on more strategic issues. In practical terms you need two main tables to get started:Keywords: You cangenerate keywords(note I didn’t say research) with thekw_generatefunction.Ads: There are two approaches that you can use:Bottom-up: You can create text ads for a large number of products by simple
replacement of product names, and providing a placeholder in case your text
is too long. Check out thead_createfunction for more details.Top-down: Sometimes you have a long description text that you want to split
into headlines, descriptions and whatever slots you want to split them into.ad_from_stringhelps you accomplish that.Tutorials and additional resourcesGet started withData Science for Digital Marketing and SEO/SEMSetting a full SEM campaignfor DataCamp’s website tutorialProject to practicegenerating SEM keywords with Pythonon DataCampSetting up SEM campaigns on a large scaletutorial on SEMrushVisualtool to generate keywordsonline based on thekw_generatefunctionSEOProbably the most comprehensive online marketing area that is both technical
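The top-down splitting idea can be sketched in a few lines of plain Python. This is an illustrative re-implementation of the concept, not advertools' actual code, and the slot sizes assume the 30/30/80 AdWords-style template mentioned above:

```python
def split_into_slots(text, slot_sizes=(30, 30, 80)):
    """Greedily pack whole words into fixed-size ad slots.

    Words never get cut in half; a word that doesn't fit the current
    slot starts the next one. Leftover slots stay empty strings.
    """
    words = text.split()
    slots = ["" for _ in slot_sizes]
    i = 0
    for word in words:
        while i < len(slots):
            candidate = (slots[i] + " " + word).strip()
            if len(candidate) <= slot_sizes[i]:
                slots[i] = candidate
                break
            i += 1  # current slot is full; move on to the next one
    return slots

print(split_into_slots(
    "Samsung Galaxy Note 8 Dual SIM 64GB 6GB RAM 4G LTE Midnight Black"))
# -> ['Samsung Galaxy Note 8 Dual SIM', '64GB 6GB RAM 4G LTE Midnight', 'Black']
```

Unlike this sketch, the real `ad_from_string` also handles capitalization and returns a fixed number of slots matching the full ad template; the greedy word-packing logic is the core idea.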
(crawling, indexing, rendering, redirects, etc.) and non-technical (content
creation, link building, outreach, etc.). Here are some tools that can help
with your SEOSEO crawler:A generic SEO crawler that can be customized, built with Scrapy, & with several
features:Standard SEO elements extracted by default (title, header tags, body text,
status code, response and request headers, etc.)CSS and XPath selectors: You probably have more specific needs in mind, so
you can easily pass any selectors to be extracted in addition to the
standard elements being extractedCustom settings: full access to Scrapy’s settings, allowing you to better
control the crawling behavior (set custom headers, user agent, stop spider
after x pages, seconds, megabytes, save crawl logs, run jobs at intervals
where you can stop and resume your crawls, which is ideal for large crawls
or for continuous monitoring, and many more options)Following links: option to only crawl a set of specified pages or to follow
and discover all pages through linksrobots.txt downloaderA simple downloader of robots.txt files in a DataFrame format, so you can
keep track of changes across crawls if any, and check the rules, sitemaps,
etc.XML Sitemaps downloader / parserAn essential part of any SEO analysis is to check XML sitemaps. This is a
simple function with which you can download one or more sitemaps (by
providing the URL for a robots.txt file, a sitemap file, or a sitemap indexSERP importer and parser for Google & YouTubeConnect to Google’s API and get the search data you want. Multiple search
parameters supported, all in one function call, and all results returned in a
DataFrameTutorials and additional resourcesA visual tool built with theserp_googfunction to getSERP rankings on GoogleA tutorial onanalyzing SERPs on a large scale with Pythonon SEMrushSERP datasets on Kagglefor practicing on different industries and use casesSERP notebooks on Kagglesome examples on how you might tackle such dataContent Analysis with XML Sitemaps and PythonXML dataset examples:news sites,Turkish news sites,Bloomberg newsText & Content Analysis (for SEO & Social Media)URLs, page titles, tweets, video descriptions, comments, hashtags are some
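One of the SEO checks mentioned above, testing which URLs are blocked by a robots.txt file, can be illustrated with the standard library alone. This uses Python's built-in `urllib.robotparser` rather than advertools, and the rules below are made up for the example:

```python
import urllib.robotparser

# A made-up robots.txt file, parsed from a list of lines. In practice you
# would point RobotFileParser at a live URL with set_url() and read().
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules)

for url in ["https://example.com/", "https://example.com/private/report"]:
    print(url, "->", parser.can_fetch("*", url))
# https://example.com/ -> True
# https://example.com/private/report -> False
```

advertools' `robots_to_df`-style functions go further by returning the rules themselves as a table you can diff across crawls, but the underlying allow/deny check is the same.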
examples of the types of text we deal with.advertoolsprovides a few
options for text analysisWord frequencyCounting words in a text list is one of the most basic and important tasks in
text mining. What is also important is counting those words by taking in
consideration their relative weights in the dataset.word_frequencydoes
just that.URL AnalysisWe all have to handle many thousands of URLs in reports, crawls, social media
extracts, XML sitemaps and so on.url_to_dfconverts your URLs into
easily readable DataFrames.EmojiProduced with one click, extremely expressive, highly diverse (3k+ emoji),
and very popular, it’s important to capture what people are trying to communicate
with emoji. Extracting emoji, get their names, groups, and sub-groups is
possible. The full emoji database is also available for convenience, as well
as anemoji_searchfunction in case you want some ideas for your next
social media or any kind of communicationextract_ functionsThe text that we deal with contains many elements and entities that have
their own special meaning and usage. There is a group of convenience
functions to help in extracting and getting basic statistics about structured
entities in text; emoji, hashtags, mentions, currency, numbers, URLs, questions
and more. You can also provide a special regex for your own needs.StopwordsA list of stopwords in forty different languages to help in text analysis.Tutorial on DataCamp for creating theword_frequencyfunction and
explaining the importance of the difference betweenabsolute and weighted word frequencyText Analysis for Online MarketersAn introductory article on SEMrushSocial MediaIn addition to the text analysis techniques provided, you can also connect to
the Twitter and YouTube data APIs. The main benefits of usingadvertoolsfor this:Handles pagination and request limits: typically every API has a limited
number of results that it returns. You have to handle pagination when you
need more than the limit per request, which you typically do. This is handled
by defaultDataFrame results: APIs send you back data in a formats that need to be
parsed and cleaned so you can more easily start your analysis. This is also
handled automaticallyMultiple requests: in YouTube’s case you might want to request data for the
same query across several countries, languages, channels, etc. You can
specify them all in one request and get the product of all the requests in
one responseTutorials and additional resourcesA visual tool tocheck what is trending on Twitterfor all available locationsATwitter data analysis dashboardwith many optionsHow to use theTwitter data API with PythonExtracting entities from social media poststutorial on KaggleAnalyzing 131k tweetsby European Football clubs tutorial on KaggleAn overview of theYouTube data API with PythonConventionsFunction names mostly start with the object you are working on, so you can use
autocomplete to discover other options:

- kw_: for keywords-related functions
- ad_: for ad-related functions
- url_: URL tracking and generation
- extract_: for extracting entities from social media posts (mentions, hashtags, emoji, etc.)
- emoji_: emoji-related functions and objects
- twitter: a module for querying the Twitter API and getting results in a DataFrame
- youtube: a module for querying the YouTube Data API and getting results in a DataFrame
- serp_: get search engine results pages in a DataFrame; currently available: Google and YouTube
- crawl: a function you will probably use a lot if you do SEO
- *_to_df: a set of convenience functions for converting to DataFrames
(log files, XML sitemaps, robots.txt files, and lists of URLs)Change Log - advertools0.14.2 (2024-02-24)ChangedAllowsitemap_to_dfto work on offline sitemaps.0.14.1 (2024-02-21)FixedPreserve the order of supplied URLs in the output ofurl_to_df.0.14.0 (2024-02-18)AddedNew modulecrawlyticsfor analyzing crawl DataFrames. Includes functions to
analyze crawl DataFrames (images,redirects, andlinks), as well as
functions to handle large files (jl_to_parquet,jl_subset,parquet_columns).Newencodingoption forlogs_to_df.Option to save the output ofurl_to_dfto a parquet file.ChangedRemove requirement to delete existing log output and error files if they exist.
The function will now overwrite them if they do.Autothrottling is enabled by default incrawl_headersto minimize being blocked.FixedAlways get absolute path for img src while crawling.Handle NA src attributes when extracting images.Change fillna(method=”ffill”) to ffill forurl_to_df.0.13.5 (2023-08-22)AddedInitial experimental functionality forcrawl_images.ChangedEnable autothrottling by default forcrawl_headers.0.13.4 (2023-07-26)Fixed
- Make img attributes consistent in length, and support all attributes.0.13.3 (2023-06-27)ChangedAllow optional trailing space in log files (contributed by @andypayne)FixedReplace newlines with spaces while parsing JSON-LD which was causing
errors in some cases.0.13.2 (2022-09-30)AddedCrawling recipe for how to use theDEFAULT_REQUEST_HEADERSto change
the default headers.ChangedSplit long lists of URL while crawling regardless of thefollow_linksparameterFixedClarify that while authenticating for Twitter onlyapp_keyandapp_secretare required, with the option to provideoauth_tokenandoauth_token_secretif/when needed.0.13.1 (2022-05-11)AddedCommand line interface with most functionsMake documentation interactive for most pages usingthebe-sphinxChangedUsenp.nanwherever there are missing values inurl_to_dfFixedDon’t remove double quotes from etags when downloading XML sitemapsReplace instances ofpd.DataFrame.appendwithpd.concat, which is
deprecated.Replace empty values with np.nan for the size column inlogs_to_df0.13.0 (2022-02-10)AddedNew functioncrawl_headers: A crawler that only makesHEADrequests
to a known list of URLs.New functionreverse_dns_lookup: A way to get host information for a
large list of IP addresses concurrently.New options for crawling:exclude_url_params,include_url_params,exclude_url_regex, andinclude_url_regexfor controlling which links to
follow while crawling.FixedAnycustom_settingsoptions given to thecrawlfunction that were
defined using a dictionary can now be set without issues. There was an
issue if those options were not strings.ChangedTheskip_url_paramsoption was removed and replaced with the more
versatile exclude_url_params, which accepts either True or a list of URL parameters to exclude while following links.

0.12.3 (2021-11-27)

Fixed

- Crawler stops when provided with bad URLs in list mode.

0.12.0,1,2 (2021-11-27)

Added

- New function logs_to_df: Convert a log file of any non-JSON format into a pandas DataFrame and save it to a parquet file. This also compresses the file to a much smaller size.
- Crawler extracts all available img attributes: 'alt', 'crossorigin', 'height', 'ismap', 'loading', 'longdesc', 'referrerpolicy', 'sizes', 'src', 'srcset', 'usemap', and 'width' (excluding global HTML attributes like style and draggable).
- New parameter for the crawl function skip_url_params: Defaults to False, consistent with previous behavior, with the ability to not follow/crawl links containing any URL parameters.
- New column for url_to_df "last_dir": Extract the value in the last directory for each of the URLs.

Changed

- Query parameter columns in the url_to_df DataFrame are now sorted by how full the columns are (the percentage of values that are not NA).

0.11.1 (2021-04-09)

Added

- The nofollow attribute for nav, header, and footer links.

Fixed

- Timeout error while downloading robots.txt files.
- Make extracting nav, header, and footer links consistent with all links.

0.11.0 (2021-03-31)

Added

- New parameter recursive for sitemap_to_df to control whether or not
to get all sub sitemaps (default), or to only get the current (sitemapindex) one.
- New columns for sitemap_to_df: sitemap_size_mb (1 MB = 1,024x1,024 bytes), and sitemap_last_modified and etag (if available).
- Option to request multiple robots.txt files with robotstxt_to_df.
- Option to save downloaded robots DataFrame(s) to a file with robotstxt_to_df using the new parameter output_file.
- Two new columns for robotstxt_to_df: robotstxt_last_modified and etag (if available).
- Raise ValueError in crawl if css_selectors or xpath_selectors contain any of the default crawl column headers.
- New XPath code recipes for custom extraction.
- New function crawllogs_to_df, which converts crawl logs to a DataFrame provided they were saved while using the crawl function.
- New columns in crawl: viewport, charset, all h headings (whichever is available), nav, header and footer links and text, if available.
- Crawl errors don't stop crawling anymore, and the error message is included in the output file under a new errors and/or jsonld_errors column(s).
- In case of having JSON-LD errors, errors are reported in their respective column, and the remainder of the page is scraped.

Changed

- Removed column prefix resp_meta_ from columns containing it.
- Redirect URLs and reasons are separated by '@@' for consistency with other multiple-value columns.
- Links extracted while crawling are not unique any more (all links are extracted).
- Emoji data updated with v13.1.
- Heading tags are scraped even if they are empty, e.g. <h2></h2>.
- Default user agent for crawling is now advertools/VERSION.

Fixed

- Handle sitemap index files that contain links to themselves, with an error message included in the final DataFrame.
- Error in robots.txt files caused by comments preceded by whitespace.
- Zipped robots.txt files causing a parsing issue.
- Crawl issues on some Linux systems when providing a long list of URLs.

Removed

- Columns from the crawl output: url_redirected_to, links_fragment.

0.10.7 (2020-09-18)

Added

- New function knowledge_graph for querying Google's API.
- Faster sitemap_to_df with threads.
- New parameter max_workers for sitemap_to_df to determine how fast it could go.
- New parameter capitalize_adgroups for kw_generate to determine whether or not to keep ad groups as is, or set them to title case (the default).

Fixed

- Remove restrictions on the number of URLs provided to crawl,
assuming follow_links is set to False (list mode).
- JSON-LD issue breaking crawls when it's invalid (now skipped).

Removed

- Deprecate the youtube.guide_categories_list (no longer supported by the API).

0.10.6 (2020-06-30)

Added

- JSON-LD support in crawling. If available on a page, JSON-LD items will have special columns, and multiple JSON-LD snippets will be numbered for easy filtering.

Changed

- Stricter parsing for rel attributes, making sure they are in link elements as well.
- Date column names for robotstxt_to_df and sitemap_to_df unified as "download_date".
- Numbering OG, Twitter, and JSON-LD where multiple elements are present in the same page follows a unified approach: no numbering for the first element, and numbers start with "1" from the second element on: "element", "element_1", "element_2", etc.

0.10.5 (2020-06-14)

Added

- New features for the crawl function:
  - Extract canonical tags if available.
  - Extract alternate href and hreflang tags if available.
  - Open Graph data "og:title", "og:type", "og:image", etc.
  - Twitter cards data "twitter:site", "twitter:title", etc.

Fixed

- Minor fixes to robotstxt_to_df:
  - Allow whitespace in fields.
  - Allow case-insensitive fields.

Changed

- crawl now only supports output_file with the extension ".jl".
- word_frequency drops wtd_freq and rel_value columns if num_list is not provided.

0.10.4 (2020-06-07)

Added

- New function url_to_df, splitting URLs into their components and to a
DataFrame.
- Slight speed up for robotstxt_test.

0.10.3 (2020-06-03)

Added

- New function robotstxt_test, testing URLs and whether they can be fetched by certain user-agents.

Changed

- Documentation main page relayout, grouping of topics, & sidebar captions.
- Various documentation clarifications and new tests.

0.10.2 (2020-05-25)

Added

- User-Agent info to requests getting sitemaps and robotstxt files.
- CSS/XPath selectors support for the crawl function.
- Support for custom spider settings with a new parameter custom_settings.

Fixed

- Update changed supported search operators and values for CSE.

0.10.1 (2020-05-23)

Changed

- Links are better handled, and new output columns are available: links_url, links_text, links_fragment, links_nofollow.
- body_text extraction is improved by containing <p>, <li>, and <span> elements.

0.10.0 (2020-05-21)

Added

- New function crawl for crawling and parsing websites.
- New function robotstxt_to_df downloading robots.txt files into DataFrames.

0.9.1 (2020-05-19)

Added

- Ability to specify robots.txt file for sitemap_to_df.
- Ability to retrieve any kind of sitemap (news, video, or images).
- Errors column to the returned DataFrame if any errors occur.
- A new sitemap_downloaded column showing datetime of getting the sitemap.

Fixed

- Logging issue causing sitemap_to_df to log the same action twice.
- Issue preventing URLs not ending with xml or gz from being retrieved.
- Correct sitemap URL showing in the sitemap column.

0.9.0 (2020-04-03)

Added

- New function sitemap_to_df imports an XML sitemap into a DataFrame.

0.8.1 (2020-02-08)

Changed

- Column query_time is now named queryTime in the youtube functions.
- Handle json_normalize import from pandas based on pandas version.

0.8.0 (2020-02-02)

Added

- New module youtube connecting to all GET requests in the API.
- extract_numbers new function.
- emoji_search new function.
- emoji_df new variable containing all emoji as a DataFrame.

Changed

- Emoji database updated to v13.0.
- serp_goog with expanded pagemap and metadata.

Fixed

- serp_goog errors, some parameters not appearing in the result df.
- extract_numbers issue when providing dash as a separator in the middle.

0.7.3 (2019-04-17)

Added

- New function extract_exclamations, very similar to extract_questions.
- New function extract_urls, also counts top domains and top TLDs.
- New keys to extract_emoji: top_emoji_categories & top_emoji_sub_categories.
- Groups and sub-groups to emoji db.

0.7.2 (2019-03-29)

Changed

- Emoji regex updated.
- Simpler extraction of Spanish questions.

0.7.1 (2019-03-26)

Fixed

- Missing __init__ imports.

0.7.0 (2019-03-26)

Added

- New extract_ functions:
  - Generic extract used by all others, and takes arbitrary regex to extract text.
  - extract_questions to get question mark statistics, as well as the text of questions asked.
  - extract_currency shows text that has currency symbols in it, as well as surrounding text.
  - extract_intense_words gets statistics about, and extracts words with any character repeated three or more times, indicating an intense feeling (+ve or -ve).
- New function word_tokenize:
  - Used by word_frequency to get tokens of
1,2,3-word phrases (or more).
  - Split a list of text into tokens of a specified number of words each.
- New stop-words from the spaCy package:
  - current: Arabic, Azerbaijani, Danish, Dutch, English, Finnish, French, German, Greek, Hungarian, Italian, Kazakh, Nepali, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish, Turkish.
  - new: Bengali, Catalan, Chinese, Croatian, Hebrew, Hindi, Indonesian, Irish, Japanese, Persian, Polish, Sinhala, Tagalog, Tamil, Tatar, Telugu, Thai, Ukrainian, Urdu, Vietnamese.

Changed

- word_frequency takes new parameters:
  - regex defaults to words, but can be changed to anything, '\S+' to split words and keep punctuation for example.
  - sep no longer used as an option; the above regex can be used instead.
  - num_list now optional, and defaults to counts of 1 each if not provided. Useful for counting abs_freq only if data not available.
  - phrase_len: the number of words in each split token. Defaults to 1 and can be set to 2 or higher. This helps in analyzing phrases as opposed to words.
- Parameters supplied to serp_goog appear at the beginning of the result df.
- serp_youtube now contains nextPageToken to make paginating requests easier.

0.6.0 (2019-02-11)

- New function extract_words to extract an arbitrary set of words.
- Minor updates:
  - ad_from_string slots argument reflects new text ad lengths.
  - hashtag regex improved.

0.5.3 (2019-01-31)

- Fix minor bugs:
  - Handle Twitter search queries with 0 results in final request.

0.5.2 (2018-12-01)

- Fix minor bugs:
  - Properly handle requests for >50 items (serp_youtube).
  - Rewrite test for _dict_product.
  - Fix issue with string printing error msg.

0.5.1 (2018-11-06)

- Fix minor bugs:
  - _dict_product implemented with lists.
  - Missing keys in some YouTube responses.

0.5.0 (2018-11-04)

- New function serp_youtube:
  - Query the YouTube API for videos, channels, or playlists.
  - Multiple queries (product of parameters) in one function call.
  - Response looping and merging handled, one DataFrame.
- serp_goog returns Google's original error messages.
- twitter responses with entities get the entities extracted, each in a separate column.

0.4.1 (2018-10-13)

- New function serp_goog (based on Google CSE):
  - Query Google search and get the result in a DataFrame.
  - Make multiple queries / requests in one function call.
  - All responses merged in one DataFrame.
- twitter.get_place_trends results are ranked by town and country.

0.4.0 (2018-10-08)

- New Twitter module based on twython:
  - Wraps 20+ functions for getting Twitter API data.
  - Gets data in a pandas DataFrame.
  - Handles looping over requests higher than the defaults.
- Tested on Python 3.7.

0.3.0 (2018-08-14)

- Search engine marketing cheat sheet.
- New set of extract_ functions with summary stats for each:
  - extract_hashtags
  - extract_mentions
  - extract_emoji
- Tests and bug fixes.

0.2.0 (2018-07-06)

- New set of kw_<match-type> functions.
- Full testing and coverage.

0.1.0 (2018-07-02)

- First release on PyPI.
- Functions available:
  - ad_create: create a text ad placing words in placeholders.
  - ad_from_string: split a long string into shorter strings that fit into given slots.
  - kw_generate: generate keywords from lists of products and words.
  - url_utm_ga: generate a UTM-tagged URL for Google Analytics tracking.
  - word_frequency: measure the absolute and weighted frequency of words in a collection of documents.
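The 0.1.0 function list above includes url_utm_ga for building UTM-tagged URLs. The core idea can be sketched with the standard library alone (tag_url below is a hypothetical helper for illustration; it is not advertools' actual function or signature):

```python
from urllib.parse import urlencode, urlparse, urlunparse

def tag_url(url, utm_source, utm_medium=None, utm_campaign=None):
    """Append Google Analytics UTM parameters to a URL.

    Hypothetical helper sketching the idea behind url_utm_ga;
    advertools' real function takes different arguments.
    """
    params = {"utm_source": utm_source,
              "utm_medium": utm_medium,
              "utm_campaign": utm_campaign}
    # Drop any UTM parameters that were not supplied.
    query = urlencode({k: v for k, v in params.items() if v is not None})
    parts = urlparse(url)
    # Preserve an existing query string by joining with "&".
    full_query = "&".join(q for q in (parts.query, query) if q)
    return urlunparse(parts._replace(query=full_query))

print(tag_url("https://example.com/offer", "newsletter", "email"))
# https://example.com/offer?utm_source=newsletter&utm_medium=email
```

Because the query string is rebuilt with urlencode, parameter values are percent-encoded automatically.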
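The word_tokenize function (added in 0.7.0 and used by word_frequency) splits texts into tokens of 1, 2, or 3-word phrases. A minimal sketch of that n-gram tokenization idea (a simplified stdlib stand-in, not advertools' actual implementation):

```python
def word_tokenize(text_list, phrase_len=2):
    """Split each text into overlapping tokens of `phrase_len` words.

    Simplified sketch of the n-gram idea behind advertools'
    word_tokenize; the real function differs in details.
    """
    result = []
    for text in text_list:
        words = text.lower().split()
        tokens = [" ".join(words[i:i + phrase_len])
                  for i in range(len(words) - phrase_len + 1)]
        result.append(tokens)
    return result

# Two-word phrases (bigrams) from a list of texts:
print(word_tokenize(["SEO crawling with Python"], phrase_len=2))
# [['seo crawling', 'crawling with', 'with python']]
```

With phrase_len=1 this reduces to plain word splitting, which is how single-word frequencies fall out of the same routine.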