package (string, 1–122 characters)
package-description (string, 0–1.3M characters)
alchemista
Alchemista

Tools to generate Pydantic models from SQLAlchemy models. Still experimental.

Installation

Alchemista is available on PyPI. To install it with pip, run:

    pip install alchemista

Usage

Simply call the model_from function with a SQLAlchemy model. Each Column in its definition will result in an attribute of the generated model via the Pydantic Field function.

For example, a SQLAlchemy model like the following

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class PersonDB(Base):
        __tablename__ = "people"

        id = Column(Integer, primary_key=True)
        age = Column(Integer, default=0, nullable=False, doc="Age in years")
        name = Column(String(128), nullable=False, doc="Full name")

could have a Pydantic model generated via

    from alchemista import model_from

    Person = model_from(PersonDB)

and would result in a Pydantic model equivalent to

    from pydantic import BaseModel, Field

    class Person(BaseModel):
        id: int
        age: int = Field(0, description="Age in years")
        name: str = Field(..., max_length=128, description="Full name")

        class Config:
            orm_mode = True

Note that the string length from the column definition was sufficient to add a max_length constraint. Additionally, by default, the generated model has orm_mode=True. That can be customized via the __config__ keyword argument.

There is also an exclude keyword argument that accepts a set of field names to not include in the generated model, and an include keyword argument that accepts a set of field names to include in the generated model. However, they are mutually exclusive and cannot be used together.

This example is available in a short executable form in the examples/ directory.

Field arguments and info

Currently, the type, the default value (either scalar or callable), and the description (from the doc attribute) are extracted directly from the Column definition. However, except for the type, all of them can be overridden via the info dictionary attribute. All other custom arguments to the Field function are specified there too. The supported keys are listed in alchemista.field.Info.

Everything specified in info takes precedence over what has been extracted from Column. This means that the default value and the description can be overridden if so desired. Also, as when using Pydantic directly, default and default_factory are mutually exclusive, so they cannot be used together. Use default_factory if the default value comes from calling a function (without any arguments).

For example, in the case above,

    name = Column(String(128), nullable=False, doc="Full name", info=dict(description=None, max_length=64))

would instead result in

    name: str = Field(..., max_length=64)

fields_from and model_from

The fields_from function is what actually inspects the SQLAlchemy model and builds a dictionary in a format that can be used to generate a Pydantic model, so model_from is just a shortcut for calling fields_from and then pydantic.create_model. The model name that model_from sets is db_model.__name__.

If desired, or if extra control is needed, pydantic.create_model can be used directly, in conjunction with fields_from. This allows customizing the name of the model that will be created and specifying other create_model arguments, like __base__ and __validators__ (model_from currently only accepts __config__). For example:

    from alchemista import fields_from
    from pydantic import create_model

    MyModel = create_model("MyModel", **fields_from(DBModel))

transform

Both fields_from and model_from have a transform argument. It is a callable used to transform the fields before generating a Pydantic model. The provided transformation functions are in the func module. By default, the transform argument is set to func.unchanged, which, as the name implies, does nothing.

The other provided transformation function is func.nonify, which makes all fields optional (if they weren't already) and nullable, and sets the default value to None. This is useful when some kind of "input model" is desired. For example, the database model might have an auto-generated primary key and some other columns with default values. When updating an entity of this model, one wouldn't want to receive the primary key as an update candidate (which can be solved using the exclude argument), and probably wouldn't want the fields with default values to actually have those values, since the update is meant to change only what the user asked to update. For example:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    from alchemista import model_from
    from alchemista.func import nonify

    Base = declarative_base()

    class PersonDB(Base):
        __tablename__ = "people"

        id = Column(Integer, primary_key=True)
        age = Column(Integer, default=0, nullable=False, doc="Age in years")
        name = Column(String(128), nullable=False, doc="Full name")

    PersonInput = model_from(PersonDB, exclude={"id"}, transform=nonify)

The model PersonInput is equivalent to the hand-written version below:

    from typing import Optional

    from pydantic import BaseModel, Field

    class PersonInput(BaseModel):
        age: Optional[int] = Field(None, description="Age in years")
        name: Optional[str] = Field(None, max_length=128, description="Full name")

        class Config:
            orm_mode = True

Note that neither age nor name was originally nullable (Optional, in Python), and name was required (age wasn't, because it has a default value). Now both fields are nullable and their default value is None.

User-defined transformations

You can also create your own transformation functions. The expected signature is as follows:

    from typing import Tuple

    from pydantic.fields import FieldInfo

    def transformation(name: str, python_type: type, field: FieldInfo) -> Tuple[type, FieldInfo]:
        pass

where name is the name of the field currently being created, python_type is its Python type, and field is its full Pydantic field specification. The return type is a tuple of the Python type and the field specification. These two can be changed freely (the name can't).

License

This project is licensed under the terms of the MIT license.
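To make the transformation contract above concrete, here is a minimal sketch of a user-defined transformation. The function name and its behavior (uppercasing extracted descriptions) are illustrative assumptions, not part of alchemista; only the signature and the transform argument come from the documentation above.

    # A minimal sketch of a user-defined alchemista transformation. The
    # function name and the uppercasing behavior are illustrative assumptions;
    # the signature follows the contract documented above.
    from typing import Tuple

    from pydantic.fields import FieldInfo

    def shout_description(name: str, python_type: type, field: FieldInfo) -> Tuple[type, FieldInfo]:
        # Uppercase the description, if one was extracted from the Column.
        if field.description:
            field.description = field.description.upper()
        return python_type, field

It would then be used like any other transformation: Person = model_from(PersonDB, transform=shout_description).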
alchemist-armet
UNKNOWN
alchemist.audit
Alchemist auditing provides a facility for auditing changes to an object in a relational database. It automatically captures and records events for objects added, modified, and deleted. In addition, if the respective packages are present, it also records workflow events (ore.workflow) and versioning (alchemist.versioning). All events record the time, the action, and the active user; modification events additionally capture field change descriptions, to allow for listing the attributes changed. Auditing can be done on either a per-table or …

0.3.4 - December 23rd, 2008
- packaging fixes (template not included in package data)

0.3.2 - December 17th, 2008
- update package metadata / classifiers
- don't require active user for auditing functionality
- automation API for adapter registration and recorder generation (provideRecorder)

0.3.0 - June 1st, 2008
- First public release
alchemist_lib
Alchemist_lib

Description

Alchemist_lib is an automatic trading library for cryptocurrencies that allows you to personalize the portfolio based on a specific strategy.

Features
- Easy to use: the interface is similar to zipline, a popular backtesting software for stocks.
- Portfolio personalization: you can choose the weight of every element in the portfolio.
- The most common technical analysis indicators are already integrated.
- Execute orders on the most famous exchanges.
- Possibility to visualize the asset allocation and the portfolio value charts for every strategy thanks to alchemist-view.
- Fully documented and hosted on readthedocs.

Supported Exchanges

The following exchanges are available to trade on:
- Poloniex
- Bittrex

Requirements
- Python3
- Mysql

Installation

See the installing documentation.

Code example

Strategy description: hold a portfolio equally composed of Ethereum and BitcoinCash.

    from alchemist_lib.portfolio import LongsOnlyPortfolio
    from alchemist_lib.broker import PoloniexBroker
    from alchemist_lib.tradingsystem import TradingSystem
    import alchemist_lib.exchange as exch
    import pandas as pd

    def set_weights(df):
        df["weight"] = 0.5  # Because there are just two assets.
        return df

    def select_universe(session):
        poloniex_assets = exch.get_assets(session=session, exchange_name="poloniex")
        my_universe = []
        for asset in poloniex_assets:
            if asset.ticker == "ETH" or asset.ticker == "BCH":
                my_universe.append(asset)
        return my_universe

    def handle_data(session, universe):
        # The value of alpha is useless in this case.
        df = pd.DataFrame(data={"asset": universe, "alpha": 0},
                          columns=["asset", "alpha"]).set_index("asset")
        return df

    algo = TradingSystem(name="BuyAndHold",
                         portfolio=LongsOnlyPortfolio(capital=0.02),
                         set_weights=set_weights,
                         select_universe=select_universe,
                         handle_data=handle_data,
                         broker=PoloniexBroker(api_key="APIKEY", secret_key="SECRETKEY"),
                         paper_trading=True)
    algo.run(delay="15M", frequency=1)

Basic concepts

Alchemist_lib works with three methods: set_weights, select_universe, and handle_data.

- set_weights is used to set the weight an asset has relative to the others within the portfolio. The sum of all weights must be close to 1. It must return a pandas dataframe with two columns, "asset" and "alpha", where "asset" is the index.
- select_universe filters the assets saved in the database and returns just the ones the strategy will take into consideration.
- handle_data is the most important one because it manages the trading logic. It must return a pandas dataframe with two columns, "asset" and "alpha", where "asset" is the index.

You can find other examples in the examples directory.

Reporting bugs

A bug tracker is provided by Github.
alchemistry-flamel
Flamel

The aim of the project is to develop a command line interface (CLI) to alchemlyb, the well-tested and actively developed library for alchemical free energy calculations. It is supposed to become the successor of the now-unsupported alchemical-analysis script.

Installation

The package containing flamel is called alchemistry-flamel. The latest release can be installed with pip or, alternatively, from source. Both methods are explained below.

pip

flamel is available from the Python Package Index (PyPI) under the name alchemistry-flamel and can be installed with

    pip install alchemistry-flamel

The installed package makes the flamel script available.

From sources

Clone the flamel repository https://github.com/alchemistry/flamel and install with pip:

    git clone git@github.com:alchemistry/flamel.git
    pip install flamel/

Uninstalling

If you want to remove flamel after having installed it with pip, run

    pip uninstall alchemistry-flamel

to delete flamel and its associated files.

Usage

The analysis can be invoked with the following command:

    flamel -a GROMACS -d dhdl_data -f 10 -g -i 50 -j result.csv -m TI,BAR,MBAR -n dE -o out_data -p dhdl -q xvg -r 3 -s 50 -t 298 -v -w

Run flamel -h to see the full description of the options.

Output

This script is a wrapper around the ABFE workflow in alchemlyb. The script will generate the output from the ABFE workflow, including O_MBAR.pdf, dF_t.pdf, dF_state.pdf, and dhdl_TI.pdf.

The script will also generate result.csv and result.p, a summary of the results as a pandas DataFrame:

               TI  TI_Error    BAR  BAR_Error   MBAR  MBAR_Error
    States
    0 -- 1  0.962     0.007  0.956      0.007  0.964       0.006
    1 -- 2  0.567     0.006  0.558      0.006  0.558       0.004
    2 -- 3  0.264     0.005  0.258      0.005  0.254       0.004
    3 -- 4  0.035     0.004  0.035      0.004  0.030       0.003
    Stages
    fep     1.828     0.014  1.806      0.016  1.807       0.014
    TOTAL   1.828     0.014  1.806      0.011  1.807       0.014

Name

In the tradition of associating free energy estimations with alchemy, it's named after Nicolas Flamel.

Copyright

Copyright (c) 2022, the AUTHORS.

Acknowledgements

@harlor started flamel as a replacement for the original alchemical-analysis.py script. Project template based on the Computational Molecular Science Python Cookiecutter version 1.1.
alchemist.security
A relational implementation of Zope security components, including authentication, principal role mappings (global and local), and permission role mappings (global and local).
alchemist-stack
alchemist-stack

Package Author: H.D. 'Chip' McCullough IV
Last Updated: April 23rd, 2018

Description: A flexible Model-Repository-Database stack for use with SQLAlchemy.

Overview

Alchemist Stack is intended to be a thread-safe, multi-session/multi-connection …

Usage

Example ORM table:

    # table_example.py
    from alchemist_stack.repository.models import Base
    from sqlalchemy import Column, Integer, DateTime

    class ExampleTable(Base):
        __tablename__ = 'example'

        primary_key = Column('id', Integer, primary_key=True)
        timestamp = Column(DateTime(timezone=True), nullable=False)

        def __repr__(self):
            return '<Example(timestamp={timestamp})>'.format(timestamp=self.timestamp)

Example model:

    # model_example.py
    from tables.table_example import ExampleTable
    from datetime import datetime, timezone
    from typing import TypeVar

    E = TypeVar('E', bound="Example")

    class Example(object):
        """Example Model class."""

        def __init__(self, timestamp: datetime = datetime.now(timezone.utc).astimezone(),
                     primary_key: int = None, *args, **kwargs):
            self.__pk = primary_key
            self.__timestamp = timestamp
            self.__args = args
            self.__kwargs = kwargs

        def __call__(self, *args, **kwargs) -> ExampleTable:
            """Called when an instance of Example is called, e.g.:
                `x = Example(...)`
                `x(...)`
            This is equivalent to calling `to_orm()` on the object instance.

            :returns: The ORM of the Example.
            :rtype: ExampleTable
            """
            return self.to_orm()

        def __repr__(self) -> str:
            """A detailed String representation of Example.

            :returns: String representation of Example object.
            """
            return '<class Test(pk={pk} timestamp={timestamp}) at {hex_id}>'.format(
                pk=self.__pk, timestamp=self.__timestamp, hex_id=hex(id(self)))

        @property
        def id(self) -> int:
            return self.__pk

        @property
        def timestamp(self) -> datetime:
            return self.__timestamp

        def to_orm(self) -> ExampleTable:
            return ExampleTable(primary_key=self.__pk, timestamp=self.__timestamp)

        @classmethod
        def from_orm(cls, obj: ExampleTable) -> E:
            return cls(timestamp=obj.timestamp, primary_key=obj.primary_key)

Example repository:

    # repository_example.py
    from alchemist_stack.context import Context
    from alchemist_stack.repository import RepositoryBase
    from models.model_example import Example
    from tables.table_example import ExampleTable

    class ExampleRepository(RepositoryBase):
        """"""

        def __init__(self, context: Context, *args, **kwargs):
            """Test Repository Constructor

            :param context: The Context object containing the engine used to connect to the database.
            :param args: Additional Arguments
            :param kwargs: Additional Keyword Arguments
            """
            super().__init__(context=context, *args, **kwargs)

        def __repr__(self):
            return '<class ExampleRepository->RepositoryBase(context={context}) at {hex_id}>'\
                .format(context=str(self.context), hex_id=hex(id(self.context)))

        def create_example(self, obj: Example):
            self._create_object(obj=obj.to_orm())

        def get_example_by_id(self, example_id: int) -> Example:
            self._create_session()
            __query = self._read_object(cls=ExampleTable)
            __t = __query.with_session(self.session).get(example_id)
            self._close_session()
            if isinstance(__t, ExampleTable):
                return Example.from_orm(__t)

        def update_example_by_id(self, example_id: int, values: dict,
                                 synchronize_session: str = 'evaluate') -> int:
            self._create_session()
            __query = self._update_object(cls=ExampleTable, values=values)
            rowcount = __query.with_session(self.session)\
                .filter(ExampleTable.primary_key == example_id)\
                .update(values=values, synchronize_session=synchronize_session)
            self._commit_session()
            return rowcount
alchemist.traversal
Traversal of objects by foreign keys for relational applications

Changes

0.4.0 - December 17th, 2008
- switch to buildout-based testing environment
- fix: only set parent on domain container if we have an instance
- fix: if parent not specified, don't set constraint

0.3.1 - June 1st, 2008
- fix an initialization exception during SQLAlchemy introspection of variables; inspection of managed container properties on a class (i.e. when no instance is passed) returns a container without query modifiers
alchemist.ui
0.4.2 - December 23rd, 2008
- fix packaging issue with binary egg

0.4.1 - December 17th, 2008
- fix: unique validator short-circuits on form field validation and invariants
- removed browser menu registrations
- disallow field fields as container columns
- redo unique validation in the presence of property/column aliases
- server-side sort, batching, paging JSON container views (ExtJS / YUI compatible)
- traversable viewlet
alchemite-apiclient
This is a client for interacting with Alchemite Analytics, an applied machine learning platform to accelerate industrial R&D and optimise manufacturing by extracting information from sparse or noisy datasets. To obtain a licence for this product, please contact Intellegens for more information.

API version: 0.70.0

Requirements

Python >= 3.8

Installation & Usage

pip install

You can install this from the public pip repository using:

    pip install alchemite-apiclient

Alternatively, you can install it from a zip archive using:

    pip install ./api_client_python-version.zip

(you may need to run pip with root permission: sudo pip install ./api_client_python-version.zip)

Then import the package:

    import alchemite_apiclient

Getting Started

Please follow the installation procedure. Examples can be found in the source distribution, downloadable from https://pypi.org/project/alchemite-apiclient/#files

Then place your credentials.json file in the "example" directory and run

    python example_connect.py

This should connect to the API server and, if successful, print something like this to the terminal (the numbers you see may be different):

    ------ API version -----
    {'alchemite_version': '20200414', 'api_application_version': '0.15.3', 'api_definition_version': '0.14.3'}

If instead you encounter an error at this stage, please contact Intellegens for further guidance.

Next, look through and try running example/example_basic.py. This will upload a small dataset, train a basic model with the default hyperparameters, and predict the missing values from a dataset.

Examples of other functionality possible through the Alchemite API are given by:
- example/example_hyperopt.py: train an optimal model using hyperparameter optimization and impute the training dataset
- example/example_chunk.py: upload a larger dataset in chunks
- example/example_delete.py: delete models and datasets
- example/example_optimize.py: search the model's parameter space for parameters predicted to meet certain targets
- example/example_outliers.py: find outliers in the model's training dataset
- example/example_preload.py: preload a model into memory to make predictions for larger models faster

Credentials

The credentials.json file requires the following elements:
- host: The base URI of the Alchemite API you are attempting to use. (Ordinarily https://alchemiteapi.intellegens.ai/v0)
- client_id: The client id to use for authentication. (Ordinarily PythonClient)
- grant_type: One of password, client_credentials, authorization_code.

Grant types each have additional elements:

Authorization Code: This will open a browser to prompt for user credentials for using the API. This is the recommended way of authenticating.
- offline (optional): If true, the client will attempt to acquire an offline token to persist user authentication between sessions. This token is stored in a .alchemite_token file in the working directory.

Password: This will use credentials collected from the command line to authenticate with the API.
- username (optional): The username to log in with. If omitted, the user will be prompted to enter it.
- password (optional): The password to log in with. If omitted, the user will be prompted to enter it.
- offline (optional): If true, the client will attempt to acquire an offline token to persist user authentication between sessions. This token is stored in a .alchemite_token file in the working directory.

Client Credentials: Attempts to authenticate using a client secret.
- client_secret: The client secret to use for authentication.

Offline tokens

Offline tokens persist indefinitely, but will expire if unused for more than 30 days. In the event that a token is lost or stolen, it can be revoked from your profile page in the Applications tab.

Reference documentation corresponding to each API endpoint can be found in the docs directory of the source distribution. This Python package is automatically generated by the OpenAPI Generator project.
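As a sketch, a minimal credentials.json for the password grant type, built only from the elements listed above, could be produced like this (the username and password values are placeholders):

    # Writes a minimal credentials.json for the "password" grant type, using
    # only the elements documented above. Username/password are placeholders.
    import json

    credentials = {
        "host": "https://alchemiteapi.intellegens.ai/v0",
        "client_id": "PythonClient",
        "grant_type": "password",
        "username": "my_user",      # optional: prompted for if omitted
        "password": "my_password",  # optional: prompted for if omitted
        "offline": True,            # optional: persist auth between sessions
    }

    with open("credentials.json", "w") as f:
        json.dump(credentials, f, indent=2)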
alchemize
Alchemize is designed to be a simple serialization and deserialization library for Python. The primary use-case for Alchemize is to allow users to quickly build REST clients, using simple model mappings to transform data from Python objects to a serializable form and vice versa.

The power of Alchemize is that you can use it to augment existing model structures from other libraries. For example, you can use Alchemize to easily serialize your ORM models.

Installation

Alchemize is available on PyPI:

    pip install alchemize

Documentation: ReadTheDocs
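A rough sketch of the model-mapping idea follows. It assumes alchemize's JsonMappedModel/Attr/JsonTransmuter interface; check the exact names and signatures against the ReadTheDocs documentation.

    # A rough sketch of declaring a mapped model and transmuting JSON to and
    # from it. The JsonMappedModel/Attr/JsonTransmuter names are assumptions
    # to verify against alchemize's documentation.
    from alchemize import Attr, JsonMappedModel, JsonTransmuter

    class User(JsonMappedModel):
        # Map JSON keys to Python attribute names and types.
        __mapping__ = {
            'id': Attr('user_id', int),
            'name': Attr('name', str),
        }

    # Deserialize a JSON payload into a User instance...
    user = JsonTransmuter.transmute_from(User, '{"id": 1, "name": "Ada"}')

    # ...and serialize it back to a JSON string.
    payload = JsonTransmuter.transmute_to(user)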
alchemlyb
alchemlyb makes alchemical free energy calculations easier to do by leveraging the full power and flexibility of the PyData stack. It includes:

- Parsers for extracting raw data from output files of common molecular dynamics engines such as GROMACS, AMBER, NAMD and other simulation codes.
- Subsamplers for obtaining uncorrelated samples from timeseries data (including extracting independent, equilibrated samples [Chodera2016] as implemented in the pymbar package).
- Estimators for obtaining free energies directly from this data, using best-practices approaches for the multistate Bennett acceptance ratio (MBAR) [Shirts2008] and BAR (from pymbar) and thermodynamic integration (TI).

Installation

Install via pip from PyPI (alchemlyb):

    pip install alchemlyb

or as a conda package from the conda-forge (alchemlyb) channel:

    conda install -c conda-forge alchemlyb

Update with pip:

    pip install --upgrade alchemlyb

or with conda run

    conda update -c conda-forge alchemlyb

to get the latest released version.

Getting involved

Contributions of all kinds are very welcome. If you have questions or want to discuss alchemlyb, please post in the alchemlyb Discussions. If you have bug reports or feature requests, then please get in touch with us through the Issue Tracker. We also welcome code contributions: have a look at our Developer Guide. Open an issue with the proposed fix or change in the Issue Tracker and submit a pull request against the alchemistry/alchemlyb GitHub repository.

References

[Shirts2008] Shirts, M.R., and Chodera, J.D. (2008). Statistically optimal analysis of samples from multiple equilibrium states. The Journal of Chemical Physics 129, 124105.
[Chodera2016] Chodera, J.D. (2016). A Simple Method for Automated Equilibration Detection in Molecular Simulations. Journal of Chemical Theory and Computation 12, 1799–1805.
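The pieces above compose into a short pipeline. Here is a minimal sketch that parses GROMACS dhdl files and fits the MBAR estimator; it uses the benzene test data from the separate alchemtest package purely for illustration.

    # Parse u_nk reduced potentials from each lambda window and fit MBAR.
    # The alchemtest benzene dataset is used here only as example input.
    import pandas as pd

    from alchemtest.gmx import load_benzene
    from alchemlyb.parsing.gmx import extract_u_nk
    from alchemlyb.estimators import MBAR

    bz = load_benzene()
    u_nk = pd.concat([extract_u_nk(xvg, T=300) for xvg in bz.data['Coulomb']])

    # delta_f_ holds free energy differences between states (in units of kT).
    mbar = MBAR().fit(u_nk)
    print(mbar.delta_f_)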
alchemtest
alchemtest is a collection of test datasets for alchemical free energy calculations. The datasets come from a variety of software packages, primarily molecular dynamics engines, and are used as the test set for alchemlyb. The package is standalone, however, and can be used for any purpose.

Datasets are released under an open license that conforms to the Open Definition 2.1, which allows free use, re-use, redistribution, modification, and separation, for any purpose and without charge.
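As a small sketch of how a bundled dataset is loaded (the load_benzene accessor and the Bunch-style attributes follow the package's dataset-loading pattern; verify the exact names against the alchemtest docs):

    # Load one of the bundled GROMACS test datasets and inspect it.
    from alchemtest.gmx import load_benzene

    bz = load_benzene()
    print(bz.DESCR)                # human-readable description of the dataset
    print(list(bz.data))           # legs of the calculation, e.g. Coulomb/VDW
    print(bz.data['Coulomb'][:3])  # paths to the first few dhdl.xvg files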
alchemy
Experiments logging & visualization. Project manifest. Part of the Catalyst Ecosystem:
- Alchemy - experiments logging & visualization
- Catalyst - accelerated deep learning research and development
- Reaction - convenient deep learning models serving

Installation

Common installation:

    pip install -U alchemy

Previous name: alchemy-catalyst

Getting started

Go to Alchemy and get your personal token. Then run the following example.py:

    import random

    from alchemy import Logger

    # insert your personal token here
    token = "..."
    project = "default"

    for gid in range(1):
        group = f"group_{gid}"
        for eid in range(2):
            experiment = f"experiment_{eid}"
            logger = Logger(
                token=token,
                experiment=experiment,
                group=group,
                project=project,
            )
            for mid in range(4):
                metric = f"metric_{mid}"
                # let's sample some random data
                n = 300
                x = random.randint(-10, 10)
                for i in range(n):
                    logger.log_scalar(metric, x)
                    x += random.randint(-1, 1)
            logger.close()

Now you should see your metrics on Alchemy.

Catalyst.Ecosystem

Go to Alchemy and get your personal token. Then log your Catalyst experiment with AlchemyLogger:

    from catalyst.dl import SupervisedRunner, AlchemyLogger

    runner = SupervisedRunner()
    runner.train(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        loaders=loaders,
        logdir=logdir,
        num_epochs=num_epochs,
        verbose=True,
        callbacks={
            "logger": AlchemyLogger(
                token="...",  # your Alchemy token
                project="your_project_name",
                experiment="your_experiment_name",
                group="your_experiment_group_name",
            )
        },
    )

Now you should see your metrics on Alchemy.

Examples

For more detailed tutorials, please follow the Catalyst examples.
alchemyapi_python
# alchemyapi_python #

An SDK for AlchemyAPI using Python.

## AlchemyAPI ##

AlchemyAPI offers artificial intelligence as a service. We teach computers to learn how to read and see, and apply our technology to text analysis and image recognition through a cloud-based API. Our customers use AlchemyAPI to transform their unstructured content such as blog posts, news articles, social media posts and images into much more useful structured data.

AlchemyAPI is a tech startup located in downtown Denver, Colorado. As the world's most popular text analysis service, AlchemyAPI serves over 3.5 billion monthly API requests to over 35,000 developers. To enable our services, we use artificial intelligence, machine learning, neural networks, natural language processing and massive-scale web crawling. Our technology powers use cases in a variety of industry verticals, including social media monitoring, business intelligence, content recommendations, financial trading and targeted advertising.

More information at: http://www.alchemyapi.com

## API Key ##

To use AlchemyAPI, you'll need to obtain an API key and attach that key to all requests. If you do not already have a key, please visit: http://www.alchemyapi.com/api/register.html

## Requirements ##

The Python SDK requires that you install the [Requests Python module](http://docs.python-requests.org/en/latest/user/install/#install).
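A sketch of a typical call follows. The AlchemyAPI class and the sentiment("text", ...) flavor convention are based on this SDK's historical examples and should be treated as assumptions (an API key must be configured first, and the service itself has since been retired):

    # Run a sentiment call against the (historical) AlchemyAPI service.
    from alchemyapi import AlchemyAPI

    alchemyapi = AlchemyAPI()
    response = alchemyapi.sentiment("text", "I love writing Python SDKs!")

    if response.get("status") == "OK":
        print("Sentiment:", response["docSentiment"]["type"])
    else:
        print("Error:", response.get("statusInfo"))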
alchemy-catalyst
Experiments logging & visualization. Part of the Catalyst Ecosystem. Project manifest.

Installation

Common installation:

    pip install -U alchemy-catalyst

Getting started

Go to Alchemy and get your personal token. Then run the following example.py:

    import random

    from alchemy import Logger

    # insert your personal token here
    token = "..."
    project = "default"

    for gid in range(1):
        group = f"group_{gid}"
        for eid in range(2):
            experiment = f"experiment_{eid}"
            logger = Logger(
                token=token,
                experiment=experiment,
                group=group,
                project=project,
            )
            for mid in range(4):
                metric = f"metric_{mid}"
                # let's sample some random data
                n = 300
                x = random.randint(-10, 10)
                for i in range(n):
                    logger.log_scalar(metric, x)
                    x += random.randint(-1, 1)
            logger.close()

Now you should see your metrics on Alchemy.

Catalyst.Ecosystem

Go to Alchemy and get your personal token. Then log your Catalyst experiment with AlchemyRunner:

    from catalyst.dl import SupervisedAlchemyRunner

    runner = SupervisedAlchemyRunner()
    runner.train(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        loaders=loaders,
        logdir=logdir,
        num_epochs=num_epochs,
        verbose=True,
        monitoring_params={
            "token": "...",  # insert your personal token here
            "project": "default",
            "experiment": "your_experiment_name",
            "group": "your_experiment_group_name",
        },
    )

Now you should see your metrics on Alchemy.

Examples

For more detailed tutorials, please follow the Catalyst examples.
alchemy-config
Alchemy Config

The aconfig library provides simple yaml configuration in python with environment-based overrides.

Installation

To install, simply use pip:

    pip install alchemy-config

Quick Start

config.yaml:

    foo: 1
    bar:
      baz: "bat"

main.py:

    import aconfig

    if __name__ == "__main__":
        config = aconfig.Config.from_yaml("config.yaml")
        print(config.foo)
        print(config.bar.baz)

Run it with an environment override in place:

    export BAR_BAZ="buz"
    python3 main.py

Because the environment variable BAR_BAZ is set, the nested value config.bar.baz is overridden to "buz" when the program runs.

Corner-case Behavior

You CAN set builtin method names as attributes on the config. However, you should only access/delete them via dictionary access methods. For example:

    import aconfig

    cfg = {"update": True}
    config = aconfig.Config(cfg)

    # DO NOT DO THIS:
    config.update

    # DO THIS INSTEAD:
    config["update"]

This is because there is no way in Python to tell whether you want the method or the attribute "update" when "getting" it from the object.
alchemy-graph
:small_red_triangle: alchemy_graph :small_red_triangle:

SQLAlchemy mapper to Strawberry types.

:pencil2: Installation

You can install the mapper using pip:

    pip install alchemy-graph

Functions:

get_only_selected_fields

Given a SQLAlchemy model class and a Strawberry Info object representing a selection set, returns a SQLAlchemy Select object that loads only the fields and relations specified in the selection set.

Parameters:
- sqlalchemy_class: The SQLAlchemy model class to select fields from.
- info: The Strawberry Info object representing the selection set.
- inner_selection_name: The name of an inner selection set to consider. If specified, only fields and relations under this selection set will be included in the Select object.

Returns: A SQLAlchemy Select object that loads only the specified fields and relations.

orm_to_strawberry

Maps a SQLAlchemy model to a Strawberry class.

Parameters:
- input_data: SQLAlchemy Base model or list of base models.
- strawberry_type: Strawberry class wrapped in strawberry.input or strawberry.type.

Returns: Strawberry object or list of them.

strawberry_to_dict

Given a Strawberry object and an optional list of allowed keys, returns a dictionary representation of the object.

Parameters:
- obj: A Strawberry object to convert to a dictionary.
- allowed_keys: An optional list of keys to include in the output dictionary. If not specified, all keys are included.

Returns: A dictionary representation of the input object.

orm_mapper

Returns a decorator for your Query strawberry.field().

Parameters:
- strawberry_type: Strawberry type that should be returned. Required if result_to_strawberry=True.
- inject_query: Inject the SQLAlchemy Query into the current function. Default value: False.
- sqlalchemy_class: SQLAlchemy model class.
- inner_selection_name: The name of an inner selection set to consider. If specified, only fields and relations under this selection set will be included in the Select object.
- result_to_strawberry: If True, it returns Strawberry object(s). Default value: True.

get_dict_object

Given a SQLAlchemy object, returns a dictionary representation of the object.

Parameters:
- obj: A SQLAlchemy object to convert to a dictionary.

Returns: A dictionary representation of the input object.

LICENSE

This project is licensed under the terms of the MIT license.
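As an illustration of the orm_to_strawberry function described above, here is a minimal sketch; the User model and UserType are hypothetical, and only the function itself comes from alchemy-graph:

    # Map a SQLAlchemy instance onto a Strawberry type with orm_to_strawberry.
    # The User model and UserType below are hypothetical examples.
    import strawberry
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    from alchemy_graph import orm_to_strawberry

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"

        id = Column(Integer, primary_key=True)
        name = Column(String(64))

    @strawberry.type
    class UserType:
        id: int
        name: str

    # Returns a UserType instance populated from the SQLAlchemy object.
    user_obj = orm_to_strawberry(User(id=1, name="Ada"), strawberry_type=UserType)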
alchemyjsonschema
features

alchemyjsonschema is a library for converting SQLAlchemy models to JSON Schema.

- using alchemyjsonschema as a command
- using alchemyjsonschema as a library

as library

It has three output styles:
- NoForeignKeyWalker - ignores relationships
- ForeignKeyWalker - expects the information about a relationship to be a foreign key
- StructuralWalker - full-set output (expects the information about a relationship to be full JSON data)

examples

Dumping JSON with the above three output styles. The target models are here: Group and User.

    # -*- coding:utf-8 -*-
    import sqlalchemy as sa
    import sqlalchemy.orm as orm
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Group(Base):
        """model for test"""
        __tablename__ = "Group"

        pk = sa.Column(sa.Integer, primary_key=True, doc="primary key")
        name = sa.Column(sa.String(255), default="", nullable=False)

    class User(Base):
        __tablename__ = "User"

        pk = sa.Column(sa.Integer, primary_key=True, doc="primary key")
        name = sa.Column(sa.String(255), default="", nullable=True)
        group_id = sa.Column(sa.Integer, sa.ForeignKey(Group.pk), nullable=False)
        group = orm.relationship(Group, uselist=False, backref="users")

NoForeignKeyWalker

    import pprint as pp
    from alchemyjsonschema import SchemaFactory
    from alchemyjsonschema import NoForeignKeyWalker

    factory = SchemaFactory(NoForeignKeyWalker)
    pp.pprint(factory(User))

    """
    {'properties': {'name': {'maxLength': 255, 'type': 'string'},
                    'pk': {'description': 'primary key', 'type': 'integer'}},
     'required': ['pk'],
     'title': 'User',
     'type': 'object'}
    """

ForeignKeyWalker

    import pprint as pp
    from alchemyjsonschema import SchemaFactory
    from alchemyjsonschema import ForeignKeyWalker

    factory = SchemaFactory(ForeignKeyWalker)
    pp.pprint(factory(User))

    """
    {'properties': {'group_id': {'type': 'integer'},
                    'name': {'maxLength': 255, 'type': 'string'},
                    'pk': {'description': 'primary key', 'type': 'integer'}},
     'required': ['pk', 'group_id'],
     'title': 'User',
     'type': 'object'}
    """

StructuralWalker

    import pprint as pp
    from alchemyjsonschema import SchemaFactory
    from alchemyjsonschema import StructuralWalker

    factory = SchemaFactory(StructuralWalker)
    pp.pprint(factory(User))

    """
    {'definitions': {'Group': {'properties': {'pk': {'description': 'primary key',
                                                     'type': 'integer'},
                                              'name': {'maxLength': 255,
                                                       'type': 'string'}},
                               'type': 'object'}},
     'properties': {'pk': {'description': 'primary key', 'type': 'integer'},
                    'name': {'maxLength': 255, 'type': 'string'},
                    'group': {'$ref': '#/definitions/Group'}},
     'required': ['pk'],
     'title': 'User',
     'type': 'object'}
    """

    pp.pprint(factory(Group))

    """
    {'definitions': {'User': {'properties': {'pk': {'description': 'primary key',
                                                    'type': 'integer'},
                                             'name': {'maxLength': 255,
                                                      'type': 'string'}},
                              'type': 'object'}},
     'description': 'model for test',
     'properties': {'pk': {'description': 'primary key', 'type': 'integer'},
                    'name': {'maxLength': 255, 'type': 'string'},
                    'users': {'items': {'$ref': '#/definitions/User'},
                              'type': 'array'}},
     'required': ['pk', 'name'],
     'title': 'Group',
     'type': 'object'}
    """

as command

Using alchemyjsonschema as a command (the command name is also alchemyjsonschema).

help

    $ alchemyjsonschema --help
    usage: alchemyjsonschema [-h] [--walker {noforeignkey,foreignkey,structural}]
                             [--decision {default,fullset}] [--depth DEPTH]
                             [--out OUT]
                             target

    positional arguments:
      target                the module or class to extract schemas from

    optional arguments:
      -h, --help            show this help message and exit
      --walker {noforeignkey,foreignkey,structural}
      --decision {default,fullset}
      --depth DEPTH
      --out OUT             output to file

Suppose the two model definitions above (User, Group) exist in alchemyjsonschema.tests.models. The target is a class position or a module position, for example:

- class position - alchemyjsonschema.tests.models:User
- module position - alchemyjsonschema.tests.models

example

Using StructuralWalker via the command line (--walker structural). Of course, NoForeignKeyWalker is noforeignkey, and ForeignKeyWalker is foreignkey.

    $ alchemyjsonschema --walker structural alchemyjsonschema.tests.models:Group

    {
      "definitions": {
        "Group": {
          "properties": {
            "color": {
              "enum": ["red", "green", "yellow", "blue"],
              "maxLength": 6,
              "type": "string"
            },
            "created_at": {"format": "date-time", "type": "string"},
            "name": {"maxLength": 255, "type": "string"},
            "pk": {"description": "primary key", "type": "integer"},
            "users": {"items": {"$ref": "#/definitions/User"}, "type": "array"}
          },
          "required": ["pk"],
          "title": "Group",
          "type": "object"
        },
        "User": {
          "properties": {
            "created_at": {"format": "date-time", "type": "string"},
            "name": {"maxLength": 255, "type": "string"},
            "pk": {"description": "primary key", "type": "integer"}
          },
          "required": ["pk"],
          "type": "object"
        }
      }
    }

The output is not the same as when using the Walker classes directly. This is handy output for tools like Swagger (OpenAPI 2.0).

appendix: what is --decision?

What is --decision? (TODO: gentle description)

    $ alchemyjsonschema --walker structural alchemyjsonschema.tests.models:User | jq . -S > /tmp/default.json
    $ alchemyjsonschema --decision useforeignkey --walker structural alchemyjsonschema.tests.models:User | jq . -S > /tmp/useforeignkey.json
    $ diff -u /tmp/default.json /tmp/useforeignkey.json

    --- /tmp/default.json   2017-01-02 22:49:44.000000000 +0900
    +++ /tmp/useforeignkey.json     2017-01-02 22:53:13.000000000 +0900
    @@ -1,43 +1,14 @@
     {
       "definitions": {
    -    "Group": {
    -      "properties": {
    -        "color": {
    -          "enum": [
    -            "red",
    -            "green",
    -            "yellow",
    -            "blue"
    -          ],
    -          "maxLength": 6,
    -          "type": "string"
    -        },
    -        "created_at": {
    -          "format": "date-time",
    -          "type": "string"
    -        },
    -        "name": {
    -          "maxLength": 255,
    -          "type": "string"
    -        },
    -        "pk": {
    -          "description": "primary key",
    -          "type": "integer"
    -        }
    -      },
    -      "required": [
    -        "pk"
    -      ],
    -      "type": "object"
    -    },
         "User": {
           "properties": {
             "created_at": {
               "format": "date-time",
               "type": "string"
             },
    -        "group": {
    -          "$ref": "#/definitions/Group"
    +        "group_id": {
    +          "relation": "group",
    +          "type": "integer"
             },
             "name": {
               "maxLength": 255,
alchemy-logging
Alchemy Logging (alog) - Python

The alog framework provides tunable logging with easy-to-use defaults and power-user capabilities. The mantra of alog is "Log Early And Often". To accomplish this goal, alog makes it easy to enable verbose logging at develop/debug time and trim the verbosity at production run time.

Setup

To use the alog module, simply install it with pip:

    pip install alchemy-logging

Channels and Levels

The primary components of the framework are channels and levels, which allow each log statement to be enabled or disabled when appropriate.

Levels: Each logging statement is made at a specific level. Levels provide sequential granularity, allowing detailed debugging statements to be placed in the code without clogging up the logs at runtime. The sequence of levels and their general usage is as follows:

- off: Disable the given channel completely.
- fatal: A fatal error has occurred. Any behavior after this statement should be regarded as undefined.
- error: An unrecoverable error has occurred. Any behavior after this statement should be regarded as undefined unless the error is explicitly handled.
- warning: A recoverable error condition has come up that the service maintainer should be aware of.
- info: High-level information that is valuable at runtime under moderate load.
- trace: Used to log begin/end of functions for debugging code paths.
- debug: High-level debugging statements such as function parameters.
- debug1: High-level debugging statements.
- debug2: Mid-level debugging statements such as computed values.
- debug3: Low-level debugging statements such as computed values inside loops.
- debug4: Ultra-low-level debugging statements such as data dumps and/or statements inside multiple nested loops.

Channels: Each logging statement is made to a specific channel. Channels are independent of one another and allow for logical grouping of log messages by functionality. A channel can be any string. A channel may have a specific level assigned to it, or it may use the configured default level if it is not given a specific level filter.

Using this combination of channels and levels, you can fine-tune which log statements are enabled when you run your application under different circumstances.

Usage

Configuration

    import alog

    if __name__ == "__main__":
        alog.configure(default_level="info", filters="FOO:debug,BAR:off")

In this example, the channel "FOO" is set to the debug level, the channel "BAR" is fully disabled, and all other channels are set to use the INFO level.

In addition to the above, the configure function also supports the following arguments:

- formatter: May be "pretty", "json", or any class derived from AlogFormatterBase.
- thread_id: Bool indicating whether or not to include a unique thread ID with the logging header (pretty) or structure (json).
- handler_generator: This allows users to provide their own output handlers and replace the standard handler that sends log messages to stderr. See the logging documentation for details.

Logging Functions

For each log level, there are two functions you can use to create log lines: the standard logging package function, or the corresponding alog.use_channel(...).<level> function. The former will always log to the root channel, while the latter requires that a channel string be specified via use_channel().

    import alog
    import logging

    def foo(age):
        alog.use_channel("FOO").debug3(
            "Debug3 line on the FOO channel with an int value %d!", age
        )
        logging.debug("debug line on the MAIN channel")

Channel Log

In a given portion of code, it often makes sense to have a common channel that is used by many logging statements. Re-typing the channel name can be cumbersome and error-prone, so the concept of the Channel Log helps to eliminate this issue. To create a Channel Log, call the use_channel function. This gives you a handle to a channel log which has all of the same standard log functions as the top-level alog, but without the requirement to specify a channel. For example:

    import alog

    log = alog.use_channel("FOO")

    def foo(age):
        log.info("Hello Logging World! I am %d years old", age)

NOTE: In this (python) implementation, this is simply a wrapper around logging.getLogger().

Extra Log Information

There are several other types of information that alog supports adding to log records:

Log Codes

This is an optional argument to all logging functions which adds a specified code to the record. It can be useful for particularly high-profile messages (such as per-request error summaries in a server) that you want to be able to track in a programmatic way. The only requirement for a log_code is that it begin with < and end with >. The log code always comes before the message. For example:

    ch = alog.use_channel("FOO")
    ch.debug("<FOO80349757I>", "Logging is fun!")

Dict Data

Sometimes, it's useful to log structured key/value pairs in a record, rather than a plain-text message, even when using the pretty output formatter. To do this, simply use a dict in place of a str in the message argument to the logging function. For example:

    ch = alog.use_channel("FOO")
    ch.debug({"foo": "bar"})

When a dict is logged with the json formatter enabled, all key/value pairs are added as key/value pairs under the top-level message key.

Log Contexts

One of the most common uses for logging is to note events when a certain block of code executes. To facilitate this, alog has the concept of log contexts. The two primary contexts that alog supports are:

ContextLog: This context manager logs a START: message when the context starts and an END: message when the context ends. All messages produced within the same thread inside of the context will have an incremented level of indentation.

    import alog

    alog.configure("debug2")
    log = alog.use_channel("DEMO")

    with alog.ContextLog(log.info, "Doing some work"):
        log.debug("Deep in the muck!")

    2021-07-29T19:09:03.819422 [DEMO :INFO] BEGIN: Doing some work
    2021-07-29T19:09:03.820079 [DEMO :DBUG]   Deep in the muck!
    2021-07-29T19:09:03.820178 [DEMO :INFO] END: Doing some work

ContextTimer: This context manager starts a timer when the context begins and logs a message with the duration when the context ends.

    import alog
    import time

    alog.configure("debug2")
    log = alog.use_channel("DEMO")

    with alog.ContextTimer(log.info, "Slow work finished in: "):
        log.debug("Starting the slow work")
        time.sleep(1)

    2021-07-29T19:12:00.887949 [DEMO :DBUG] Starting the slow work
    2021-07-29T19:12:01.890839 [DEMO :INFO] Slow work finished in: 0:00:01.002793

Function Decorators

In addition to arbitrary blocks of code that you may wish to scope or time, a very common use case for logging contexts is to provide function tracing. To this end, alog provides two useful function decorators:

@logged_function: This decorator wraps the ContextLog and provides a START/END scope where the message is prepopulated with the name of the function.

    import alog

    alog.configure("debug")
    log = alog.use_channel("DEMO")

    @alog.logged_function(log.trace)
    def foo():
        log.debug("This is a test")

    foo()

    2021-07-29T19:16:40.036119 [DEMO :TRCE] BEGIN: foo()
    2021-07-29T19:16:40.036807 [DEMO :DBUG]   This is a test
    2021-07-29T19:16:40.036915 [DEMO :TRCE] END: foo()

@timed_function: This decorator wraps the ContextTimer and performs a scoped timer on the entire function.

    import alog
    import time

    alog.configure("debug")
    log = alog.use_channel("DEMO")

    @alog.timed_function(log.trace)
    def foo():
        log.debug("This is a test")
        time.sleep(1)

    foo()

    2021-07-29T19:19:47.468428 [DEMO :DBUG] This is a test
    2021-07-29T19:19:48.471788 [DEMO :TRCE] 0:00:01.003284

Tip

Visual Studio Code (VSCode) users can take advantage of the alchemy-logging extension, which provides automatic log code generation and insertion.
alchemyml
AlchemyML API Documentation

Version Date: 2021-05-28

Prerequisites
- Python >= 3.6
- requests >= 2.22.0
- urllib3 >= 1.25.7

Module Overview

Description

AlchemyML is a multi-environment solution for data exploitation. To maximize customer convenience, there are three ways to run it: via the AlchemyML Platform, via the API, and via ad hoc solutions. The one documented below is the second tool, the AlchemyML API.

The AlchemyML API is an easy way to use advanced data analysis techniques in Python, accelerating the work of the data scientist and optimizing her/his time and resources. It has operations at the dataset level (upload, list, delete...), at the experiment level (create, send, add to project, view metrics and logs...) and at the project level (create, update, delete...). Moreover, it also has specific actions so that the client can perform her/his own experiment manually: pre-process the dataset, remove highly correlated columns, detect outliers, impute missings...

List of scripts and their functions

- init: alchemyml() - get_api_token
- _CRUD_classes:
  - dataset() - upload, view, update, delete, statistical_descriptors, download, send
  - experiment() - create, view, update, delete, statistical_descriptors, results, add_to_project, extract_from_project, send
  - project() - create, view, update, delete
- _manual_ops: actions() - list_preprocessed_dataframes, download_dataframe, prepare_dataframe, encode_dataframe, drop_highly_correlated_components, impute_inconsistencies, drop_invalid_columns, target_column_analysis, balancing_dataframe, impute_missing_values, merge_cols_into_dt_index, detect_experiment_type, build_model, operational_info, detect_outliers, impute_outliers

init.py - Code explanations

Prerequisites - Imports
- Python packages: JSON: import json
- Internal classes and functions from alchemyml: from ._CRUD_classes import dataset, experiment, project; from ._manual_ops import actions; from ._request_handler import retry_session

class alchemyml

Main class containing all AlchemyML functionalities.

method get_api_token

    def get_api_token(self, username, password):
        from ._request_handler import retry_session

        url = 'https://alchemyml.com/api/token/'
        data = json.dumps({'username': username, 'password': password})
        session = retry_session(retries=10)
        r = session.post(url, data)
        if r.status_code == 200:
            tokenJSON = json.loads(r.text)
            self.dataset.token = tokenJSON['access']
            self.experiment.token = tokenJSON['access']
            self.project.token = tokenJSON['access']
            self.actions.token = tokenJSON['access']
            return tokenJSON['access']
        else:
            msgJSON = json.loads(r.text)
            msg = msgJSON['message']
            return msg

Description: This method returns the token to be used from now on for the API requests. To be able to make use of the API, it is first necessary to sign up.

I/O: Parameters:
- username (str): Username.
- password (str): Password.

_CRUD_classes.py - Code explanations

Prerequisites - Imports
- Python packages: JSON: import json; OS: import os; Sys: import sys
- Functions from _request_handler: from ._request_handler import retry_session, general_call

Every CRUD and manual-operation method below shares the same delegating body, which forwards the method name and the positional and keyword arguments to general_call:

    def upload(self, *args, **kwargs):
        str_meth_name = self.class_name + '.' + sys._getframe().f_code.co_name
        input_args = locals()['args']
        input_kwargs = locals()['kwargs']
        return general_call(self, str_meth_name, input_args, input_kwargs)

(In the actions class, str_meth_name is just sys._getframe().f_code.co_name, without the class-name prefix.) Only the description and parameters of each method are given below.

class dataset

This class unifies and condenses all the operations related to datasets in their most general sense: uploading them to the workspace, listing them, removing them... Each and every operation (request) needs the token that is obtained through the class authentication.

method upload

Description: Through the call to this method, you will be able to upload a dataset. We recommend you consider the following points before uploading your dataset:
- The accepted reading formats are: .xlsx, .xls, .csv, .json, .xml, .sql.
- Files whose name contains two extensions will not be accepted. E.g.: 'Iris.xlsx.csv'.
- Your dataset should contain at least 50 observations.
- The file must not exceed the size limit specified by the AlchemyML team.
- Make sure that your data are not empty. Otherwise, the file will be rejected.

I/O: Parameters:
- file_path (str): The path where the dataset file is located.
- dataset_name (str): Custom name for the dataset file.
- description (str, optional): Optional description for the dataset. If no description is inputted, no description is added to the dataset.

method get

Description: This method lists the datasets available in the workspace. By setting the detail parameter to True or False, you can control receiving the details of each uploaded dataset or simply a list with the names of the datasets. By setting the dataset_name parameter, you can control for which datasets details are returned.

I/O: Parameters:
- dataset_name (str/list, optional): Name or list of names of the dataset(s) for which details will be returned.
- detail (bool, optional): Optional boolean parameter to return the details for the specified dataset(s) (False/True).

method update

Description: This method gives the option to rename a dataset and/or update the dataset's description. At least one of the two options must be selected.

I/O: Parameters:
- dataset_name (str, optional): Name of the dataset to update.
- new_dataset_name (str, optional): New name of the specified dataset. If no name is inputted, the dataset won't be renamed.
- new_description (str/list, optional): New description for the specified dataset. If no description is inputted, the description is not updated.

method delete

Description: Through the use of the delete method you will be able to delete one, several, or all uploaded datasets. Note that if a dataset has experiments associated with it, you must first remove the experiments that have been created. AlchemyML is not responsible for any consequences that may be caused by removing one, several, or all datasets.

I/O: Parameters:
- dataset_name (str/list): Name or list of names of the datasets to be removed from the workspace. If All is used, then all datasets will be removed. Datasets will be removed only if they were not used in any experiment.

method statistical_descriptors

Description: This method returns the most relevant statistical descriptors for each column of a dataset.

I/O: Parameters:
- dataset_name (str): Name of the dataset for which to return statistical descriptors.

method download

Description: Method to download a dataset from the workspace.

I/O: Parameters:
- dataset_name (str): Name of the dataset to download.
- file_path (str, optional): Path to download the dataset to. By default, the dataset is downloaded to the Downloads folder.

method send

Description: Method to send a dataset to another user.

I/O: Parameters:
- dataset_name (str): Name of the dataset to send.
- destination_email (str): E-mail of the user to whom the dataset is to be sent.

class experiment

This class unifies and condenses all the operations related to experiments in their most general sense: uploading them to the workspace, listing them, removing them... This class also contains the methods for adding experiments to projects, updating them, deleting them... Each and every operation (request) needs the token that is obtained through the class authentication.

method create

Description: By default, an automatic experiment will be created. This option implies the execution of a sequence of steps that go from the dataset intake to the construction of the predictive model, including the pre-processing and cleaning of the data. If the experiment procedure is set to manual, then the user has the possibility to control each phase of the experiment by running the available modules in the desired order. The possible operations that can be executed are those that appear in the manual operations section.

I/O: Parameters:
- experiment_name (str): Name used for the creation of the experiment. This name is given by the user.
- description (str, optional): Optional description for the experiment. If no description is inputted, no description is added to the experiment.
- dataset_name (str): Name of the dataset used in the creation of the experiment.
- target_column (str): Specifies the target column name.
- clients_choice (str): Type of experiment. Valid options: Regression, Classification, Time Series, Auto Detect.
- experiment_procedure (str, optional): Valid options are: auto or manual.

method get

Description: As in the datasets section, this method will let you know which experiments you have in your workspace. By setting the detail parameter to True or False you can control receiving details of each experiment or simply getting a list with the names of the experiments. By setting the experiment_name parameter, you control for which experiments details are returned (one or some).

I/O: Parameters:
- experiment_name (str/list, optional): The name or list of experiment names to be listed.
- detail (bool, optional): Optional boolean parameter to return the details for the specified experiment(s) (False/True).

method update

Description: This method gives the option to rename an experiment and/or update the experiment's description. At least one of the two options must be selected.

I/O: Parameters:
- experiment_name (str): Name of the experiment to update.
- new_experiment_name (str, optional): New name of the specified experiment. If no name is inputted, the experiment is not renamed.
- new_description (str/list, optional): New description for the specified experiment. If no description is inputted, the description is not updated.

method delete

Description: Through the use of the delete endpoint you will be able to delete one, several, or all the experiments created. AlchemyML is not responsible for any consequences that may be caused by removing one, several, or all experiments.

I/O: Parameters:
- experiment_name (str/list): Name or list of experiment names to be deleted. If All is used, then all experiments will be removed.

method statistical_descriptors

Description: This method returns the most relevant statistical descriptors for each column of the preprocessed dataset used in the experiment creation.

I/O: Parameters:
- experiment_name (str): Name of the experiment for which to return statistical descriptors.
- dataset_name (str): Name of the dataset used in the experiment creation.

method results

Description: The creation of an experiment (the create method above) returns the results of that experiment. This method gives the option to retrieve those results whenever they are needed. The results are delivered in a JSON structure consisting of two keys: log and model_metrics. log contains the information related to the decisions that AlchemyML has taken throughout the creation of the experiment, up to finishing the construction of the predictive model. model_metrics, on the other hand, includes the analytical information of these results: metrics obtained, relevant variables, type of experiment, etc.

I/O: Parameters:
- experiment_name (str): Name of the experiment for which to return the results.

method add_to_project

Description: This method gives the possibility to include one or more experiments in a specified project. Projects are the way to order and group different experiments that are included within a general topic. For example, you could create a project under the theme of Smart Cities that includes experiments related to this topic.

I/O: Parameters:
- associated_experiments (str/list): Name or list of experiment names to be included in the specified project.
- project_name (str): The name of the project in which the experiment(s) will be included.

method extract_from_project

Description: Given a project, this method gives the possibility to extract specified experiments from it.

I/O: Parameters:
- experiment_name (str/list): Name or list of experiment names to be extracted from the given project.
- project_name (str): The project from which the specified experiments will be extracted.

method send

Description: This endpoint gives the possibility to send one or more experiments to another registered user. If the user exists, a confirmation email will be sent. When the recipient confirms that he wants to receive an experiment from another user, an exact copy of the experiment will appear within his/her experiments section and will also be visible through the Workspace.

I/O: Parameters:
- destination_email (str): The receiver's email address.
- experiment_name (str/list): The name or list of experiment names to be sent.

class project

This class unifies and condenses all the operations related to projects in their most general sense: creating them, listing them, deleting them... Each and every operation (request) needs the token that is obtained through the class authentication.

method create

Description: This method creates a new project. Projects are the way to order and group different experiments that are included within a general topic. For example, you could create a project under the theme of Smart Cities that includes experiments related to this topic.

I/O: Parameters:
- project_name (str): Name of the project.
- description (str, optional): Optional description for the project. If no description is inputted, no description is added to the project.
- associated_experiments (str/list, optional): Name or list of experiment names to be added to the project. If no experiments are inputted, an empty project is created.

method get

Description: As in the datasets section, this method will let you know which projects you have in your workspace. By setting the detail parameter to True or False you can control receiving details of each project or simply getting a list with the names of the projects. By setting the project_name parameter, you control for which projects details are returned (one or some).

I/O: Parameters:
- project_name (str/list, optional): Name or list of names of the project(s).
- detail (bool, optional): Optional boolean parameter to return the details for the specified project(s) (False/True).

method update

Description: This method gives the option to rename a project and/or update the project's description. At least one of the two options must be selected.

I/O: Parameters:
- project_name (str): Name of the project to be updated.
- new_project_name (str, optional): New name of the specified project. If no name is inputted, the project is not renamed.
- new_description (str/list, optional): New description for the specified project. If no description is inputted, the description is not updated.

method delete

Description: Through the use of the delete method you will be able to delete one, several, or all the projects created. AlchemyML is not responsible for any consequences that may be caused by removing one, several, or all projects.

I/O: Parameters:
- project_name (str/list): Name or list of names of the projects to be deleted. If All is used, then all projects will be removed.

_manual_ops.py - Code explanations

Prerequisites - Imports
- Python packages: Sys: import sys
- Functions from _request_handler: from ._request_handler import general_call

class actions

Class that encompasses all the operations available in a manual experiment. Each method shares the same delegating body shown earlier.

method list_preprocessed_dataframes

Description: Method for listing the available processed dataframes for the given experiment.

I/O: Parameters:
- experiment_name (str): Experiment name for which processed dataframes will be returned.

method download_dataframe

Description: As the name of the endpoint suggests, this method gives the option to download the available processed dataframes for a given experiment. If the keyword all is given in dataframe_name, all available dataframes will be downloaded. If the available processed dataframes are unknown, call list_preprocessed_dataframes first.

I/O: Parameters:
- experiment_name (str): Name of the experiment for which dataframe(s) need to be downloaded.
- dataframe_name (str): Dataframe name to be downloaded. Using the keyword all, all dataframes available for the experiment will be downloaded in a rar archive.

method prepare_dataframe

Description: This module is responsible for performing a first pre-processing of the dataset loaded by the user before the data goes through AlchemyML's next modules. In general terms, it seeks to remove spaces to the left and right of a string, remove quotes from cells that are of type string, convert numerical data that comes in string format to numerical format, and interpret and convert data that is of type date but comes in string format.

I/O: Parameters:
- experiment_name (str): Name of the experiment to be prepared.
- download (bool, optional): Optional boolean parameter to be set if results need to be downloaded.

method encode_dataframe

Description: This is the sub-module in charge of encoding the variables that indicate a category and are string-typed into numerical codes. This operation is carried out because the machine learning algorithms need to understand the nature of the data converted into numbers.

I/O: Parameters:
- experiment_name (str): Name of the experiment to be encoded.
- download (bool, optional): Optional boolean parameter to be set if results need to be downloaded.
- target_col_name (str, optional): Specifies the target column name.
- prepare_dataset (bool, optional): Optional boolean parameter that specifies whether the dataset needs preparation or not.

method drop_highly_correlated_components

Description: This is the method responsible for dropping highly correlated components …
correlated columns and duplicate rows.The threshold to consider a column as highly correlated with another one is 0.9999.Highly correlated columns can be both numerical and categorical columns.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.download(bool, optional): Optional boolean parameter to be set up if results needed to be downloaded.target_col_name(str, optional): Specifying the target column name.prepare_dataset(bool, optional): Optional boolean parameter that specifies if the dataset needs preparation or not.component(str, optional): Specifying whether you want to drop: "rows", "columns" or "both".delete_duplicated_indices(bool, optional): You can specify wether to take into account the index when dropping duplicated rows.keep(bool, optional): keep = False will drop the first duplicated index, and keep = True will drop the last duplicated index.methodimpute_inconsistenciesdefimpute_inconsistencies(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionThis is the method responsible for iterating over each column of a dataset to find and correct inconsistencies. It is basically a submodule that searches for misspelled words, numbers or dates in an attempt to correct them.You can choose between applying the operations to the entire dataset or just to the target column.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.download(bool, optional): Optional boolean parameter to be set up if results needed to be downloaded.target_col_name(str, optional): Specifying the target column name.prepare_dataset(bool, optional): Optional boolean parameter that specifies if the dataset needs preparation or not.just_target(bool, optional): Specifying whether you want to treat existing inconsistencies on the target or on the whole dataset (True/False).methoddrop_invalid_columnsdefdrop_invalid_columns(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionMethod to drop invalid columns in a experiment.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.download(bool, optional): Optional boolean parameter to be set up if results needed to be downloaded.target_col_name(str, optional): Specifying the target column name.prepare_dataset(bool, optional): Optional boolean parameter that specifies if the dataset needs preparation or not.invalid_cols(list, optional): Optional parameter to specify a column or list of columns to be considered as invalid.methodtarget_column_analysisdeftarget_column_analysis(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionThis is the method responsible for telling the user wether the dataset is balanced or not by inspecting the target column.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.target_col_name(str, optional): Specifying the target column name.prepare_dataset(bool, optional): Optional boolean parameter that specifies if the dataset needs preparation or 
not.methodbalancing_dataframedefbalancing_dataframe(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionThis is the method that deals with unbalanced classification datasets.It detects unbalanced data, decides whether the data can be balanced (extreme cases are rejected), collects information on unbalance indicators and determines the method to be applied at the classification stage in order to adjust a balanced classifier.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.download(bool, optional): Optional boolean parameter to be set up if results needed to be downloaded.target_col_name(str, optional): Specifying the target column name.prepare_dataset(bool, optional): Optional boolean parameter that specifies if the dataset needs preparation or not.auto_strategy(bool, optional): Determines wether to force the generation of a balanced dataset or not. If auto_strategy is set to False, a balanced dataset will always be generated.methodinitial_exp_infodefinitial_exp_info(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionThis method returns initial information for the specified experiment.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.methodimpute_missing_valuesdefimpute_missing_values(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionMethod to use for missing values imputation.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.download(bool, optional): Optional boolean parameter to be set up if results needed to be downloaded.target_col_name(str, optional): Specifying the target column name.prepare_dataset(bool, optional): Optional boolean parameter that specifies if the dataset needs preparation or not.methodmerge_cols_into_dt_indexdefmerge_cols_into_dt_index(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionThis is the method in charge of finding candidate columns with which to try to build a single datetime column.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.download(bool, optional): Optional boolean parameter to be set up if results needed to be downloaded.target_col_name(str, optional): Specifying the target column name.prepare_dataset(bool, optional): Optional boolean parameter that specifies if the dataset needs preparation or not.methoddetect_experiment_typedefdetect_experiment_type(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionMethod that gives the option to detect experiment type.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.target_col_name(str, optional): Specifying the target column name.prepare_dataset(bool, optional): Optional boolean parameter that specifies if the dataset needs preparation or not.selected_option(str, optional): For detect 
experiment type, the options available are: Regression, Classification, Time Series, Auto Detect.methodbuild_modeldefbuild_model(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionMethod to build the model for a given experiment.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.target_col_name(str, optional): Specifying the target column name.selected_option(str, optional): For build the model the options available are: Regression, Classification, Time Series, Auto Detect.methodoperational_infodefoperational_info(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionThrough this method you can enter operational information related to each column: in this way you can specify what are the operating limits of a column and its tolerances.You can also indicate some values that you know and that occur within the values of the column so that theimpute_outliersmodule does not take them into account.In addition, you can group the time-dependent columns by intervals (morning/evening/night) and you can detail whether the behavior of a column depends on the categories of another categorical column.I/OParameters:experiment_name(str): Name of the experiment on which process will take place.columns_info(str/list/dict): Information on columns.methoddetect_outliersdefdetect_outliers(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionThis method gives the option of detect outliers. Different strategies are available, as univariate, bivariate, multivariate, complete.I/OParameters:experiment_name(str): Name of the experiment to be used for outlier detection.detection_strategy_info(dict): Strategies available to employ for detection: univariate, bivariate, multivariate. The general form of the dictionary is: {'univariate':cols (string-list), 'bivariate':cols (string-list), 'multivariate':cols:(string-list)}.prepare_dataset(bool, optional): Optional boolean parameter that specifies if the dataset needs preparation or not.methodimpute_outliersdefimpute_outliers(self,*args,**kwargs):str_meth_name=sys._getframe().f_code.co_nameinput_args=locals()['args']input_kwargs=locals()['kwargs']returngeneral_call(self,str_meth_name,input_args,input_kwargs)DescriptionThrough this method outliers may be imputed using one of the available strategies.I/OParameters:experiment_name(str): Experiment name on which outliers imputation is going to take place.cols_to_impute(str/list/float): Defines to columns on which outliers imputation is going to take place.handling_strategy(str/dict): Available options: 'auto', 'mean', 'median', 'mode', 'random_values', 'clipping', 'n_neighbors', 'quartile'.
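To make the manual workflow above concrete, here is a minimal sketch of a pre-processing session. The entry point is an assumption for illustration: `client` stands for an authenticated AlchemyML client exposing the `actions` class documented above, and the experiment name and target column are invented placeholders.

```python
# Hypothetical sketch -- `client` is assumed to be an authenticated
# AlchemyML client exposing the `actions` class documented above.
client = ...  # obtained through the `authentication` class (token-based)

EXP = 'smart_cities_exp'  # placeholder experiment name

# Inspect what has already been produced for this manual experiment.
print(client.actions.list_preprocessed_dataframes(experiment_name=EXP))

# First pre-processing pass, then encode categorical string columns.
client.actions.prepare_dataframe(experiment_name=EXP)
client.actions.encode_dataframe(experiment_name=EXP, target_col_name='energy_demand')

# Let AlchemyML pick the experiment type, then fit the model.
client.actions.detect_experiment_type(experiment_name=EXP, selected_option='Auto Detect')
client.actions.build_model(experiment_name=EXP, selected_option='Auto Detect')
```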
alchemy-mock
SQLAlchemy mock helpers.

- Free software: MIT license
- GitHub: https://github.com/miki725/alchemy-mock

Installing

You can install alchemy-mock using pip:

$ pip install alchemy-mock

Why?

SQLAlchemy is awesome. Unittests are great. Accessing the DB during tests - not so much. This library provides an easy way to mock SQLAlchemy's session in unit tests while preserving the ability to do sane asserts. Normally SQLAlchemy's expressions cannot be easily compared, as comparing binary expressions produces yet another binary expression:

>>> type((Model.foo == 5) == (Model.bar == 5))
<class 'sqlalchemy.sql.elements.BinaryExpression'>

But they can be compared with this library:

>>> ExpressionMatcher(Model.foo == 5) == (Model.bar == 5)
False

Using

ExpressionMatcher can be used directly:

>>> from alchemy_mock.comparison import ExpressionMatcher
>>> ExpressionMatcher(Model.foo == 5) == (Model.foo == 5)
True

Alternatively AlchemyMagicMock can be used to mock out a SQLAlchemy session:

>>> from alchemy_mock.mocking import AlchemyMagicMock
>>> session = AlchemyMagicMock()
>>> session.query(Model).filter(Model.foo == 5).all()
>>> session.query.return_value.filter.assert_called_once_with(Model.foo == 5)

In the real world, though, a session can be interacted with multiple times to query some data. In those cases UnifiedAlchemyMagicMock can be used, which combines the various calls for easier assertions:

>>> from alchemy_mock.mocking import UnifiedAlchemyMagicMock
>>> session = UnifiedAlchemyMagicMock()
>>> m = session.query(Model)
>>> q = m.filter(Model.foo == 5)
>>> if condition:
...     q = q.filter(Model.bar > 10).all()
>>> data1 = q.all()
>>> data2 = m.filter(Model.note == 'hello world').all()
>>> session.filter.assert_has_calls([
...     mock.call(Model.foo == 5, Model.bar > 10),
...     mock.call(Model.note == 'hello world'),
... ])

Also, real data can be stubbed by criteria:

>>> from alchemy_mock.mocking import UnifiedAlchemyMagicMock
>>> session = UnifiedAlchemyMagicMock(data=[
...     (
...         [mock.call.query(Model),
...          mock.call.filter(Model.foo == 5, Model.bar > 10)],
...         [Model(foo=5, bar=11)]
...     ),
...     (
...         [mock.call.query(Model),
...          mock.call.filter(Model.note == 'hello world')],
...         [Model(note='hello world')]
...     ),
...     (
...         [mock.call.query(AnotherModel),
...          mock.call.filter(Model.foo == 5, Model.bar > 10)],
...         [AnotherModel(foo=5, bar=17)]
...     ),
... ])
>>> session.query(Model).filter(Model.foo == 5).filter(Model.bar > 10).all()
[Model(foo=5, bar=11)]
>>> session.query(Model).filter(Model.note == 'hello world').all()
[Model(note='hello world')]
>>> session.query(AnotherModel).filter(Model.foo == 5).filter(Model.bar > 10).all()
[AnotherModel(foo=5, bar=17)]
>>> session.query(AnotherModel).filter(Model.note == 'hello world').all()
[]

Finally, UnifiedAlchemyMagicMock can partially fake session mutations such as session.add(instance). For example:

>>> session = UnifiedAlchemyMagicMock()
>>> session.add(Model(pk=1, foo='bar'))
>>> session.add(Model(pk=2, foo='baz'))
>>> session.query(Model).all()
[Model(foo='bar'), Model(foo='baz')]
>>> session.query(Model).get(1)
Model(foo='bar')
>>> session.query(Model).get(2)
Model(foo='baz')

Note that this is only partially correct: if the added models are filtered on, the session is unable to actually apply any filters, so it returns everything:

>>> session.query(Model).filter(Model.foo == 'bar').all()
[Model(foo='bar'), Model(foo='baz')]

History

0.4.3 (2019-11-05)
- Unifying distinct.

0.4.2 (2019-09-25)
- Adding support for label() in ExpressionMatcher. For example column.label('foo').

0.4.1 (2019-06-26)
- Adding support for one_or_none(). Thanks @davidroeca

0.4.0 (2019-06-06)
- Adding basic mutation capability with add and add_all.

0.3.5 (2019-04-13)
- Fixing compatibility with latest mock.

0.3.4 (2018-10-03)
- Unifying limit.

0.3.3 (2018-09-17)
- Unifying options and group_by.

0.3.2 (2018-06-27)
- Added support for count() and get() between boundaries.

0.3.1 (2018-03-28)
- Added support for func calls in ExpressionMatcher. For example func.lower(column).

0.3.0 (2018-01-24)
- Added support for .one() and .first() methods when stubbing data.
- Fixed bug which incorrectly unified methods after iterating on mock.

0.2.0 (2018-01-13)
- Added ability to stub real data by filtering criteria. See #2.

0.1.1 (2018-01-12)
- Fixed alembic typo in README. Oops.

0.1.0 (2018-01-12)
- First release on PyPI.

Credits

Development Lead
- Miroslav Shubernetskiy - https://github.com/miki725

Contributors
- Serkan Hoscai - https://github.com/shosca

License

The MIT License (MIT)

Copyright (c) 2018, Miroslav Shubernetskiy

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
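Tying the snippets above together, here is a self-contained unit test for a small function that only talks to the session. The `User` model and the `get_user_names` helper are illustrative inventions; the mocking calls follow the UnifiedAlchemyMagicMock API documented above.

```python
import unittest
from unittest import mock

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from alchemy_mock.mocking import UnifiedAlchemyMagicMock

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(50))


def get_user_names(session, min_id):
    # Function under test: it only interacts with the session it is given.
    users = session.query(User).filter(User.id >= min_id).all()
    return [user.name for user in users]


class GetUserNamesTest(unittest.TestCase):
    def test_returns_names(self):
        # Stub the query-by-criteria with canned results, as documented above.
        session = UnifiedAlchemyMagicMock(data=[
            (
                [mock.call.query(User),
                 mock.call.filter(User.id >= 2)],
                [User(id=2, name="Natalie"), User(id=3, name="Dan")],
            ),
        ])
        self.assertEqual(get_user_names(session, 2), ["Natalie", "Dan"])
        # The unified mock also records the filter call for assertions.
        session.filter.assert_called_once_with(User.id >= 2)


if __name__ == "__main__":
    unittest.main()
```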
alchemy-modelgen
sqlalchemy-modelgen

Create SQLAlchemy Python model files by defining tables and columns in a yaml file, or by specifying a database URL.

Installation

pip install alchemy-modelgen

Usage

Initialize the modelgen folder:

modelgen init -d /path/to/YOUR_FOLDER
cd /path/to/YOUR_FOLDER

Create sqlalchemy model code from:

(Option 1) yaml template (for details on how to write the yaml file, please follow the docs):

modelgen createmodel --source yaml --path templates/example.yaml --alembic  # path to your schema yaml file

(Option 2) existing database:

modelgen createmodel --source database --path mysql+mysqlconnector://root:example@localhost:3306/modelgen --outfile models/YOUR_FILENAME.py --alembic

Running alembic migrations:

modelgen migrate revision --autogenerate -m "COMMIT_MESSAGE" -p mysql+mysqlconnector://root:example@localhost:3306/modelgen
modelgen migrate upgrade head -p mysql+mysqlconnector://root:example@localhost:3306/modelgen

The arguments passed after modelgen migrate are based on alembic: any command valid for alembic can be used with modelgen migrate.

The database URL can be passed using the -p or --path argument, or can be set in the environment via the env var DATABASE_URI. If DATABASE_URI is set, -p or --path will be ignored.

Alter table support:

Change a column type or length, add a constraint, etc. in the yaml file. Then run:

modelgen createmodel --source yaml --path templates/example.yaml --alembic
modelgen migrate revision --autogenerate -m "COMMIT_MESSAGE" -p mysql+mysqlconnector://root:example@localhost:3306/modelgen
modelgen migrate upgrade head -p mysql+mysqlconnector://root:example@localhost:3306/modelgen

Credits

The code that reads the structure of an existing database and generates the appropriate SQLAlchemy model code is based on agronholm/sqlacodegen's repository (Copyright (c) Alex Grönholm), license: MIT License
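As noted above, the connection string can come from the environment instead of the -p flag; a typical shell session might look like the following (the commit message is just an example):

```shell
export DATABASE_URI="mysql+mysqlconnector://root:example@localhost:3306/modelgen"
modelgen migrate revision --autogenerate -m "add users table"
modelgen migrate upgrade head
```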
alchemymodel-xlsx
Alchemy Model Xlsx
alchemy-permissions
Custom system of roles and permissions
alchemy-provider
Dynamic query build based on SQLAlchemy Core and ORM
alchemyrohan
Alchemyrohan

Alchemyrohan is an extension package for SQLAlchemy which automatically creates the database models according to the database schema.

📖 Content
- How to Install
- Database Support
- How to use
- Functions
- Models
- Example
- Dependencies
- Important Note
- Release Notes
- License

🔧 How to Install

With the pip package installer from PyPI directly:

pip install alchemyrohan

or from source:

git clone --recursive https://github.com/wamberger/alchemyrohan.git
cd alchemyrohan
python3 setup.py install

🗄 Database Support

This project is currently designed to work with the following databases:
- SQLite
- Oracle

🔨 How to use

Import in your code:

import alchemyrohan

or

import alchemyrohan as ar

🪄 Functions

assemble_model() is the main function. It is used to create a SQLAlchemy database model and accepts the following arguments:

| argument | description |
| --- | --- |
| engine | the SQLAlchemy engine (from sqlalchemy import create_engine) |
| table_name | the name of the database table |
| abs_os_path_to_model | absolute path to the model's folder |
| py_path_to_model | pythonic path to the models |

reload_module(): when the code and file are created, the new code needs to be compiled if assemble_model() is called inside a program which will use the newly created models; in that case you need to call the reload function. You will need to add the pythonic path/import:

import tests.test_model
... some code ...
reload_module(tests.test_model)

is_model(): this function is used to check if the model was created. You need to pass the table_name and abs_os_path_to_model arguments.

get_model(): retrieves the wanted database object of the SQLAlchemy model. It needs the table_name and py_path_to_model arguments.

is_module(): this is an optional function to check the pythonic path. It needs the py_path_to_model argument.

🗂 Models

Created SQLAlchemy models have some additional features:
- default values
- parent-child relationships
- the __post_init__ method is used as validation
- when 'printing', the string will contain the model/object name and the attribute names with their values.

All models are named with the same naming convention as in the database, with one difference: they are capitalized (Python class naming convention).

📝 Example

Simple example of how to use the code:

```python
import os

from sqlalchemy import create_engine

from alchemyrohan.assemble import assemble_model
from alchemyrohan.utils import is_model
from alchemyrohan.utils import get_model
from alchemyrohan.utils import reload_module

import tests.test_model

dir = os.path.dirname(__file__)
conn = f"sqlite:///{dir}{os.sep}test_sqlite{os.sep}test.db"
engine = create_engine(conn)

table_name = 'child'  # all names will be capitalized
abs_os_path_to_model = os.path.join(dir, 'test_model')  # path
py_path_to_model = 'tests.test_model'  # pythonic path

try:
    assemble_model(engine, table_name, abs_os_path_to_model, py_path_to_model)
except Exception as e:
    print(e)
    exit(1)

reload_module(tests.test_model)  # compile the new code

if is_model(table_name, abs_os_path_to_model):
    model = get_model(table_name, py_path_to_model)
    print(f'SqlAlchemy model exist: {model}')
    exit(0)

print(f'Something unexpected went wrong')
exit(-1)
```

Example of one created model:

```python
from sqlalchemy import Column
from tests.test_model import Base
from sqlalchemy.dialects.sqlite import INTEGER
from sqlalchemy.dialects.sqlite import TEXT
from sqlalchemy.orm import relationship


class Child(Base):
    __tablename__ = 'child'

    id = Column(INTEGER, primary_key=True)
    parent_id = Column(INTEGER, nullable=True, default=None)
    name = Column(TEXT, nullable=True, default=None)
    grade = Column(INTEGER, nullable=True, default=None)

    parent_Parent = relationship("Parent", back_populates="children_Child", lazy="joined")

    def __post_init__(self):
        if not isinstance(self.id, int):
            try:
                self.id = int(self.id)
            except:
                raise SyntaxError(f'<{self.id}> is not integer')
        if not isinstance(self.parent_id, int):
            try:
                self.parent_id = int(self.parent_id)
            except:
                raise SyntaxError(f'<{self.parent_id}> is not integer')
        if not isinstance(self.name, str):
            try:
                self.name = str(self.name)
            except:
                raise SyntaxError(f'<{self.name}> is not string')
        if not isinstance(self.grade, int):
            try:
                self.grade = int(self.grade)
            except:
                raise SyntaxError(f'<{self.grade}> is not integer')

    def __str__(self):
        return f'User(id={self.id},' \
               f'parent_id={self.parent_id},' \
               f'name={self.name},' \
               f'grade={self.grade})'
```

📚 Dependencies

- sqlalchemy (version 2.0.x) is an ORM and provides code for its models,
- oracledb (version 1.4.x) is used to shape a database table model with an Oracle table schema.

❗Important Note

In some cases you will need to correct the code manually. This will be the case when:
- you are creating only one model which has relationships to other tables; you will then need to create those models as well, or delete that part of the code
- your tables have no primary keys (SQLAlchemy needs one primary key)
- your database has some datatypes or features which have not yet been tested

📋 Release Notes

- v0.1.0 - creation of the initial code, tested with the SQLite database
- v0.2.0 - tested with the Oracle database
- v0.3.0 - added additional functions
- v0.3.1 - bug fixing
- v0.3.2 - text fixing and adding third-party licenses

📄 License and Third-Party Licenses

Alchemyrohan is MIT licensed, as found in the LICENSE file.

The following software components are included in this project:
- SqlAlchemy (MIT License)
- python-oracledb (Apache License 2.0)

THIRD PARTY LICENSES
alchemy-sdk
Alchemy SDK for PythonAn Alchemy SDK to use theAlchemy API.It supports the exact same syntax and functionality of the Web3eth, making it a 1:1 mapping for anyone using the Web3eth. However, it adds a significant amount of improved functionality on top of Web3, such as easy access to Alchemy’s Enhanced and NFT APIs, and quality-of-life improvements such as automated retries.The SDK leverages Alchemy's hardened node infrastructure, guaranteeing best-in-class node reliability, scalability, and data correctness, and is undergoing active development by Alchemy's engineers.🙋‍♀️FEATURE REQUESTS:We'd love your thoughts on what would improve your web3 dev process the most! If you have 5 minutes, tell us what you want on ourFeature Request feedback form, and we'd love to build it for you.The SDK currently supports the following chains:Ethereum: Mainnet, GoerliPolygon: Mainnet, MumbaiOptimism: Mainnet, Goerli, KovanArbitrum: Mainnet, Goerli, RinkebyAstar: MainnetGetting startedUse the package managerpipto install alchemy_sdk.pip3installalchemy-sdkAfter installing the app, you can then import and use the SDK:fromalchemyimportAlchemy,Network# create Alchemy object using your Alchemy api key, default is "demo"api_key="your_api_key"# choose preferred network from Network, default is ETH_MAINNETnetwork=Network.ETH_MAINNET# choose the maximum number of retries to perform, default is 5max_retries=3# create Alchemy objectalchemy=Alchemy(api_key,network,max_retries=max_retries)ℹ️ Creating a unique Alchemy API KeyThe public "demo" API key may be rate limited based on traffic. To create your own API key,sign up for an Alchemy account hereand use the key created on your dashboard for the first app.Using the Alchemy SDKThe Alchemy SDK currently supports 2 different namespaces, including:core: All web3.eth methods and Alchemy Enhanced API methodsnft: All Alchemy NFT API methodsIf you are already using web3.eth, you should be simply able to replace the web3.eth object withalchemy.coreand it should work properly.ℹ️ ENS Name ResolutionThe Alchemy SDK supports ENS names (e.g.vitalik.eth) for every parameter where you can pass in a Externally Owned Address, or user address (e.g.0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045).fromalchemyimportAlchemyalchemy=Alchemy()# Access standard Web3 request. Gets latest block hashblock_hash=Alchemy.to_hex(alchemy.core.get_block('latest')['hash'])# Access Alchemy Enhanced API requests. Gets all transaction receipts for a given block hash.alchemy.core.get_transaction_receipts(block_hash=block_hash)# Access the Alchemy NFT API. 
Gets contract metadata for NFT and gets collection namecontract="0x01234567bac6ff94d7e4f0ee23119cf848f93245"print(alchemy.nft.get_contract_metadata(contract).opensea.collection_name)The Alchemy class also supports static methods from Web3 object that streamline the development process:Encoding, Decoding, Hashing:to_bytes,to_int,to_hex,to_text,to_json,keccakCurrency Utility:to_wei,from_weiAddress Utility:is_address,is_checksum_address,to_checksum_addressAlchemy CoreThe core namespace contains all commonly-used Web3.eth methods.It also includes the majority of Alchemy Enhanced APIs, including:get_token_metadata(): Get the metadata for a token contract address.get_token_balances(): Gets the token balances for an owner given a list of contracts.get_asset_transfers(): Get transactions for specific addresses.get_transaction_receipts(): Gets all transaction receipts for a given block.Alchemy NFT APIThe SDK currently supports the followingNFT APIendpoints under thealchemy.nftnamespace:get_nft_metadata(): Get the NFT metadata for an NFT contract address and tokenId.get_nft_metada_batch(): Get the NFT metadata for multiple NFT contract addresses/token id pairs.get_contract_metadata(): Get the metadata associated with an NFT contract.get_contracts_for_owner(): Get all NFT contracts that the provided owner address owns.get_nfts_for_owner(): Get NFTs for an owner address.get_nfts_for_contract(): Get all NFTs for a contract address.get_owners_for_nft(): Get all the owners for a given NFT contract address and a particular token ID.get_owners_for_contract(): Get all the owners for a given NFT contract address.get_minted_nfts(): Get all the NFTs minted by the owner address.is_spam_contract(): Check whether the given NFT contract address is a spam contract as defined by Alchemy (see theNFT API FAQ)get_spam_contracts(): Returns a list of all spam contracts marked by Alchemy.refresh_contract(): Enqueues the specified contract address to have all token ids' metadata refreshed.get_floor_price(): Return the floor prices of a NFT contract by marketplace.compute_rarity(): Get the rarity of each attribute of an NFT.PaginationThe Alchemy NFT endpoints return 100 results per page. 
To get the next page, you can pass in thepageKeyreturned by the previous call.SDK vs API DifferencesThe NFT API in the SDK standardizes response types to reduce developer friction, but note this results in some differences compared to the Alchemy REST endpoints:Methods referencingCollectionhave been renamed to use the nameContractfor greater accuracy: e.g.get_nfts_for_contract.Some methods have different naming that the REST API counterparts in order to provide a consistent API interface ( e.g.get_nfts_for_owner()isalchemy_getNfts,get_owners_for_nft()isalchemy_getOwnersForToken).SDK standardizes toomit_metadataparameter (vs.withMetadata).Standardization topage_keyparameter for pagination (vs.nextToken/startToken)Emptytoken_urifields are omitted.Token ID is always normalized to an integer string onBaseNftandNft.Some fields omitted in the REST response are included in the SDK response in order to return anNftobject.Some fields in the SDK'sNftobject are named differently than the REST response.Usage ExamplesBelow are a few usage examples.Getting the NFTs owned by an addressfromalchemyimportAlchemyfromalchemy.nftimportNftFiltersalchemy=Alchemy()# Get how many NFTs an address owns.response=alchemy.nft.get_nfts_for_owner('vitalik.eth')print(response['total_count'])# Get all the image urls for all the NFTs an address owns.fornftinresponse['owned_nfts']:print(nft.media)# Filter out spam NFTs.nfts_without_spam=alchemy.nft.get_nfts_for_owner('vitalik.eth',exclude_filters=[NftFilters.SPAM])Getting all the owners of the BAYC NFTfromalchemyimportAlchemyalchemy=Alchemy()# Bored Ape Yacht Club contract address.bayc_address='0xBC4CA0EdA7647A8aB7C2061c2E118A18a936f13D'# Omit the NFT metadata for smaller payloads.response=alchemy.nft.get_nfts_for_contract(bayc_address,omit_metadata=True,page_size=5)fornftinresponse['nfts']:owners=alchemy.nft.get_owners_for_nft(contract_address=nft.contract.address,token_id=nft.token_id)print(f"owners:{owners}, tokenId:{nft.token_id}")Get all outbound transfers for a provided addressfromalchemyimportAlchemyalchemy=Alchemy()print(alchemy.core.get_token_balances('vitalik.eth'))Questions and FeedbackIf you have any questions, issues, or feedback, please file an issue onGitHub, or drop us a message on ourDiscordchannel for the SDK.LicenseMIT
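Returning to the Pagination section above, a loop that walks all pages might look like the following sketch. The request-side `page_key` parameter matches the SDK's documented standardization, but the exact response key for the next-page cursor is an assumption here, flagged in the comments.

```python
from alchemy import Alchemy

alchemy = Alchemy()
bayc = '0xBC4CA0EdA7647A8aB7C2061c2E118A18a936f13D'

all_nfts = []
page_key = None
while True:
    # Assumption: the response dict exposes the next-page cursor under
    # 'page_key'; only the request-side page_key parameter is documented.
    response = alchemy.nft.get_nfts_for_contract(bayc, omit_metadata=True, page_key=page_key)
    all_nfts.extend(response['nfts'])
    page_key = response.get('page_key')
    if not page_key:
        break

print(f'fetched {len(all_nfts)} NFTs')
```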
alchemy-sdk-py
alchemy_sdk_py (Beta)

An SDK to use the Alchemy API.

Contents
- Getting Started: Requirements, Installation
- Quickstart: Get an API Key, Usage
- Get all ERC20, value, and NFT transfers for an address
- Get contract metadata for any NFT
- What's here and what's not: What this currently has, Currently not implemented

Getting Started

Requirements

Python 3.7 or higher. You'll know you've done it right if you can run python3 --version in your terminal and see something like Python 3.10.6.

Installation

pip3 install alchemy_sdk_py

Quickstart

Get an API Key

After installing, you'll need to sign up for an API key and set it as an ALCHEMY_API_KEY environment variable. You can place it in a .env file if you like; just please don't push the .env file to GitHub.

.env:

ALCHEMY_API_KEY="asdfasfsfasf"

If you're unfamiliar with environment variables, you can use the API to set the key directly using the SDK - please don't do this in production code.

```python
from alchemy_sdk_py import Alchemy

alchemy = Alchemy(api_key="asdfasfsfasf", network="eth_mainnet")
```

If you have your environment variable set, and you want to use eth mainnet, you can just do this:

```python
from alchemy_sdk_py import Alchemy

alchemy = Alchemy()
```

You can also set the network ID using the chainId, or hex, and even update it later.

```python
# For Goerli ETH
alchemy = Alchemy(network=5)
# For Polygon ("0x89" is hex for 137)
alchemy.set_network("0x89")
```

Usage

```python
from alchemy_sdk_py import Alchemy

alchemy = Alchemy()
current_block_number = alchemy.get_current_block_number()
print(current_block_number)
# prints the current block number
```

With web3.py:

```python
from alchemy_sdk_py import Alchemy
from web3 import Web3

alchemy = Alchemy()
w3 = Web3(Web3.HTTPProvider(alchemy.base_url))
```

Get all ERC20, value, and NFT transfers for an address

The following code will get you every transfer in and out of a single wallet address.

```python
from alchemy_sdk_py import Alchemy

alchemy = Alchemy()

address = "YOUR_ADDRESS_HERE"
transfers, page_key = alchemy.get_asset_transfers(from_address=address)
print(transfers)
# prints every transfer in or out that's ever happened on the address
```

Get contract metadata for any NFT

```python
ENS = "0x57f1887a8BF19b14fC0dF6Fd9B2acc9Af147eA85"
contract_metadata = alchemy.get_contract_metadata(ENS)
print(contract_metadata["contractMetadata"]["openSea"]["collectionName"])
# prints "ENS: Ethereum Name Service"
```

What's here and what's not

What this currently has

Just about everything in the Alchemy SDK section of the docs.

Currently not implemented

- batchRequests
- web sockets
- Notify API & filters, ie eth_newFilter
- Async support
- ENS support for addresses
- Double check the NFT, Transact, and Token docs for functions
- Trace API
- Debug API
alchemy-xiao-mcg
No description available on PyPI.
alcheonengine
No description available on PyPI.
alcherializer
AlcherializerA "Django like" model serializer.Declaring SerializerIt's very simples to declare a serializer. Just like Django, the only thing you need is to create a class with a Meta class inside and a model attribute.This instantly maps all fields declared in model.fromdatetimeimportdatetimefromalcherializerimportSerializerimportsqlalchemyfromsqlalchemy.ormimportrelationshipclassManager:name=sqlalchemy.Column(sqlalchemy.String(100))classUser:name=sqlalchemy.Column(sqlalchemy.String(100))age=sqlalchemy.Column(sqlalchemy.Integer)is_active=sqlalchemy.Column(sqlalchemy.Boolean)created_at=sqlalchemy.Column(sqlalchemy.DateTime,default=datetime.utcnow)organization=relationship(Manager,uselist=False)classUserSerializer(Serializer):classMeta:model=UserPS: For further exemplifications we will always useUserandUserSerializer.DataGets a dictionary of a single model.model=User(name="Clark Kent",age=31,is_active=True)serializer=UserSerializer(model)serializer.data# { "name": "Clark Kent", ... }Or, a list of modelsmodel=User(name="Clark Kent",age=31,is_active=True)serializer=UserSerializer([model],many=True)serializer.data# [{ "name": "Clark Kent", ... }]Related SerializersclassManagerSerializer(Serializer):classMeta:model=ManagerclassUserSerializer(Serializer):manager=ManagerSerializer()classMeta:model=Usermodel=User(name="Peter Parker",manager=Manager(name="J. Jonah Jameson"))serializer=UserSerializer(model)serializer.data# {"name": "Peter Parker", "manager": {"name": "J. Jonah Jameson"}}Custom fieldsfromdatetimeimportdatetime,timedeltafromalcherializerimportfieldsclassUserSerializer(Serializer):year_of_birth=fields.MethodField()defget_year_of_birth(self,user:User)->datetime:returndatetime.utcnow()-timedelta(days=user.age*365)classMeta:model=Userfields=["id","name","year_of_birth"]model=User(id=1,name="Batman",age=30)serializer=UserSerializer(model)serializer.data# {"id": 1, "name": "Batman", "year_of_birth": 1991}ValidationTo validate a payload, it's possible to send it through data argument while instantiating the serializer and call.is_validmethod.serializer=UserSerializer(data={"name":"Clark Kent","age":31,"is_active":True})serializer.is_valid()# TrueFetching validation errorsIf any error happens you can fetch the information through error attribute.serializer=UserSerializer(data={"name":"",# If ommitted or None should present error too"age":31,"is_active":True})serializer.is_valid()# Falseserializer.errors# {"name": ["Can't be blank"]}FieldsThis shows off how fields are mapped from SQLAlchemy models.Model attributeAlcherializer fieldValidationsBooleanBooleanField[x] Required[x] Valid booleanBigInteger, Integer, SmallIntegerIntegerField[x] RequiredString, Text UnicodeStringField[x] Required[x] Max length
alchimia
alchimialets you use most of the SQLAlchemy-core API with Twisted, it does not allow you to use the ORM.Getting startedfromalchimiaimportwrap_enginefromsqlalchemyimport(create_engine,MetaData,Table,Column,Integer,String)fromsqlalchemy.schemaimportCreateTablefromtwisted.internet.deferimportinlineCallbacksfromtwisted.internet.taskimportreact@inlineCallbacksdefmain(reactor):engine=wrap_engine(reactor,create_engine("sqlite://"))metadata=MetaData()users=Table("users",metadata,Column("id",Integer(),primary_key=True),Column("name",String()),)# Create the tableyieldengine.execute(CreateTable(users))# Insert some usersyieldengine.execute(users.insert().values(name="Jeremy Goodwin"))yieldengine.execute(users.insert().values(name="Natalie Hurley"))yieldengine.execute(users.insert().values(name="Dan Rydell"))yieldengine.execute(users.insert().values(name="Casey McCall"))yieldengine.execute(users.insert().values(name="Dana Whitaker"))result=yieldengine.execute(users.select(users.c.name.startswith("D")))d_users=yieldresult.fetchall()# Print out the usersforuserind_users:print("Username:%s"%user[users.c.name])# Queries that return results should be explicitly closed to# release the connectionresult.close()if__name__=="__main__":react(main,[])DocumentationThe documentation is all onRead the Docs.LimitationsThere are some limitations toalchimia'sability to expose the SQLAlchemy API.Some methods simply haven’t been implemented yet. If you file a bug, we’ll implement them! SeeCONTRIBUTING.rstfor more info.Some methods in SQLAlchemy either have no return value, or don’t have a return value we can control. Since most of thealchimiaAPI is predicated on returningDeferredinstances which fire with the underlying SQLAlchemy instances, it is impossible for us to wrap these methods in a useful way. Luckily, many of these methods have alternate spelling.The docscall these out in more detail.
alchina
Alchina /al.ki.na/

Alchina is a Machine Learning framework.

Capabilities

Regressors
- Linear regressor
- Ridge regressor

Classifiers
- Linear classifier
- Ridge classifier

Clusters
- K-Means clustering

Optimizers
- Gradient descent
- Stochastic gradient descent
- Mini-batch gradient descent

Preprocessors
- Min-max normalization
- Standardization
- PCA

Metrics
- R2 score
- Confusion matrix
- Accuracy score
- Precision score
- Recall score
- F-Beta score
- F-1 score

Model selection
- Split dataset
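The README stops at the capability list, so as a point of reference, here is the core technique behind the optimizer family listed above: plain batch gradient descent fitting a linear regressor. This is not alchina's API (its class names are not documented here); it is a self-contained NumPy illustration of the algorithm.

```python
# Illustration of the technique behind alchina's optimizers -- plain batch
# gradient descent for a linear regressor. NOT alchina's API.
import numpy as np


def gradient_descent(X, y, learning_rate=0.1, iterations=500):
    X = np.c_[np.ones(len(X)), X]  # prepend a bias column
    theta = np.zeros(X.shape[1])   # initial parameters
    m = len(y)
    for _ in range(iterations):
        gradient = X.T @ (X @ theta - y) / m  # gradient of the MSE loss
        theta -= learning_rate * gradient
    return theta


# Fit y = 2x + 1 on toy data.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(gradient_descent(X, y))  # approximately [1., 2.]
```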
alchql
Please read UPGRADE-v2.0.md to learn how to upgrade to Graphene 2.0.

AlchQL

A SQLAlchemy integration for Graphene.

Installation

For installing AlchQL, just run this command in your shell:

pip install "alchql>=3.0"

Examples

Here is a simple SQLAlchemy model:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class UserModel(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    last_name = Column(String)
```

To create a GraphQL schema for it you simply have to write the following:

```python
import graphene
from alchql import SQLAlchemyObjectType


class User(SQLAlchemyObjectType):
    class Meta:
        model = UserModel
        # use `only_fields` to only expose specific fields ie "name"
        # only_fields = ("name",)
        # use `exclude_fields` to exclude specific fields ie "last_name"
        # exclude_fields = ("last_name",)


class Query(graphene.ObjectType):
    users = graphene.List(User)

    async def resolve_users(self, info):
        query = await User.get_query(info)  # SQLAlchemy query
        return query.all()


schema = graphene.Schema(query=Query)
```

Then you can simply query the schema:

```python
query = '''
    query {
      users {
        name,
        lastName
      }
    }
'''
result = schema.execute(query, context_value={'session': db_session})
```

You may also subclass SQLAlchemyObjectType by providing abstract = True in your subclass's Meta:

```python
from alchql import SQLAlchemyObjectType
import sqlalchemy as sa
import graphene


class ActiveSQLAlchemyObjectType(SQLAlchemyObjectType):
    class Meta:
        abstract = True

    @classmethod
    async def get_node(cls, info, id):
        return (await cls.get_query(info)).filter(
            sa.and_(
                cls._meta.model.deleted_at == None,
                cls._meta.model.id == id,
            )
        ).first()


class User(ActiveSQLAlchemyObjectType):
    class Meta:
        model = UserModel


class Query(graphene.ObjectType):
    users = graphene.List(User)

    async def resolve_users(self, info):
        query = await User.get_query(info)  # SQLAlchemy query
        return query.all()


schema = graphene.Schema(query=Query)
```

Full Examples

To learn more check out the following examples:

- Flask SQLAlchemy example
- Nameko SQLAlchemy example
- FastAPI SQLAlchemy example
alchy
A SQLAlchemy extension for its declarative ORM that provides enhancements for model classes, queries, and sessions.MAINTENANCE MODEPROJECT IS IN MAINTENANCE MODE: NO NEW FEATURES, BUG FIXES ONLYUsesqlserviceinstead.LinksProject:https://github.com/dgilland/alchyDocumentation:https://alchy.readthedocs.ioPyPi:https://pypi.python.org/pypi/alchy/TravisCI:https://travis-ci.org/dgilland/alchyChangelogv2.2.2 (2017-01-03)Fix bug in handling of session options when providing explicitbindsvalue viasession_optionsduringManagerin initialization. Thanksbrianbruggeman!v2.2.1 (2016-05-18)Fix bug withevents.before_deletewhere decorator was defined using invalid parent class making it completely non-functional as a decorator.v2.2.0 (2016-03-21)Addmetadata``argumentto ``alchy.model.make_declarative_baseto provide custom metaclass for declarative base model. Thanksfabioramponi!v2.1.0 (2016-03-11)AddMetaargument toalchy.model.make_declarative_baseto provide custom metaclass for declarative base model. Thankselidchan!v2.0.1 (2015-07-29)MakeSession.get_bind()mapperargument have default value ofNone.v2.0.0 (2015-04-29)AddQuery.index_by.AddQuery.chain.Addpydashas dependency and incorporate into existingQuerymethods:map,reduce,reduce_right, andpluck.Improve logic for setting__tablename__to work with all table inheritance styles (joined, single, and concrete), to handle@declared_attrcolumns, and not to duplicate underscore characters. Thankssethp!Modify logic that sets a Model class’__table_args__and__mapper_args__(unless overridden in subclass) by merging__global_table_args__and__global_mapper_args__from all classes in the class’smro()with__local_table_args__and__local_mapper_args__from the class itself. A__{global,local}_{table,mapper}_args__may be callable or classmethod, in which case it is evaluated on the class whose__{table,mapper}_args__is being set. Thankssethp! (breaking change)v1.5.1 (2015-01-13)Add support for callable__table_args__and__local_table_args__. Thankssethp!v1.5.0 (2014-12-16)AddModel.is_modified(). Thankssethp!AddModel.filter().AddModel.filter_by().v1.4.2 (2014-11-18)Addsearch.inenumandsearch.notinenumfor performing anin_andnot(in_)comparision usingDeclarativeEnum.v1.4.1 (2014-11-17)AllowModel.__bind_key__to be set at the declarative base level so that model classes can properly inherit it.v1.4.0 (2014-11-09)MakeModelBase’s__table_args__and__mapper_args__inheritable via mixins. Thankssethp!Add__enum_args__toDeclarativeEnum. Thankssethp!Allow enum name to be overridden when callingDeclarativeEnum.db_type(). Thankssethp!v1.3.1 (2014-10-14)DuringModel.update()when setting a non-list relationship automatically instantiatedictvalues using the relationship model class.v1.3.0 (2014-10-10)Convert null relationships to{}when callingModel.to_dict()instead of leaving asNone.v1.2.0 (2014-10-10)DuringModel.update()when setting a list relationship automatically instantiatedictvalues using the relationship model class.v1.1.2 (2014-09-25)Allowaliaskeyword argument toQuery.join_eager()andQuery.outerjoin_eager()to be adictmapping aliases to join keys. 
Enables nested aliases.v1.1.1 (2014-09-01)Fix handling of nestedModel.update()calls to relationship attributes so that setting relationship to emptydictwill propagateNoneto relationship attribute value correctly.v1.1.0 (2014-08-30)Addquery.LoadOptionto support nesting load options when calling thequery.Queryload methods:join_eager,outerjoin_eager,joinedload,immediateload,lazyload,noload, andsubqueryload.v1.0.0 (2014-08-25)Replace usage of@classpropertydecorators inModelBasewith@classmethod. Any previously defined class properties now require method access. Affected attributes are:session,primary_key,primary_keys,primary_attrs,attrs,descriptors,relationships,column_attrs, andcolumns. (breaking change)Proxygetitemandsetitemaccess togetattrandsetattrinModelBase. Allows models to be accessed like dictionaries.Makealchy.eventsdecorators class based.Requirealchy.eventsdecorators to be instantiated using a function call ([email protected]_update()instead [email protected]_update). (breaking change)Addalchy.searchcomparators,eqenumandnoteqenum, for comparingDeclarativeEnumtypes.v0.13.3 (2014-07-26)Fixutils.iterflatten()by callingiterflatten()instead offlattenin recursive loop.v0.13.2 (2014-06-12)AddModelBase.primary_attrsclass property that returns a list of class attributes that are primary keys.UseModelBase.primary_attrsinQueryModel.search()so that it handles cases where primary keys have column names that are different than the class attribute name.v0.13.1 (2014-06-11)Modify internals ofQueryModel.search()to better handle searching on a query object that already has joins and filters applied.v0.13.0 (2014-06-03)Addsearch.icontainsandsearch.noticontainsfor case insensitive contains filter.Remove strict update support fromModel.update(). Require this to be implemented in user-land. (breaking change)v0.12.0 (2014-05-18)Merge originating query where clause inQuery.searchso that pagination works properly.Addsession_classargument toManagerwhich can override the default session class used.v0.11.3 (2014-05-05)InModelMetawhen checking whether to do tablename autogeneration, tranverse all base classes when trying to determine if a primary key is defined.InModelMetasetbind_keyin__init__method instead of__new__. This also fixes an issue where__table_args__was incorrectly assumed to always be adict.v0.11.2 (2014-05-05)Supportorder_byas list/tuple inQueryModel.search().v0.11.1 (2014-05-05)Fix bug inQueryModel.search()whereorder_bywasn’t applied in the correct order. Needed to come before limit/offset are applied.v0.11.0 (2014-05-04)PEP8 compliance with default settings.Removequery_propertyargument frommake_declarative_base()andextend_declarative_base(). (breaking change)AddModelBase.primary_keysclass property which returns a tuple always (ModelBase.primary_keyreturns a single key if only one present or a tuple if multiple).Move location of classQueryPropertyfromalchy.modeltoalchy.query. (breaking change)Create newQuerysubclass namedQueryModelwhich is to be used within a query property context. ReplaceQuerywithQueryModelas default query class. (breaking change)Move__advanced_search__and__simple_search__class attributes fromModelBasetoQueryModel. 
(breaking change)IntroduceQueryModel.__search_filters__which can define a canonical set of search filters which can then be referenced in the list version of__advanced_search__and__simple_search__.Modify the logic ofQueryModel.search()to use a subquery joined onto the originating query in order to support pagination when one-to-many and many-to-many joins are present on the originating query. (breaking change)Support passing in a callable that returns a column attribute foralchy.search.<method>(). Allows foralchy.search.contains(lambda: Foo.id)to be used at the class attribute level whenFoo.idwill be defined later.Add search operatorsany_/notany_andhas/nothaswhich can be used for the corresponding relationship operators.v0.10.0 (2014-04-02)Issue warning instead of failing when installed version of SQLAlchemy isn’t compatible withalchy.Query’s loading API (i.e. missingsqlalchemy.orm.strategy_options.Load). This allowsalchyto be used with earlier versions of SQLAlchemy at user’s own risk.Addalchy.searchmodule which provides compatible search functions forModelBase.__advanced_search__andModelBase.__simple_search__.v0.9.1 (2014-03-30)ChangeModelBase.sessionto proxyModelBase.query.session.AddModelBase.object_sessionproxy toorm.object_session(ModelBase).v0.9.0 (2014-03-26)Removeengine_config_prefixargument toManager(). (breaking change)Add explicitsession_optionsargument toManager(). (breaking change)Change theManager.configoptions to follow Flask-SQLAlchemy. (breaking change)AllowManager.configto be either adict,class, ormodule object.Add multiple database engine support using a singleManagerinstance.Add__bind_key__configuration option forModelBasefor binding model to specific database bind (similar to Flask-SQLAlchemy).v0.8.0 (2014-03-18)ForModelBase.update()don’t nestupdate()calls if field attribute is adict.Deprecatedrefresh_on_emptyargument toModelBase.to_dict()and instead implementModelBase.__to_dict__configuration property as place to handle processing of model before casting todict. (breaking change)AddModelBase.__to_dict__configuration property which handles preprocessing for model instance and returns a set of fields as strings to be used as dict keys when callingto_dict().v0.7.0 (2014-03-13)Renamealchy.ManagerBasetoalchy.ManagerMixin. (breaking change)Addpylintsupport.Remove dependency onsix.v0.6.0 (2014-03-10)Prefix event decorators which did not start withbefore_orafter_withon_. Specifically,on_set,on_append,on_remove,on_append_result,on_create_instance,on_instrument_class,on_mapper_configured,on_populate_instance,on_translate_row,on_expire,on_load, andon_refresh. (breaking change)Remove lazy engine/session initialization inManager. Require thatModelandconfigbe passed in at init time. While this removes some functionality, it’s done to simplify theManagercode so that it’s more straightforward. If lazy initialization is needed, then a proxy class should be used. (breaking change)v0.5.0 (2014-03-02)AddModelBase.primary_keyclass property for retrieving primary key(s).AddBase=Noneargument tomake_declarative_base()to support passing in a subclass ofModelBase. 
Previously had to create a declarativeModelto pass in a subclassedModelBase.Let any exception occurring inModelBase.queryattribute access bubble up (previously,UnmappedClassErrorwas caught).Python 2.6 and 3.3 support.PEP8 compliance.New dependency:six(for Python 3 support)v0.4.2 (2014-02-24)InModelBase.to_dict()only include fields which are mapper descriptors.Supportto_dictmethod hook when iterating over objects inModelBase.to_dict().Addto_dictmethod hook toEnumSymbol(propagates toDeclarativeEnum).v0.4.1 (2014-02-23)Support__iter__method in model so thatdict(model)is equilvalent tomodel.to_dict().Addrefresh_on_empty=Trueargument toModelBase.to_dict()which supports callingModelBase.refresh()if__dict__is empty.v0.4.0 (2014-02-23)AddModelBase.save()method which adds model instance loaded from session to transaction.AddModelBase.get_by()which proxies toModelBase.query.filter_by().first().Add model attributeevents.Add support for multiple event decoration.Add named events for all supported events.Add composite events forbefore_insert_updateandafter_insert_update.v0.3.0 (2014-02-07)RenameModelBase.advanced_search_configtoModelBase.__advanced_search__.RenameModelBase.simple_search_configtoModelBase.__simple_search__AddModelMetametaclass.Implement__tablename__autogeneration from class name.Add mapper event support viaModelBase.__events__and/ormodel.eventdecorator.v0.2.1 (2014-02-03)Fix reference tomodel.make_declarative_baseinManagerclass.v0.2.0 (2014-02-02)Add defaultquery_classto declarative model if none defined.Letmodel.make_declarative_base()accept predefined base and just extend its functionality.v0.1.0 (2014-02-01)First release
alcli
Installing the Alert Logic CLI

The pip package manager for Python is used to install, upgrade and remove the Alert Logic CLI.

Installing the current version of the Alert Logic CLI

The Alert Logic CLI only works on Python 3.7 or higher. Please follow these instructions on how to install Python on your system: https://www.python.org/downloads/

Use pip3 to install the Alert Logic CLI:

$ pip3 install alcli --upgrade --user

Make sure to use --user to install the program to a subdirectory of your user directory, to avoid modifying libraries used by your operating system.

Windows installer

For Windows users a self-contained Alert Logic CLI distribution is available; please download the latest version from here:

- executable installation package
- msi installation package

Alternatively, please view the history of the releases.

Upgrading to the latest version of the Alert Logic CLI

We regularly introduce support for new Alert Logic services. We recommend that you check the installed package version and upgrade to the latest version regularly.

$ pip3 install --upgrade --force-reinstall alcli

Configure the Alert Logic CLI with Your Credentials

Before you can run a CLI command, you must configure the Alert Logic CLI with your credentials.

By default, alcli uses the ~/.alertlogic/config configuration file in a user's home directory. The file can contain multiple profiles. Here's an example of a configuration file that has credentials for integration and production deployments:

[default]
access_key_id=1111111111111111
secret_key=eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
global_endpoint=integration

[production]
access_key_id=2222222222222222
secret_key=dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd
global_endpoint=production

The location of the configuration file can also be specified by setting the ALERTLOGIC_CONFIG environment variable to the file's location.

Notes: the --query option requires a JMESPath language expression. See http://jmespath.org/tutorial.html for a language tutorial.

History

1.0.1 (2020-02-06)
- First release on PyPI.

1.0.7 (2020-02-07)
- First version of main help page and bug fixes.
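To illustrate the --query note above: JMESPath filters and reshapes the JSON a command returns. The subcommand below is a made-up placeholder (this README does not list alcli's commands); only the expression syntax is the point.

```shell
# Hypothetical subcommand -- alcli command names are not documented here.
# The JMESPath expression keeps only the names of active deployments.
alcli deployments list --query "deployments[?status=='active'].name"
```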
al-cloudinsight
This is an example project which shows how to access the Cloud Insight API using Python.

Overview
The Cloud Insight API is a REST API which provides many services related to the Cloud Insight system. Data is exchanged as JSON objects: the API receives and answers with JSON, and signals errors or confirmation via HTTP status codes. The CloudInsightAPI class provides an interface and some example methods to access the Cloud Insight API. All the objects accessed by the CloudInsightAPI class have the JSON response converted to generic Python objects (Bunch), whose properties can be accessed with obj.property syntax instead of dictionary syntax. Printing the objects returns a JSON-formatted string to facilitate the visualization of the object data. The requests are made using the Requests library and raise requests.exceptions.RequestException when a request fails, according to the status code error.

The program.py provides an example of a command line script implementation of the CloudInsightAPI class.
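The "Bunch" behaviour described above is easy to picture with a toy stand-in; the class below is an illustrative sketch, not the library's actual implementation:

import json

class Bunch(dict):
    """Toy stand-in for the Bunch objects described above: a dict whose
    keys can also be read as attributes, recursively."""
    def __getattr__(self, name):
        try:
            value = self[name]
        except KeyError:
            raise AttributeError(name)
        return Bunch(value) if isinstance(value, dict) else value

    def __str__(self):
        # printing yields a JSON-formatted string, as the description says
        return json.dumps(self, indent=2)

# e.g. a JSON response already parsed into a dict:
environment = Bunch({"id": "abc123", "status": {"state": "ok"}})
print(environment.status.state)  # obj.property instead of obj["property"]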
al-cloud-insight
# al-cloud-insight Python Library for using the Alert Logic Cloud Insight API
alcmixer
No description available on PyPI.
alco
What's the problem
There is a widely used stack of technologies for parsing, collecting and analysing logs - the ELK Stack. It has a very functional web interface, a search cluster and a log transformation tool. Very cool, but:

It's Java, with well-known requirements for memory and CPUs.
It's ElasticSearch, with its requirements for disk space.
It's Logstash, which suddenly stops processing logs under some conditions.
It's Kibana, with a very cool RICH interface which loses on all counts to grep and less in the task of log reading and searching.

Introducing ALCO
ALCO is a simple ELK analog whose primary aim is to provide an online replacement for grep and less. Main features are:

Django application for incident analysis in distributed systems
schemeless full-text index with filtering and searching
configurable log collection and rotation from a RabbitMQ messaging server
not an all-purpose monster

Technology stack
Let's trace a log message's path from some distributed system to the ALCO web interface.

A Python-based project calls the logger.debug() method with the text 'hello world'.
At startup time the Logcollect library automatically configures Python logging (or even the Django and Celery ones) to send log messages to a RabbitMQ server in JSON format, readable both with ELK and ALCO projects.
The ALCO log collector binds a queue to a RabbitMQ exchange and processes messages in batches.
It uses Redis to collect unique values for filterable fields and SphinxSearch to store messages in a realtime index.
When a message is inserted into sphinxsearch, it contains an indexed message field, timestamp information and a schemeless JSON field named js with all log record attributes sent by the Python log.
The Django-based web interface provides an API and a client-side app for searching collected logs online.

Requirements
Python 2.7 or 3.3+
Logcollect for the Python projects whose logs are collected
RabbitMQ server for distributed log collection
SphinxSearch server 2.3 or later for log storage
Redis for SphinxSearch docid management and field values storage
django-sphinxsearch as a database backend for Django >= 1.8 (will be available from PyPI)

Setup
You need to configure logcollect in analyzed projects (see README).
If the RabbitMQ admin interface shows a non-zero message flow in the logstash exchange - "It works" :-)

Install alco and its requirements from PyPI:

pip install alco

Next, create a django project, add a sphinxsearch database connection and configure settings.py to enable the alco applications:

# For SphinxRouter
SPHINX_DATABASE_NAME = 'sphinx'

DATABASES[SPHINX_DATABASE_NAME] = {
    'ENGINE': 'sphinxsearch.backend.sphinx',
    'HOST': '127.0.0.1',
    'PORT': 9306,
}

# Auto routing log models to SphinxSearch database
DATABASE_ROUTERS = ('sphinxsearch.routers.SphinxRouter',)

INSTALLED_APPS += [
    'rest_framework',  # for API to work
    'alco.collector',
    'alco.grep'
]

ROOT_URLCONF = 'alco.urls'

Configure ALCO resources in settings.py:

ALCO_SETTINGS = {
    # log messaging server
    'RABBITMQ': {
        'host': '127.0.0.1',
        'userid': 'guest',
        'password': 'guest',
        'virtual_host': '/'
    },
    # redis server
    'REDIS': {
        'host': '127.0.0.1',
        'db': 0
    },
    # url for fetching sphinx.conf dynamically
    'SPHINX_CONF_URL': 'http://127.0.0.1:8000/collector/sphinx.conf',
    # name of django.db.connection for SphinxSearch
    'SPHINX_DATABASE_NAME': 'sphinx',
    # number of results in log view API
    'LOG_PAGE_SIZE': 100
}

# override defaults for sphinx.conf template
ALCO_SPHINX_CONF = {
    # local index definition defaults override
    'index': {'min_word_len': 8},
    # searchd section defaults override
    'searchd': {'dist_threads': 8}
}

Run the syncdb (or better, migrate) management command to create database tables. Run the webserver and create a LoggerIndex from django admin.

Create the directories for sphinxsearch:

/var/log/sphinx/
/var/run/sphinx/
/data/sphinx/

Next, configure sphinxsearch to use the generated config:

searchd -c sphinx_conf.py

sphinx_conf.py is a simple script that imports the alco.sphinx_conf module, which fetches the generated sphinx.conf from the alco http api and creates directories for SphinxSearch indices:

#!/data/alco/virtualenv/bin/python
# coding: utf-8
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')

from alco import sphinx_conf

Run the log collectors:

python manage.py start_collectors --no-daemon

If it shows the number of collected messages periodically, then log collecting is set up correctly.

Configure system services to start subsystems automatically:

nginx or apache http server
django uwsgi backend
alco collectors (the start_collectors management command)
sphinxsearch, redis, the default database for Django

Open http://127.0.0.1:8000/grep/<logger_name>/ to read and search logs online.

Virtualenv
We successfully configured SphinxSearch to use python from a virtualenv, adding some environment variables to the start script (i.e. a FreeBSD rc.d script):

sphinxsearch_prestart()
{
    # nobody user has no HOME
    export PYTHON_EGG_CACHE=/tmp/.python-eggs
    # python path for virtualenv interpreter should be redeclared
    export PYTHONPATH=${venv_path}/lib/python3.4/:${venv_path}/lib/python3.4/site-packages/
    . "${virtualenv_path}/bin/activate" || err1 "Virtualenv is not found"
    echo "Virtualenv ${virtualenv_path} activated: `which python`"
}

In this case the shebang for sphinx_conf.py must point to the virtualenv's python interpreter.

Production usage
For now the ALCO stack is tested in a preproduction environment in our company and is actively developed. There are no reasons to say that it's not ready for production usage.
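To make the first two steps of the pipeline concrete, here is a toy sketch of turning a Python log call into the kind of JSON message the collector consumes. Logcollect does this (plus the RabbitMQ delivery) for you; none of this is alco's actual code:

import json
import logging

class JSONHandler(logging.Handler):
    """Toy handler: serialize each record to JSON instead of publishing
    it to a RabbitMQ exchange as Logcollect would."""
    def emit(self, record):
        print(json.dumps({
            "msg": record.getMessage(),     # becomes the indexed message field
            "levelname": record.levelname,  # record attributes end up in `js`
            "name": record.name,
            "created": record.created,
        }))

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(JSONHandler())
logger.debug("hello world")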
al-codes
No description available on PyPI.
alcohol
.. code::

    from alcohol.mixins.sqlalchemy import SQLAlchemyUserMixin

    class User(Base, SQLAlchemyUserMixin):
        id = Column(Integer, primary_key=True)

    bob = User()

    # stores a hash of bobs password (using passlib)
    bob.password = 'bobs_very_secret_password'

    if bob.check_password(some_password):
        print 'hello, bob!'

    # creates a password-reset token that will work once to change his password
    # after he forgot it, signed with the servers secret key
    token = bob.create_password_reset_token(SECRET_KEY)

alcohol is a framework for handling user :doc:`authentication` and
:doc:`authorization`. Both of these parts can be used independently and support
SQLAlchemy_ and in-memory backends.

Authorization is handled using *Role Based Access Controls* (a
`NIST <https://en.wikipedia.org/wiki/NIST>`_-standard) as the underlying
model::

    from alcohol.rbac import DictRBAC

    acl = DictRBAC()
    acl.assign('bob', 'programmer')
    acl.assign('alice', 'ceo')

    acl.permit('programmer', 'run_unittests')
    acl.permit('ceo', 'hire_and_fire')

    acl.allowed('bob', 'run_unittests')    # True
    acl.allowed('bob', 'hire_and_fire')    # False
    acl.allowed('alice', 'hire_and_fire')  # True

.. this should be put back in once flask-alcohol is stable/in better shape
.. While suitable for use in stand-alone, non-web applications it is also a core
.. ingredient to `Flask-Alcohol <http://pypi.python.org/pypi/flask-alcohol/>`_, a
.. `Flask <http://flask.pocoo.org/>`_ library that takes this concept even
.. further.

Utilities
---------

alcohol also ships with a few SQLAlchemy_ mixins for handling updated/modified
timestamps, email fields, password-hashes and generating activation/reset
tokens for the latter two. See :doc:`mixins` for details.

.. [1] http://csrc.nist.gov/rbac/sandhu-ferraiolo-kuhn-00.pdf
.. _SQLAlchemy: http://www.sqlalchemy.org/
alcohol_consumption
UNKNOWN
alcoholic-tfe22540
Install in local
If you want to download the latest version directly from GitHub, you can clone this repository:

git clone https://github.com/mdausort/TFE22-540

TFE22-540
The different steps to follow in order to obtain our results:

1. The first thing to do when receiving the data is to anonymise it and convert it with MRIcron. A naming convention is adopted: "sub#_E1" or "sub#_E2" representing respectively the first and the second diffusion scan for each patient, while "sub#_T1_E." stands for the anatomical scan.

2. Those files have to be downloaded on the clusters in alcoholic_study to be preprocessed by Elikopy:
- data_1 file containing all "_E1";
- data_2 file containing all "_E2";
- reverse_encoding (respectively in the two previous files) containing the so-called corrected diffusion scans, with the same naming convention as for the diffusion scans. If the DICOM files are corrupted, you can use reverse_corr.py to obtain the right files and have a correct conversion to NIFTI;
- T1 file containing all the anatomical scans (E1 and E2).

3. useful_fct.py: Creation of the needed directories (already done for this study, but has to be repeated for new patients; the only thing to change is the patient_numbers variable).

4. preprocessing.py:
4.1) Submit only the "Patient list" job.
4.2) Submit only the "Preprocessing" job.
4.3) Submit only the "Mask de matière blanche" job.
4.4) Submit the "Microstructural model" jobs one at a time.
The rest of this file can be used but was not necessary for us.

5. perso_path.py: Before doing the following steps, you have to adapt the on_cluster parameter of the perso_path_string function. If you set it to False, you also have to change the perso_path variable. Finally, you can also adapt the patient_numbers variable.

6. atlas_registration.py: Now that all patients have been preprocessed, we can perform an analysis by region. All the regions used are accessible through a list built with the atlas_modif_name.py file and called by other files. They are divided into "WM", "GM", "Lobes", "Subcortical" and "Cerebellum" areas. However, those regions are not in the proper space, so they need to be transformed to fit each patient space; we used the atlas_registration.py code to do that.
→ To launch this, use job_submission.py, first line of the patientlist_wrapper command only.

7. Corpus_callosum_reg.py: This is the code corresponding to the creation of our CC. As we can see in the following image, the downloaded Corpus Callosum was not of good quality (Fig A.), so we drew it ourselves (thanks to MRIcron) as depicted in Fig B., with its 3D representation (Fig C.). However, you don't need the registration_CC_on_perfect function or the last part of this file (MASK FA). You will only need to resubmit reg_CC_on_sub if you have new patients.
→ To launch this, use job_submission.py, third line of the patientlist_wrapper command only.

8. opening_closing.py: Just run this code to get a really smooth and good CC for each patient by applying some morphological operations.
The upper part of the following image represents the drawn Corpus Callosum registered on one patient, and the bottom part represents it after two morphological operations.

9. Corpus_callosum_division.py: Code to obtain a subdivision of the CC.
→ To launch this, use job_submission.py, fourth line of the patientlist_wrapper command only.

10. f0_f1_to_ftot.py: Creation of some files for the DIAMOND and MF models.

11. FA_DMD.py: Creation of a weighted version of the DTI metrics for the DIAMOND model.
→ To launch this, use job_submission.py, fifth line of the patientlist_wrapper command only, and afterwards the sixth line only.

12. moyenne_par_ROI.py: Creation of different Excel files containing the evolution of the different metrics.
→ To launch this, use job_submission.py, seventh line of the patientlist_wrapper command only.

13. clustering.py: Creation of the clusters based on the method implemented in DTI_kmeans_clustering.py, then creation of an Excel file called Result_ttest. The second code, clustering_v2.py, is another method of clustering.

14. analyse_ttest.py: Creation of all the plots concerning the analysis of each model separately (they are saved in the Plots file in Analyse). Then, creation of an Excel file called Cluster_ROI used to do the coherence analysis.

15. DTI_tissue_classification.py: To analyse changes in volume for WM, GM and CSF.

16. volume_zones.py: To analyse changes in volume for certain areas of the brain.

17. comportement.py: To analyse the data coming from behavioral information.
alcokit
ALgorithmic COmposition KIT (alcokit)This is a small library for performing various tasks on musical data in the frequency domain. The goal ofalcokitis to offer practical abstractions to programming composers and musicians.Available functionalities range from time-segmentation and polyphonic pitch detection to efficient hdf5 storage and sequence modeling.alcokitis still in early development. Documentation and use-case examples are coming soon...
alcom
ALCOM
Comments aligner for assembler.

Installation
From PyPI:

py -m pip install alcom
pip3 install alcom

Usage
CLI Options

short | long | description
-f | --file | Sets the filename for aligning. If not set, it will align all files in the directory recursively.
-nbc | --align_no_blank_comments | Leave no blank comments. If not set, a blank ";" splitter is placed after every code line.

Running

alcom
alcom -f asmfile.asm
alcom -f asmfile.asm -nbc

Example
Before:

.MODEL TINY ; set memory model
.DOSSEG
.DATA
MSG DB "Hello,World!", 0Dh, 0Ah, '$' ; message
.CODE
.STARTUP
MOV AH, 09h ; moves 09h into ah
MOV DX, OFFSET MSG
INT 21h ; run int 21h
MOV AH, 4Ch
INT 21h ; exit
END

After:

.MODEL TINY                           ; set memory model
.DOSSEG                               ;
.DATA                                 ;
MSG DB "Hello,World!", 0Dh, 0Ah, '$'  ; message
.CODE                                 ;
.STARTUP                              ;
MOV AH, 09h                           ; moves 09h into ah
MOV DX, OFFSET MSG                    ;
INT 21h                               ; run int 21h
MOV AH, 4Ch                           ;
INT 21h                               ; exit
END                                   ;

Tips
VS Code
To add auto aligning after save:
1. Add the "Run on Save" extension.
2. Press ctrl+P and search for "Preferences: Open Workspace Settings (JSON)".
3. Add the code below into the opened file and save:

{
    "emeraldwalk.runonsave": {
        "commands": [
            {
                "match": ".asm",
                "cmd": "alcom -nbc"
            }
        ]
    }
}

You are done!

TODO
[❌] Add margin options
[❌] Fix issue where the comment separator can be placed inside strings
alcor
No description available on PyPI.
alcov
Alcov
Abundance learning for SARS-CoV-2 variants. The primary purpose of the tool is:

Estimating abundance of variants of concern from wastewater sequencing data

You can read more about how Alcov works in the preprint, "Alcov: Estimating Variant of Concern Abundance from SARS-CoV-2 Wastewater Sequencing Data".

The tool can also be used for:

Converting nucleotide and amino acid mutations for SARS-CoV-2, such as those found on https://covariants.org/variants/S.N501
Determining the frequency of mutations of interest in BAM files
Plotting the depth for each ARTIC amplicon (https://github.com/artic-network/artic-ncov2019/tree/master/primer_schemes/nCoV-2019/V3)
Comparing amplicon GC content with its read depth (as a measure of degradation)

The tool is under active development. If you have questions or issues, please open an issue on GitHub or email me (email in setup.py).

Installing
The latest release can be downloaded from PyPI:

pip install alcov

This will install the Python library and the CLI. To install the development version, clone the repository and run:

pip install .

Usage example
Preprocessing
Alcov expects a BAM file of reads aligned to the SARS-CoV-2 reference genome. For an example of how to process Illumina reads, check the prep directory for a script named "prep.py" which outlines our current preprocessing pipeline, including the generation of a "samples.txt" file used by the alcov "find_lineages" command.

Estimating relative abundance of variants of concern:

alcov find_lineages reads.bam

Finding lineages in BAM files for multiple samples:

alcov find_lineages samples.txt

Where samples.txt looks like:

path/to/reads1.bam Sample 1 name
path/to/reads2.bam Sample 2 name
...

Example usage: To estimate the relative abundance of lineages in a list of samples (samples.txt), while considering only positions with a minimum depth of 10 reads, the following command can be used. This will also save the heatmap as a .png image and the corresponding frequencies as a csv file.

alcov find_lineages --min_depth=10 --save_img=True --csv=True samples.txt

Optionally specify which VOCs to look for (Note: this will restrict alcov to only consider the lineages specified in this text file. Do not provide this file if you wish alcov to consider all lineages for which it has constellation files.):

alcov find_lineages reads.bam lineages.txt

Where lineages.txt looks like the following. Note: these lineages must be chosen from the list of lineages that alcov has constellation files for (updated weekly), found in "alcov/alcov/data/constellations/":

BA.5-like
BQ.1.1-like
XBB-like
XBB.1.5-like
...

Optionally change the minimum read depth (default 40):

alcov find_lineages --min_depth=5 reads.bam

Optionally show how predicted mutation rates agree with observed mutation rates:

alcov find_lineages --show_stacked=True reads.bam

Use mutations which are found in multiple VOCs (can help for low coverage samples) - this is now the default behaviour:

alcov find_lineages --unique=False reads.bam

Plotting change in lineage distributions over time for multiple sites:

alcov find_lineages --ts samples.txt

Where samples.txt looks like:

reads1.bam SITE1_2021-09-10
reads2.bam SITE1_2021-09-12
...
reads3.bam SITE2_2021-09-10 reads4.bam SITE2_2021-09-12 ...Converting mutation names:$ alcov nt A23063T A23063T causes S:N501Y $ alcov aa S:E484K G23012A causes S:E484KFinding mutations in BAM file:alcov find_mutants reads.bamFinding mutations in BAM files for multiple samples:alcov find_mutants samples.txtWheresamples.txtlooks like:reads1.bam Sample 1 name reads2.bam Sample 2 name ...Runningfind_mutantswill print the number of reads with and without each mutation in each sample and then generate a heatmap showing the frequencies for all samples.You can also specify a custom mutations file:alcov find_mutants samples.txt mutations.txtWheremutations.txtlooks like:S:N501Y G23012A ...Getting the read depth for each ampliconalcov amplicon_coverage reads.bamoralcov amplicon_coverage samples.txtPlotting amplicon GC content against amplicon depthalcov gc_depth reads.bamoralcov gc_depth samples.txt
alcpack
ALCPack
A python package to create an edge between any two given nodes in a simple, connected, and undirected graph via a sequence of local complementation operations.

Requirements
ALCPack is developed based on Python 3.0 (or more recent releases), and uses NetworkX -- a python package to analyze complex networks.

Installation
To install ALCPack using pip:

$ python3 -m pip install --upgrade pip
$ python3 -m pip install alcpack

The steps to build the package locally for installation via pip are given in ALCPack_References, available in the documentation folder.

Description
ALCPack provides three functions, as listed below.

local_complementation(G, target): Performs a local complementation operation on the input graph G w.r.t. the node 'target' and returns the transformed graph.

path_category(G, path): Determines the category of a simple path connecting two chosen nodes in a simple, connected, and undirected graph, and distills a category 1 path out of the chosen path if the chosen path is of category 2. Returns the category (category 1 or category 2) of the chosen path, and a distilled category 1 path (or the chosen path itself) if the chosen path is of category 2 (category 1).

alc_function(G, path): Performs the adaptive local complementation operation on the input graph G w.r.t. the chosen simple path 'path'. Returns the modified graph with an edge between the source and the destination nodes.

Use
To call ALCPack functions in the Python 3.0 (or higher) environment:

>>> import alcpack as alc

To use ALCPack functions in the Python 3.0 (or higher) environment:

>>> H = alc.local_complementation(G, target)
>>> n, newpath = alc.path_category(G, path)
>>> H = alc.alc_function(G, path)

The parameters used in the functions are as below.

Input
G: NetworkX graph, input parameter.
path: List of nodes, input parameter. Represents a path between a source (first node in 'path') and a destination (last node in 'path').
target: Node, input parameter. Represents a node with respect to which the local complementation operation is performed.

Output
H: NetworkX graph, output parameter. Transformed graph due to ALCPack functions.
n: Integer, output parameter. Takes value 1 or 2; represents the category of the input 'path'.
newpath: List of nodes, output parameter. Category 1 path.

Information on the theoretical background of ALCPack is given in ALCPack_References, which is available in the documentation folder.
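Putting the documented functions together, an end-to-end run might look like the sketch below; the graph and path chosen here are illustrative assumptions:

import networkx as nx
import alcpack as alc

# A simple, connected, undirected graph: a 5-node cycle.
G = nx.cycle_graph(5)

# A simple path from source node 0 to destination node 2.
path = [0, 1, 2]

# Category of the chosen path, plus a distilled category 1 path.
n, newpath = alc.path_category(G, path)

# Adaptive local complementation: returns a graph with an edge
# between the source and the destination nodes of the path.
H = alc.alc_function(G, newpath)

print(n, H.has_edge(0, 2))  # the edge 0-2 should now exist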
alcss
✨ ALCSS ✨
Aligner of a splitter in css files.

Example
Imagine you have a file - my-awesome-style.css:

body {
margin: 25px;
background-color: rgb(240,240,240);
font-family: arial, sans-serif;
font-size: 14px;
}

Run this program:

alcss my-awesome-style.css

Result:

body {
    margin           :  25px;
    background-color :  rgb(240,240,240);
    font-family      :  arial, sans-serif;
    font-size        :  14px;
}

Requirements
Check that you have:
python 3.x
pip (optionally)

Installation
This program can be installed from PyPI:

py -m pip install alcss
pip3 install alcss

Arguments

short | long | description | default
-l | --lmargin | Sets spaces before the ":" character | 2
-r | --rmargin | Sets spaces after the ":" character | 2
-i | --indent | Sets indentation inside the {} block | 4
-s | --shout | Forces the program to print info to stdout | False
-h | --help | Shows help |

Options meaning

div {
____border_____:_____1px solid black;
  ↑        ↑        ↑
indent   lmargin  rmargin
}

Tips
Use after the default formatter of VS Code, as the default formatter removes spaces before the ":" character.
To add auto aligning after save:
1. Add the Run on Save extension.
2. Press ctrl + P and search for "Preferences: Open Workspace Settings (JSON)".
3. Add the code below into the opened file and save:

{
    "emeraldwalk.runonsave": {
        "commands": [
            {
                "match": ".css",
                "cmd": "alcss ${file}"
            }
        ]
    }
}
alcyone
# alcyone
ald
Failed to fetch description. HTTP Status Code: 404
alda-python
alda-pythonPython client for Alda (https://alda.io/).UsageDownload and install Alda per theInstall instructionsRun the Alda REPL as a server using port 12345:$ alda repl --server --port 12345In a different terminal, run an interactive Python session (e.g. IPython)Installalda-python!pip install --user alda-pythonImport and initialize the Alda Python client:from alda import Client client = Client()Create some Alda code, for example:code = """ (tempo! 90) piano: o3 c1/e/g/b | f2/a/>c/e ~ <e2/g/b/>d violin: o2 c1 ~ | f2 ~ g2 percussion: o2 [c8 r8 c8 r8 e8 c8 r8 c8] * 2 """Play the code:client.play(code)
ald-distributions
No description available on PyPI.
aldebaran
Aldebaran Python Client is a client library for accessing Aldebaran from python code. This library also gets bundled with any Python algorithms in Aldebaran.
aldegonde
aldegonde is a Python library for classical cryptography. It is written to accommodate non-standard alphabets (i.e. not just A-Z).
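For flavour, here is what "classical cryptography over a non-standard alphabet" means in plain Python; this is a generic Caesar-style illustration of the problem domain, not aldegonde's API:

# Generic illustration (not aldegonde's API): a Caesar shift over a
# custom 5-symbol alphabet instead of A-Z.
ALPHABET = ["A", "B", "Γ", "Δ", "E"]

def caesar(text, key):
    index = {symbol: i for i, symbol in enumerate(ALPHABET)}
    return "".join(ALPHABET[(index[s] + key) % len(ALPHABET)] for s in text)

ciphertext = caesar("ABΓ", 2)
print(ciphertext)              # ΓΔE
print(caesar(ciphertext, -2))  # ABΓ - decryption is the inverse shift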
aldemsubs
Aldemsubs
Aldemsubs is a command line application to subscribe to Youtube channels and automatically download videos. It provides a way to keep track of channels without the need of a Youtube account. It stores channel and video information in a database and automatically updates them based on the RSS feed Youtube provides for every channel.

Aldemsubs works on GNU/Linux and MacOS (only tested on Mojave). It should also be compatible with Windows; however, this is not thoroughly tested.

Installation

pip install --user aldemsubs

For user installation make sure that ~/.local/bin/ is in your $PATH.

Usage
Subscribe to a channel:

aldemsubs -s <channel id>

In most cases the channel id is just part of the URL to a channel page. If that is not the case, it can be found by examining the source code of the channel page. The most reliable way to find a channel id is by searching for "rss" in the page source. This will take you to the RSS link of the channel, which contains the channel id.

Unsubscribe from a channel:

aldemsubs -r <channel id>

Removes the channel and video data from the database. Downloaded videos will not be deleted.

List subscriptions:

aldemsubs -l

Lists channel id and title of all subscribed channels.

Update all subscriptions:

aldemsubs -u

This loads the current version of the RSS feed for each channel and adds new videos to the database.

Download newly added videos:

aldemsubs -d

Downloads all new videos to the directory set in the configuration file. By default videos are downloaded to ~/Youtube.

Delete old videos:

aldemsubs -x

Deletes all video files older than a set amount of days (with respect to the download date, not the publication date).

Configuration
Default configuration:

[aldemsubs]
# Videos will be downloaded to this folder (download_path/channel_title/video).
download_path=~/Youtube/
# Where the database is stored
db_file_path=~/Youtube/aldemsubs.sqlite
# A video marked as new will be downloaded by aldemsubs -d (see usage). This
# setting controls after how many days the new flag is removed by aldemsubs -u.
# This only matters if you frequently update the database without downloading
# videos
mark_videos_old_after=5  # days
# Controls for how many days to keep a video file after download. Videos older
# than this will be removed by aldemsubs -x
delete_downloads_after=5  # days
# How many videos should be marked as new / for download after you subscribe to
# a channel? Set this to a negative number to download all videos in the RSS
# feed (usually Youtube only lists the last 15 videos in the feed)
after_subscribe_download_n_videos=3

Changes to the configuration can be stored in ~/.config/aldemsubs.ini.

systemd service and timer
To update your subscriptions and download new videos you can install a systemd service and timer. Find installation scripts in this repo under systemd/install_service.sh and systemd/install_timer.sh. Please read the scripts before executing them and adjust them to your liking. Don't forget to systemctl enable aldemsubs.timer after installation. The service and the timer are incompatible with MacOS and Windows.

Windows compatibility
Aldemsubs should work on Windows. However, as of now, some of my tests do not run properly on Windows (for more details take a look at the issues). If you want to try running aldemsubs on Windows in spite of that, the command is python -m aldemsubs. All the options are the same. User configuration should be stored at %APPDATA%\aldemsubs\aldemsubs.ini and the default download directory is %USERPROFILE%\Youtube\.
alder
Consensus DHT database. Nested key value store.
ald-harishpvv
Failed to fetch description. HTTP Status Code: 404
aldian-probability
No description available on PyPI.
aldine
aldine
A Wagtail app for facilitating responsive rendering. Please bear with us while we prepare more detailed documentation.

Compatibility
aldine's major.minor version number indicates the Wagtail release it is compatible with. Currently this is Wagtail 4.1.x.

Installation
Install using pip:

pip install aldine

Add aldine to your INSTALLED_APPS setting:

INSTALLED_APPS = [
    # ...
    'aldine'
    # ...
]
aldjemy
Aldjemy integrates SQLAlchemy into an existing Django project, to help you build complex queries that are difficult for the Django ORM.

While other libraries use SQLAlchemy reflection to generate SQLAlchemy models, Aldjemy generates the SQLAlchemy models by introspecting the Django models. This allows you to better control what properties in a table are being accessed.

Installation
Add aldjemy to your INSTALLED_APPS. Aldjemy will automatically add an sa attribute to all models, which is an SQLAlchemy Model.

Example:

User.sa.query().filter(User.sa.username == 'Brubeck')
User.sa.query().join(User.sa.groups).filter(Group.sa.name == "GROUP_NAME")

Explicit joins are part of the SQLAlchemy philosophy, so don't expect Aldjemy to be a Django ORM drop-in replacement. Instead, you should use Aldjemy to help with special situations.

Settings
You can add your own field types to map django types to sqlalchemy ones with the ALDJEMY_DATA_TYPES settings parameter. The parameter must be a dict: each key is the result of field.get_internal_type(), and each value must be a one-argument function. You can get the idea from aldjemy.table.

Also it is possible to extend/override the list of supported SQLAlchemy engines using the ALDJEMY_ENGINES settings parameter. The parameter should be a dict: the key is the substring after the last dot of the Django database engine setting (e.g. sqlite3 from django.db.backends.sqlite3), and the value is the SQLAlchemy driver which will be used for the connection (e.g. sqlite, sqlite+pysqlite). This could be helpful if you want to use django-postgrespool.

Mixins
Often django models have helper functions and properties that help represent the model's data (__str__) or some model-based logic. To integrate them with aldjemy models you can put these methods into a separate mixin:

class TaskMixin:
    def __str__(self):
        return self.code

class Task(TaskMixin, models.Model):
    aldjemy_mixin = TaskMixin
    code = models.CharField(_('code'), max_length=32, unique=True)

Voilà! You can use __str__ on aldjemy classes, because this mixin will be mixed into the generated aldjemy model.

If you want to expose all methods and properties without creating a separate mixin class, you can use the aldjemy.meta.AldjemyMeta metaclass:

class Task(models.Model, metaclass=AldjemyMeta):
    code = models.CharField(_('code'), max_length=32, unique=True)

    def __str__(self):
        return self.code

The result is the same as with the example above, only you didn't need to create the mixin class at all.

Release Process
Make a Pull Request with an updated changelog and a bumped version of the project:

poetry version (major|minor|patch)  # choose which version to bump

Once the pull request is merged, create a github release with the same version, on the web console or with the github cli:

gh release create

Enjoy!
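As a concrete illustration of the two settings described above, a settings.py fragment might look like this; the custom field name and the driver choice are invented examples:

# settings.py -- illustrative values for the two optional settings;
# 'MyJSONField' is a made-up internal type name.
import sqlalchemy.types as sa_types

ALDJEMY_DATA_TYPES = {
    # key: the result of field.get_internal_type()
    # value: a one-argument function returning an SQLAlchemy type
    'MyJSONField': lambda field: sa_types.Text(),
}

ALDJEMY_ENGINES = {
    # key: the part after the last dot of the Django engine setting,
    # e.g. 'sqlite3' from 'django.db.backends.sqlite3'
    'sqlite3': 'sqlite+pysqlite',
}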
aldkit
ALD
The ALD package is a tool to parse the output of the ALD code, which is written by Prof. Ankit Jain, IIT Bombay. The ALD code solves the linearized Boltzmann transport equation for three-phonon, four-phonon and electron-phonon scattering rates from ab initio methods. For more details: [email protected], [email protected]

Use this package as:

from aldkit.ald import ALD

An ALD class instance has to be created as:

ald = ALD(outdir='path')

Then methods can be called as:

ald.get_frequencies()

Cite: Phonon properties and thermal conductivity from first principles, lattice dynamics, and the Boltzmann transport equation, A Jain et al, https://doi.org/10.1063/1.5064602
aldovilela1
Failed to fetch description. HTTP Status Code: 404
aldovilela2
Failed to fetch description. HTTP Status Code: 404
aldovilela3
No description available on PyPI.
aldream_test
UNKNOWN
aldryn-addons
Aldryn Addons are re-usable django apps that follow certain conventions to abstract out complicated configuration from the individual django website project into upgradable packages. With this approach it is possible to avoid repetitive "add this to INSTALLED_APPS and that to MIDDLEWARE_CLASSES and add these to urls.py" work. The settings logic is bundled with the addon and only interesting "meta" settings are exposed. It is a framework to utilise such addons in django projects.

The goal is to keep the footprint inside the django website project as small as possible, so updating things usually just means bumping a version in requirements.txt and no other changes in the project.

This addon still uses the legacy "Aldryn" naming. You can read more about this in our support section.

Contributing
This is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our contribution guidelines. We're grateful to all contributors who have helped create and maintain this package. Contributors are listed at the contributors section.

Documentation
See REQUIREMENTS in the setup.py file for additional dependencies.

Installation
aldryn-addons is part of the Divio Cloud platform. For a manual install:

Add aldryn-addons to your project's requirements.txt or pip install it. It is also highly recommended to install aldryn-django. This is django itself bundled as an addon:

pip install aldryn-addons aldryn-django==1.6.11

At the top of the settings.py add the following code snippet:

INSTALLED_ADDONS = [
    'aldryn-django',
]

# add your own settings here that are needed by the installed Addons
import aldryn_addons.settings
aldryn_addons.settings.load(locals())

# add any other custom settings here

Addons can automatically add code to the root urls.py, so it's necessary to add aldryn_addons.urls.patterns() and aldryn_addons.urls.i18n_patterns(). The code below is for Django 1.8 and above. For older versions of Django, please add the prefix parameter to i18n_patterns: i18n_patterns('', ...

from django.urls import re_path, include
from django.conf.urls.i18n import i18n_patterns
import aldryn_addons.urls

urlpatterns = [
    # add your own patterns here
] + aldryn_addons.urls.patterns() + i18n_patterns(
    # add your own i18n patterns here
    re_path(r'^myapp/', include('myapp.urls')),
    *aldryn_addons.urls.i18n_patterns()  # MUST be the last entry!
)

Please follow the installation instructions for aldryn-django for complete integration. Then follow the setup instructions for aldryn-django-cms for the examples below.

Adding Addons
In this example we're going to install django CMS Link, which requires Aldryn django CMS.

pip install the Addon:

pip install djangocms-link

Add it to INSTALLED_ADDONS in settings.py:

INSTALLED_ADDONS = [
    'aldryn-django',
    'aldryn-cms',
    'djangocms-link',
]

Copy aldryn_config.py and addon.json from the addon into the addons directory within your project (addons/djangocms-link/aldryn_config.py and addons/djangocms-link/addon.json). If aldryn_config.py defines any settings on the settings Form, put them in addons/djangocms-link/settings.json; if not, put {} into it.

Note: the need to manually copy aldryn_config.py and addon.json is due to legacy compatibility with the Divio Cloud platform and will no longer be necessary in a later release.

Note: future versions will include a little webserver with a graphical UI to edit the settings in settings.json, much like it is provided on the Divio Cloud platform.

You are all set.
The code in aldryn_config.py will take care of configuring the addon.

Running Tests
You can run tests by executing:

virtualenv env
source env/bin/activate
pip install -r tests/requirements.txt
python setup.py test
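For orientation, a minimal aldryn_config.py along the conventions described above might look like the sketch below; the "greeting" field is invented purely for illustration, and the exact form API should be checked against the aldryn-client documentation:

# addons/my-addon/aldryn_config.py -- a hedged sketch; the 'greeting'
# setting is purely illustrative.
from aldryn_client import forms

class Form(forms.BaseForm):
    greeting = forms.CharField('Greeting shown on the site', required=False)

    def to_settings(self, data, settings):
        # Values entered on the form arrive in `data`; fold them into the
        # settings dict that aldryn_addons.settings.load() assembles.
        settings['GREETING'] = data.get('greeting') or 'hello'
        return settings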
aldryn-apphook-reload
Reload urls of django CMS Apphooks without a restart

Warning: this is a prototype.

Introduction
Django CMS allows extending cms pages with Apphooks. Apphooks are saved in the database, which means urls depend on the database contents. For changes to Apphooks to be reflected in reverse() and {% url ... %} calls, a webserver restart is usually necessary.

aldryn-apphook-reload will automatically reload urls from django CMS apphooks, without the need of a webserver restart. It listens to cms.signals.urls_need_reloading and causes a reload. The signal is only available in the process where the change to the database was made. In order for other processes to know when to reload (be it a gunicorn worker or a process on another server), a token is saved in the database. This implies a performance hit: 1 database query per request.

Installation
Add aldryn_apphook_reload to INSTALLED_APPS.
Add aldryn_apphook_reload.middleware.ApphookReloadMiddleware to MIDDLEWARE (place it as close to the top as possible).
Run migrations: python manage.py migrate aldryn_apphook_reload

Advanced
If the process that triggers cms.signals.urls_need_reloading is a simple runserver under load (~2 requests per second), the reload sometimes fails in the other processes. This might be due to an unknown race condition, where the token in the database is refreshed already, but the new apphooks are not in the database yet. The other processes would try to reload right away and would reload the old apphooks. Tests with gunicorn in the default mode and in the gevent mode worked fine, though.

Why not save the token in the cache backend for better performance? Because altering the cache would happen right away, before the database transaction is committed at the end of the request. Thus other processes would reload their urls prematurely.
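Conceptually, the token check described above boils down to something like the following toy sketch (not the package's actual implementation):

# Conceptual sketch only -- NOT aldryn-apphook-reload's real code.
# A token in the database stands in for "apphook urls changed"; each
# process compares it to the token it last saw, at the cost of one
# database read per request.
_DB_TOKEN = {"value": 0}  # toy stand-in for the database row

def fetch_token():
    return _DB_TOKEN["value"]  # the real middleware does one DB query here

class ApphookReloadSketch:
    def __init__(self, get_response):
        self.get_response = get_response
        self.seen = fetch_token()

    def __call__(self, request):
        current = fetch_token()
        if current != self.seen:
            self.seen = current
            # the real middleware reloads the urlconf here
            print("token changed -> reload apphook urls")
        return self.get_response(request)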
aldryn-apphooks-config
aldryn-apphooks-configNamespaces based configuration for ApphooksBasic conceptsThe concept of apphooks-config is to store all the configuration in an applications-specific model, and let the developer specify the desired option in a form. In the views the model instance specific for the current application namespace is loaded (through a mixin) and it’s thus available in the view to provide the configuration for the current namespace.Namespaces can be created on the fly in thePageadminAdvanced settingsby following the steps above. When creating an application configuration, you are in fact defining a namespace, which is saved in the same field in thePagemodel as the plain namespaces.ContributingWe’re grateful to all contributors who have helped create and maintain this package.Contributors are listed atcontributions page.Supported versionsPython: 3.9 - 3.11 Django: 3.2 - 4.2 django CMS: 3.9 - 3.11Implementation step-guideDefine a AppHookConfig model incms_appconfig.py:from aldryn_apphooks_config.models import AppHookConfig class NewsBlogConfig(AppHookConfig): passImplementation can be completely empty as the schema is defined in the parent (abstract) modelUse apphooks managers in your model:from aldryn_apphooks_config.managers import AppHookConfigManager class Article(models.Model): title = models.CharField() objects = AppHookConfigManager()AppHookConfigManageraddsnamespacemethod to manager and queryset:Article.objects.namespace('foobar')There is also a proper queryset, theApphooksConfigQueryset. Parler integrated variants can be found inaldryn_apphooks_config.managers.parler. Names areAppHookConfigTranslatableManagerandAppHookConfigTranslatableQueryset.Define a ConfigForm incms_appconfig.py:from app_data import AppDataForm from django import forms from aldryn_newsblog.models import NewsBlogConfig from aldryn_apphooks_config.utils import setup_config class BlogOptionForm(AppDataForm): # fields are totally arbitrary: any form field supported by # django-appdata is supported show_authors = forms.BooleanField(required=False) ... 
# this function will register the provided form with the model created
# at the above step
setup_config(BlogOptionForm, NewsBlogConfig)

# setup_config can be used as a decorator too, but the `model`
# attribute must be added to the form class
@setup_config
class BlogOptionForm(AppDataForm):
    model = NewsBlogConfig

Define an admin class for the AppHookConfig model (usually in admin.py):

from django.contrib import admin
from aldryn_apphooks_config.admin import BaseAppHookConfig

class BlogConfigAdmin(BaseAppHookConfig):
    def get_config_fields(self):
        # this method **must** be implemented and **must** return the
        # fields defined in the above form, with the ``config`` prefix
        # This is dependent on the django-appdata API
        return ('config.show_authors', ...)

Define a CMSApp derived from the CMSConfigApp provided by this application (in cms_app.py / cms_apps.py):

from aldryn_apphooks_config.app_base import CMSConfigApp
from cms.apphook_pool import apphook_pool
from django.utils.translation import ugettext_lazy as _
from .models import NewsBlogConfig

class NewsBlogApp(CMSConfigApp):
    name = _('NewsBlogApp')
    urls = ['aldryn_newsblog.urls']
    app_name = 'aldryn_newsblog'
    # this option is specific to CMSConfigApp, and links the
    # CMSApp to a specific AppHookConfig model
    app_config = NewsBlogConfig

apphook_pool.register(NewsBlogApp)

Implement your views inheriting from AppConfigMixin:

from django.views.generic.detail import DetailView
from aldryn_apphooks_config.mixins import AppConfigMixin

class ArticleDetail(AppConfigMixin, DetailView):
    def get_queryset(self):
        return Article.objects.namespace(self.namespace)

AppConfigMixin provides complete support for namespaces, so the view is not required to set anything specific to support them; the following attributes are set for the view class instance:

current namespace in self.namespace
namespace configuration (the instance of NewsBlogConfig) in self.config
current application in the current_app parameter passed to the Response class

Test setup
To properly set up the data for tests to run for an apphook-config enabled application, make sure you add the following code to your TestCase:

class MyTestCase():
    def setUp(self):
        # This is the namespace represented by the AppHookConfig model instance
        self.ns_newsblog = NewsBlogConfig.objects.create(namespace='NBNS')
        self.page = api.create_page(
            'page', self.template, self.language, published=True,
            # this is the name of the apphook defined in the CMSApp class
            apphook='NewsBlogApp',
            # The namespace is the namespace field of the AppHookConfig instance created above
            apphook_namespace=self.ns_newsblog.namespace)
        # publish the page to make the apphook available
        self.page.publish(self.language)

Changelog
0.7.0 (2023-05-07)
Add Django 3.2+ support
0.6.0 (2020-05-12)
Add Django 3.0 support
0.5.3 (2019-10-19)
Fix media asset declaration on django 2.2+
0.5.2 (2019-01-02)
Changed deprecated rel.to to remote_field.model
Fixed migration for example app
Fixed issues for Django 2.0 and up
0.5.1 (2018-12-18)
Added support for Django 2.0 and 2.1
Removed support for Django < 1.11
Adapted testing infrastructure (tox/travis) to incorporate django CMS 3.6
Fixed setup.py
0.4.2 (2018-12-17)
Fixed issue with Django 1.10 and below in AppHookConfigWidget
0.4.1 (2018-04-10)
django-appdata>=0.2.0 is now required
0.4.0 (2018-03-19)
Added Django 1.11 compatibility
Added django CMS 3.5 compatibility
Implemented django-appdata 0.2 interface
Removed south migrations
Dropped support for django CMS 3.3 and below
Allowed using setup_config as a decorator
0.3.3 (2017-03-06)
Fixed MANIFEST.in typo
0.3.2 (2017-03-06)
Fixed setup.py issue
Added locale files to MANIFEST.in
0.3.1 (2017-03-02)
Added translation system
Added german translation
0.3.0 (2017-01-06)
Allowed overriding AppHookConfigField attributes
Dropped Django 1.7 and below
Dropped django CMS 3.1 and below
Added Django 1.10 support
0.2.7 (2016-03-03)
Set namespace as readonly
Added official Django 1.9 support
Updated readme
Used path_info instead of path in resolve
0.2.6 (2015-10-05)
Added support for Python 3.5
Added support for Django 1.9a1
Code style cleanup and tests
0.2.5 (2015-09-25)
Added support for Django 1.8, django CMS 3.2
AppHookConfigTranslatableManager.get_queryset should use queryset_class
Skipped overriding admin form if app_config field not present
0.2.4 (2015-04-20)
Fixed issue where an apphook could not be changed, once set.
Added optional 'default' kwarg to namespace_url templatetag
0.1.0 (2014-01-01)
Released first version on PyPI.
aldryn-background-image
=======================
Aldryn Background Image
=======================

Aldryn Background Image provides a plugin that allows you to set, well, background images.

------------
Installation
------------

This plugin requires `django CMS` 3.0.12 or higher to be properly installed.

* Within your ``virtualenv`` run ``pip install aldryn-background-image``
* Add ``'aldryn_background_image'`` to your ``INSTALLED_APPS`` setting.
* Run ``manage.py migrate aldryn_background_image``.

-----
Usage
-----

TBD

------------
Translations
------------

If you want to help translate the plugin please do it on Transifex:
https://www.transifex.com/projects/p/django-cms/resource/aldryn-background-image/
aldryn-background-image-hf
No description available on PyPI.
aldryn-blog
Simple blogging application. It allows you to:

write a taggable post message
plug in latest post messages (optionally filtered by tags)
attach a post message archive view

Installation

Aldryn Platform Users
Choose a site you want to install the add-on to from the dashboard. Then go to Apps -> Install app and click Install next to the Blog app. Redeploy the site.

Manual Installation
NOTE: If you are using a database other than PostgreSQL, check out the table below.

Database support:
SQLite3: Not supported
MySQL: Requires time zone support
PostgreSQL: Fully supported

Run pip install aldryn-blog.

Add the apps below to INSTALLED_APPS:

INSTALLED_APPS = [
    …
    'aldryn_blog',
    'aldryn_common',
    'django_select2',
    'djangocms_text_ckeditor',
    'easy_thumbnails',
    'filer',
    'hvad',
    'taggit',
    # for search
    'aldryn_search',
    'haystack',
    …
]

Posting
You can add post messages in the admin interface now. Search for the label Aldryn_Blog. In order to display them, create a CMS page and install the app there (choose Blog from the Advanced Settings -> Application dropdown). Now redeploy/restart the site again. The above CMS page has become a blog post archive view.

About the Content of a Post
In Aldryn Blog, there are two content fields in each Post which may be confusing: Lead-In and Body.

The Lead-In is text/html only and is intended to be a brief "teaser" or introduction to the blog post. The lead-in is shown in the blog list views and is presented as the first paragraph (or so) of the blog post itself. It is not intended to be the whole blog post.

To add the body of the blog post, the CMS operator will:

Navigate to the blog post view (not the list view);
Click the "Live" button in the CMS toolbar to go into edit mode;
Click the "Structure" button to enter the structure sub-mode;
Here the operator will see the placeholder "ALDRYN_BLOG_POST_CONTENT"; use the menu on the far right of the placeholder to add whatever CMS plugin the operator wishes (this will often be the Text plugin);
Double-click the new Text plugin (or whatever was selected) to add the desired content;
Save changes on the plugin's UI;
Press the "Publish" button in the CMS toolbar.

Available CMS Plug-ins
The Latest Blog Entries plugin lets you list the n most recent blog entries, optionally filtered by tags.
The Blog Authors plugin lists blog authors and the number of posts they have authored.
The Tags plugin lists the tags applied to all posts and allows filtering by these tags.

Search
If you want the blog posts to be searchable, be sure to install aldryn-search and its dependencies. Your posts will be searchable using django-haystack. You can turn this behavior off by setting ALDRYN_BLOG_SEARCH = False in your django settings.

Additional Settings
ALDRYN_BLOG_SHOW_ALL_LANGUAGES: By default, only the blog posts in the current language will be displayed. By setting the value of this option to True, you can change the behaviour to display all posts from all languages instead.
ALDRYN_BLOG_USE_RAW_ID_FIELDS: Enable raw ID fields in admin (default = False).
aldryn-boilerplates
The concept
Aldryn Boilerplates aims to solve a familiar Django problem. Sometimes re-usable applications need to provide their own templates and staticfiles, but in order to be useful, these need to commit themselves to particular frontend expectations - thereby obliging the adopter to override these files in order to adapt the application to other frontends, or create a new fork of the project aimed at a different frontend setup.

It's especially difficult to provide a rich and complete frontend for a re-usable application, because there's a conflict between creating a useful frontend and creating an agnostic one. The solution is to build in provision for different, switchable, frontend expectations into the re-usable application, and this is what Aldryn Boilerplates does.

On the Aldryn platform, a Boilerplate is a complete set of frontend expectations, assumptions, opinions, conventions, frameworks, templates, static files and more - a standard way of working for frontend development. Many developers do in fact work with their own preferred standard sets of frontend tools and code for all their projects; in effect, with their own Boilerplates, even if they don't use that name. Aldryn Boilerplates is intended to make it easier to provide support for multiple Boilerplates in re-usable applications, and to switch between them.

If users of a particular frontend framework or system would like to use it with a certain re-usable application, they now no longer need to rip out and replace the existing one, or override it at the project level every single time. Instead, with Aldryn Boilerplates they can simply add the frontend files to the application, alongside the ones for existing supported Boilerplates. A simple setting in the project tells applications that support Aldryn Boilerplates which one to use.

Using Aldryn Boilerplates
Aldryn Boilerplates doesn't change the way regular files in templates and static are discovered - a re-usable application that supports Aldryn Boilerplates can also work perfectly well in a project that doesn't have it installed. However, to support Aldryn Boilerplates, your application should place Boilerplate-specific frontend files in boilerplates/my-boilerplate-name/templates/ and boilerplates/my-boilerplate-name/static/.

For example, to add support for the Standard Aldryn Boilerplate (aldryn-boilerplate-bootstrap3) to your application, place the files in boilerplates/bootstrap3/templates/ and boilerplates/bootstrap3/static/.

Hint: don't forget to add boilerplates to MANIFEST.in, alongside static and templates, when creating Python packages.

Note: the convention is to prefix the github repository name with aldryn-boilerplate-. Your Boilerplate could be called something like aldryn-boilerplate-mycompany-awesome. To use it in a project, you'd set ALDRYN_BOILERPLATE_NAME = 'mycompany-awesome' and put templates and static files into boilerplates/mycompany-awesome/ in Addons. ALDRYN_BOILERPLATE_NAME is set automatically on Aldryn based on "identifier": "mycompany-awesome" in boilerplate.json when submitting a boilerplate to Aldryn.

Installation
Note: aldryn-boilerplates comes pre-installed on the Aldryn Platform and ALDRYN_BOILERPLATE_NAME is set automatically.

pip install aldryn-boilerplates

Configuration
Django 1.8+
In general the configuration stays the same, but you should respect the changes that were introduced by Django 1.8.
In particular in Django 1.8 context processors were moved fromdjango.coretodjango.template.Be sure to includealdryn_boilerplatestoINSTALLED_APPS, adjustSTATICFILES_FINDERSand finally configureTEMPLATES.ForTEMPLATESyou need to addaldryn_boilerplates.context_processors.boilerplatetocontext_processorsand alterloadersin the same way as we do it for Django versions prior to 1.8.Notethat in the example below we are altering the default values, so if you are using something that is custom - don’t forget to add that too.Here is an example of a simple configuration:INSTALLED_APPS = [ ... 'aldryn_boilerplates', ... ] STATICFILES_FINDERS = ( 'django.contrib.staticfiles.finders.FileSystemFinder', 'aldryn_boilerplates.staticfile_finders.AppDirectoriesFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', ) TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'OPTIONS': { 'context_processors': [ 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', 'django.template.context_processors.i18n', 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.template.context_processors.media', 'django.template.context_processors.csrf', 'django.template.context_processors.tz', 'sekizai.context_processors.sekizai', 'django.template.context_processors.static', 'cms.context_processors.cms_settings', 'aldryn_boilerplates.context_processors.boilerplate', ], 'loaders': [ 'django.template.loaders.filesystem.Loader', 'aldryn_boilerplates.template_loaders.AppDirectoriesLoader', 'django.template.loaders.app_directories.Loader', ], }, }, ]Adding aldryn-boilerplate support to existing packagesThe recommended approach is to add a dependency to aldryn-boilerplates and to move existingstaticandtemplatefiles to a boilerplate folder (completely removestaticandtemplates). If you’re in the process of re-factoring your existing templates with something new, put them into thelegacyboilerplate folder and setALDRYN_BOILERPLATE_NAME='legacy'on projects that are still using the old templates. The new and shiny project can then useALDRYN_BOILERPLATE_NAME='bootstrap3'to use the new Aldryn Bootstrap Boilerplate (aldryn-boilerplate-bootstrap3). Or any other boilerplate for that matter.Removingstaticandtemplateshas the benefit of removing likely deprecated templates from the very prominent location, that will confuse newcomers. It also prevents having not-relevant templates and static files messing up your setup.
aldryn-bootstrap3
Aldryn Bootstrap 3is a plugin bundle for django CMS providing several components from the popularBootstrap 3framework.This addon is compatible withDivio Cloudand is also available on thedjango CMS Marketplacefor easy installation.ContributingThis is a an open-source project. We’ll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review ourcontribution guidelines.One of the easiest contributions you can make is helping to translate this addon onTransifex.DocumentationSeeREQUIREMENTSin thesetup.pyfile for additional dependencies:Python 2.7, 3.3 or higherDjango 1.6 or higherDjango Filer 1.2.4 or higherDjango Text CKEditor 3.1.0 or higherMake suredjango Fileranddjango CMS Text CKEditorare installed and configured appropriately.InstallationFor a manual install:runpip installaldryn-bootstrap3addaldryn_bootstrap3to yourINSTALLED_APPSrunpython manage.py migrate aldryn_bootstrap3ConfigurationAldryn Bootstrap 3replacesthe following django CMS plugins:django CMS Link:Link and Buttondjango CMS Picture:Imagedjango CMS File:FileIt provides the followingstandardBootstrap 3 components:AccordionAlertBlockquoteCarouselCodeGrid (Row and Column)GlyphiconsJumbotronLabelList GroupPanel (Heading, Body and Footer)ResponsiveTabsWellIt also provides the following3rd partycomponents:Font AwesomeSpacerThese components need to be manually configured in order to work properly inside your project. Seethis gistfor additional information on a recommended spacer configuration.SettingsThis addon provides astandardtemplate for Carousels. You can provide additional style choices by adding aALDRYN_BOOTSTRAP3_CAROUSEL_STYLESsetting:ALDRYN_BOOTSTRAP3_CAROUSEL_STYLES = [ ('feature', _('Featured Version')), ]You’ll need to create thefeaturefolder insidetemplates/aldryn_bootstrap/plugins/carousel/otherwise you will get atemplate does not existerror. You can do this by copying thestandardfolder inside that directory and renaming it tofeature.In addition you can set or extend your own icon fonts usingALDRYN_BOOTSTRAP3_ICONSETS:ALDRYN_BOOTSTRAP3_ICONSETS = [ ('glyphicons', 'glyphicons', 'Glyphicons'), ('fontawesome', 'fa', 'Font Awesome'), # custom iconsets have to be JSON ('{"iconClass": "icon", "iconClassFix": "icon-", "icons": [...]}', 'icon', 'Custom Font Icons'), ('{"svg": true, "spritePath": "sprites/icons.svg", "iconClass": "icon", "iconClassFix": "icon-", "icons": [...]}', 'icon', 'Custom SVG Icons'), ]The default grid size is set to24when validating the column input, you can override this by setting:ALDRYN_BOOTSTRAP3_GRID_SIZE = 12Running TestsYou can run tests by executing:virtualenv env source env/bin/activate pip install -r tests/requirements.txt python setup.py test
aldryn-bootstrap3-resurrected
ResurrectedWhile this has been deprecated by Divio/Aldryn… I have resurrected it so we can upgrade some old applications still using it.pip installaldryn-bootstrap3-resurrectedDeprecatedThis project has been succeeded bydjangocms-bootstrap4, and is no longer supported.Divio will undertake no further development or maintenance of this project. If you are interested in taking responsibility for this project as its maintainer, please contact us via www.divio.com.Aldryn Bootstrap3Aldryn Bootstrap 3is a plugin bundle for django CMS providing several components from the popularBootstrap 3framework.This addon is compatible withDivio Cloudand is also available on thedjango CMS Marketplacefor easy installation.ContributingThis is a an open-source project. We’ll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review ourcontribution guidelines.One of the easiest contributions you can make is helping to translate this addon onTransifex.DocumentationSeeREQUIREMENTSin thesetup.pyfile for additional dependencies:Python 2.7, 3.3 or higherDjango 1.6 or higherDjango Filer 1.2.4 or higherDjango Text CKEditor 3.1.0 or higherMake suredjango Fileranddjango CMS Text CKEditorare installed and configured appropriately.InstallationFor a manual install:runpip installaldryn-bootstrap3addaldryn_bootstrap3to yourINSTALLED_APPSrunpython manage.py migrate aldryn_bootstrap3ConfigurationAldryn Bootstrap 3replacesthe following django CMS plugins:django CMS Link:Link and Buttondjango CMS Picture:Imagedjango CMS File:FileIt provides the followingstandardBootstrap 3 components:AccordionAlertBlockquoteCarouselCodeGrid (Row and Column)GlyphiconsJumbotronLabelList GroupPanel (Heading, Body and Footer)ResponsiveTabsWellIt also provides the following3rd partycomponents:Font AwesomeSpacerThese components need to be manually configured in order to work properly inside your project. Seethis gistfor additional information on a recommended spacer configuration.SettingsThis addon provides astandardtemplate for Carousels. You can provide additional style choices by adding aALDRYN_BOOTSTRAP3_CAROUSEL_STYLESsetting:ALDRYN_BOOTSTRAP3_CAROUSEL_STYLES = [ ('feature', _('Featured Version')), ]You’ll need to create thefeaturefolder insidetemplates/aldryn_bootstrap/plugins/carousel/otherwise you will get atemplate does not existerror. You can do this by copying thestandardfolder inside that directory and renaming it tofeature.In addition you can set or extend your own icon fonts usingALDRYN_BOOTSTRAP3_ICONSETS:ALDRYN_BOOTSTRAP3_ICONSETS = [ ('glyphicons', 'glyphicons', 'Glyphicons'), ('fontawesome', 'fa', 'Font Awesome'), # custom iconsets have to be JSON ('{"iconClass": "icon", "iconClassFix": "icon-", "icons": [...]}', 'icon', 'Custom Font Icons'), ('{"svg": true, "spritePath": "sprites/icons.svg", "iconClass": "icon", "iconClassFix": "icon-", "icons": [...]}', 'icon', 'Custom SVG Icons'), ]The default grid size is set to24when validating the column input, you can override this by setting:ALDRYN_BOOTSTRAP3_GRID_SIZE = 12Running TestsYou can run tests by executing:virtualenv env source env/bin/activate pip install -r tests/requirements.txt python setup.py test
aldryn-categories
No description available on PyPI.
aldryn-client
Installing

    pip install aldryn-client

Using the client

For more information see http://docs.aldryn.com/en/latest/tutorial/commandline/installation.html

Releasing the binary

All of the binaries have to be built on the operating systems they're being built for.

OS X

Native:

    ./scripts/build-unix

Linux

Native:

    ./scripts/build-unix

With Docker:

    docker-compose build
    docker-compose run --rm builder

Windows

Connect to a Windows VM (the only requirement is Python 2.7) and open a PowerShell:

    .\scripts\build-windows.ps1
aldryn-common
No description available on PyPI.
aldryn-dashboard
UNKNOWN
aldryn-disqus
Integrate Disqus into your django CMS and Aldryn projects, and allow users to comment on and discuss content on your site.

Disqus is one of the most popular commenting systems available. It's especially suited to news and weblog content, but can be applied anywhere that you'd like to provide discussion functionality. With the Aldryn Disqus Addon you can integrate it into your projects and start building an online community in just a few simple steps. See it in action on the django CMS weblog.

Installation

Aldryn Installation

- Choose a site you want to install the add-on to from the dashboard. Then go to Apps -> Install app and click Install next to the Aldryn Disqus app.
- Ensure you correctly set the DISQUS_SHORTNAME setting in the control panel to the identifier you configured for your project at Disqus.
- Redeploy the site.

Manual Installation

- Add 'aldryn_disqus' to your project's settings and run python manage.py migrate.
- Add DISQUS_SHORTNAME = 'projectname' to your settings, where projectname is the identifier you configured for your project at Disqus.

Usage

After configuring your Disqus account at https://disqus.com/, simply add the Aldryn Disqus plugin on the desired page(s) in your project.
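Putting the manual steps together, a minimal settings sketch (the shortname value is a placeholder for whatever identifier you registered at Disqus):

    # settings.py -- minimal sketch for a manual install
    INSTALLED_APPS = [
        # ... your existing django CMS apps ...
        'aldryn_disqus',
    ]

    # the identifier you configured for your project at Disqus
    DISQUS_SHORTNAME = 'projectname'

followed by python manage.py migrate.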
aldryn-django
An opinionated Django setup bundled as a Divio Cloud addon. This package will auto-configure Django, including the admin and some other basic packages. It also handles sane configuration of the database connection and static and media files. The goal is to keep the footprint inside the Django website project as small as possible, so updating things usually just means bumping a version in requirements.txt and no other changes in the project. This addon still uses the legacy "Aldryn" naming. You can read more about this in our support section.

Contributing

This is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our contribution guidelines. We're grateful to all contributors who have helped create and maintain this package. Contributors are listed at the contributors section.

Documentation

See REQUIREMENTS in the setup.py file for additional dependencies:

Installation

Nothing to do. aldryn-django is part of the Divio Cloud platform.

For a manual install:

Important: please follow the setup instructions for installing aldryn-addons first!

- Add aldryn-django to your project's requirements.txt or pip install it. The version is made up of the Django release with an added digit for the release version of this package itself.
- If you followed the aldryn-addons installation instructions, you should already have an ALDRYN_ADDONS setting. Add aldryn-django to it:

    INSTALLED_ADDONS = [
        'aldryn-django',
    ]

- Create the addons/aldryn-django directory at the same level as your manage.py. Then copy addon.json and aldryn_config.py from the matching source code into it.
- Also create a settings.json file in the same directory with the following content:

    {
        "languages": "[\"en\", \"de\"]"
    }

Note: the need to manually copy aldryn_config.py and addon.json is due to legacy compatibility with the Divio Cloud platform and will no longer be necessary in a later release of aldryn-addons.

Configuration

aldryn-django comes with entrypoints for manage.py and wsgi.py. This makes it possible to have just a small snippet of code in the website project that should never change inside those files. The details of local project setup (e.g. reading environment variables from a .env file) are then up to the currently installed version of aldryn-django. Other opinionated things can be done as well, like using a production-grade WSGI middleware to serve static and media files.

Put this in manage.py:

    #!/usr/bin/env python
    import os
    from aldryn_django import startup

    if __name__ == "__main__":
        startup.manage(path=os.path.dirname(os.path.abspath(__file__)))

Put this in wsgi.py:

    import os
    from aldryn_django import startup

    application = startup.wsgi(path=os.path.dirname(__file__))

APIs

Migrations

To run migrations, call the command aldryn-django migrate. This will run a series of commands for the migration stage of a project. aldryn-django will run python manage.py migrate, but any addon can add steps to this migration stage by appending commands to the MIGRATION_COMMANDS setting. For example aldryn-cms (django CMS as an addon) will run python manage.py cms fix-tree at the migration stage.

Production Server

Calling aldryn-django web will start an opinionated Django setup for production (currently uWSGI-based).

Running Tests

You can run tests by executing:

    virtualenv env
    source env/bin/activate
    pip install -r tests/requirements.txt
    python setup.py test
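As an illustration of the MIGRATION_COMMANDS hook described above, an addon can append its own command in its aldryn_config.py. This is only a sketch: it assumes the Form/to_settings convention used by Divio addons, and my_addon_post_migrate is a made-up management command:

    # aldryn_config.py of a hypothetical addon -- sketch only
    from aldryn_client import forms


    class Form(forms.BaseForm):

        def to_settings(self, data, settings):
            # MIGRATION_COMMANDS is the list of shell commands that
            # "aldryn-django migrate" runs at the migration stage
            settings.setdefault('MIGRATION_COMMANDS', []).append(
                'python manage.py my_addon_post_migrate'  # hypothetical command
            )
            return settings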
aldryn-django-cms
No description available on PyPI.
aldryn-events
Aldryn Events is an Aldryn-compatible application for publishing information about events in django CMS. Please see the official Aldryn Events documentation, which includes information on installation and getting started. It also contains documentation for content editors and end-users.

Contributing

Aldryn Events is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our guidelines for Aldryn addons.

Requirements

- Python 2.7, 3.4, 3.5
- django CMS 3.4 or later
- Django 1.8 - 1.11
aldryn-faq
Aldryn FAQ is an Aldryn-compatible simple Frequently Asked Questions (FAQ) application for django CMS. It allows you to present categorized lists of frequently asked questions and their answers.

Content editors looking for documentation on how to use the editing interface should refer to our user manual section. Django developers who want to learn more about django CMS, as well as how to install, configure and customize it for their own projects, should refer to the documentation sections.

Installation & Updates

Please head over to our documentation for all the details on how to install, configure and use Aldryn FAQ. You can also find instructions on how to upgrade from earlier versions.

Contributing

This is a community project. We love to get any feedback in the form of issues and pull requests. Before submitting your pull request, please review our guidelines for Aldryn addons.
aldryn-forms
Aldryn Forms allows you to build flexible HTML forms for your Aldryn and django CMS projects, and to integrate them directly in your pages. Forms can be assembled using the form builder, with the familiar simple drag-and-drop interface of the django CMS plugin system. Submitted data is stored in the Django database, and can be explored and exported using the admin, while forms can be configured to send a confirmation message to users.

Contributing

This is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our contribution guidelines. We're grateful to all contributors who have helped create and maintain this package. Contributors are listed at the contributors section.

Installation

Aldryn Platform Users

- Choose a site you want to install the add-on to from the dashboard. Then go to Apps -> Install app and click Install next to the Forms app.
- Redeploy the site.

Upgrading from < 2.0

Version 2.0 introduced a new model for form data storage called FormSubmission. The old FormData model has been deprecated. Although the FormData model's data is still accessible through the admin, all new form data will be stored in the new FormSubmission model.

Manual Installation

- Run pip install aldryn-forms.
- Update INSTALLED_APPS with:

    INSTALLED_APPS = [
        ...
        'absolute',
        'aldryn_forms',
        'aldryn_forms.contrib.email_notifications',
        'emailit',
        'filer',
        ...
    ]

- Configure aldryn-boilerplates (https://pypi.python.org/pypi/aldryn-boilerplates/). To use the old templates, set ALDRYN_BOILERPLATE_NAME='legacy'. To use https://github.com/aldryn/aldryn-boilerplate-standard (recommended, will be renamed to aldryn-boilerplate-bootstrap3) set ALDRYN_BOILERPLATE_NAME='bootstrap3'.
- Also ensure you define an e-mail backend for your app, as sketched at the end of this entry.

Creating a Form

You can create forms in the admin interface now. Search for the label Aldryn_Forms. Create a CMS page and install the Forms app there (choose Forms from the Advanced Settings -> Application dropdown). Now redeploy/restart the site again. The above CMS page has become a forms POST landing page - a place where submission errors get displayed if there are any.

Available Plug-ins

- Form plugin lets you embed certain forms on a CMS page.
- Fieldset groups fields.
- Text Field renders a text input.
- Text Area Field renders a text area.
- Yes/No Field renders a checkbox.
- Select Field renders a single select input.
- Multiple Select Field renders multiple checkboxes.
- File field renders a file upload input.
- Image field is the same as the file field but validates that the uploaded file is an image.
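The manual installation above asks for an e-mail backend so that confirmation messages can be sent. A minimal sketch using Django's standard SMTP settings (host, credentials and addresses are placeholders):

    # settings.py -- e-mail backend sketch for form notifications
    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
    EMAIL_HOST = 'smtp.example.com'              # placeholder
    EMAIL_PORT = 587
    EMAIL_USE_TLS = True
    EMAIL_HOST_USER = 'mailer@example.com'       # placeholder
    EMAIL_HOST_PASSWORD = 'change-me'            # placeholder
    DEFAULT_FROM_EMAIL = 'noreply@example.com'   # placeholder

During development, Django's console backend ('django.core.mail.backends.console.EmailBackend') prints messages instead of sending them.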
aldryn-forms-recaptcha-plugin
Aldryn Forms Recaptcha Plugin

This Python module is open-source, available here: https://gitlab.com/what-digital/aldryn-forms-recaptcha-plugin/

Setup

    pip install aldryn-forms-recaptcha-plugin

Add the following to your settings.py (env() is not defined by this snippet; it stands for a helper that reads an environment variable with a fallback default, e.g. a small wrapper around os.environ.get):

    INSTALLED_APPS = [
        'aldryn_forms_recaptcha_plugin',
        'snowpenguin.django.recaptcha3',  # must be below the plugin
    ]

    RECAPTCHA_PUBLIC_KEY = env('RECAPTCHA_PUBLIC_KEY', '123')
    RECAPTCHA_PRIVATE_KEY = env('RECAPTCHA_PRIVATE_KEY', '123')

    # set this to 0 (or 1) to deactivate (or always activate) the captcha protection
    RECAPTCHA_SCORE_THRESHOLD = 0.85

If you're using bootstrap4, beware that django renders the form errors with class invalid-feedback, which is invisible in bs4.

Versioning and Packages

- Versioning is done in aldryn_forms_recaptcha_plugin/__init__.py.
- For each version a tag is added to the gitlab repository in the form of ^(\d+\.)?(\d+\.)?(\*|\d+)$, example: 0.0.10
- There is a PyPI version which relies on the gitlab tags (the download_url relies on correct gitlab tags being set): https://pypi.org/project/aldryn-forms-recaptcha-plugin/
- There is a DjangoCMS / Divio Marketplace add-on which also relies on the gitlab tags: https://marketplace.django-cms.org/en/addons/browse/aldryn-forms-recaptcha-plugin/

In order to release a new version of the Divio add-on:

- Increment the version number in addons-dev/aldryn-forms-recaptcha-plugin/aldryn_forms_recaptcha_plugin/__init__.py
- divio addon validate
- divio addon upload

Then git add, commit and tag with the version number and push to the repo:

    git add .
    git commit -m "<message>"
    git tag 0.0.XX
    git push origin 0.0.19

Then, in order to release a new pypi version:

    python3 setup.py sdist bdist_wheel
    twine upload --repository-url https://test.pypi.org/legacy/ dist/*
    twine upload dist/*

Development

Run pip install -e ../aldryn-forms-recaptcha-plugin/ in your demo project. You can open aldryn_forms_recaptcha_plugin in pycharm and set the python interpreter of the demo project to get proper django support and code completion.

Dependencies

- aldryn_forms
aldryn-gallery
Aldryn Gallery is built on the plugin-in-plugin principle provided by django CMS since version 3.0.

Installation

- pip install aldryn-gallery
- Add aldryn_gallery to INSTALLED_APPS.
- Configure aldryn-boilerplates (https://pypi.python.org/pypi/aldryn-boilerplates/). To use the old templates, set ALDRYN_BOILERPLATE_NAME='legacy'. To use https://github.com/aldryn/aldryn-boilerplate-standard (recommended, will be renamed to aldryn-boilerplate-bootstrap3) set ALDRYN_BOILERPLATE_NAME='bootstrap3'.
- When using the legacy boilerplate, jQuery and classjs (cl.gallery) are required.
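Putting those steps together, a minimal settings sketch might look like this (the aldryn-boilerplates loader configuration itself is omitted; see its documentation for the full setup):

    # settings.py -- minimal sketch for aldryn-gallery
    INSTALLED_APPS = [
        # ... django CMS and aldryn-boilerplates apps ...
        'aldryn_gallery',
    ]

    ALDRYN_BOILERPLATE_NAME = 'bootstrap3'  # or 'legacy' for the old templates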
aldryn-gallery-timed
UNKNOWN
aldryn-installer
Command to easily bootstrap django CMS projects. Free software: BSD license.

Features

aldryn-installer is a console wizard to help bootstrapping a django CMS project. Refer to the django CMS Tutorial on how to properly set up your first django CMS project.

Installation

- Create an empty virtualenv:

    virtualenv /virtualenv/path/my_project

- Install aldryn-installer:

    pip install aldryn-installer

  or:

    pip install -e git+https://github.com/nephila/aldryn-installer#egg=aldryn-installer

Documentation

See http://aldryn-installer.readthedocs.org

Caveats

While this wizard tries to handle most things for you, it doesn't check that all the required native (non-Python) libraries are installed. Before running this, please check you have the proper headers and libraries installed and available for the packages to be installed. Libraries you would want to check:

- libjpeg (for JPEG support in Pillow)
- zlib (for PNG support in Pillow)
- postgresql (for psycopg)
- libmysqlclient (for Mysql-Python)

History

0.1.0 (2013-10-19)

- First public release.

0.1.1 (2013-10-20)

- Improved documentation on how to fix installation in case of missing libraries.
aldryn-jobs
Aldryn Jobs is an Aldryn-compatible django CMS application for publishing job openings and receiving applications. Please see the Aldryn Jobs documentation, which includes information on installation and getting started. It also contains documentation for content editors and end-users.

Contributing

Aldryn Jobs is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our guidelines for Aldryn Addons.

Requirements

- django CMS 3.4+
- Django 1.8 - 1.11
aldryn-lightslider
aldryn-lightslider

Aldryn-fied lightslider djangocms plugin.

Lightslider: http://sachinchoolur.github.io/lightslider/index.html
aldryn-locations
Aldryn Locations is the easiest way to integrate Google Maps into Aldryn and django CMS sites via Google's API. It's fully featured, and includes several plugins to provide support for:

- multiple locations
- location information
- routes and directions
- searching

Aldryn Platform Users

- Choose a site you want to install the add-on to from the dashboard. Then go to Apps -> Install app and click Install next to the Locations app.
- Redeploy the site.

Manual Installation

- pip install aldryn-locations
- Add aldryn_locations to INSTALLED_APPS and run manage.py migrate aldryn_locations.
- Add ALDRYN_LOCATIONS_GOOGLEMAPS_APIKEY to your settings.py using the key provided by Google (see the sketch after this section).
- Configure aldryn-boilerplates (https://pypi.python.org/pypi/aldryn-boilerplates/). To use the old templates, set ALDRYN_BOILERPLATE_NAME='legacy'. To use https://github.com/aldryn/aldryn-boilerplate-standard (recommended, will be renamed to aldryn-boilerplate-bootstrap3) set ALDRYN_BOILERPLATE_NAME='bootstrap3'.

Plugins

Aldryn Locations offers five different plugins. The first is the Map plugin, which works with the Google Maps JavaScript API. The other four (Place, Directions, Search and View) are based on Google's embed maps (https://developers.google.com/maps/documentation/embed/guide).

Place

Place mode displays a map pin at a particular place or address, such as a landmark, business, geographic feature, or town.

Directions

Directions mode displays the path between two or more specified points on the map, as well as the distance and travel time.

Search

Search mode displays results for a search across the visible map region. It's recommended that a location for the search be defined, either by including a location in the search term (record stores in Seattle) or by including a center and zoom parameter to bound the search.

View

View mode returns a map with no markers or directions, based on the search term.
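A minimal sketch of the manual settings described above (the API key value is a placeholder; use the key provided by Google):

    # settings.py -- minimal sketch for aldryn-locations
    INSTALLED_APPS = [
        # ... django CMS and aldryn-boilerplates apps ...
        'aldryn_locations',
    ]

    ALDRYN_LOCATIONS_GOOGLEMAPS_APIKEY = 'AIza...'  # placeholder
    ALDRYN_BOILERPLATE_NAME = 'bootstrap3'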
aldryn-mailchimp
Aldryn MailChimp is the easiest way to integrate MailChimp into Aldryn and django CMS sites. With Aldryn MailChimp you can:

- allow users to subscribe to mailing lists
- display existing campaigns

To activate MailChimp integration:

- provide an API Key while installing the app
- create a CMS page for hooking the app (navigate to Advanced Settings -> Application on the CMS page edit view)

When all of the above is done, you can add MailChimp integration plugins to placeholders.

Categories + Automatic Matching

Version 0.2.4 introduced categories with automatic matching. You can define categories and add keywords to those categories to automatically sort synced campaigns into categories. You can define priorities for both campaigns and their keywords.

Matching

Once the campaigns have been fetched, the automatic matcher will go through all categories (starting from the top as defined in /admin/aldryn_mailchimp/category/) and scan each campaign for the defined keywords. You can specify keywords to be searched in any or multiple of the following three:

- campaign title
- campaign subject
- campaign content

Once a match is found, the search for the current campaign will be stopped, the found category will be assigned to the campaign, and the matcher will then continue with the next campaign.
aldryn-news
Simple news application. It allows you to:

- write taggable news entries
- plug in the latest news entries (optionally filtered by tags)
- attach a news archive view

Installation

Aldryn Platform Users

- Choose a site you want to install the add-on to from the dashboard. Then go to Apps -> Install app and click Install next to the News app.
- Redeploy the site.

Manual Installation

- Run pip install aldryn-news.
- Add the apps below to INSTALLED_APPS:

    INSTALLED_APPS = [
        ...
        'taggit',
        'aldryn_news',
        'aldryn_search',
        'django_select2',
        'djangocms_text_ckeditor',
        'easy_thumbnails',
        'filer',
        'hvad',
        'haystack',  # for search
        ...
    ]

Posting news

You can add news in the admin interface now. Search for the label Aldryn_News. In order to display them, create a CMS page and install the app there (choose News from the Advanced Settings -> Application dropdown). Now redeploy the site again. The above CMS page has become a news archive view.

Available Plug-ins

The Latest News Entries plugin lets you list the n most recent news entries, optionally filtered by tags.

Search

If you want the news entries to be searchable, be sure to install aldryn-search and its dependencies. Your entries will be searchable using django-haystack. You can turn this behavior off by setting ALDRYN_NEWS_SEARCH = False in your Django settings.
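If you do enable search, django-haystack itself also needs a connection configured. A minimal development sketch using haystack's simple backend (a production site would typically use Solr or Elasticsearch instead):

    # settings.py -- development-only search backend sketch
    HAYSTACK_CONNECTIONS = {
        'default': {
            'ENGINE': 'haystack.backends.simple_backend.SimpleEngine',
        },
    }

    # or opt out of indexing news entries entirely:
    ALDRYN_NEWS_SEARCH = False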
aldryn-newsblog
Aldryn News & Blog is an Aldryn-compatible news and weblog application for django CMS.

Content editors looking for documentation on how to use the editing interface should refer to our user manual section. Django developers who want to learn more about django CMS, as well as how to install, configure and customize it for their own projects, should refer to the documentation sections. Aldryn News & Blog is intended to serve as a model of good practice for development of django CMS and Aldryn applications.

Installation & Updates

Please head over to our documentation for all the details on how to install, configure and use Aldryn News & Blog. You can also find instructions on how to upgrade from earlier versions.

Contributing

This is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our contribution guidelines. We're grateful to all contributors who have helped create and maintain this package. Contributors are listed at the contributors section.
aldryn-newsblog-extra-plugins
aldryn-newsblog-extra-plugins

This project contains extra plugins for the aldryn-newsblog blogging system for DjangoCMS. It requires aldryn-newsblog to be installed and set up correctly.

Configuration

Plugins that show a list of articles can be configured to show template choices, inspired by aldryn-events. To add, for example, a template named list, save it at templates/aldryn-newsblog/plugins/(template name) and add the following to your settings.py:

    ALDRYN_NEWSBLOG_PLUGIN_STYLES = ('list',)

Plugins

NewsBlogTaggedArticlesPlugin

Shows n or all articles that are tagged with a certain tag. This plugin shows style choices as described in "Configuration".

NewsBlogTagRelatedPlugin

Shows n or all articles that are tagged similarly to the currently displayed article. This plugin only works correctly in a static placeholder on the aldryn_newsblog detail view.

Changelog

0.1.0

- NewsBlogTaggedArticlesPlugin now uses a separate template (aldryn_newsblog/plugins/tagged_articles.html).

0.2.0

- Added NewsBlogCategoryRelatedPlugin.
- Changed NewsBlogTagRelatedPlugin behaviour: the plugin's excluded_tags now don't exclude articles from the queryset anymore; they just aren't taken into account for selecting the related articles of the currently displayed article.
aldryn-people
Aldryn People allows you to:

- add people and groups of people to your website
- display them on CMS pages
- download vCards

Please see the Aldryn People documentation, which includes information on installation and getting started. It also contains documentation for content editors and end-users.

Contributing

Aldryn People is an open-source project. We'll be delighted to receive your feedback in the form of issues and pull requests. Before submitting your pull request, please review our guidelines for Aldryn Addons. We're grateful to all contributors who have helped create and maintain this package. Contributors are listed at the contributions page.

Requirements

- Python 2.7, 3.4, 3.5, 3.6
- django CMS 3.4.5, 3.5, 3.6
- Django 1.11, 2.0, 2.1
aldryn-pypi-stats
UNKNOWN