# ProjectZero Undub
Undub project for Tecmo's Project Zero - the EU version (the first one) for the PS2.

## Why?
I first played Project Zero in 2010 and fell in love with everything about the game ...
well, everything except the English audio. I soon started making my own undubbed version
by merging the Japanese release (Zero) with the European one. At the time, it was an
entertaining journey of nights spent disassembling the main ELF, reverse-engineering
the game data, and transcribing the game audio from FMVs and cutscenes. It was fun,
and I can safely say that I learned a lot from that experience.
## So, why now?
By chance, I stumbled on [wagrenier's GitHub page](https://github.com/wagrenier/ZeroUndub)
of the very same project more than 10 years later. That made me remember the good old
times, and I suddenly felt the urge to rewrite my old and ugly C code into modern Python,
with a lot more additions! In fact, the code was initially a mess of uncoordinated tools
that had to be run by hand. It produced a bunch of files (not a nice bootable ISO) that
required a not-so-easy-to-find program to be converted into an ISO image. Luckily, things
are a lot better now!
## What can it do?
With this code, it's possible to merge the European and Japanese versions of the game into
an undubbed European version, that is, a European version with all the audio / voices /
FMVs taken from the Japanese one. The original game was localized into 5 languages: English,
French, German, Spanish, and Italian. All languages share the same English audio but have
localized text and graphics. All languages except English have subtitles because, for some
reason, the developers decided not to include English subtitles in the English localization.
That is understandable but leaves the undubber with a severe problem since, once the English
audio is replaced with the Japanese one, subtitles become slightly necessary unless you are
fluent in Japanese. Still, I would argue that you are probably better off playing the original
Japanese game at that point.
This code re-enables the English subtitles and reconstructs the localized English file from
scratch, re-injecting the subtitles. I say re-injecting because the original English
localization does not have the English text of each FMV or in-game cutscene. Since they were
not to be shown in the first place, why bother? So a part of the allocated space for the text
has been filled with dummy text. But only a part of it. There are, in fact, 271 text entries
in each localization, but the English one has only 225 in it. By simply forcing the English
subtitles to show, the game WILL crash when trying to display the missing ones. By
reverse-engineering the localization binary file, it is possible to unpack it, replace the 225
dummy texts with the whole 271 English subtitles, and rebuild it into a perfectly working
English localization.
## Features
The idea of unpacking and reconstructing is a crucial aspect of this tool. This first
iteration of the Project Zero franchise packs all game-related files (except for FMVs) into
a huge 1.2GB binary file (IMG_BD.BIN). The file is accompanied by a small file (IMG_HD.BIN)
that serves as a table of contents. From what I understand, the standard undubbing procedure
consists of replacing the specific bytes of each content to undub directly in the ISO at the
correct position. For example, to replace the English localization, one would need to find
the exact offset in the ISO where the original one is stored and replace it, being very
careful not to exceed the original size. Doing so would overwrite data belonging to other
files, rendering the ISO corrupted or not properly working.

This tool takes a similar yet different approach: it recreates and replaces the whole
IMG_BD.BIN binary file with a new one containing the patched contents. To do so, it parses
the original IMG_BD.BIN and extracts all the files necessary for the undub process
(localization files, audio, etc.). These files are then replaced with the Japanese ones (or
patched ones, as for the English localization), and a new binary file that can be placed back
into the ISO is rebuilt from scratch. The only constraint, as previously said, is that the
new binary file has to be smaller than or equal in size to the original one.

Luckily, the guys at Tecmo decided to align each file in IMG_BD.BIN not just to an LBA
multiple of 2048 (the size of a sector in an ISO) but to an LBA multiple of 16 times 2048!
The reason probably lies in DVD access timings, but according to my tests, aligning the files
to a smaller multiple of 2048 does not incur access-timing problems. The only effect is a
reduction in the size of the resulting IMG_BD.BIN. In fact, by aligning files to LBAs that
are just multiples of 2048, it is possible to save around 30MB, which is plenty enough to
compensate for the extra few kB of the English localization and the difference in size
between some Japanese audio files and their equivalent English ones. This method effectively
removes *any* limitation on what can be injected into the European version! So, contrary to
other tools, this one does not need to sacrifice anything in the original game, say a
non-English language containing all 271 subtitles, to accommodate the extra English
subtitles. This results in a clean undubbed ISO where all the languages are fully functional!
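To make the padding arithmetic concrete, here is a small sketch. Only the 2048 and 16 × 2048 alignments come from the text above; the file size is made up for illustration:

```python
SECTOR = 2048  # size of a sector in an ISO, in bytes

def aligned_size(size: int, alignment: int) -> int:
    # Round `size` up to the next multiple of `alignment`.
    return -(-size // alignment) * alignment

# A hypothetical 100 KiB file, padded to the original 16-sector
# alignment versus plain single-sector alignment:
print(aligned_size(102_400, 16 * SECTOR))  # 131072 -> 28 KiB wasted as padding
print(aligned_size(102_400, SECTOR))       # 102400 -> no padding needed
```

Summed over the thousands of files packed in IMG_BD.BIN, that per-file padding is where the ~30MB of savings comes from.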
### In summary:
- The main ELF is patched to show the English subtitles
- English subtitles have been transcribed and injected into the English localization
- FMVs (both PAL and NTSC) have been remuxed with the Japanese audio track
- All other languages are left unchanged and functional
- The ELF can be further patched to add extra nice features! (see below)
### Extra features:
Usually, the game asks you to choose a language only the first time, when no save data is
present. On subsequent launches, the game starts in the previously selected language, forcing
anyone who would like to change the language to start the game without the memory card inserted.
The ELF can be patched to restore the language selection screen at every boot. The only
downside is that the game does not remember the video format preference, meaning that if
the game was previously being played in NTSC format, that option must be reselected every
time. This is an optional feature, so it is left to the user's preference.
Given the recent progress in PS2 emulation, specifically regarding the [PCSX2](https://pcsx2.net/)
emulator, the great community behind it has come up with nice enhancements, such as 16:9
(not stretched, but actual 16:9 in-game rendering) widescreen patches. Thanks to a tool
called "PS2 Patch Engine" developed by
[pelvicthrustman](https://www.psx-place.com/threads/ps2-patch-engine-by-pelvicthrustman.19167/)
and later ported to Linux ([PS2_Pnacher](https://github.com/Snaggly/PS2_Pnacher)) by
[Snaggly](https://github.com/Snaggly), it is possible to patch the ELF to incorporate some
of these patches. Obviously, I decided to include the relevant bits into the game ELF,
allowing the user to:
- Enable 16:9 aspect ratio in-game
- Enable 16:9 aspect ratio in FMVs by adding black borders instead of stretching (strongly
suggested if the 16:9 in-game patch is enabled, but the user is free to leave them stretched
if against the pillarbox effect)
- Remove in-game bloom effect
- Remove in-game dark filter
- Remove in-game noise effect
- Remove main menu noise effect
All the above community patches are optional, and the choice of which to enable is left
to the user of this software (although I recommend enabling them all, except for the main menu
noise effect, which I actually prefer to leave enabled).
## What's next?
During the complete rewrite of my old code, I rediscovered a lot of stuff about the game
files and their format. With a bit of effort, this information can be found on the internet,
but there is no centralized source for it all. With time, I would like to create a small
compendium of what I ended up researching about PK2 (game packages), STR (game audio), OBJ
localization files, font files, and other stuff as well. This repository already contains a
lot of code to handle a good part of them. Still, if possible, I would like to release
specific tools to address each one of them, along with the related documentation.
#### *Small Update*
*I've published some of the tools I used for the undub together with an initial documentation for some game formats here: [https://github.com/karas84/ProjectZeroTools](https://github.com/karas84/ProjectZeroTools).*
## Usage
The program comes as both a python3 command-line tool (with very few dependencies) and a tkinter GUI.
To run it, you will need:
- A European Project Zero (SLES508.21) ISO image dumped from your own original copy of the game
- A Japanese Zero (SLPS250.74) ISO image dumped from your own original copy of the game
- Python 3.7 or newer
The program has been developed on Linux, but it also works on Windows if both python
and the needed dependencies are correctly installed (either natively or on WSL, although in
the first versions of WSL disk access is quite a bit slower than native, so the
undubbing process may take more time to complete).
### Command Line Interface (CLI)
The command-line interface (CLI) can be launched using the `undub.py` file. It accepts 3
mandatory arguments:
- The path to the European ISO
- The path to the Japanese ISO
- The output path for the undubbed ISO
Additionally, several flags can be specified to patch the ELF with extra features. Please
refer to the program's help for further details.
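For illustration, an invocation might look like the following (the ISO file names are placeholders, and any optional flags should be taken from the program's `--help` output rather than from here):

```shell
python3 undub.py ProjectZero_EU.iso Zero_JP.iso ProjectZero_undub.iso
```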
### Graphical User Interface (GUI)
The graphical user interface (GUI) can be launched using the `undub_gui.py` file. It is
built upon the tkinter library, which may need to be installed manually (though it often comes
preinstalled with python3).
Instructions on how to perform the undub are shown in the "Info" section of the GUI.
## Copyright Disclaimer
This tool is supposed to be used with ISO files obtained from your own legally owned copies
of the games (both European and Japanese versions). I do not condone piracy and strongly
believe that game developers MUST be supported. It is for this precise reason that this
tool does not contain any asset whatsoever of the original game, because I believe that
only the copyright holder has the right to distribute the files. For the very same reason,
the missing English subtitles are not stored as plaintext English but are kept as
hex strings of a bitwise XOR against the original English localization binary file.
This way, the subtitles can be reconstructed and injected into the final ISO only by
using the original ISO file.
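As an illustration of how a reversible encoding of this kind can work, here is a sketch that assumes the operation is a plain byte-wise XOR against the original localization file (the function name and data below are made up; the repository's actual encoding may differ):

```python
def recover_text(stored_hex: str, original_bytes: bytes) -> bytes:
    # XOR the stored hex blob against the corresponding bytes of the
    # original localization file to recover the plaintext subtitles.
    blob = bytes.fromhex(stored_hex)
    return bytes(a ^ b for a, b in zip(blob, original_bytes))

# Round trip with made-up data: XOR-ing twice with the same key
# returns the original text.
secret = b"FMV subtitle line"
key = b"original file data bytes"
stored = bytes(a ^ b for a, b in zip(secret, key)).hex()
assert recover_text(stored, key) == secret
```

The point of the scheme is exactly this round trip: without the original file, the stored hex strings are meaningless.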
## Final Notes
I've tested the undubbed version on both PCSX2 and my own PS2. I've played it through
various spots using several save files and never encountered a problem. This does not
mean that there are no bugs, so if you find anything, please submit an issue here, and
I will try to address it!
Moreover, I decided to release this tool to let other people who share my love
for this game enjoy it to the fullest. I do not seek recognition, so
there are no modifications in the undubbed version that carry my name or any way to
trace back the undubbed game to me. What you get is an exact copy of the European version,
but with Japanese audio.
## Acknowledgements
This tool would not exist if it wasn't for the hard work of many individuals, to whom
I'd like to give my thanks:
- First, [wagrenier](https://github.com/wagrenier), for his
[ZeroUndub](https://github.com/wagrenier/ZeroUndub) project, which inspired me
to work on this software again
- [wagrenier](https://github.com/wagrenier) again, for his excellent python version of
[PssMux](https://github.com/wagrenier/PssMux), which enables this tool to automatically
undub all FMVs in the game!
- [wagrenier](https://github.com/wagrenier) again, for his help with replacing all game models with the Japanese ones
- [pgert](https://forums.pcsx2.net/Thread-PCSX2-Widescreen-Game-Patches?pid=240786#pid240786)
and the whole [PCSX2 community](https://forums.pcsx2.net/) for the great patches that can
be injected into the game
- [pelvicthrustman](https://www.psx-place.com/threads/ps2-patch-engine-by-pelvicthrustman.19167/)
and [Snaggly](https://github.com/Snaggly) for their tools to inject patches into the ELF
- [weirdbeardgame](https://github.com/weirdbeardgame/), for the Kirie camera bug fix
- All the guys out there on the many forums I've lost track of, who released bits of
information about many game file formats
- Finally, to 2010 me, who painstakingly spent weeks building his own undub, reverse-engineering
the game data and transcribing all FMVs and cutscenes
| zeroundub | /zeroundub-1.3.2.tar.gz/zeroundub-1.3.2/README.md | README.md |
ZeroVM Theme for Sphinx
=======================
This package bundles the ZeroVM_ theme for Sphinx_.
Install the package and add this to your ``conf.py``::

    import zerovm_sphinx_theme

    html_theme_path = [zerovm_sphinx_theme.theme_path]
    html_theme = 'zerovm'
That will configure the theme path correctly and activate the theme.
Changelog
---------
1.1 (2014-03-28):
    Version 1.0 did not work since ``README.rst`` wasn't distributed.

1.0 (2014-03-28):
    First release.
.. _zerovm: http://zerovm.org/
.. _sphinx: http://sphinx-doc.org/
| zerovm-sphinx-theme | /zerovm-sphinx-theme-1.1.tar.gz/zerovm-sphinx-theme-1.1/README.rst | README.rst |
# pyzerto-unofficial

## Overview
pyzerto-unofficial is a simple python3 API wrapper for the Zerto product by the eponymous corporation. It is intended to
simplify the deployment and management of Zerto in a code-driven manner. Potential uses for this API wrapper are as
diverse as you may wish to make it. An example script of automatically protecting tagged virtual machines (VMs) is
included in this library.
## Motivation and Audience
The official Zerto API is a REST-driven architecture. This wrapper was developed to reduce the barrier of entry for
anyone interested in automating Zerto tasks via Python by abstracting the effort of generating a session key,
constructing correctly formatted CRUD requests, and identifying the correct URL endpoints for any given API call. This
can benefit new Zerto users, Zerto partners deploying at-scale environments, and anyone with a Python and Zerto
background.
## Disclaimer
This is not an official Zerto product in any way, shape, or form. Support is community-driven and you should thoroughly
test your scripts to ensure that you do not break anything by utilizing pyzerto-unofficial. In other words, don't blame
us if it messes something up!!
## Usage
### The Zerto API, explained
The Zerto Virtual Manager (or Zerto Cloud Appliance) acts as the management plane for all things Zerto-related in your
environment. To use the Zerto API, you will need:
* The IP address of your target ZVM or ZCA and access from the machine running the Python library over TCP port 9669
* Credentials of a valid admin-level account for your ZVM or ZCA. For a ZVM, this usually means an admin- or
nearly-admin-level vCenter account, and for a ZCA, this is usually the Administrator account of the underlying Windows instance
or VM in AWS or Azure.
Using the IP address of the ZVM/ZCA and a valid user account, you can generate an API session token, which will then
need to be passed back to the ZVM/ZCA in the form of a properly formed header as well as a request body, if applicable,
in order to run API calls to do interesting things like get local site information, create a virtual protection group
(VPG) object, add VMs to said VPG, and so on. Furthermore, the ZVM/ZCA must have a valid license key installed in order
to successfully run API calls (except for generating an API session token and applying a license!)
### Using the wrapper
Start by executing zerto_auth.py. The function `login` accepts three arguments: `zvm_ip`, `zvm_user`, and
`zvm_password`. This function will return a `dict` with the properly formatted headers for running API commands against
the ZVM/ZCA, including a valid session token.
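For reference, a login helper of this shape typically POSTs basic-auth credentials to the ZVM's session endpoint and reads the token from the `x-zerto-session` response header. This is a sketch under that assumption, not necessarily the exact implementation of `zerto_auth.py`:

```python
import requests

def build_headers(token: str) -> dict:
    # Header dict expected by all subsequent API calls.
    return {'Accept': 'application/json',
            'Content-Type': 'application/json',
            'x-zerto-session': token}

def login(zvm_ip: str, zvm_user: str, zvm_password: str) -> dict:
    # The ZVM returns the session token in the 'x-zerto-session'
    # response header of the session endpoint.
    r = requests.post(f'https://{zvm_ip}:9669/v1/session/add',
                      auth=(zvm_user, zvm_password), verify=False)
    r.raise_for_status()
    return build_headers(r.headers['x-zerto-session'])
```

The returned dict is what you pass as the `headers` argument of every subsequent API call.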
From here, you can instantiate classes which can be found in the `zvm.py`, `vra.py`, and `vpg.py` modules. The theory behind
how the modules (and subsequent classes) were organized was based on the Zerto "object" that you would want to do
interesting things to. For example, if you want to get information about datastores visible to the ZVM, you will find
that method located in the `zvm` class under the `zvm` module. If you want to install or delete a VRA, you will find
`installVra` and `delVra` in the `vra` class under the `vra` module. Even though all API calls are executed against a
ZVM or ZCA, this modularization is intended to make sense to an admin.
#### A special note on Virtual Protection Groups
Virtual Protection Groups, or VPGs, require some specific clarification. To create and manage VPGs, you need to first
create a "VPG object", to which you will then add VMs and specify settings such as the length of the journal,
the networks to attach to during a live failover or a test failover, and the like. However, none of your changes will be
applied unless and until you *commit* the settings.
Once a VPG has been committed, the vpgSettings identifier *goes away*. Do not confuse the VPG identifier with a
vpgSettings identifier; these are two different things. Roughly speaking, you can think of the `vpgs` class in the `vpg`
module as anything having to do with existing VPG "containers" as a whole, and the `vpgSettings` class in the `vpg`
module as having to do with what's *inside* a VPG (with the exception of creating a VPG to begin with).
### Example: Adding a license
```python
from zertoapl import zvm
from zertoapl.zerto_auth import login

vczvmSession = login('10.0.10.50', '[email protected]', 'password')
z = zvm.zvm('10.0.10.50', vczvmSession)
z.addLicense('VALIDZERTOLICENSEKEYHERE')
```
### Example: Installing a VRA
```python
import json

from zertoapl import vra
from zertoapl.zerto_auth import login

vczvmSession = login('10.0.10.50', '[email protected]', 'password')
v = vra.vra('10.0.10.50', vczvmSession)
testVraDict = {
    "DatastoreIdentifier": "GUID-OF-VCENTER.DATASTOREMOREF",
    "HostIdentifier": "GUID-OF-VCENTER.HOSTMOREF",
    "HostRootPassword": "ESXIPASSWORD",
    "MemoryInGb": 3,
    "NumOfCpus": 1,
    "NetworkIdentifier": "GUID-OF-VCENTER.NETWORKMOREF",
    "UsePublicKeyInsteadOfCredentials": False,
    "PopulatePostInstallation": False,
    "VraNetworkDataApi": {
        "DefaultGateway": "192.168.1.1",
        "SubnetMask": "255.255.255.0",
        "VraIPAddress": "192.168.1.90",
        "VraIPConfigurationTypeApi": "Static"
    }
}
v.installVRA(json.dumps(testVraDict))
```
### Example: Pairing a site with another site
```python
import json, requests

from zertoapl import zvm
from zertoapl.zerto_auth import login

vczvmSession = login('10.0.10.50', '[email protected]', 'password')
awszcaSession = login('172.16.20.21', 'Administrator', 'password')
zvmOnsite = zvm.zvm('10.0.10.50', vczvmSession)
zcaInCloud = zvm.zvm('172.16.20.21', awszcaSession)
zcaTokenObject = zcaInCloud.generatePeeringToken()
zcaTokenActual = zcaTokenObject.json().get('Token')
pairOutput = requests.post('https://10.0.10.50:9669/v1/peersites',
                           headers=vczvmSession,
                           data=json.dumps({"HostName": '172.16.20.21',
                                            "Port": "9071",
                                            "Token": zcaTokenActual}),
                           verify=False)
```
### Example: Create a VPG
```python
import json

from zertoapl import vpg
from zertoapl.zerto_auth import login

vczvmSession = login('10.0.10.50', '[email protected]', 'password')
v = vpg.vpgSettings('10.0.10.50', vczvmSession)
vpgPayload = {
    "Basic": {
        "JournalHistoryInHours": 2,
        "Name": "TestVpg",
        "Priority": "Medium",
        "ProtectedSiteIdentifier": "IDENTIFIER1",
        "RecoverySiteIdentifier": "IDENTIFIER2",
        "RpoInSeconds": 600,
        "ServiceProfileIdentifier": None,
        "TestIntervalInMinutes": 0,
        "UseWanCompression": "True",
        "ZorgIdentifier": None
    },
    "BootGroups": {
        "BootGroups": [
            {
                "BootDelayInSeconds": 0,
                "BootGroupIdentifier": "00000000-0000-0000-0000-000000000000",
                "Name": "Default"
            }
        ]
    },
    "Journal": {
        "DatastoreIdentifier": "GUIDOFVCENTER.DATASTOREMOREF",
        "Limitation": {
            "HardLimitInMB": 0,
            "HardLimitInPercent": 0,
            "WarningThresholdInMB": 0,
            "WarningThresholdInPercent": 0
        }
    },
    "LongTermRetention": None,
    "Networks": {
        "Failover": {
            "Hypervisor": {
                "DefaultNetworkIdentifier": "GUIDOFVCENTER.NETWORKMOREF"
            }
        },
        "FailoverTest": {
            "Hypervisor": {
                "DefaultNetworkIdentifier": "GUIDOFVCENTER.NETWORKMOREF"
            }
        }
    },
    "Recovery": {
        "DefaultDatastoreClusterIdentifier": None,
        "DefaultDatastoreIdentifier": "GUIDOFVCENTER.DATASTOREMOREF",
        "DefaultFolderIdentifier": "GUIDOFVCENTER.FOLDERMOREF",
        "DefaultHostClusterIdentifier": None,
        "DefaultHostIdentifier": "GUIDOFVCENTER.HOSTMOREF",
        "ResourcePoolIdentifier": None
    },
    "Scripting": {
        "PostRecovery": {
            "Command": None,
            "Parameters": None,
            "TimeoutInSeconds": 0
        },
        "PreRecovery": {
            "Command": None,
            "Parameters": None,
            "TimeoutInSeconds": 0
        }
    },
    "Vms": []
}
vpgSettingsId = v.createNewVpgSettingsObject(json.dumps(vpgPayload))
v.commitSettingsObject(vpgSettingsId)
```
## Acknowledgements
I would like to acknowledge several people for assisting, either directly or indirectly, in the creation of this
library. Shaun Finn directly contributed to the zerto_auth and vra modules, and Wes Carroll provided insight and
assistance based on his experiences designing and developing his excellent PowerShell API wrapper. I would also like
to acknowledge Nick Costigan, Jacob Lucas and Chris Pacejo in providing their insight as professional developers.
| zertoapilib | /zertoapilib-0.0.1.tar.gz/zertoapilib-0.0.1/README.md | README.md |
import json
import requests
#from zerto_auth import testUrl, testHeaders
class vra():
    """
    The VRA class houses VRA specific methods

    ...

    Attributes
    ----------
    zvmurl : str
        the IP address of the target ZVM
    headerwithkey : dict
        a properly formatted dict containing the following key:value pairs:
        {
            'Accept': 'application/json',
            'Content-Type': 'application/json',
            'x-zerto-session': str-type containing valid session key generated with zerto_auth.py
        }
    vra_id : str
        unique identifier for individual VRA
    vra_dict : dict
        dictionary object for VRA inputs

    Methods
    -------
    infoAllVRAs()
        Get info on all VRAs.
    upgradeGroupVRAs()
        Upgrade a group of VRAs which are specified in the vra_id body
    upgradeVRA()
        Upgrade individual VRA
    installVRA()
        Install individual VRA
    editVRA()
        Edit individual VRA
    delVRA()
        Uninstall individual VRA
    """

    endPoint = '/vras'

    def __init__(self, zvmip, headerwithkey):
        """
        Parameters
        ----------
        zvmip : str
            The IP of the ZVM or ZCA
        headerwithkey : dict
            A properly formatted dict containing the following key:value pairs:
            {
                'Accept': 'application/json',
                'Content-Type': 'application/json',
                'x-zerto-session': str-type containing valid session key generated with zerto_auth.py
            }
        """
        self.zvmurl = 'https://' + zvmip + ':9669/v1'
        self.headerwithkey = headerwithkey

    def infoAllVRAs(self):
        """
        Returns information on all VRAs

        Returns
        -------
        type requests.models.Response object
        """
        return requests.get(self.zvmurl + self.endPoint, headers=self.headerwithkey, verify=False)

    def upgradeGroupVRAs(self, vra_id):
        """
        Upgrade a group of VRAs which are specified in the vra_id body

        Parameters
        ----------
        vra_id : dict, required
            A properly formatted dict containing the following key:value pairs:
            {
                "VraIdentifiers":
                [
                    "String content",
                    "String content",
                    ...
                    "String content"
                ]
            }

        Returns
        -------
        type requests.models.Response object
        """
        return requests.post(self.zvmurl + self.endPoint + "/upgrade", headers=self.headerwithkey, data=vra_id, verify=False)

    def upgradeVRA(self, vra_id):
        """
        Upgrade individual VRA

        Parameters
        ----------
        vra_id : str, required
            the unique identifier for the vra requiring upgrade

        Returns
        -------
        type requests.models.Response object
        """
        return requests.post(self.zvmurl + self.endPoint + "/" + vra_id + "/upgrade", headers=self.headerwithkey, verify=False)

    def installVRA(self, vra_dict):
        """
        Performs installation of a VRA

        Parameters
        ----------
        vra_dict : dict, required
            A properly formatted dict containing the following key:value pairs:
            {
                "DatastoreIdentifier": "String content",
                "GroupName": "String content",
                "HostIdentifier": "String content",
                "HostRootPassword": "String content",
                "MemoryInGb": 2,
                "NumOfCpus": 1,
                "NetworkIdentifier": "String content",
                "UsePublicKeyInsteadOfCredentials": Boolean,
                "PopulatePostInstallation": Boolean,
                "VraNetworkDataApi": {
                    "DefaultGateway": "String content",
                    "SubnetMask": "String content",
                    "VraIPAddress": "String content",
                    "VraIPConfigurationTypeApi": "String content"
                }
            }

        Returns
        -------
        type requests.models.Response object
        """
        return requests.post(self.zvmurl + self.endPoint, headers=self.headerwithkey, data=vra_dict, verify=False)

    def editVRA(self, vra_dict, vra_id):
        """
        Edit existing VRA

        Parameters
        ----------
        vra_dict : dict, required
            A properly formatted dict containing the following key:value pairs:
            {
                "GroupName": "String content",
                "HostRootPassword": "String content",
                "UsePublicKeyInsteadOfCredentials": Boolean,
                "VraNetworkDataApi": {
                    "DefaultGateway": "String content",
                    "SubnetMask": "String content",
                    "VraIPAddress": "String content",
                    "VraIPConfigurationTypeApi": "String content"
                }
            }
        vra_id : str, required
            the unique identifier for the vra requiring edit

        Returns
        -------
        type requests.models.Response object
        """
        return requests.put(self.zvmurl + self.endPoint + "/" + vra_id, headers=self.headerwithkey, data=vra_dict, verify=False)

    def delVRA(self, vra_id):
        """
        Uninstall individual VRA

        Parameters
        ----------
        vra_id : str, required
            the unique identifier for the vra requiring uninstall

        Returns
        -------
        type requests.models.Response object
        """
        return requests.delete(self.zvmurl + self.endPoint + "/" + vra_id, headers=self.headerwithkey, verify=False)
| zertoapilib | /zertoapilib-0.0.1.tar.gz/zertoapilib-0.0.1/zertoapl/vra.py | vra.py |
# pyzerto-unofficial

## Overview
pyzerto-unofficial is a simple python3 API wrapper for the Zerto product by the eponymous corporation. It is intended to
simplify the deployment and management of Zerto in a code-driven manner. Potential uses for this API wrapper are as
diverse as you may wish to make it. An example script of automatically protecting tagged virtual machines (VMs) is
included in this library.
## Motivation and Audience
The official Zerto API is a REST-driven architecture. This wrapper was developed to reduce the barrier of entry for
anyone interested in automating Zerto tasks via Python by abstracting the effort of generating a session key,
constructing correctly formatted CRUD requests, and identifying the correct URL endpoints for any given API call. This
can benefit new Zerto users, Zerto partners deploying at-scale environments, and anyone with a Python and Zerto
background.
## Disclaimer
This is not an official Zerto product in any way, shape, or form. Support is community-driven and you should thoroughly
test your scripts to ensure that you do not break anything by utilizing pyzerto-unofficial. In other words, don't blame
us if it messes something up!!
## Usage
### Installation
This package can be installed directly from PyPi or via git.
```shell
> pip install zertoapl
> pip install git+https://github.com/pilotschenck/pyzerto-unofficial.git@master
```
### The Zerto API, explained
The Zerto Virtual Manager (or Zerto Cloud Appliance) acts as the management plane for all things Zerto-related in your
environment. To use the Zerto API, you will need:
* The IP address of your target ZVM or ZCA and access from the machine running the Python library over TCP port 9669
* Credentials of a valid admin-level account for your ZVM or ZCA. For a ZVM, this usually means an admin- or
nearly-admin-level vCenter account, and for a ZCA, this is usually the Administrator account of the underlying Windows instance
or VM in AWS or Azure.
Using the IP address of the ZVM/ZCA and a valid user account, you can generate an API session token, which will then
need to be passed back to the ZVM/ZCA in the form of a properly formed header as well as a request body, if applicable,
in order to run API calls to do interesting things like get local site information, create a virtual protection group
(VPG) object, add VMs to said VPG, and so on. Furthermore, the ZVM/ZCA must have a valid license key installed in order
to successfully run API calls (except for generating an API session token and applying a license!)
### Using the wrapper
Start by executing zerto_auth.py. The function `login` accepts three arguments: `zvm_ip`, `zvm_user`, and
`zvm_password`. This function will return a `dict` with the properly formatted headers for running API commands against
the ZVM/ZCA, including a valid session token.
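For reference, a login helper of this shape typically POSTs basic-auth credentials to the ZVM's session endpoint and reads the token from the `x-zerto-session` response header. This is a sketch under that assumption, not necessarily the exact implementation of `zerto_auth.py`:

```python
import requests

def build_headers(token: str) -> dict:
    # Header dict expected by all subsequent API calls.
    return {'Accept': 'application/json',
            'Content-Type': 'application/json',
            'x-zerto-session': token}

def login(zvm_ip: str, zvm_user: str, zvm_password: str) -> dict:
    # The ZVM returns the session token in the 'x-zerto-session'
    # response header of the session endpoint.
    r = requests.post(f'https://{zvm_ip}:9669/v1/session/add',
                      auth=(zvm_user, zvm_password), verify=False)
    r.raise_for_status()
    return build_headers(r.headers['x-zerto-session'])
```

The returned dict is what you pass as the `headers` argument of every subsequent API call.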
From here, you can instantiate classes which can be found in the `zvm.py`, `vra.py`, and `vpg.py` modules. The theory behind
how the modules (and subsequent classes) were organized was based on the Zerto "object" that you would want to do
interesting things to. For example, if you want to get information about datastores visible to the ZVM, you will find
that method located in the `zvm` class under the `zvm` module. If you want to install or delete a VRA, you will find
`installVra` and `delVra` in the `vra` class under the `vra` module. Even though all API calls are executed against a
ZVM or ZCA, this modularization is intended to make sense to an admin.
#### A special note on Virtual Protection Groups
Virtual Protection Groups, or VPGs, require some specific clarification. To create and manage VPGs, you need to first
create a "VPG object", to which you will then add VMs and specify settings such as the length of the journal,
the networks to attach to during a live failover or a test failover, and the like. However, none of your changes will be
applied unless and until you *commit* the settings.
Once a VPG has been committed, the vpgSettings identifier *goes away*. Do not confuse the VPG identifier with a
vpgSettings identifier; these are two different things. Roughly speaking, you can think of the `vpgs` class in the `vpg`
module as anything having to do with existing VPG "containers" as a whole, and the `vpgSettings` class in the `vpg`
module as having to do with what's *inside* a VPG (with the exception of creating a VPG to begin with).
### Example: Adding a license
```python
from zertoapl.zvm import zvm
from zertoapl.zerto_auth import login

vczvmSession = login('10.0.10.50', '[email protected]', 'password')
z = zvm('10.0.10.50', vczvmSession)
z.addLicense('VALIDZERTOLICENSEKEYHERE')
```
### Example: Installing a VRA
```python
import json
from zertoapl.vra import vra
from zertoapl.zerto_auth import login
vczvmSession = login('10.0.10.50', '[email protected]', 'password')
v = vra('10.0.10.50', vczvmSession)
testVra = json.dumps({
    "DatastoreIdentifier": "GUID-OF-VCENTER.DATASTOREMOREF",
    "HostIdentifier": "GUID-OF-VCENTER.HOSTMOREF",
    "HostRootPassword": "ESXIPASSWORD",
    "MemoryInGb": 3,
    "NumOfCpus": 1,
    "NetworkIdentifier": "GUID-OF-VCENTER.NETWORKMOREF",
    "UsePublicKeyInsteadOfCredentials": False,
    "PopulatePostInstallation": False,
    "VraNetworkDataApi": {
        "DefaultGateway": "192.168.1.1",
        "SubnetMask": "255.255.255.0",
        "VraIPAddress": "192.168.1.90",
        "VraIPConfigurationTypeApi": "Static"
    }
})
v.installVRA(testVra)
```
### Example: Pairing a site with another site
```python
import json, requests
from zertoapl.zerto_auth import login
from zertoapl.zvm import zvm
vczvmSession = login('10.0.10.50', '[email protected]', 'password')
awszcaSession = login('172.16.20.21', 'Administrator', 'password')
zvmOnsite = zvm('10.0.10.50', vczvmSession)
zcaInCloud = zvm('172.16.20.21', awszcaSession)
zcaTokenObject = zcaInCloud.generatePeeringToken()
zcaTokenActual = zcaTokenObject.json().get('Token')
pairOutput = requests.post('https://10.0.10.50:9669/v1/peersites',
                           headers=vczvmSession,
                           data=json.dumps({"HostName": '172.16.20.21',
                                            "Port": "9071",
                                            "Token": zcaTokenActual}),
                           verify=False)
```
### Example: Create a VPG
```python
import json
from zertoapl.zerto_auth import login
from zertoapl.vpg import vpgSettings
vczvmSession = login('10.0.10.50', '[email protected]', 'password')
v = vpgSettings('10.0.10.50', vczvmSession)
vpgPayload = json.dumps({
    "Basic": {
        "JournalHistoryInHours": 2,
        "Name": "TestVpg",
        "Priority": "Medium",
        "ProtectedSiteIdentifier": "IDENTIFIER1",
        "RecoverySiteIdentifier": "IDENTIFIER2",
        "RpoInSeconds": 600,
        "ServiceProfileIdentifier": None,
        "TestIntervalInMinutes": 0,
        "UseWanCompression": "True",
        "ZorgIdentifier": None
    },
    "BootGroups": {
        "BootGroups": [
            {
                "BootDelayInSeconds": 0,
                "BootGroupIdentifier": "00000000-0000-0000-0000-000000000000",
                "Name": "Default"
            }
        ]
    },
    "Journal": {
        "DatastoreIdentifier": "GUIDOFVCENTER.DATASTOREMOREF",
        "Limitation": {
            "HardLimitInMB": 0,
            "HardLimitInPercent": 0,
            "WarningThresholdInMB": 0,
            "WarningThresholdInPercent": 0
        }
    },
    "LongTermRetention": None,
    "Networks": {
        "Failover": {
            "Hypervisor": {
                "DefaultNetworkIdentifier": "GUIDOFVCENTER.NETWORKMOREF"
            }
        },
        "FailoverTest": {
            "Hypervisor": {
                "DefaultNetworkIdentifier": "GUIDOFVCENTER.NETWORKMOREF"
            }
        }
    },
    "Recovery": {
        "DefaultDatastoreClusterIdentifier": None,
        "DefaultDatastoreIdentifier": "GUIDOFVCENTER.DATASTOREMOREF",
        "DefaultFolderIdentifier": "GUIDOFVCENTER.FOLDERMOREF",
        "DefaultHostClusterIdentifier": None,
        "DefaultHostIdentifier": "GUIDOFVCENTER.HOSTMOREF",
        "ResourcePoolIdentifier": None
    },
    "Scripting": {
        "PostRecovery": {
            "Command": None,
            "Parameters": None,
            "TimeoutInSeconds": 0
        },
        "PreRecovery": {
            "Command": None,
            "Parameters": None,
            "TimeoutInSeconds": 0
        }
    },
    "Vms": []
})
vpgSettingsId = v.createNewVpgSettingsObject(vpgPayload)
v.commitSettingsObject(vpgSettingsId)
```
## Acknowledgements
I would like to acknowledge several people for assisting, either directly or indirectly, in the creation of this
library. Shaun Finn directly contributed to the zerto_auth and vra modules, and Wes Carroll provided insight and
assistance based on his experiences designing and developing his excellent PowerShell API wrapper. I would also like
to acknowledge Nick Costigan, Jacob Lucas and Chris Pacejo in providing their insight as professional developers.
| zertoapl | /zertoapl-0.0.4.tar.gz/zertoapl-0.0.4/README.md | README.md |
# Zerv :fire:
Yet another AWS Lambda [+ API Gateway] CLI deployment tool.
## IMPORTANT
This is a draft-project which means a lot, if not all, could change in next couple of weeks.
## Documentation
No docs for the time being.
This will create/update a Lambda function and, if you want, attach an API Gateway trigger to it.
## Usage
For the time being, the only way to test it is:
`python zerv/handler.py`
`python zerv/handler.py --dir=/path/to/your/project`
`python zerv/handler.py --function=prt_mx_rfc_validation`
This uses Boto3, so make sure your AWS credentials are configured for it.
### Settings
The settings files look like this:
#### Project settings
```yaml
project:
name: 'default'
root_dir: 'lambdas'
settings_file: 'settings'
source_code_folder: 'code'
requirements_file: 'requirements.txt'
precompiled_packages:
- requests: "/path/to"
permissions:
iam_role: "arn:aws:iam::9848734876:role/AROLE"
execution:
timeout: 300
memory_size: 128
```
#### Function settings
```yaml
api_gateway:
enabled: true
endpoint: null
stage: default
environment:
required_variables:
- ENV
function:
description: "My fancy description"
arn: "some ARN so it doesnt create a new one"
name: "some name so it doesn't create a new one"
runtime: python3.6
handler: handler
```
#### Default settings
```yaml
project:
name: 'Zerv Project'
root_dir: 'lambdas'
settings_file: 'settings'
source_code_folder: 'code'
requirements_file: 'requirements.txt'
precompiled_packages: ~
function:
arn: ~
description: ~
handler: handler
name: ~
requirements_file: 'requirements.txt'
runtime: python3.6
append_project_name: true
api_gateway:
enabled: false
endpoint: ~
stage: default
permissions:
iam_role: ~
execution:
timeout: 300
memory_size: 128
environment:
required_variables: ~
source_path: ~
```
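A settings loader would presumably overlay the project and function files on top of these defaults. A minimal sketch of such a recursive merge (Zerv's actual merge logic is an assumption here):

```python
def merge_settings(defaults, overrides):
    """Recursively overlay override settings on top of the defaults,
    without mutating either input dict."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Both sides are mappings: merge them key by key.
            merged[key] = merge_settings(merged[key], value)
        else:
            # Scalars (and type mismatches) are simply replaced.
            merged[key] = value
    return merged

defaults = {"execution": {"timeout": 300, "memory_size": 128}}
project = {"execution": {"timeout": 60}}
print(merge_settings(defaults, project))
# → {'execution': {'timeout': 60, 'memory_size': 128}}
```

Scalar overrides win, while nested dicts are merged key by key, so a project or function file only needs to state the values it changes.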
## Contributors:
- [@pablotrinidad](https://github.com/pablotrinidad/)
- [@henocdz](https://github.com/henocdz/)
## TODOs
- [ ] Read/install requirements.txt
- [ ] Only install packages compatible with manylinux
- [ ] Include environment variables
- [ ] Documentation
- [ ] Replace argparse with click
- [ ] Handle errors properly
- [ ] ...
## CONTRIBUTING
Please don't do it... *yet*; this is a draft project with a lot of spaghetti code and bad practices, and it's not even ready to be a PyPI package. Of course, I'll squash several commits. If you're interested, please drop me an email: henocdz [AT] gmail
If curious...
- Create a virtualenv
- Clone the project
- `cd zerv`
- `pip install -e .`
**Thx**
| zerv | /zerv-0.1.0.tar.gz/zerv-0.1.0/README.md | README.md |
import string
import random
import base64
import codecs
class Obfuscator:
def __init__(self, code):
self.code = code
self.__obfuscate()
def __xorED(self, text, key = None):
newstring = ""
if key is None:
key = "".join(random.choices(string.ascii_letters + string.digits + string.punctuation, k=random.randint(32, 64)))
if not key[0] == " ":
key = " " + key
for i in range(len(text)):
newstring += chr((ord(text[i]) ^ ord(key[(len(key) - 2) + 1])) % 256)
return (newstring, key)
def __encodestring(self, string):
newstring = ''
for i in string:
if random.random() < 0.5:
newstring += '\\x' + codecs.encode(i.encode(), 'hex').decode()
else:
newstring += '\\' + oct(ord(i))[2:].zfill(3)
return newstring
def __obfuscate(self):
for _ in range(10):
xorcod = self.__xorED(self.code)
self.code = xorcod[0]
encoded_code = base64.b64encode(codecs.encode(codecs.encode(self.code.encode(), 'bz2'), 'uu')).decode()
encoded_code = [encoded_code[i:i + int(len(encoded_code) / 4)] for i in range(0, len(encoded_code), int(len(encoded_code) / 4))]
new_encoded_code = []
new_encoded_code.append(codecs.encode(encoded_code[0].encode(), 'hex').decode() + 'u')
new_encoded_code.append(codecs.encode(encoded_code[1], 'rot13') + 'r')
new_encoded_code.append(codecs.encode(encoded_code[2].encode(), 'base64').decode() + 'h')
new_encoded_code.append(base64.b85encode(codecs.encode(encoded_code[3].encode(), 'zlib')).decode() + 'x')
self.code = ''.join(new_encoded_code)
self.code = self.__encodestring(self.__xorED(self.code)[0])
class Deobfuscator:
def __init__(self, code):
self.code = code
self.__deobfuscate()
def __unXOR(self, text, key):
newstring = ""
for i in range(len(text)):
newstring += chr((ord(text[i]) ^ ord(key[(len(key) - 2) + 1])) % 256)
return newstring
def __decodestring(self, string):
index = 0
newstring = ''
while index < len(string):
if string[index] == '\\' and string[index + 1] == 'x':
newstring += codecs.decode(string[index + 2:index + 4], 'hex').decode()
index += 4
elif string[index] == '\\':
newstring += chr(int(string[index + 1:index + 4], 8))
index += 4
else:
newstring += string[index]
index += 1
return newstring
def __deobfuscate(self):
self.code = self.__decodestring(self.code)
for _ in range(10):
encoded_code = [self.code[i:i + 4] for i in range(0, len(self.code), 4)]
new_encoded_code = []
new_encoded_code.append(codecs.decode(encoded_code[0][:-1].encode(), 'hex').decode())
new_encoded_code.append(codecs.decode(encoded_code[1][:-1], 'rot13'))
new_encoded_code.append(codecs.decode(encoded_code[2][:-1].encode(), 'base64').decode())
new_encoded_code.append(codecs.decode(base64.b85decode(encoded_code[3][:-1].encode()), 'zlib').decode())
self.code = base64.b64decode(codecs.decode(new_encoded_code[0].encode(), 'bz2')).decode()
key = self.__unXOR(self.code, self.__decodestring(new_encoded_code[3]))
self.code = self.__unXOR(self.code, key) | zescator | /zescator-1.0.tar.gz/zescator-1.0/obfuscator.py | obfuscator.py |
<p align="center">
<img src="https://user-images.githubusercontent.com/66521670/209605632-24e913e6-aa9b-4515-8b27-906b60b7695f.svg" />
</p>
# ***Zayne's Extremely Simple Language***
Inspired by **Tom's Obvious Minimal Language (TOML)** and **YAML**
This project isn't serious, by the way; it was just a side project 👀.
**ZESL** won't be getting regularly scheduled updates.
## **Grammar**
**ZESL** uses the **BNF** grammar format. Its grammar is as follows:
```bnf
<program> ::= <statement>
<statement> ::= <key>
| <comment>
<letter> ::= "A" | "B" | "C" | "D" | "E" | "F" | "G"
| "H" | "I" | "J" | "K" | "L" | "M" | "N"
| "O" | "P" | "Q" | "R" | "S" | "T" | "U"
| "V" | "W" | "X" | "Y" | "Z" | "a" | "b"
| "c" | "d" | "e" | "f" | "g" | "h" | "i"
| "j" | "k" | "l" | "m" | "n" | "o" | "p"
| "q" | "r" | "s" | "t" | "u" | "v" | "w"
| "x" | "y" | "z"
<digit> ::= "0" | "1" | "2" | "3" | "4" | "5"
| "6" | "7" | "8" | "9"
<symbol> ::= "|" | " " | "!" | "#" | "$" | "%" | "&" | "(" | ")" | "*" | "+" | "," | "-" | "." | "/" | ":" | ";" | ">" | "=" | "<" | "?" | "@" | "[" | "\\" | "]" | "^" | "_" | "`" | "{" | "}" | "~"
<comment> ::= ">" " "* (<digit> | <letter> | " " | <symbol>)+
<value> ::= "\"" (<digit> | <letter> | " " | <symbol>)* "\"" | <digit>+ | "."* <digit>+ "."* <digit>*
<key> ::= <letter>+ "=" <value> " "* <comment>*
```
## **Syntax**
**ZESL** supports strings, integers, and floats. **ZESL** also supports comments.
```
> This is a comment.
city="Salt Lake City, Utah"
area=801
precipitation=50.0
```
There are multiple ways to type a value; however, there are some *big* **no-nos**.
#### The correct way to enclose a string in **ZESL** is in *double quotes*. Attempting to use single quotes will result in an error.
```
quote='Do not fear mistakes. There are none. (Miles Davis)' > Wrong way to write strings.
quote="Do not fear mistakes. There are none. (Miles Davis)" > Correct way to write strings. Note the double quotes.
```
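To show how little machinery the format needs, here is a minimal parser sketch in Python (not an official implementation; value typing is simplified relative to the full grammar, e.g. forms like `.5` are not handled):

```python
import re

# One ZESL line: key, "=", a double-quoted string / float / int,
# then an optional trailing "> ..." comment (simplified vs. the BNF).
LINE = re.compile(r'^(?P<key>[A-Za-z]+)='
                  r'(?P<value>"[^"]*"|\d+\.\d+|\d+)'
                  r'\s*(?:>.*)?$')

def parse_zesl(text):
    data = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith('>'):   # blank line or comment line
            continue
        m = LINE.match(line)
        if m is None:                          # e.g. single-quoted strings
            raise ValueError('invalid ZESL line: %r' % line)
        value = m.group('value')
        if value.startswith('"'):
            data[m.group('key')] = value[1:-1]
        elif '.' in value:
            data[m.group('key')] = float(value)
        else:
            data[m.group('key')] = int(value)
    return data

doc = '''> This is a comment.
city="Salt Lake City, Utah"
area=801
precipitation=50.0'''
print(parse_zesl(doc))
# → {'city': 'Salt Lake City, Utah', 'area': 801, 'precipitation': 50.0}
```

Note that a single-quoted value fails to match and raises an error, mirroring the rule stated above.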
## Goals
If I do decide to at least dedicate **SOME** of my time to this project, here's what I would do.
- Improve the BNF grammar. Adding double quotes and/or back-ticks for strings would be nice. Make the grammar a bit less strict.
- Add `[]` and `{}`, allows for defining dicts and/or lists.
- Syntax sugar!?
| zesl | /zesl-0.1.0.tar.gz/zesl-0.1.0/README.md | README.md |
# ZenHub-SP-Aggregator
Aggregate issue story points (estimates) by labels and time.
Calculate the sum of estimate points in a specific duration for the repository.
## Install
```
$ pip install zespa
```
## Usage
For the initial configuration, you need to get a GitHub API token and a ZenHub API token.
```
$ zespa configure
$ zespa aggregate 2018/10/01 2018/10/31 --labels Bug Refactoring # all issues / issues labeled `Bug` or `Refactoring`
+--------------------+
| Bug or Refactoring |
+--------------------+
| 110 |
+--------------------+
+------------------------+
| Not Bug or Refactoring |
+------------------------+
| 262 |
+------------------------+
(and export CSV file which shows individual issue title and estimate point.)
```
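The aggregation itself boils down to partitioning issues by label and summing their estimates. A sketch with an assumed in-memory data shape (the real tool fetches issues from the GitHub API and estimate points from the ZenHub API):

```python
def aggregate(issues, labels):
    """Return (points matching any of the labels, points matching none)."""
    wanted = set(labels)
    labeled = sum(i["estimate"] for i in issues if wanted & set(i["labels"]))
    total = sum(i["estimate"] for i in issues)
    return labeled, total - labeled

issues = [
    {"estimate": 3, "labels": ["Bug"]},
    {"estimate": 5, "labels": ["Feature"]},
    {"estimate": 2, "labels": ["Refactoring"]},
]
print(aggregate(issues, ["Bug", "Refactoring"]))  # → (5, 5)
```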
| zespa | /zespa-0.1.4.tar.gz/zespa-0.1.4/README.md | README.md |
Introduction
============
This Plone product allows some cache tuning for your site.
Currently it only allows username replacment, but we might add some
more features in the future.
Username replacement
--------------------
If you enable this option, whole pages can be cached while an Ajax
request replaces the username in the page, reducing the load.
If you use a custom theme, you can specify the query used to select
the HTML node containing the username. This query is written in the
classic CSS format.
Extra libraries
---------------
The Javascript cookie setting is done using jquery.cookie.js, written
by Klaus Hartl.
The project is hosted at https://github.com/carhartl/jquery-cookie.
| zest.cachetuning | /zest.cachetuning-0.4.zip/zest.cachetuning-0.4/README.rst | README.rst |
import os, shutil, sys, tempfile, urllib2
from optparse import OptionParser
tmpeggs = tempfile.mkdtemp()
is_jython = sys.platform.startswith('java')
# parsing arguments
parser = OptionParser(
'This is a custom version of the zc.buildout %prog script. It is '
'intended to meet a temporary need if you encounter problems with '
'the zc.buildout 1.5 release.')
parser.add_option("-v", "--version", dest="version", default='1.4.4',
help='Use a specific zc.buildout version. *This '
'bootstrap script defaults to '
'1.4.4, unlike usual buildout bootstrap scripts.*')
parser.add_option("-d", "--distribute",
action="store_true", dest="distribute", default=False,
help="Use Disribute rather than Setuptools.")
parser.add_option("-c", None, action="store", dest="config_file",
help=("Specify the path to the buildout configuration "
"file to be used."))
options, args = parser.parse_args()
# if -c was provided, we push it back into args for buildout' main function
if options.config_file is not None:
args += ['-c', options.config_file]
if options.version is not None:
VERSION = '==%s' % options.version
else:
VERSION = ''
USE_DISTRIBUTE = options.distribute
args = args + ['bootstrap']
to_reload = False
try:
import pkg_resources
if not hasattr(pkg_resources, '_distribute'):
to_reload = True
raise ImportError
except ImportError:
ez = {}
if USE_DISTRIBUTE:
exec urllib2.urlopen('http://python-distribute.org/distribute_setup.py'
).read() in ez
ez['use_setuptools'](to_dir=tmpeggs, download_delay=0, no_fake=True)
else:
exec urllib2.urlopen('http://peak.telecommunity.com/dist/ez_setup.py'
).read() in ez
ez['use_setuptools'](to_dir=tmpeggs, download_delay=0)
if to_reload:
reload(pkg_resources)
else:
import pkg_resources
if sys.platform == 'win32':
def quote(c):
if ' ' in c:
return '"%s"' % c # work around spawn lamosity on windows
else:
return c
else:
def quote(c):
return c
ws = pkg_resources.working_set
if USE_DISTRIBUTE:
requirement = 'distribute'
else:
requirement = 'setuptools'
env = dict(os.environ,
PYTHONPATH=
ws.find(pkg_resources.Requirement.parse(requirement)).location
)
cmd = [quote(sys.executable),
'-c',
quote('from setuptools.command.easy_install import main; main()'),
'-mqNxd',
quote(tmpeggs)]
if 'bootstrap-testing-find-links' in os.environ:
cmd.extend(['-f', os.environ['bootstrap-testing-find-links']])
cmd.append('zc.buildout' + VERSION)
if is_jython:
import subprocess
exitcode = subprocess.Popen(cmd, env=env).wait()
else: # Windows prefers this, apparently; otherwise we would prefer subprocess
exitcode = os.spawnle(*([os.P_WAIT, sys.executable] + cmd + [env]))
assert exitcode == 0
ws.add_entry(tmpeggs)
ws.require('zc.buildout' + VERSION)
import zc.buildout.buildout
zc.buildout.buildout.main(args)
shutil.rmtree(tmpeggs) | zest.cachetuning | /zest.cachetuning-0.4.zip/zest.cachetuning-0.4/bootstrap.py | bootstrap.py |
from AccessControl import ClassSecurityInfo
from Products.Archetypes import atapi
from Products.CMFCore.utils import ImmutableId
from Products.CMFCore.permissions import ModifyPortalContent
from Products.ATContentTypes.content.document import ATDocument
from Products.ATContentTypes.content.document import ATDocumentSchema
from zope.interface import Interface, implements
from Products.CMFCore.utils import getToolByName
from zest.cachetuning import config
from zest.cachetuning import ZestCacheTuningMessageFactory as _
class ICacheTuningTool(Interface):
"""marker interface"""
cacheTuningToolSchema = ATDocumentSchema.copy() + atapi.Schema((
atapi.BooleanField(
name = 'jq_replace_username',
default=True,
widget = atapi.BooleanWidget(
label=_(u'label_jq_replace_username',
default=u'Replace username using Javascript'),
description=_(u'help_jq_replace_username',
default=u'Enable caching the pages ' + \
'and keeping username displayed.')
),
schemata='username',
),
atapi.StringField(
name='jq_replace_username_selector',
default_method = 'default_js_query',
widget = atapi.StringWidget(
label=_(u'label_jq_replace_username_selector',
default=u'CSS query to select username'),
description=_(u'help_jq_replace_username_selector',
default=u'You can here specify a custom query ' + \
'to find the element containing the username.')
),
schemata='username',
),
atapi.StringField(
name='jq_replace_username_cookie',
default = 'zest.cachetuning.username',
widget = atapi.StringWidget(
label=_(u'label_jq_replace_username_cookie',
default=u'Name of the cookie used to store the username'),
description=_(u'help_jq_replace_username_cookie',
default=u'We store the username in a cookie so we ' +\
'do not need to fetch it via Ajax.')
),
schemata='username',
),
))
# Hides the default fields.
for field in ['title', 'description', 'text']:
if field in cacheTuningToolSchema:
cacheTuningToolSchema[field].widget.visible={'edit': 'invisible',
'view': 'invisible'}
# Hides the fields other than the ones we defined.
allowed_schematas = ['username']
for key in cacheTuningToolSchema.keys():
if cacheTuningToolSchema[key].schemata not in allowed_schematas:
cacheTuningToolSchema[key].widget.visible={'edit': 'invisible',
'view': 'invisible'}
class CacheTuningTool(ImmutableId, ATDocument):
""" Tool for zest.cachetuning product.
Allows to set various options.
"""
security = ClassSecurityInfo()
__implements__ = ()
implements(ICacheTuningTool)
id = 'portal_zestcachetuning'
typeDescription = "Configure Zest cache tuning"
typeDescMsgId = 'description_edit_zestcachetuning_tool'
schema = cacheTuningToolSchema
def __init__(self, *args, **kwargs):
self.setTitle('Zest Cache tuning configuration')
security.declareProtected(ModifyPortalContent, 'indexObject')
def indexObject(self):
pass
security.declareProtected(ModifyPortalContent, 'reindexObject')
def reindexObject(self, idxs=[]):
pass
security.declareProtected(ModifyPortalContent, 'reindexObjectSecurity')
def reindexObjectSecurity(self, skip_self=False):
pass
def default_js_query(self):
migration = getToolByName(self, 'portal_migration')
versions = migration.coreVersions()
plone_major_version = versions.get('Plone', '').split('.')[0]
if plone_major_version == '4':
return 'a#user-name'
return 'a#user-name span'
atapi.registerType(CacheTuningTool, config.PROJECTNAME) | zest.cachetuning | /zest.cachetuning-0.4.zip/zest.cachetuning-0.4/zest/cachetuning/tool.py | tool.py |
Changelog
=========
1.6 (2013-05-13)
----------------
- Support plone.app.discussion.
[maurits]
1.5 (2012-09-12)
----------------
- Moved to github.
[maurits]
1.4 (2011-01-26)
----------------
- Also catch AttributeError in @@find-catalog-comments, which may
happen for objects in portal_skins/custom.
[maurits]
1.3 (2011-01-26)
----------------
- Moved the remove-buttons more the the left, so they do not hop
around after deleting an item with a long title, or that causes a
new long title to appear.
[maurits]
- On the overview page offer to also index comments in objects that
currently do not allow comments but may have done so in the past.
[maurits]
- When switching comments off for an object, uncatalog its existing
comments.
[maurits]
- When turning comments on for an object, catalog its possibly
already existing comments, when needed.
[maurits]
- On the details page, also show number of actual comments, instead of
only the comments in the catalog.
[maurits]
- Added @@find-catalog-comments page (linked from the overview page)
that finds and catalogs all comments for objects that currently
allow commenting. This is needed after a clear and rebuild of the
portal_catalog, as the catalog then loses all info about comments.
[maurits]
1.2 (2011-01-04)
----------------
- Sort the cleanup-comments-list on creation date.
[maurits]
1.1 (2011-01-04)
----------------
- Handle redirection in the same way everywhere, so you also get to
the same batched page using a came_from parameter.
[maurits]
- Added '@@cleanup-comments-list' page that lists the latest comments.
[maurits]
1.0 (2010-12-21)
----------------
- Initial release
| zest.commentcleanup | /zest.commentcleanup-1.6.zip/zest.commentcleanup-1.6/CHANGES.rst | CHANGES.rst |
Introduction
============
You have enabled commenting in your Plone Site. Now you wake up and
see that during the night some spammer has added 1337 comments in your
site. What do you do now? Sure, you first shoot him and then you
integrate some captcha solution, but you still have those 1337
comments. You do not want to click 1337 times on a delete button.
Have no fear: ``zest.commentcleanup`` will rescue you! Or at least
it will help you get rid of those spam comments faster.
How does it work?
-----------------
Just add ``zest.commentcleanup`` to the eggs parameter of the
instance section of your buildout.cfg. On Plone 3.2 or earlier add it
to the zcml parameter as well.
The package simply works by registering some browser views. Start
your instance, go to the root of your site and add
``/@@cleanup-comments-overview`` to the url. This will give you an
overview of which items in your site have comments. It is sorted so
the item with the most comments is at the top.
Note that the overview works on other contexts as well, for example on
a folder.
In the overview click on the ``manage`` link of an item with comments.
This takes you to the ``cleanup-comments-details`` page of that item.
This lists all comments, ordered by creation date. From there you can
delete single items.
But the biggest thing you can do there is: select a comment and delete
this **and all following comments**. The idea is that the first three
comments may be valid comments, then there are a gazillion spam
comments, and very likely no actual human has added a valid comment
somewhere in that spam flood anymore. So you keep the first few
comments and delete the rest without having to buy a new mouse because
you have clicked too much.
From the overview page you can also go to the
``@@cleanup-comments-list`` page. Here you see the latest comments,
which you can remove one at a time. This is handier when you have
done the big cleanup already and only need to check the new comments
of the last few days.
All the used views are only available if you have the ``Manage
portal`` permission.
Requirements
------------
This has been tested on Plone 3.3.5 with the standard comments. It
might or might not work with packages like
``quintagroup.plonecomments`` or ``plone.app.discussion``. It
probably works on Plone 2.5 and 4 as well, but I have not checked.
Hey, it might even work in a default CMF site.
| zest.commentcleanup | /zest.commentcleanup-1.6.zip/zest.commentcleanup-1.6/README.rst | README.rst |
import os, shutil, sys, tempfile, urllib, urllib2, subprocess
from optparse import OptionParser
if sys.platform == 'win32':
def quote(c):
if ' ' in c:
return '"%s"' % c # work around spawn lamosity on windows
else:
return c
else:
quote = str
# See zc.buildout.easy_install._has_broken_dash_S for motivation and comments.
stdout, stderr = subprocess.Popen(
[sys.executable, '-Sc',
'try:\n'
' import ConfigParser\n'
'except ImportError:\n'
' print 1\n'
'else:\n'
' print 0\n'],
stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
has_broken_dash_S = bool(int(stdout.strip()))
# In order to be more robust in the face of system Pythons, we want to
# run without site-packages loaded. This is somewhat tricky, in
# particular because Python 2.6's distutils imports site, so starting
# with the -S flag is not sufficient. However, we'll start with that:
if not has_broken_dash_S and 'site' in sys.modules:
# We will restart with python -S.
args = sys.argv[:]
args[0:0] = [sys.executable, '-S']
args = map(quote, args)
os.execv(sys.executable, args)
# Now we are running with -S. We'll get the clean sys.path, import site
# because distutils will do it later, and then reset the path and clean
# out any namespace packages from site-packages that might have been
# loaded by .pth files.
clean_path = sys.path[:]
import site # imported because of its side effects
sys.path[:] = clean_path
for k, v in sys.modules.items():
if k in ('setuptools', 'pkg_resources') or (
hasattr(v, '__path__') and
len(v.__path__) == 1 and
not os.path.exists(os.path.join(v.__path__[0], '__init__.py'))):
# This is a namespace package. Remove it.
sys.modules.pop(k)
is_jython = sys.platform.startswith('java')
setuptools_source = 'http://peak.telecommunity.com/dist/ez_setup.py'
distribute_source = 'http://python-distribute.org/distribute_setup.py'
# parsing arguments
def normalize_to_url(option, opt_str, value, parser):
if value:
if '://' not in value: # It doesn't smell like a URL.
value = 'file://%s' % (
urllib.pathname2url(
os.path.abspath(os.path.expanduser(value))),)
if opt_str == '--download-base' and not value.endswith('/'):
# Download base needs a trailing slash to make the world happy.
value += '/'
else:
value = None
name = opt_str[2:].replace('-', '_')
setattr(parser.values, name, value)
usage = '''\
[DESIRED PYTHON FOR BUILDOUT] bootstrap.py [options]
Bootstraps a buildout-based project.
Simply run this script in a directory containing a buildout.cfg, using the
Python that you want bin/buildout to use.
Note that by using --setup-source and --download-base to point to
local resources, you can keep this script from going over the network.
'''
parser = OptionParser(usage=usage)
parser.add_option("-v", "--version", dest="version",
help="use a specific zc.buildout version")
parser.add_option("-d", "--distribute",
action="store_true", dest="use_distribute", default=False,
help="Use Distribute rather than Setuptools.")
parser.add_option("--setup-source", action="callback", dest="setup_source",
callback=normalize_to_url, nargs=1, type="string",
help=("Specify a URL or file location for the setup file. "
"If you use Setuptools, this will default to " +
setuptools_source + "; if you use Distribute, this "
"will default to " + distribute_source + "."))
parser.add_option("--download-base", action="callback", dest="download_base",
callback=normalize_to_url, nargs=1, type="string",
help=("Specify a URL or directory for downloading "
"zc.buildout and either Setuptools or Distribute. "
"Defaults to PyPI."))
parser.add_option("--eggs",
help=("Specify a directory for storing eggs. Defaults to "
"a temporary directory that is deleted when the "
"bootstrap script completes."))
parser.add_option("-t", "--accept-buildout-test-releases",
dest='accept_buildout_test_releases',
action="store_true", default=False,
help=("Normally, if you do not specify a --version, the "
"bootstrap script and buildout gets the newest "
"*final* versions of zc.buildout and its recipes and "
"extensions for you. If you use this flag, "
"bootstrap and buildout will get the newest releases "
"even if they are alphas or betas."))
parser.add_option("-c", None, action="store", dest="config_file",
help=("Specify the path to the buildout configuration "
"file to be used."))
options, args = parser.parse_args()
if options.eggs:
eggs_dir = os.path.abspath(os.path.expanduser(options.eggs))
else:
eggs_dir = tempfile.mkdtemp()
if options.setup_source is None:
if options.use_distribute:
options.setup_source = distribute_source
else:
options.setup_source = setuptools_source
if options.accept_buildout_test_releases:
args.insert(0, 'buildout:accept-buildout-test-releases=true')
try:
import pkg_resources
import setuptools # A flag. Sometimes pkg_resources is installed alone.
if not hasattr(pkg_resources, '_distribute'):
raise ImportError
except ImportError:
ez_code = urllib2.urlopen(
options.setup_source).read().replace('\r\n', '\n')
ez = {}
exec ez_code in ez
setup_args = dict(to_dir=eggs_dir, download_delay=0)
if options.download_base:
setup_args['download_base'] = options.download_base
if options.use_distribute:
setup_args['no_fake'] = True
if sys.version_info[:2] == (2, 4):
setup_args['version'] = '0.6.32'
ez['use_setuptools'](**setup_args)
if 'pkg_resources' in sys.modules:
reload(sys.modules['pkg_resources'])
import pkg_resources
# This does not (always?) update the default working set. We will
# do it.
for path in sys.path:
if path not in pkg_resources.working_set.entries:
pkg_resources.working_set.add_entry(path)
cmd = [quote(sys.executable),
'-c',
quote('from setuptools.command.easy_install import main; main()'),
'-mqNxd',
quote(eggs_dir)]
if not has_broken_dash_S:
cmd.insert(1, '-S')
find_links = options.download_base
if not find_links:
find_links = os.environ.get('bootstrap-testing-find-links')
if not find_links and options.accept_buildout_test_releases:
find_links = 'http://downloads.buildout.org/'
if find_links:
cmd.extend(['-f', quote(find_links)])
if options.use_distribute:
setup_requirement = 'distribute'
else:
setup_requirement = 'setuptools'
ws = pkg_resources.working_set
setup_requirement_path = ws.find(
pkg_resources.Requirement.parse(setup_requirement)).location
env = dict(
os.environ,
PYTHONPATH=setup_requirement_path)
requirement = 'zc.buildout'
version = options.version
if version is None and not options.accept_buildout_test_releases:
# Figure out the most recent final version of zc.buildout.
import setuptools.package_index
_final_parts = '*final-', '*final'
def _final_version(parsed_version):
for part in parsed_version:
if (part[:1] == '*') and (part not in _final_parts):
return False
return True
index = setuptools.package_index.PackageIndex(
search_path=[setup_requirement_path])
if find_links:
index.add_find_links((find_links,))
req = pkg_resources.Requirement.parse(requirement)
if index.obtain(req) is not None:
best = []
bestv = None
for dist in index[req.project_name]:
distv = dist.parsed_version
if distv >= pkg_resources.parse_version('2dev'):
continue
if _final_version(distv):
if bestv is None or distv > bestv:
best = [dist]
bestv = distv
elif distv == bestv:
best.append(dist)
if best:
best.sort()
version = best[-1].version
if version:
requirement += '=='+version
else:
requirement += '<2dev'
cmd.append(requirement)
if is_jython:
import subprocess
exitcode = subprocess.Popen(cmd, env=env).wait()
else: # Windows prefers this, apparently; otherwise we would prefer subprocess
exitcode = os.spawnle(*([os.P_WAIT, sys.executable] + cmd + [env]))
if exitcode != 0:
sys.stdout.flush()
sys.stderr.flush()
print ("An error occurred when trying to install zc.buildout. "
"Look above this message for any errors that "
"were output by easy_install.")
sys.exit(exitcode)
ws.add_entry(eggs_dir)
ws.require(requirement)
import zc.buildout.buildout
# If there isn't already a command in the args, add bootstrap
if not [a for a in args if '=' not in a]:
args.append('bootstrap')
# if -c was provided, we push it back into args for buildout's main function
if options.config_file is not None:
args[0:0] = ['-c', options.config_file]
zc.buildout.buildout.main(args)
if not options.eggs: # clean up temporary egg directory
        shutil.rmtree(eggs_dir)
import logging
from Acquisition import aq_inner
from Acquisition import aq_base
from plone.protect import PostOnly
from zope.component import getMultiAdapter
from Products.CMFCore.utils import getToolByName
from Products.CMFDefault.exceptions import DiscussionNotAllowed
from Products.statusmessages.interfaces import IStatusMessage
from Products.Five import BrowserView
try:
from plone.app.discussion.interfaces import IConversation
IConversation # pyflakes
except ImportError:
IConversation = None
logger = logging.getLogger('zest.content')
class CommentManagement(BrowserView):
def redirect(self):
"""Redirect.
Should we redirect to the details page of the current context
or to the list page of the site or something else? We handle
that with a came_from parameter with a fallback.
"""
context = aq_inner(self.context)
portal_url = getToolByName(context, 'portal_url')
came_from = self.request.get('came_from')
if came_from and portal_url.isURLInPortal(came_from):
self.request.RESPONSE.redirect(came_from)
else:
# Redirect to the manage comments view on the current context
self.request.RESPONSE.redirect(
context.absolute_url() + '/@@cleanup-comments-details')
def actual_comment_count(self):
"""Count the actual comments on this context, not the comments
in the catalog.
"""
context = aq_inner(self.context)
if IConversation is not None:
conversation = IConversation(context, None)
if conversation is not None:
return len(conversation.objectIds())
portal_discussion = getToolByName(context, 'portal_discussion')
try:
talkback = portal_discussion.getDiscussionFor(context)
except DiscussionNotAllowed:
# Try anyway:
if not hasattr(aq_base(context), 'talkback'):
return 0
talkback = getattr(context, 'talkback')
return len(talkback.objectIds())
def num_total_comments(self):
"""Total number of comments from this point on, including
children, in the catalog.
"""
context = aq_inner(self.context)
catalog = getToolByName(context, 'portal_catalog')
search_path = '/'.join(context.getPhysicalPath())
filter = dict(
portal_type='Discussion Item',
path=search_path,
)
brains = catalog.searchResults(**filter)
return len(brains)
def comments(self):
"""Comments on this context.
Note that we only want comments directly in this context, not
in any children in case folders can have comments.
"""
context = aq_inner(self.context)
catalog = getToolByName(context, 'portal_catalog')
search_path = '/'.join(context.getPhysicalPath())
depth = len(context.getPhysicalPath()) - 1
path = dict(
query=search_path,
depth=depth,
)
filter = dict(
portal_type='Discussion Item',
path=path,
sort_on='created',
)
brains = catalog.searchResults(**filter)
return brains
def info(self):
"""Info on this context.
"""
context = aq_inner(self.context)
count = self.num_total_comments()
discussion_allowed = self.is_discussion_allowed(context)
return dict(
count=count,
discussion_allowed=discussion_allowed,
)
def get_object_by_path(self, path):
portal_state = getMultiAdapter((self.context, self.request),
name=u'plone_portal_state')
portal = portal_state.portal()
return portal.restrictedTraverse(path)
def is_discussion_allowed(self, obj):
if IConversation is not None:
view = obj.restrictedTraverse('@@conversation_view', None)
if view is not None:
return view.enabled()
context = aq_inner(self.context)
portal_discussion = getToolByName(
context, 'portal_discussion', None)
if portal_discussion is None:
return False
return portal_discussion.isDiscussionAllowedFor(obj)
def paths(self):
context = aq_inner(self.context)
catalog = getToolByName(context, 'portal_catalog')
search_path = '/'.join(context.getPhysicalPath())
brains = catalog.searchResults(portal_type='Discussion Item',
path=search_path)
paths = {}
for brain in brains:
# Get path of real content item:
path = '/'.join(brain.getPath().split('/')[:-2])
if path in paths:
paths[path] += 1
else:
paths[path] = 1
sorted_paths = sorted(
[(count, path) for (path, count) in paths.items()], reverse=True)
# Make it handier for the template:
results = []
for count, path in sorted_paths:
obj = self.get_object_by_path(path)
info = dict(
count=count,
path=path,
url=obj.absolute_url(),
title=obj.Title(),
discussion_allowed=self.is_discussion_allowed(obj),
)
results.append(info)
return results
class DeleteComment(CommentManagement):
def __call__(self):
"""Delete a comment/reply/reaction.
Partly taken from
Products/CMFPlone/skins/plone_scripts/deleteDiscussion.py
"""
PostOnly(self.request)
comment_id = self.request.get('comment_id')
if not comment_id:
raise ValueError("comment_id expected")
context = aq_inner(self.context)
conversation = None
if IConversation is not None:
conversation = IConversation(context, None)
if conversation is not None:
del conversation[comment_id]
context.reindexObject()
if conversation is None:
portal_discussion = getToolByName(context, 'portal_discussion')
talkback = portal_discussion.getDiscussionFor(context)
# remove the discussion item
talkback.deleteReply(comment_id)
logger.info("Deleted reply %s from %s", comment_id,
context.absolute_url())
self.redirect()
msg = u'Comment deleted'
IStatusMessage(self.request).addStatusMessage(msg, type='info')
return msg
class DeleteAllFollowingComments(CommentManagement):
def __call__(self):
"""Delete this and all following comments.
This is '''not''' about removing a comment and all its nested
comments. No, it is about removing a comment and removing all
comments that have been added later. The idea is that you use
this to get rid of lots of spam comments in one go.
"""
PostOnly(self.request)
comment_id = self.request.get('comment_id')
if not comment_id:
raise ValueError("comment_id expected")
context = aq_inner(self.context)
portal_discussion = getToolByName(context, 'portal_discussion')
talkback = portal_discussion.getDiscussionFor(context)
conversation = None
if IConversation is not None:
conversation = IConversation(context, None)
found = False
# Note that getting the comment brains could result in a
# KeyError when the parent of a comment has been deleted just
# now by us. Easiest way around that is to get all comments
# first.
comments = self.comments()[:]
for comment in comments:
if comment.getId == comment_id:
found = True
if not found:
continue
# Remove the discussion item. A no longer existing item
# is silently ignored.
if conversation is not None:
del conversation[comment.getId]
else:
talkback.deleteReply(comment.getId)
logger.info("Deleted reply %s from %s", comment.getId,
context.absolute_url())
context.reindexObject()
self.redirect()
msg = u'Lots of comments deleted!'
IStatusMessage(self.request).addStatusMessage(msg, type='info')
return msg
class ToggleDiscussion(CommentManagement):
def __call__(self):
"""Allow or disallow discussion on this context.
This may also (re)catalog the comments on this context.
"""
PostOnly(self.request)
context = aq_inner(self.context)
if self.is_discussion_allowed(context):
self.uncatalog_comments()
context.allowDiscussion(False)
else:
context.allowDiscussion(True)
self.catalog_comments()
self.redirect()
msg = u'Toggled allowDiscussion.'
IStatusMessage(self.request).addStatusMessage(msg, type='info')
return msg
def catalog_comments(self, force=False):
"""Catalog the comments of this object.
When force=True, we force this, otherwise we check if the
number of items currently in the catalog under this context is
the same as the number of actual comments on the context.
This check does not work well for folderish items, so they
probably always get recataloged, but that is of small concern.
"""
context = aq_inner(self.context)
        actual = self.actual_comment_count()
if not actual:
return
in_catalog = self.num_total_comments()
if not force and actual == in_catalog:
return
logger.info("Cataloging %s replies for obj at %s", actual,
context.absolute_url())
conversation = None
if IConversation is not None:
conversation = IConversation(context, None)
if conversation is not None:
for comment in conversation.getComments():
comment.reindexObject()
return
portal_discussion = getToolByName(context, 'portal_discussion')
talkback = portal_discussion.getDiscussionFor(context)
ids = talkback.objectIds()
for reply_id in ids:
reply = talkback.getReply(reply_id)
reply.reindexObject()
def uncatalog_comments(self):
"""Uncatalog the comments of this object.
"""
context = aq_inner(self.context)
in_catalog = self.num_total_comments()
if not in_catalog:
return
logger.info("Uncataloging %s replies for obj at %s", in_catalog,
context.absolute_url())
for comment in self.comments():
try:
obj = comment.getObject()
except (KeyError, AttributeError):
logger.warn("Cannot find comment at %s", comment.getPath())
else:
obj.unindexObject()
class CommentList(CommentManagement):
def comments(self):
"""Latest comments from this point on, including children.
"""
context = aq_inner(self.context)
catalog = getToolByName(context, 'portal_catalog')
search_path = '/'.join(context.getPhysicalPath())
filter = dict(
portal_type='Discussion Item',
path=search_path,
sort_on='created',
sort_order='reverse'
)
results = []
for brain in catalog.searchResults(**filter):
# This is rather ugly, but it works for standard comments
# in Plone 3.3 and 4.0:
comment_url = brain.getURL()
comment_path = brain.getPath()
context_url = '/'.join(comment_url.split('/')[:-2])
context_path = '/'.join(comment_path.split('/')[:-2])
context_obj = context.restrictedTraverse(context_path)
info = dict(
brain=brain,
reply_url=comment_url + '/discussion_reply_form',
context_url=context_url,
context_title=context_obj.Title(),
delete_url=context_url + '/@@delete-single-comment',
discussion_allowed=self.is_discussion_allowed(context_obj),
)
results.append(info)
return results
class FindAndCatalogComments(CommentManagement):
"""Find and catalog all comments.
If we clear and rebuild the portal_catalog, no comments
(DiscussionItems) will be left in the catalog. They will still
exist in the site as normal content though. But clear-and-rebuild
does not find them. That is where this view comes in handy.
"""
def __call__(self):
"""Go through the site and catalog all comments.
"""
PostOnly(self.request)
self.force = self.request.get('force', False)
context = aq_inner(self.context)
catalog = getToolByName(context, 'portal_catalog')
start = len(catalog.searchResults(portal_type='Discussion Item'))
self.find()
end = len(catalog.searchResults(portal_type='Discussion Item'))
self.redirect()
msg = u"Comments at start: %d, at end: %d" % (start, end)
IStatusMessage(self.request).addStatusMessage(msg, type='info')
return msg
def find(self):
"""Find comments and catalog them.
"""
context = aq_inner(self.context)
self.portal_discussion = getToolByName(context, 'portal_discussion')
def update_comments(obj, path):
"""Update the comments of this object
"""
if IConversation is not None:
conversation = IConversation(obj, None)
if conversation is not None:
for comment in conversation.getComments():
comment.reindexObject()
return
try:
talkback = self.portal_discussion.getDiscussionFor(obj)
except (TypeError, AttributeError):
# Happens for the 'portal_types' object and for
# objects in portal_skins/custom.
logger.debug("Error getting discussion for obj at %s", path)
return
except DiscussionNotAllowed:
logger.debug("Discussion not allowed for obj at %s", path)
if not self.force:
return
# Try anyway:
if not hasattr(aq_base(obj), 'talkback'):
return
talkback = getattr(obj, 'talkback')
logger.info("Discussion not allowed for obj, but forcing "
"recatalog anyway at %s", path)
ids = talkback.objectIds()
if ids:
logger.info("%s replies found for obj at %s", len(ids), path)
for reply_id in ids:
reply = talkback.getReply(reply_id)
reply.reindexObject()
logger.info("Finding and cataloging comments. "
"This can take a while...")
context.ZopeFindAndApply(context, apply_func=update_comments,
search_sub=True)
        logger.info("Ready finding and cataloging comments.")
History of zest.emailhider package
==================================
3.1.3 (2018-02-20)
------------------
- Make 'No uids found in request.' a warning instead of an error.
I see bots requesting this url. [maurits]
3.1.2 (2017-03-08)
------------------
- Do not render our javascript inline. It leads to display problems
  when it is included on a 404 page or any other non-20x page:
an assertion in ``Products.ResourceRegistries`` fails, resulting in
the html being returned with mimetype javascript.
That seems a possible problem with any inline script.
Added upgrade step for this.
[maurits]
3.1.1 (2017-02-24)
------------------
- Fixed javascript bug that caused a request even without any emails to reveal.
[maurits]
3.1 (2016-11-02)
----------------
- Query and reveal all emails at once. If you have an employee
content type that you have hooked up to zest.emailhider, and you
have a page showing fifty employees, previously we would fire fifty
ajax requests. Now we gather everything into one request.
[maurits]
3.0 (2015-10-03)
----------------
- Added Travis badge.
[maurits]
- Support Plone 5 by reading ``plone.email_from_address`` from the
registry. This loses compatibility with Plone 4.0. We try reading
any email (also your own additional emails) from the registry first,
with ``plone.`` prepended, and then look for a property on the
portal root.
[maurits]
- Use ``$`` instead of ``jq`` in our javascript. Now it works without
needing ``jquery-integration.js``. This loses compatibility with
Plone 3.
[maurits]
- Added ``test_emailhider`` page that hides the portal
``email_from_address``, so you can easily test it. When you disable
your javascript you should not see an email address.
[maurits]
2.7 (2012-09-12)
----------------
- Moved to github.
[maurits]
2.6 (2011-11-11)
----------------
- Added MANIFEST.in so our generated .mo files are added to the source
distribution.
[maurits]
- Register our browser views only for our own new browser layer.
Added an upgrade step for this. This makes it easier for other
packages to have a conditional dependency on zest.emailhider.
[maurits]
2.5 (2011-06-01)
----------------
- Updated call to 'jq_reveal_email' to use the one at the root of the
site to avoid security errors. [vincent]
2.4 (2011-05-10)
----------------
- Updated jquery.pyproxy dependency to at least 0.3.1 and removed the
now no longer needed clean_string call.
[maurits]
2.3 (2010-12-15)
----------------
- Not only look up a fake uid for email_from_address as portal
property, but do this for any fake uid that has 'email' or 'address'
in it. Log warnings when no email address can be found for a fake
or real uid.
[maurits]
2.2 (2010-12-14)
----------------
- Added another upgrade step as we definitely need to apply our
javascript registry too when upgrading. Really at this point a
plain reinstall in the portal_quickinstaller is actually fine, which
we could also define as upgrade step, but never mind that now.
[maurits]
2.1 (2010-12-14)
----------------
- Added two upgrade steps to upgrade from 1.x by installing
jquery.pyproxy and running our kss step (which just removes our
no longer needed kss file).
[maurits]
2.0 (2010-12-09)
----------------
- Use jquery.pyproxy instead of KSS. This makes the page load much
less for anonymous users.
[vincent+maurits]
1.3 (2009-12-28)
----------------
- Made reveal_email always available, as it should just work whenever
  we want to hide the global 'email_from_address'.  If we have a real
uid target, then try to adapt that target to the IMailable interface
and if that fails we just silently do nothing.
[maurits]
1.2 (2008-11-19)
----------------
- Using kss.plugin.cacheability and added it as a dependency. [jladage]
- Allow to set the uid to email_from_address. [jladage]
- Changed the KSS to use the load event instead of the click event - it
now either works transparently, or asks the user to activate JS. [simon]
1.1 (2008-10-24)
----------------
- Added translations and modified template to use them. [simon]
- Initial creation of project. [simon]
1.0 (2008-10-20)
----------------
- Initial creation of project. [simon]
Zest emailhider
===============
This document describes the zest.emailhider package.
.. image:: https://secure.travis-ci.org/zestsoftware/zest.emailhider.png?branch=master
:target: https://travis-ci.org/#!/zestsoftware/zest.emailhider
Dependencies
------------
This package depends on jquery.pyproxy to integrate python code with
jquery code.
Overview
--------
This package provides a mechanism to hide email addresses with
JavaScript. Or actually: with this package you can hide your email
addresses by default so they are never in the html; with javascript
the addresses are then fetched and displayed.
For every content item in your site you can have exactly one email
address, as we look up the email address for an object by its UID.
For objects for which you want this you should register a simple
adapter to the IMailable interface, so we can ask this adapter for an
``email`` attribute and a ``UID`` method. The 'emailhider' view is
provided to generate the placeholder link.
Objects display a placeholder link with a ``hidden-email`` class, a
``uid`` rel attribute and an ``email-uid-<some uid>`` class set to the
UID of an object; when the page is loaded some jQuery is run to make a
request for all those links to replace them with a 'mailto' link for
that object. Using this mechanism the email address isn't visible in
the initial page load, and it requires JavaScript to be seen - so it
is much harder for spammers to harvest.
Special case: when the uid contains 'email' or 'address', it is clearly
not a real uid.  In that case we do nothing with the IMailable interface
but we try to get a property with this 'uid' from the property sheet
of the portal. Main use case is of course the 'email_from_address',
but you can add other addresses as well, like 'info_email'. If you
want to display the email_from address in for example a static portlet
on any page in the site, use this html code::
<a class="hidden-email email-uid-email_from_address"
rel="email_from_address">
Activate JavaScript to see this address.</a>
You can load the ``test_emailhider`` page to see if this works.
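
The lookup order for such "fake uids" (registry first, with ``plone.``
prepended, then a portal property) can be sketched roughly like this.
This is only an illustrative sketch: the two dicts stand in for Plone's
``plone.registry`` utility and the portal's property sheet, and
``resolve_fake_uid`` is a made-up helper, not part of the package::

```python
# Minimal sketch of the "fake uid" lookup order described above.
# The dicts are stand-ins for Plone's registry and the portal root's
# property sheet; they are not real Plone APIs.

def resolve_fake_uid(uid, registry, portal_properties):
    """Resolve a non-uid like 'email_from_address' to an email string."""
    # The registry is tried first, with 'plone.' prepended.
    email = registry.get('plone.' + uid)
    if email:
        return email
    # Fall back to a property on the portal root (Plone 4 style).
    return portal_properties.get(uid)

registry = {'plone.email_from_address': 'info@example.org'}
portal_properties = {'info_email': 'helpdesk@example.org'}

print(resolve_fake_uid('email_from_address', registry, portal_properties))
print(resolve_fake_uid('info_email', registry, portal_properties))
```

When neither place defines the address, the real view logs a warning and
leaves the placeholder empty, as described above.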
Instructions for your own package
---------------------------------
What do you need to do if you want to use this in your own package,
for your own content type?
First you need to make your content type adaptable to the IMailable
interface, either directly or via an adapter.
If your content type already has a ``UID`` method (like all Archetypes
content types) and an ``email`` attribute, you can use some zcml like
this::
<class class=".content.MyContentType">
<implements interface="zest.emailhider.interfaces.IMailable" />
</class>
If not, then you need to register an adapter for your content type
that has this method and attribute. For example something like this::
from zope.component import adapts
from zope.interface import implements
from zest.emailhider.interfaces import IMailable
from your.package.interfaces import IMyContentType
class MailableAdapter(object):
adapts(IMyContentType)
implements(IMailable)
def __init__(self, context):
self.context = context
def UID(self):
return self.context.my_special_uid_attribute
@property
def email(self):
return self.context.getSomeContactAddress()
Second, in the page template of your content type you need to add code
to show the placeholder text instead of the real email address::
<span>For more information contact us via email:</span>
<span tal:replace="structure context/@@emailhider" />
Note that if you want this to still work when zest.emailhider is not
installed, you can use this code instead::
<span tal:replace="structure context/@@emailhider|context/email" />
This shows the unprotected plain text email when zest.emailhider is
not available. When you are using zest.emailhider 2.6 or higher this
works a bit better, as we have introduced an own browser layer: the
@@emailhider page is only available when zest.emailhider is actually
installed in the Plone Site. This also makes it safe to use
zest.emailhider when you have more than one Plone Site in a single
Zope instance and want emailhider to only be used in one them.
Note that the generated code in the template is very small, so you
can also look at the page template in zest.emailhider and copy some
code from there and change it to your own needs. As long as your
objects can be found by UID in the uid_catalog and your content type
can be adapted to IMailable to get the email attribute, it should all
work fine.
Note on KSS usage in older releases
-----------------------------------
Older releases (until and including 1.3) used KSS instead of jQuery.
As our functionality should of course also work for anonymous users,
we had to make KSS publicly accessible. So all javascript that was
needed for KSS was loaded for anonymous users as well.
We cannot undo that automatically, as the package has no way of
knowing if the same change was needed by some other package or was
done for other valid reasons by a Manager. So you should check the
javascript registry in the ZMI and see if this needs to be undone so
anonymous users no longer get the kss javascripts as they no longer
need that.
For reference, this is the normal line in the Condition field of
``++resource++kukit.js`` (all on one line)::
python: not
here.restrictedTraverse('@@plone_portal_state').anonymous() and
here.restrictedTraverse('@@kss_devel_mode').isoff()
and this is the normal line in the Condition field of
``++resource++kukit-devel.js`` (all on one line)::
python: not
here.restrictedTraverse('@@plone_portal_state').anonymous() and
here.restrictedTraverse('@@kss_devel_mode').ison()
Compatibility
-------------
Version 3.0 should work on Plone 4.1, 4.2, 4.3, 5.0.
For older Plone versions, please stick to the 2.x line. Latest
release as of writing is 2.7.
Note that on Plone 5.0 we are not completely modern: we register our
css and javascript in the old portal tools, not in the new resource
registry. So it ends up in the Plone legacy bundle.
| zest.emailhider | /zest.emailhider-3.1.3.tar.gz/zest.emailhider-3.1.3/README.rst | README.rst |
from jquery.pyproxy.plone import jquery, JQueryProxy
from plone.registry.interfaces import IRegistry
from Products.CMFCore.utils import getToolByName
from Products.Five import BrowserView
from zest.emailhider.interfaces import IMailable
from zope.component import getMultiAdapter
from zope.component import getUtility
import logging
logger = logging.getLogger('zest.emailhider')
class EmailHider(BrowserView):
def UID(self):
mailable = IMailable(self.context, None)
if mailable is None:
return
return mailable.UID()
def email(self):
mailable = IMailable(self.context, None)
if mailable is None:
return
return mailable.email
class JqEmailHider(BrowserView):
"""jQuery view for revealing the email address of an object.
The object is given by a uid.
Objects should have a simple adapter to the IMailable interface,
so we can ask this adapter for an 'email' attribute.
Special case: when the uid contains 'email' or 'address' it is
    clearly not a real uid.  In that case we do nothing with the
IMailable interface but we try to get a property with this 'uid'
from the property sheet of the portal. Main use case is of course
the 'email_from_address', but you can add other addresses as
well, like 'info_email'.
"""
@jquery
def reveal_email(self):
# single uid
uids = self.request.form.get('uid', None)
if not uids:
# With multiple uids the name looks a bit weird.
uids = self.request.form.get('uid[]', None)
if not uids:
# Nothing to do. Strange.
# Well, happens when a bot finds this url interesting.
logger.warning('No uids found in request.')
# Return an answer anyway, otherwise you get errors in the
# javascript console.
return JQueryProxy()
if isinstance(uids, basestring):
uids = [uids]
jq = JQueryProxy()
for uid in uids:
jq = self.handle_single_email(uid, jq)
return jq
def handle_single_email(self, uid, jq):
email = None
if 'email' in uid or 'address' in uid:
# This is definitely not a real uid. Try to get it from
# the registry or as a portal property. Best example:
# email_from_address.
registry = getUtility(IRegistry)
# Mostly Plone 5 and higher, although you can add
# plone.email_from_address or plone.some_email in the
# registry yourself on earlier Plone versions.
email = registry.get('plone.' + uid, '')
if email:
logger.debug("From registry: plone.%s=%s", uid, email)
else:
# Backwards compatibility for Plone 4.
portal_state = getMultiAdapter((self.context, self.request),
name=u'plone_portal_state')
portal = portal_state.portal()
email = portal.getProperty(uid)
if email:
logger.debug("From portal property: %s=%s", uid, email)
else:
logger.warn("No registry entry plone.%s or portal "
"property %s.", uid, uid)
else:
email = self.find_email_for_object_by_uid(uid)
if email:
email_link = u'<a href="mailto:%s">%s</a>' % (email, email)
else:
logger.debug("No email found for uid %s", uid)
# No sense in leaving the default text of 'Activate
# JavaScript to see this address.'
# We could add a <!-- comment --> but jQuery ignores this.
email_link = u''
jq('.email-uid-%s' % uid).replaceWith(email_link)
return jq
def find_email_for_object_by_uid(self, uid):
# Get catalog only once per request.
catalog = getattr(self, '_uid_catalog', None)
if catalog is None:
catalog = getToolByName(self.context, 'uid_catalog')
self._uid_catalog = catalog
# Find object by uid
brains = catalog(UID=uid)
if len(brains) != 1:
logger.warn("No brains for uid %s", uid)
return
target = brains[0].getObject()
mailable = IMailable(target, None)
if mailable is None:
logger.warn("Uid %s not mailable.", uid)
return
        return mailable.email
Changelog
=========
2.0.0 (2019-04-26)
------------------
- Added Dexterity support in a ``[dexterity]`` extra requirement in ``setup.py``.
This has ``plone.behavior`` and ``plone.dexterity`` as dependencies.
[maurits]
- Moved Archetypes support to an ``[archetypes]`` extra requirement in ``setup.py``.
This has ``archetypes.schemaextender`` as dependency.
[maurits]
- Register the default adapter only for Archetypes base content, instead of everything.
[maurits]
- Test only with Python 2.7 and Plone 4.3.
[maurits]
1.0 (2011-11-24)
----------------
- Initial release
[maurits]
Introduction
============
This is an extra package for `Products.PloneGlossary <https://pypi.org/project/Products.PloneGlossary>`_.
It adds a field ``highlight`` to content types in your site.
With that field you can specify for a page whether terms on it should be highlighted.
Values can be yes, no, or use the value of the parent (which is the default).
Support for Archetypes has been there since the first release (with ``archetypes.schemaextender``).
Support for Dexterity has been there since release 2.0.0 (with ``plone.behavior`` and ``plone.dexterity``).
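
The "use the value of the parent" option means the effective setting is
found by walking up the containment chain until an explicit yes or no is
met.  A rough sketch of that resolution, with made-up ``Node`` objects
standing in for content items (the real package reads the field through
an ``IOptionalHighLight`` adapter, and the fallback default here is an
assumption)::

```python
# Rough sketch of how a 'use parent' value resolves up the chain.
# Node is a hypothetical stand-in for content objects.
YES, NO, PARENT = 'yes', 'no', 'parent'

class Node:
    def __init__(self, highlight=PARENT, parent=None):
        self.highlight = highlight
        self.parent = parent

def effective_highlight(obj, default=YES):
    # Walk up until an explicit yes/no is found.
    while obj is not None:
        if obj.highlight != PARENT:
            return obj.highlight
        obj = obj.parent
    return default  # nothing set anywhere up the chain (assumed default)

root = Node(highlight=NO)
folder = Node(parent=root)           # inherits 'no' from root
page = Node(highlight=YES, parent=folder)

print(effective_highlight(folder))
print(effective_highlight(page))
```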
Installation
============
- Add it to the eggs option of the zope instance in your buildout.
- When you want Archetypes support, please use ``zest.ploneglossaryhighlight[archetypes]``.
- When you want Dexterity support, please use ``zest.ploneglossaryhighlight[dexterity]``.
- Or use both: ``zest.ploneglossaryhighlight[archetypes, dexterity]``.
- Run buildout and restart your Zope instance.
- Go to the Add-ons control panel in Plone.
- Install PloneGlossary.
- Install zest.ploneglossaryhighlight.
Compatibility
=============
- Requires Products.PloneGlossary 1.5.0 or later. Tested with latest 1.7.3.
- Tested with Plone 4.3 on Python 2.7. Current latest release of PloneGlossary does not work on Plone 5.
| zest.ploneglossaryhighlight | /zest.ploneglossaryhighlight-2.0.0.tar.gz/zest.ploneglossaryhighlight-2.0.0/README.rst | README.rst |
from Products.Archetypes.public import BooleanField
from Products.Archetypes.public import StringField
from Products.Archetypes.public import SelectionWidget
from Products.Archetypes.atapi import DisplayList
from Products.Archetypes.interfaces import IBaseContent
from archetypes.schemaextender.field import ExtensionField
from archetypes.schemaextender.interfaces import (
ISchemaExtender,
IBrowserLayerAwareExtender,
)
from zope.component import adapter
from zope.interface import implementer
from zope.interface import Interface
from Products.PloneGlossary.interfaces import IOptionalHighLight
from zest.ploneglossaryhighlight.adapters import BaseOptionalHighLight
from zest.ploneglossaryhighlight.adapters import YES
from zest.ploneglossaryhighlight.adapters import NO
from zest.ploneglossaryhighlight.adapters import PARENT
# Our add-on browserlayer:
from zest.ploneglossaryhighlight.interfaces import IOptionalHighLightLayer
from zest.ploneglossaryhighlight import ZestPloneGlossaryHighlightMessageFactory as _
HIGHLIGHT_VOCAB = DisplayList(
data=[
(YES, _(u"Yes")),
(NO, _(u"No")),
(PARENT, _(u"Use setting of parent folder")),
]
)
class MyBooleanField(ExtensionField, BooleanField):
"""A extension boolean field."""
class MyStringField(ExtensionField, StringField):
"""A extension string field."""
@implementer(ISchemaExtender, IBrowserLayerAwareExtender)
@adapter(Interface)
class HighLightExtender(object):
"""Schema extender that makes highlighting the known terms
optional per object.
"""
# Don't do schema extending unless our add-on product is installed
# on this Plone site.
layer = IOptionalHighLightLayer
fields = [
MyStringField(
"highlight",
schemata="settings",
default=PARENT,
vocabulary=HIGHLIGHT_VOCAB,
widget=SelectionWidget(
label=_(
(
u"This page, or pages contained in this folder, "
u"wants to highlight known terms from the glossary."
)
)
),
)
]
def __init__(self, context):
self.context = context
def getFields(self):
"""Get the fields.
We could add a check like this, to avoid showing the fields
unnecessarily, but then we should allow it for folderish items
as well, so never mind.
from Products.CMFCore.utils import getToolByName
gtool = getToolByName(self.context, 'portal_glossary', None)
if gtool is None:
return []
if not self.context.portal_type in gtool.getAllowedPortalTypes():
return []
"""
return self.fields
@implementer(IOptionalHighLight)
@adapter(IBaseContent)
class OptionalHighLight(BaseOptionalHighLight):
"""Adapter that looks up the 'highlight' field on an AT object.
"""
def __init__(self, context):
self.context = context
@property
def highlight(self):
field = self.context.getField("highlight")
if field:
            return field.get(self.context)
from optparse import OptionParser
from os.path import join
from pythongettext.msgfmt import Msgfmt
import logging
import os
import sys
try:
from zest.releaser.utils import ask
ask # pyflakes
except ImportError:
# No zest.releaser available
ask = None
logger = logging.getLogger("zest.pocompile")
def find_lc_messages(path, dry_run=False):
"""Find 'LC_MESSAGES' directories and compile all .po files in it.
Accepts an optional dry_run argument and passes this on.
Adapted from collective.releaser.
"""
for directory in os.listdir(path):
dir_path = join(path, directory)
if not os.path.isdir(dir_path):
continue
if directory == "LC_MESSAGES":
compile_po(dir_path, dry_run=dry_run)
else:
find_lc_messages(dir_path, dry_run=dry_run)
def compile_po(path, dry_run=False):
"""path is a LC_MESSAGES directory. Compile *.po into *.mo.
Accepts an optional dry_run argument. When True, only reports the
found po files, without compiling them.
Adapted from collective.releaser.
"""
for domain_file in os.listdir(path):
if domain_file.endswith(".po"):
file_path = join(path, domain_file)
if dry_run:
                logger.info("Found .po file: %s", file_path)
continue
            logger.info("Building .mo for %s", file_path)
mo_file = join(path, "%s.mo" % domain_file[:-3])
mo_content = Msgfmt(file_path, name=file_path).get()
            with open(mo_file, "wb") as mo:
                mo.write(mo_content)
def compile_in_tag(data):
"""Compile all po files in the tag.
We expect to get a dictionary from zest.releaser, with a tagdir.
When an exception occurs during finding/compiling, and we were
indeed called as an entry point of zest.releaser, we ask the user
what to do: continue with the release or not.
"""
tagdir = data.get("tagdir")
if not tagdir:
        logger.warning("Aborted compiling of po files: no tagdir found in data.")
return
logger.info("Finding and compiling po files in %s", tagdir)
try:
find_lc_messages(tagdir)
except Exception:
        logger.warning("Finding and compiling aborted after exception.", exc_info=True)
if data and ask:
# We were called as an entry point of zest.releaser.
if not ask(
"Error compiling po file. This could mean some "
"languages have no working translations. Do you want "
"to continue with the release?"
):
sys.exit(1)
def main(*args, **kwargs):
"""Run as stand-alone program."""
logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
# Parsing arguments. Note that the parse_args call might already
# stop the program, displaying a help or usage message.
usage = "usage: %prog [options] <directories>"
parser = OptionParser(usage=usage, description=__doc__)
parser.add_option(
"-n",
"--dry-run",
action="store_true",
dest="dry_run",
help="Only report found po files, without compiling them.",
)
options, directories = parser.parse_args()
if not directories:
# Run only in the current working directory.
directories = [os.getcwd()]
# This might very well raise an exception like 'Not a directory'
# and that is fine.
for directory in directories:
logger.info("Finding and compiling po files in %s", directory)
        find_lc_messages(directory, options.dry_run)
Recipe to install MySQL
=======================
Code repository:
http://svn.plone.org/svn/collective/buildout/zest.recipe.mysql
It can be a problem getting both MySQL and its Python binding to install
reliably on everyone's development machine. The problems we encountered were
mostly on the Mac, as MySQL gets installed into different locations by the
official MySQL distribution, MacPorts, and Fink. And on the Mac, the Python
binding needs a patch.
One solution: fix the underlying stuff and make MySQL and mysql-python a
dependency that has to be handled by the OS. Alternative solution: this
'zest.recipe.mysql' recipe. **Warning: rough edges**. It is not a properly
documented and tested package as it originally was a quick
need-to-get-something-working-now job.
Here's a quick piece of buildout to get you started if you want to test
it::
[buildout]
parts =
mysql
...
[mysql]
recipe = zest.recipe.mysql
# Note that these urls usually stop working after a while... thanks...
mysql-url = http://dev.mysql.com/get/Downloads/MySQL-5.0/mysql-5.0.86.tar.gz/from/http://mysql.proserve.nl/
mysql-python-url = http://surfnet.dl.sourceforge.net/sourceforge/mysql-python/MySQL-python-1.2.2.tar.gz
[plone]
...
eggs =
...
${mysql:eggs}
* This ought to download and install mysql and mysql-python.
* Mysql and mysql-python end up in '.../parts/mysql'.
* mysql-python is installed as a development egg by buildout.
* The socket and the database are in '.../var/mysql'.
Options
-------
``mysql-url`` -- Download URL for the target version of MySQL (required).
``mysql-python-url`` -- Download URL for the target version of the
Python wrappers (required).
``python`` -- path to the Python against which to build the wrappers
(optional, currently unused).
``config-skeleton`` -- path to a file used as a template for generating
the instance-local ``my.cnf`` file (optional). If this option is not
supplied, no configuration will be generated. If passed, the text from
the indicated file will be used as a Python string template, with the
following keys passed in the mapping to the ``%`` operator:
- ``MYSQLD_INSTALL``
- ``MYSQLD_SOCKET``
- ``MYSQLD_PIDFILE``
- ``DATADIR``
``with-innodb`` -- if set to a non-false value, will pass the
``--with-innodb`` option to the configure command.
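For illustration, here is a minimal sketch of how the recipe expands such a
skeleton: the file's text is used as a Python ``%`` string template and
filled in with the four keys above. The skeleton text and all paths below
are made-up examples, not values the recipe ships with:

```python
# Hypothetical skeleton text; a real one lives in the file named by the
# config-skeleton option in your buildout.
skeleton = (
    "[mysqld]\n"
    "basedir = %(MYSQLD_INSTALL)s\n"
    "socket = %(MYSQLD_SOCKET)s\n"
    "pid-file = %(MYSQLD_PIDFILE)s\n"
    "datadir = %(DATADIR)s\n"
)

# Example values; the recipe derives these from the buildout directory.
values = {
    "MYSQLD_INSTALL": "/home/me/buildout/parts/mysql/install",
    "MYSQLD_SOCKET": "/home/me/buildout/var/mysql/mysqld.sock",
    "MYSQLD_PIDFILE": "/home/me/buildout/var/mysql/mysqld.pid",
    "DATADIR": "/home/me/buildout/var/mysql",
}

# The recipe performs exactly this kind of substitution.
my_cnf = skeleton % values
print(my_cnf)
```

The generated text is written to ``my.cnf`` inside the part's install
directory, and the generated script wrappers pass it to the MySQL binaries
via ``--defaults-file``.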
"""Recipe mysql"""
import logging
import os
import platform
import setuptools
import shutil
import string
import subprocess
import tempfile
import urllib2
import urlparse
import zc.buildout
logger = logging.getLogger('zest.recipe.mysql')
class Recipe(object):
"""zc.buildout recipe for building MySQL and the Python wrappers.
For options, see the README.txt file.
"""
def __init__(self, buildout, name, options):
self.name = name
self.options = options
self.buildout = buildout
self.mysql_url = options.get("mysql-url", None)
self.mysql_python_url = options.get("mysql-python-url", None)
if not self.mysql_url or not self.mysql_python_url:
logger.error("You need to specify the URLs to "
"download MySQL (now %r) and mysql-python (now %r)",
self.mysql_url, self.mysql_python_url)
raise zc.buildout.UserError("No download location provided")
pythonexec = options.get('python', buildout['buildout']['python'])
self.python = buildout[pythonexec]['executable']
logger.debug("Python used: %s", self.python)
location = options["location"] = os.path.join(
buildout["buildout"]["parts-directory"], self.name)
self.options["source-location"] = os.path.join(location, "source")
self.options["mysqldb-location"] = os.path.join(location,
"mysql-python")
self.options["eggs-directory"] = buildout['buildout']['eggs-directory']
self.options["eggs"] = 'MySQL_python'
install_dir = os.path.join(location, "install")
self.options["binary-location"] = install_dir
bindir = buildout["buildout"]["bin-directory"]
self.options["daemon"] = os.path.join(bindir, "mysqld_safe")
self.config_skeleton = options.get("config-skeleton", None)
if self.config_skeleton is not None:
self.config_file = os.path.join(install_dir, 'my.cnf')
else:
self.config_file = None
with_innodb = options.get('with-innodb', False)
if with_innodb:
if with_innodb.strip().lower() in ['false', 'f', 'no', 'n', '0']:
with_innodb = False
else:
with_innodb = True
self.with_innodb = with_innodb
# Set some default options
buildout['buildout'].setdefault('download-directory',
os.path.join(buildout['buildout']['directory'], 'downloads'))
def install(self):
"""Installer"""
location = self.options["location"]
if not os.path.exists(location):
os.mkdir(location)
self.download()
self.compileMySQL()
self.generateConfigFile()
self.addScriptWrappers()
if '10.5' in platform.mac_ver()[0]:
self.patchForOSXLeopard()
self.compileMySQLPython()
#self.update_find_links()
self.install_python_mysql_as_develop()
return []
def download(self):
logger.debug("Downloading/unpacking MySQL and mysql-python tarball.")
options = self.options
# Make sure the python lib is always fresh
if os.path.exists(options["mysqldb-location"]):
shutil.rmtree(options["mysqldb-location"])
self._download(self.mysql_url, options["source-location"])
self._download(self.mysql_python_url, options["mysqldb-location"])
def _download(self, url, location):
_, _, urlpath, _, _, _ = urlparse.urlparse(url)
tmp = tempfile.mkdtemp("buildout-" + self.name)
download_dir = self.buildout['buildout']['download-directory']
if not os.path.isdir(download_dir):
os.mkdir(download_dir)
try:
fname = os.path.join(download_dir, urlpath.split("/")[-1])
if not fname.endswith('gz'):
# We are downloading from a mirror like this:
# http://dev.mysql.com/get/Downloads/MySQL-5.0/
# mysql-5.0.51a.tar.gz/from/http://mysql.proserve.nl/
fname = os.path.join(download_dir, urlpath.split("/")[-6])
if not os.path.exists(fname):
logger.info("Downloading %s.", url)
f = open(fname, "wb")
try:
f.write(urllib2.urlopen(url).read())
except:
os.remove(fname)
raise
f.close()
if os.path.exists(location):
logger.debug("We just downloaded a new version, so we "
"remove the target location %s.", location)
shutil.rmtree(location)
if not os.path.exists(location):
# We need to unpack.
logger.info("Unzipping %s.", url)
setuptools.archive_util.unpack_archive(fname, tmp)
files = os.listdir(tmp)
shutil.move(os.path.join(tmp, files[0]), location)
finally:
shutil.rmtree(tmp)
def patchForOSXLeopard(self):
"""Patch _mysql.c on OS X Leopard.
see http://www.davidcramer.net/code/57/mysqldb-on-leopard.html
"""
#Commenting out uint define
mysql_c_filename = os.path.join(self.options["mysqldb-location"],
'_mysql.c')
mysql_c_source = open(mysql_c_filename, 'r').read()
mysql_c_source = mysql_c_source.splitlines()
mysql_c_source.remove('#ifndef uint')
mysql_c_source.remove('#define uint unsigned int')
mysql_c_source.remove('#endif')
open(mysql_c_filename, 'w').write(string.join(mysql_c_source, '\n'))
def compileMySQL(self):
os.chdir(self.options["source-location"])
logger.info("Compiling MySQL")
vardir = os.path.join(self.buildout['buildout']['directory'],
'var')
our_vardir = os.path.join(vardir, 'mysql')
socket = os.path.join(vardir, 'mysql.socket')
if not os.path.exists(our_vardir):
os.makedirs(our_vardir)
configure_call = ["./configure",
"--prefix=%s" % self.options["binary-location"],
"--localstatedir=%s" % our_vardir,
"--with-unix-socket-path=%s" % socket]
if self.with_innodb:
configure_call.append("--with-innodb")
assert subprocess.call(configure_call) == 0
# TODO: optional different port number per buildout
# --with-tcp-port=port-number
# Which port to use for MySQL services (default 3306)
assert subprocess.call(["make", "install"]) == 0
def compileMySQLPython(self):
options = self.options
#First make sure MySQLdb finds our freshly install mysql code
site_cfg_filename = os.path.join(options["mysqldb-location"],
'site.cfg')
site_cfg_source = open(site_cfg_filename, 'r').read()
myconfig = ('mysql_config = ' +
options["binary-location"] +
'/bin/mysql_config')
site_cfg_source = site_cfg_source.replace(
'#mysql_config = /usr/local/bin/mysql_config',
myconfig)
if myconfig in site_cfg_source:
logger.debug("Pointed mysql-python at our own mysql copy.")
else:
logger.warn("Failed to point mysql-python at our own mysql copy.")
# Disable Threadsafe
site_cfg_source = site_cfg_source.replace('threadsafe = True',
'threadsafe = False')
open(site_cfg_filename, 'w').write(site_cfg_source)
# Now we can build and install
os.chdir(options["mysqldb-location"])
#logger.info("Compiling MySQLdb")
#dest = options["eggs-directory"]
#if not os.path.exists(dest):
# os.makedirs(dest)
#environ = os.environ
#environ['PYTHONPATH'] = dest
# TODO: just build it and enable it as a develop egg.
#assert subprocess.call([self.python, "setup.py", "install",
# "--install-lib=%s" % dest],
# env = environ) == 0
def install_python_mysql_as_develop(self):
logger.info("Compiling MySQLdb")
zc.buildout.easy_install.develop(
self.options["mysqldb-location"],
self.buildout['buildout']['develop-eggs-directory'])
        logger.info("mysql-python installed as a development egg.")
def update_find_links(self):
# TODO: zap this into oblivion, probably.
dest = self.options["eggs-directory"]
eggfiles = [f for f in os.listdir(dest) if f.endswith('.egg')]
find_links = ''
for eggfile in eggfiles:
find_links += "\n%s" % (os.path.join(dest, eggfile))
for k in self.buildout:
logger.info("Adding the MySQL_python egg to 'find-links' of %s",
k)
original = self.buildout[k].get('find-links', '')
self.buildout[k]['find-links'] = original + find_links
def generateConfigFile(self):
if self.config_skeleton is not None:
bdir = self.buildout['buildout']['directory']
skel = self.config_skeleton
if not os.path.isabs(skel):
skel = os.path.join(bdir, skel)
template = open(skel, "rt").read()
vardir = os.path.join(bdir, 'var', 'mysql')
installed = self.options["binary-location"]
vars = {"MYSQLD_INSTALL": installed,
"MYSQLD_SOCKET": os.path.join(vardir, 'mysqld.sock'),
"MYSQLD_PIDFILE": os.path.join(vardir, 'mysqld.pid'),
"DATADIR": vardir,
}
f = open(self.config_file, "wt")
f.write(template % vars)
f.close()
def addScriptWrappers(self):
bintarget = self.buildout["buildout"]["bin-directory"]
dir = os.path.join(self.options["binary-location"], "bin")
if self.config_file is not None:
defaults = '--defaults-file=%s' % self.config_file
else:
defaults = '--no-defaults'
for file in os.listdir(dir):
logger.info("Adding script wrapper for %s" % file)
wrapped = os.path.join(dir, file)
target = os.path.join(bintarget, file)
f = open(target, "wt")
print >> f, "#!/bin/sh"
if file == 'mysqld_safe':
print >> f, 'exec %s %s "$@" &' % (wrapped, defaults)
else:
print >> f, 'exec %s %s "$@"' % (wrapped, defaults)
f.close()
os.chmod(target, 0755)
# exec mysql_install_db
logger.info("Creating mysql database for access tables")
assert subprocess.call(
[os.path.join(bintarget, "mysql_install_db"),
'--no-defaults']) == 0
logger.info("Stopping mysqld_safe if any are running")
# TODO
logger.info("Starting mysqld_safe")
pid = subprocess.Popen(self.options['daemon']).pid
logger.info("Started mysqld_safe with pid %r" % pid)
# Adding stop script
logger.info("Adding mysql stop script.")
target = os.path.join(bintarget, 'stop-mysql')
f = open(target, "wt")
print >> f, "#!/bin/sh"
vardir = os.path.join(self.buildout['buildout']['directory'],
'var', 'mysql')
print >> f, 'kill `cat %s/*.pid`' % vardir
f.close()
os.chmod(target, 0755)
def update(self):
"""Updater"""
location = self.options["location"]
#self.install_python_mysql_as_develop()
if not os.path.exists(location):
            logger.warn("%s doesn't exist anymore during mysql update. "
                        "Seeing that as a sign to re-run the install.",
                        location)
self.install()
class DatabaseRecipe:
"""docstring for DatabaseRecipe"""
def __init__(self, buildout, name, options):
self.name = name
self.options = options
self.buildout = buildout
self.db = self.options.get('db', '')
self.user = self.options.get('user', 'root')
self.import_file = self.options.get('import-file', '')
if self.db == '':
# TODO: the following two errors don't match.
#logger = logging.getLogger(self.name)
#logger.error("You need to specify the URLs to "
# "download MySQL and mysql-python")
raise zc.buildout.UserError("No database name provided,"
" please specify a db = mydatabase")
def install(self):
#create the database
bindir = self.buildout["buildout"]["bin-directory"]
        subprocess.call([os.path.join(bindir, 'mysql'),
                         '-u', self.user,
                         '-e', 'create database %s' % self.db])
def update(self):
        pass
from zest.releaser import choose
from zest.releaser import pypi
from zest.releaser import utils
from zest.releaser.utils import execute_command
from zest.releaser.utils import read_text_file
from zest.releaser.utils import write_text_file
import logging
import os
import pkg_resources
import re
import sys
logger = logging.getLogger(__name__)
DATE_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")
# Documentation for self.data. You get runtime warnings when something is in
# self.data that is not in this list. Embarrassment-driven documentation! This
# is the data that is available for all commands. Individual commands may add
# or override data. Note that not all data is actually used or filled in by
# all commands.
DATA = {
"commit_msg": "Message template used when committing",
"has_released_header": ("Latest header is for a released version with a date"),
"headings": "Extracted headings from the history file",
"history_encoding": "The detected encoding of the history file",
"history_file": "Filename of history/changelog file (when found)",
"history_header": "Header template used for 1st history header",
"history_insert_line_here": (
"Line number where an extra changelog entry can be inserted."
),
"history_last_release": ("Full text of all history entries of the current release"),
"history_lines": "List with all history file lines (when found)",
"name": "Name of the project being released",
"new_version": "New version to write, possibly with development marker",
"nothing_changed_yet": (
"First line in new changelog section, "
"warn when this is still in there before releasing"
),
"original_version": "Original package version before any changes",
"reporoot": "Root of the version control repository",
"required_changelog_text": (
"Text that must be present in the changelog. Can be a string or a "
'list, for example ["New:", "Fixes:"]. For a list, only one of them '
"needs to be present."
),
"update_history": "Should zest.releaser update the history file?",
"workingdir": "Original working directory",
}
NOTHING_CHANGED_YET = "- Nothing changed yet."
class Basereleaser:
def __init__(self, vcs=None):
os.environ["ZESTRELEASER"] = "We are called from within zest.releaser"
# ^^^ Env variable so called tools can detect us. Don't depend on the
# actual text, just on the variable's name.
if vcs is None:
self.vcs = choose.version_control()
else:
# In a fullrelease, we share the determined vcs between
# prerelease, release and postrelease.
self.vcs = vcs
self.data = {
"name": self.vcs.name,
"nothing_changed_yet": NOTHING_CHANGED_YET,
"reporoot": self.vcs.reporoot,
"workingdir": self.vcs.workingdir,
}
self.setup_cfg = pypi.SetupConfig()
if utils.TESTMODE:
pypirc_old = pkg_resources.resource_filename(
"zest.releaser.tests", "pypirc_old.txt"
)
self.pypiconfig = pypi.PypiConfig(pypirc_old)
self.zest_releaser_config = pypi.ZestReleaserConfig(
pypirc_config_filename=pypirc_old
)
else:
self.pypiconfig = pypi.PypiConfig()
self.zest_releaser_config = pypi.ZestReleaserConfig()
if self.zest_releaser_config.no_input():
utils.AUTO_RESPONSE = True
@property
def history_format(self):
config_value = self.zest_releaser_config.history_format()
history_file = self.data.get("history_file") or ""
return utils.history_format(config_value, history_file)
@property
def underline_char(self):
underline_char = "-"
if self.history_format == "md":
underline_char = ""
return underline_char
def _grab_version(self, initial=False):
"""Just grab the version.
This may be overridden to get a different version, like in prerelease.
        The 'initial' parameter may be used to distinguish between
        initially getting the current version, and later getting
a suggested version or asking the user.
"""
version = self.vcs.version
if not version:
logger.critical("No version detected, so we can't do anything.")
sys.exit(1)
self.data["version"] = version
def _grab_history(self):
"""Calculate the needed history/changelog changes
Every history heading looks like '1.0 b4 (1972-12-25)'. Extract them,
        check if the first one matches the version and whether it has the
current date.
"""
self.data["history_lines"] = []
self.data["history_file"] = None
self.data["history_encoding"] = None
self.data["headings"] = []
self.data["history_last_release"] = ""
self.data["history_insert_line_here"] = 0
default_location = self.zest_releaser_config.history_file()
history_file = self.vcs.history_file(location=default_location)
self.data["history_file"] = history_file
if not history_file:
logger.warning("No history file found")
return
logger.debug("Checking %s", history_file)
history_lines, history_encoding = read_text_file(
history_file,
)
headings = utils.extract_headings_from_history(history_lines)
if not headings:
            logger.warning(
                "No detectable version heading in the history file %s", history_file
            )
return
self.data["history_lines"] = history_lines
self.data["history_encoding"] = history_encoding
self.data["headings"] = headings
# Grab last header.
start = headings[0]["line"]
if len(headings) > 1:
# Include the next header plus underline, as this is nice
# to show in the history_last_release.
end = headings[1]["line"] + 2
else:
end = len(history_lines)
history_last_release = "\n".join(history_lines[start:end])
self.data["history_last_release"] = history_last_release
# Does the latest release header have a release date in it?
# This is useful to know, especially when using tools like towncrier
# to handle the changelog.
released = DATE_PATTERN.match(headings[0]["date"])
self.data["has_released_header"] = released
# Add line number where an extra changelog entry can be inserted. Can
# be useful for entry points. 'start' is the header, +1 is the
# underline, +2 is probably an empty line, so then we should take +3.
# Or rather: the first non-empty line.
insert = start + 2
while insert < end:
if history_lines[insert].strip():
break
insert += 1
self.data["history_insert_line_here"] = insert
def _change_header(self, add=False):
"""Change the latest header.
Change the version number and the release date or development status.
@add:
- False: edit current header (prerelease or bumpversion)
- True: add header (postrelease)
        But in some cases you may not want to change a header,
especially when towncrier is used to handle the history.
Otherwise we could be changing a date of an already existing release.
What we want to avoid:
- change a.b.c (1999-12-32) to x.y.z (unreleased) [bumpversion]
- change a.b.c (1999-12-32) to x.y.z (today) [prerelease]
But this is fine:
- change a.b.c (unreleased) to x.y.z (unreleased) [bumpversion]
- change a.b.c (unreleased) to a.b.c (today) [prerelease]
- change a.b.c (unreleased) to x.y.z (today) [prerelease]
"""
if self.data["history_file"] is None:
return
good_heading = self.data["history_header"] % self.data
if self.history_format == "md" and not good_heading.startswith("#"):
good_heading = f"## {good_heading}"
if not add and self.data.get("has_released_header"):
# So we are editing a line, but it already has a release date.
logger.warning(
"Refused to edit the first history heading, because it looks "
"to be from an already released version with a date. "
"Would have wanted to set this header: %s",
good_heading,
)
return
# ^^^ history_header is a string with %(abc)s replacements.
headings = self.data["headings"]
history_lines = self.data["history_lines"]
previous = ""
underline_char = self.underline_char
empty = False
if not history_lines:
# Remember that we were empty to start with.
empty = True
# prepare header line
history_lines.append("")
if len(history_lines) <= 1 and underline_char:
# prepare underline
history_lines.append(underline_char)
if not headings:
# Mock a heading
headings = [{"line": 0}]
inject_location = 0
first = headings[0]
inject_location = first["line"]
underline_line = first["line"] + 1
try:
underline_char = history_lines[underline_line][0]
except IndexError:
logger.debug("No character on line below header.")
underline_char = self.underline_char
previous = history_lines[inject_location]
if add:
underline = underline_char * len(good_heading) if underline_char else ""
inject = [
good_heading,
underline,
"",
self.data["nothing_changed_yet"],
"",
"",
]
if empty:
history_lines = []
history_lines[inject_location:inject_location] = inject
else:
# edit current line
history_lines[inject_location] = good_heading
logger.debug("Set heading from '%s' to '%s'.", previous, good_heading)
if self.history_format == "rst":
history_lines[underline_line] = utils.fix_rst_heading(
heading=good_heading, below=history_lines[underline_line]
)
logger.debug(
"Set line below heading to '%s'", history_lines[underline_line]
)
# Setting history_lines is not needed, except when we have replaced the
# original instead of changing it. So just set it.
self.data["history_lines"] = history_lines
def _insert_changelog_entry(self, message):
"""Insert changelog entry."""
if self.data["history_file"] is None:
return
insert = self.data["history_insert_line_here"]
# Hopefully the inserted data matches the existing encoding.
orig_encoding = self.data["history_encoding"]
try:
message.encode(orig_encoding)
except UnicodeEncodeError:
logger.warning(
"Changelog entry does not have the same encoding (%s) as "
"the existing file. This might give problems.",
orig_encoding,
)
fallback_encoding = "utf-8"
try:
# Note: we do not change the message at this point,
# we only check if an encoding can work.
message.encode(fallback_encoding)
except UnicodeEncodeError:
logger.warning(
"Tried %s for changelog entry, but that didn't work. "
"It will probably fail soon afterwards.",
fallback_encoding,
)
else:
logger.debug("Forcing new history_encoding %s", fallback_encoding)
self.data["history_encoding"] = fallback_encoding
lines = []
prefix = utils.get_list_item(self.data["history_lines"])
for index, line in enumerate(message.splitlines()):
if index == 0:
line = f"{prefix} {line}"
else:
line = "{} {}".format(" " * len(prefix), line)
lines.append(line)
lines.append("")
self.data["history_lines"][insert:insert] = lines
def _check_nothing_changed(self):
"""Look for 'Nothing changed yet' under the latest header.
Not nice if this text ends up in the changelog. Did nothing
happen?
"""
if self.data["history_file"] is None:
return
nothing_yet = self.data["nothing_changed_yet"]
if nothing_yet not in self.data["history_last_release"]:
return
# We want quotes around the text, but also want to avoid
# printing text with a u'unicode marker' in front...
pretty_nothing_changed = f'"{nothing_yet}"'
if not utils.ask(
"WARNING: Changelog contains {}. Are you sure you "
"want to release?".format(pretty_nothing_changed),
default=False,
):
logger.info(
"You can use the 'lasttaglog' command to "
"see the commits since the last tag."
)
sys.exit(1)
def _check_required(self):
"""Look for required text under the latest header.
This can be a list, in which case only one item needs to be
there.
"""
if self.data["history_file"] is None:
return
required = self.data.get("required_changelog_text")
if not required:
return
if isinstance(required, str):
required = [required]
history_last_release = self.data["history_last_release"]
for text in required:
if text in history_last_release:
# Found it, all is fine.
return
pretty_required = '"{}"'.format('", "'.join(required))
if not utils.ask(
"WARNING: Changelog should contain at least one of "
"these required strings: {}. Are you sure you "
"want to release?".format(pretty_required),
default=False,
):
sys.exit(1)
def _write_version(self):
if self.data["new_version"] != self.data["original_version"]:
# self.vcs.version writes it to the file it got the version from.
self.vcs.version = self.data["new_version"]
logger.info(
"Changed version from %s to %s",
self.data["original_version"],
self.data["new_version"],
)
def _write_history(self):
"""Write previously-calculated history lines back to the file"""
if self.data["history_file"] is None:
return
contents = "\n".join(self.data["history_lines"])
history = self.data["history_file"]
write_text_file(history, contents, encoding=self.data["history_encoding"])
logger.info("History file %s updated.", history)
def _diff_and_commit(self, commit_msg=""):
"""Show diff and offer commit.
commit_msg is optional. If it is not there, we get the
commit_msg from self.data. That is the usual mode and is at
least used in prerelease and postrelease. If it is not there
either, we ask.
"""
if not commit_msg:
if "commit_msg" not in self.data:
# Ask until we get a non-empty commit message.
while not commit_msg:
commit_msg = utils.get_input("What is the commit message? ")
else:
commit_msg = self.data["commit_msg"]
diff_cmd = self.vcs.cmd_diff()
diff = execute_command(diff_cmd)
logger.info("The '%s':\n\n%s\n", utils.format_command(diff_cmd), diff)
if utils.ask("OK to commit this"):
msg = commit_msg % self.data
msg = self.update_commit_message(msg)
commit_cmd = self.vcs.cmd_commit(msg)
commit = execute_command(commit_cmd)
logger.info(commit)
def _push(self):
"""Offer to push changes, if needed."""
push_cmds = self.vcs.push_commands()
if not push_cmds:
return
        default_answer = self.zest_releaser_config.push_changes()
        if utils.ask("OK to push commits to the server?", default=default_answer):
for push_cmd in push_cmds:
output = execute_command(push_cmd)
logger.info(output)
def _run_hooks(self, when):
which_releaser = self.__class__.__name__.lower()
utils.run_hooks(self.zest_releaser_config, which_releaser, when, self.data)
def run(self):
self._run_hooks("before")
self.prepare()
self._run_hooks("middle")
self.execute()
self._run_hooks("after")
def prepare(self):
raise NotImplementedError()
def execute(self):
raise NotImplementedError()
def update_commit_message(self, msg):
prefix_message = self.zest_releaser_config.prefix_message()
extra_message = self.zest_releaser_config.extra_message()
if prefix_message:
msg = "%s %s" % (prefix_message, msg)
if extra_message:
msg += "\n\n%s" % extra_message
        return msg
from pkg_resources import parse_version
from zest.releaser import baserelease
from zest.releaser import utils
import logging
import sys
logger = logging.getLogger(__name__)
HISTORY_HEADER = "%(clean_new_version)s (unreleased)"
COMMIT_MSG = "Bumped version for %(release)s release."
# Documentation for self.data. You get runtime warnings when something is in
# self.data that is not in this list. Embarrassment-driven documentation!
DATA = baserelease.DATA.copy()
DATA.update(
{
"breaking": "True if we handle a breaking (major) change",
"clean_new_version": "Clean new version (say 1.1)",
"feature": "True if we handle a feature (minor) change",
"final": "True if we handle a final release",
"release": "Type of release: breaking, feature, normal, final",
}
)
class BumpVersion(baserelease.Basereleaser):
"""Add a changelog entry.
self.data holds data that can optionally be changed by plugins.
"""
def __init__(self, vcs=None, breaking=False, feature=False, final=False):
baserelease.Basereleaser.__init__(self, vcs=vcs)
# Prepare some defaults for potential overriding.
if breaking:
release = "breaking"
elif feature:
release = "feature"
elif final:
release = "final"
else:
release = "normal"
self.data.update(
dict(
breaking=breaking,
commit_msg=COMMIT_MSG,
feature=feature,
final=final,
history_header=HISTORY_HEADER,
release=release,
update_history=True,
)
)
def prepare(self):
"""Prepare self.data by asking about new dev version"""
print("Checking version bump for {} release.".format(self.data["release"]))
if not utils.sanity_check(self.vcs):
logger.critical("Sanity check failed.")
sys.exit(1)
self._grab_version(initial=True)
self._grab_history()
# Grab and set new version.
self._grab_version()
def execute(self):
"""Make the changes and offer a commit"""
if self.data["update_history"]:
self._change_header()
self._write_version()
if self.data["update_history"]:
self._write_history()
self._diff_and_commit()
def _grab_version(self, initial=False):
"""Grab the version.
When initial is False, ask the user for a non-development
version. When initial is True, grab the current suggestion.
"""
original_version = self.vcs.version
logger.debug("Extracted version: %s", original_version)
if not original_version:
logger.critical("No version found.")
sys.exit(1)
suggestion = new_version = self.data.get("new_version")
if not new_version:
# Get a suggestion.
breaking = self.data["breaking"]
feature = self.data["feature"]
final = self.data["final"]
# Compare the suggestion for the last tag with the current version.
# The wanted version bump may already have been done.
last_tag_version = utils.get_last_tag(self.vcs, allow_missing=True)
if last_tag_version is None:
print("No tag found. No version bump needed.")
sys.exit(0)
else:
print(f"Last tag: {last_tag_version}")
print(f"Current version: {original_version}")
params = dict(
feature=feature,
breaking=breaking,
final=final,
less_zeroes=self.zest_releaser_config.less_zeroes(),
levels=self.zest_releaser_config.version_levels(),
dev_marker=self.zest_releaser_config.development_marker(),
)
if final:
minimum_version = utils.suggest_version(original_version, **params)
if minimum_version is None:
print("No version bump needed.")
sys.exit(0)
else:
minimum_version = utils.suggest_version(last_tag_version, **params)
if parse_version(minimum_version) <= parse_version(
utils.cleanup_version(original_version)
):
print("No version bump needed.")
sys.exit(0)
# A bump is needed. Get suggestion for next version.
suggestion = utils.suggest_version(original_version, **params)
if not initial:
new_version = utils.ask_version("Enter version", default=suggestion)
if not new_version:
new_version = suggestion
self.data["original_version"] = original_version
self.data["new_version"] = new_version
self.data["clean_new_version"] = utils.cleanup_version(new_version)
def datacheck(data):
"""Entrypoint: ensure that the data dict is fully documented"""
utils.is_data_documented(data, documentation=DATA)
def main():
parser = utils.base_option_parser()
parser.add_argument(
"--feature",
action="store_true",
dest="feature",
default=False,
help="Bump for feature release (increase minor version)",
)
parser.add_argument(
"--breaking",
action="store_true",
dest="breaking",
default=False,
help="Bump for breaking release (increase major version)",
)
parser.add_argument(
"--final",
action="store_true",
dest="final",
default=False,
help="Bump for final release (remove alpha/beta/rc from version)",
)
options = utils.parse_options(parser)
# How many options are enabled?
if len(list(filter(None, [options.breaking, options.feature, options.final]))) > 1:
print("ERROR: Only enable one option of breaking/feature/final.")
sys.exit(1)
utils.configure_logging()
bumpversion = BumpVersion(
breaking=options.breaking,
feature=options.feature,
final=options.final,
)
bumpversion.run()


# ===== end of zest/releaser/bumpversion.py (package: zest.releaser) =====
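The bump check in `_grab_version` compares the suggested minimum version against the current one with `parse_version`. For plain `X.Y.Z` numbers that comparison reduces to a numeric tuple compare, which the following sketch reproduces (simplified: no dev/rc/post markers, unlike the real `packaging` parser):

```python
def as_tuple(version):
    # Parse a plain X.Y.Z version into a comparable tuple of ints.
    # Sketch only: the real code uses packaging's parse_version,
    # which also understands dev/pre/post release segments.
    return tuple(int(part) for part in version.split("."))


def bump_needed(minimum_version, current_version):
    # Mirrors the bumpversion check: no bump is needed when the
    # wanted minimum has already been reached in the working copy.
    return as_tuple(minimum_version) > as_tuple(current_version)
```

With `--feature`, for example, the suggested minimum raises the minor number; once the working copy already carries that version, `bump_needed` is False and the script exits with "No version bump needed."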
from configparser import ConfigParser
from zest.releaser import pypi
from zest.releaser import utils
import logging
import os
import pkg_resources
import re
import sys
try:
# Python 3.11+
import tomllib
except ImportError:
# Python 3.10-
import tomli as tomllib
VERSION_PATTERN = re.compile(
r"""
^ # Start of line
\s* # Indentation
version\s*=\s* # 'version = ' with possible whitespace
['"]? # String literal begins
\d # Some digit, start of version.
""",
re.VERBOSE,
)
UPPERCASE_VERSION_PATTERN = re.compile(
r"""
^ # Start of line
VERSION\s*=\s* # 'VERSION = ' with possible whitespace
['"] # String literal begins
\d # Some digit, start of version.
""",
re.VERBOSE,
)
UNDERSCORED_VERSION_PATTERN = re.compile(
r"""
^ # Start of line
__version__\s*=\s* # '__version__ = ' with possible whitespace
['"] # String literal begins
\d # Some digit, start of version.
""",
re.VERBOSE,
)
TXT_EXTENSIONS = ["rst", "txt", "markdown", "md"]
logger = logging.getLogger(__name__)
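The three patterns above each anchor on a different spelling of the version assignment. The compact equivalents below (a sketch, written without the `re.VERBOSE` comments) show which sample lines each one matches:

```python
import re

# Compact equivalents of the module's three patterns (illustration only).
VERSION_PATTERN = re.compile(r"^\s*version\s*=\s*['\"]?\d")
UPPERCASE_VERSION_PATTERN = re.compile(r"^VERSION\s*=\s*['\"]\d")
UNDERSCORED_VERSION_PATTERN = re.compile(r"^__version__\s*=\s*['\"]\d")


def matching_patterns(line):
    # Return the names of the patterns that match this line.
    names = []
    if VERSION_PATTERN.search(line):
        names.append("version")
    if UPPERCASE_VERSION_PATTERN.search(line):
        names.append("VERSION")
    if UNDERSCORED_VERSION_PATTERN.search(line):
        names.append("__version__")
    return names


samples = [
    "    version='1.2.3',",        # setup.py keyword argument
    'VERSION = "2.0"',             # module-level constant
    "__version__ = '0.9.dev0'",    # dunder attribute
    "# version = 1.0",             # a comment should not match
]
```

Note that only the lowercase `version =` pattern tolerates indentation, since it has to match inside a `setup()` call.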
class BaseVersionControl:
"Shared implementation between all version control systems"
internal_filename = "" # e.g. '.svn' or '.hg'
setuptools_helper_package = ""
def __init__(self, reporoot=None):
self.workingdir = os.getcwd()
if reporoot is None:
self.reporoot = self.workingdir
self.relative_path_in_repo = ""
else:
self.reporoot = reporoot
# Determine relative path from root of repo.
self.relative_path_in_repo = os.path.relpath(self.workingdir, reporoot)
if utils.TESTMODE:
pypirc_old = pkg_resources.resource_filename(
"zest.releaser.tests", "pypirc_old.txt"
)
self.zest_releaser_config = pypi.ZestReleaserConfig(
pypirc_config_filename=pypirc_old
)
else:
self.zest_releaser_config = pypi.ZestReleaserConfig()
def __repr__(self):
return "<{} at {} {}>".format(
self.__class__.__name__, self.reporoot, self.relative_path_in_repo
)
def is_setuptools_helper_package_installed(self):
try:
__import__(self.setuptools_helper_package)
except ImportError:
return False
return True
def get_setup_py_version(self):
if os.path.exists("setup.py"):
# First run egg_info, as that may get rid of some warnings
# that otherwise end up in the extracted version, like
# UserWarnings.
utils.execute_command(utils.setup_py("egg_info"))
version = utils.execute_command(utils.setup_py("--version")).splitlines()[0]
if "Traceback" in version:
# Likely cause is for example forgetting to 'import
# os' when using 'os' in setup.py.
logger.critical("The setup.py of this package has an error:")
print(version)
logger.critical("No version found.")
sys.exit(1)
return utils.strip_version(version)
def get_setup_py_name(self):
if os.path.exists("setup.py"):
# First run egg_info, as that may get rid of some warnings
# that otherwise end up in the extracted name, like
# UserWarnings.
utils.execute_command(utils.setup_py("egg_info"))
return utils.execute_command(utils.setup_py("--name")).strip()
def get_setup_cfg_name(self):
if os.path.exists("setup.cfg"):
setup_cfg = ConfigParser()
setup_cfg.read("setup.cfg")
try:
return setup_cfg["metadata"]["name"]
except KeyError:
return None
def get_version_txt_version(self):
filenames = ["version"]
for extension in TXT_EXTENSIONS:
filenames.append(".".join(["version", extension]))
version_file = self.filefind(filenames)
if version_file:
with open(version_file) as f:
version = f.read()
return utils.strip_version(version)
def get_python_file_version(self):
python_version_file = self.zest_releaser_config.python_file_with_version()
if not python_version_file:
return
lines, encoding = utils.read_text_file(python_version_file)
encoding # noqa, unused variable
for line in lines:
match = UNDERSCORED_VERSION_PATTERN.search(line)
if match:
logger.debug("Matching __version__ line found: '%s'", line)
line = line.lstrip("__version__").strip()
line = line.lstrip("=").strip()
line = line.replace('"', "").replace("'", "")
return utils.strip_version(line)
def get_pyproject_toml_version(self):
if not os.path.exists("pyproject.toml"):
return
with open("pyproject.toml", "rb") as myfile:
result = tomllib.load(myfile)
# Might be None, but that is fine.
return result.get("project", {}).get("version")
def get_pyproject_toml_name(self):
if not os.path.exists("pyproject.toml"):
return
with open("pyproject.toml", "rb") as myfile:
result = tomllib.load(myfile)
# Might be None, but that is fine.
return result.get("project", {}).get("name")
def filefind(self, names):
"""Return first found file matching name (case-insensitive).
Some packages have docs/HISTORY.txt and
package/name/HISTORY.txt. We make sure we only return the one
in the docs directory if no other can be found.
'names' can be a string or a list of strings; if you have both
a CHANGES.txt and a docs/HISTORY.txt, you want the top level
CHANGES.txt to be found first.
"""
if isinstance(names, str):
names = [names]
names = [name.lower() for name in names]
files = self.list_files()
found = []
for fullpath in files:
if fullpath.lower().endswith("debian/changelog"):
logger.debug(
"Ignoring %s, unreadable (for us) debian changelog", fullpath
)
continue
filename = os.path.basename(fullpath)
if filename.lower() in names:
logger.debug("Found %s", fullpath)
if not os.path.exists(fullpath):
# Strange. It at least happens in the tests when
# we deliberately remove a CHANGES.txt file.
logger.warning(
"Found file %s in version control but not on "
"file execute_command.",
fullpath,
)
continue
found.append(fullpath)
if not found:
return
if len(found) > 1:
found.sort(key=len)
logger.warning(
"Found more than one file, picked the shortest one to " "change: %s",
", ".join(found),
)
return found[0]
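When `filefind` collects more than one candidate, it prefers the shortest path, which in practice favours a top-level `CHANGES.txt` over `docs/HISTORY.txt`. The tie-break in isolation (a sketch; the real method also logs a warning and skips Debian changelogs):

```python
def pick_shortest(found):
    # Sort by path length so the top-level file wins the tie-break.
    if not found:
        return None
    return sorted(found, key=len)[0]
```

For example, `pick_shortest(["docs/HISTORY.txt", "CHANGES.txt"])` returns `"CHANGES.txt"`.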
def history_file(self, location=None):
"""Return history file location."""
if location:
# Hardcoded location passed from the config file.
if os.path.exists(location):
return location
else:
logger.warning("The specified history file %s doesn't exist", location)
filenames = []
for base in ["CHANGES", "HISTORY", "CHANGELOG"]:
filenames.append(base)
for extension in TXT_EXTENSIONS:
filenames.append(".".join([base, extension]))
history = self.filefind(filenames)
if history:
return history
def tag_exists(self, tag_name):
"""Check if a tag has already been created with the name of the
version.
"""
for tag in self.available_tags():
if tag == tag_name:
return True
return False
def _extract_version(self):
"""Extract the version from setup.py or version.txt or similar.
If there is a setup.py and it gives back a version that differs
from version.txt then this version.txt is not the one we should
use. This can happen in packages like ZopeSkel that have one or
more version.txt files that have nothing to do with the version of
the package itself.
So when in doubt: use setup.py.
But if there's an explicitly configured Python file that has to be
searched for a ``__version__`` attribute, use that one.
"""
return (
self.get_python_file_version()
or self.get_pyproject_toml_version()
or self.get_setup_py_version()
or self.get_version_txt_version()
)
def _extract_name(self):
"""Extract the package name from setup.py or pyproject.toml or similar."""
return (
self.get_setup_py_name()
or self.get_setup_cfg_name()
or self.get_pyproject_toml_name()
)
def _update_python_file_version(self, version):
filename = self.zest_releaser_config.python_file_with_version()
lines, encoding = utils.read_text_file(
filename,
)
good_version = "__version__ = '%s'" % version
for index, line in enumerate(lines):
if UNDERSCORED_VERSION_PATTERN.search(line):
lines[index] = (
good_version.replace("'", '"') if '"' in line else good_version
)
contents = "\n".join(lines)
utils.write_text_file(filename, contents, encoding)
logger.info("Set __version__ in %s to '%s'", filename, version)
def _update_pyproject_toml_version(self, version):
# This is only called when get_pyproject_toml_version sees a version.
filename = "pyproject.toml"
lines, encoding = utils.read_text_file(
filename,
)
good_version = "version = '%s'" % version
found_project = False
found_version = False
for index, line in enumerate(lines):
line = line.strip()
# First look for '[project]'.
if line == "[project]":
found_project = True
continue
if not found_project:
continue
# Then look for 'version =' within the same section
if line.startswith("["):
# The next section starts. Stop searching.
break
if not line.replace(" ", "").startswith("version="):
continue
# We found the version line!
lines[index] = (
good_version.replace("'", '"') if '"' in line else good_version
)
found_version = True
if not found_version:
logger.error(
"We could read a version from %s, "
"but could not write it back. See "
"https://zestreleaser.readthedocs.io/en/latest/versions.html "
"for hints.",
filename,
)
raise RuntimeError("Cannot set version")
contents = "\n".join(lines)
utils.write_text_file(filename, contents, encoding)
logger.info("Set version in %s to '%s'", filename, version)
def _update_version(self, version):
"""Find out where to change the version and change it.
There are three places where the version can be defined. The first one
is an explicitly defined Python file with a ``__version__``
attribute. The second one is some version.txt that gets read by
setup.py. The third is directly in setup.py.
"""
if self.get_python_file_version():
self._update_python_file_version(version)
return
if self.get_pyproject_toml_version():
self._update_pyproject_toml_version(version)
return
version_filenames = ["version"]
version_filenames.extend(
[".".join(["version", extension]) for extension in TXT_EXTENSIONS]
)
versionfile = self.filefind(version_filenames)
if versionfile:
# We have a version.txt file but does it match the setup.py
# version (if any)?
setup_version = self.get_setup_py_version()
if not setup_version or (setup_version == self.get_version_txt_version()):
with open(versionfile, "w") as f:
f.write(version + "\n")
logger.info("Changed %s to '%s'", versionfile, version)
return
good_version = "version = '%s'" % version
line_number = 0
setup_lines, encoding = utils.read_text_file(
"setup.py",
)
for line_number, line in enumerate(setup_lines):
if VERSION_PATTERN.search(line):
logger.debug("Matching version line found: '%s'", line)
if line.startswith(" "):
# oh, probably ' version = 1.0,' line.
indentation = line.split("version")[0]
# Note: no spaces around the '='.
good_version = indentation + "version='%s'," % version
if '"' in line:
good_version = good_version.replace("'", '"')
setup_lines[line_number] = good_version
utils.write_text_file("setup.py", "\n".join(setup_lines), encoding)
logger.info("Set setup.py's version to '%s'", version)
return
if UPPERCASE_VERSION_PATTERN.search(line):
# This one only occurs in the first column, so no need to
# handle indentation.
logger.debug("Matching version line found: '%s'", line)
good_version = good_version.upper()
if '"' in line:
good_version = good_version.replace("'", '"')
setup_lines[line_number] = good_version
utils.write_text_file("setup.py", "\n".join(setup_lines), encoding)
logger.info("Set setup.py's version to '%s'", version)
return
good_version = "version = %s" % version
if os.path.exists("setup.cfg"):
setup_cfg_lines, encoding = utils.read_text_file(
"setup.cfg",
)
for line_number, line in enumerate(setup_cfg_lines):
if VERSION_PATTERN.search(line):
logger.debug("Matching version line found: '%s'", line)
if line.startswith(" "):
indentation = line.split("version")[0]
good_version = indentation + good_version
setup_cfg_lines[line_number] = good_version
utils.write_text_file(
"setup.cfg", "\n".join(setup_cfg_lines), encoding
)
logger.info("Set setup.cfg's version to '%s'", version)
return
logger.error(
"We could read a version from setup.py, but could not write it "
"back. See "
"https://zestreleaser.readthedocs.io/en/latest/versions.html "
"for hints."
)
raise RuntimeError("Cannot set version")
version = property(_extract_version, _update_version)
#
# Methods that need to be supplied by child classes
#
@property
def name(self):
"Name of the project under version control"
raise NotImplementedError()
def available_tags(self):
"""Return available tags."""
raise NotImplementedError()
def prepare_checkout_dir(self, prefix):
"""Return a temporary checkout location. Create this directory first
if necessary."""
raise NotImplementedError()
def tag_url(self, version):
"URL to tag of version."
raise NotImplementedError()
def cmd_diff(self):
"diff command"
raise NotImplementedError()
def cmd_commit(self, message):
"commit command: should specify a verbose option if possible"
raise NotImplementedError()
def cmd_diff_last_commit_against_tag(self, version):
"""Return diffs between a tagged version and the last commit of
the working copy.
"""
raise NotImplementedError()
def cmd_log_since_tag(self, version):
"""Return log since a tagged version till the last commit of
the working copy.
"""
raise NotImplementedError()
def cmd_create_tag(self, version, message, sign=False):
"Create a tag from a version name."
raise NotImplementedError()
def checkout_from_tag(self, version):
package = self.name
prefix = f"{package}-{version}-"
tagdir = self.prepare_checkout_dir(prefix)
os.chdir(tagdir)
cmd = self.cmd_checkout_from_tag(version, tagdir)
print(utils.execute_commands(cmd))
def is_clean_checkout(self):
"Is this a clean checkout?"
raise NotImplementedError()
def push_commands(self):
"""Return commands to push changes to the server.
Needed if a commit isn't enough.
"""
return []
def list_files(self):
"""List files in version control.
We could raise a NotImplementedError, but a basic method that
works is handy for the vcs.txt tests.
"""
files = []
for dirpath, dirnames, filenames in os.walk(os.curdir):
dirnames # noqa pylint
for filename in filenames:
files.append(os.path.join(dirpath, filename))
return files


# ===== end of zest/releaser/vcs.py (package: zest.releaser) =====
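The fallback `list_files` just walks the working directory; the VCS subclasses override it to ask git/hg/svn instead. A self-contained run of the same walk against a throwaway directory tree:

```python
import os
import tempfile


def list_files(root):
    # Same walk as the fallback BaseVersionControl.list_files (sketch),
    # but rooted at an explicit directory instead of os.curdir.
    files = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            files.append(os.path.join(dirpath, filename))
    return files


with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "docs"))
    for name in ("setup.py", os.path.join("docs", "HISTORY.txt")):
        with open(os.path.join(root, name), "w") as f:
            f.write("stub\n")
    found = sorted(os.path.relpath(path, root) for path in list_files(root))
```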
from zest.releaser import baserelease
from zest.releaser import utils
import datetime
import logging
import sys
try:
# This is a recommended dependency.
# Not a core dependency for now, as zest.releaser can also be used for
# non-Python projects.
from pep440 import is_canonical
except ImportError:
def is_canonical(version):
logger.debug("Using dummy is_canonical that always returns True.")
return True
logger = logging.getLogger(__name__)
HISTORY_HEADER = "%(new_version)s (%(today)s)"
PRERELEASE_COMMIT_MSG = "Preparing release %(new_version)s"
# Documentation for self.data. You get runtime warnings when something is in
# self.data that is not in this list. Embarrassment-driven documentation!
DATA = baserelease.DATA.copy()
DATA.update(
{
"today": "Date string used in history header",
}
)
class Prereleaser(baserelease.Basereleaser):
"""Prepare release, ready for making a tag and an sdist.
self.data holds data that can optionally be changed by plugins.
"""
def __init__(self, vcs=None):
baserelease.Basereleaser.__init__(self, vcs=vcs)
# Prepare some defaults for potential overriding.
date_format = self.zest_releaser_config.date_format()
self.data.update(
dict(
commit_msg=PRERELEASE_COMMIT_MSG,
history_header=HISTORY_HEADER,
today=datetime.datetime.today().strftime(date_format),
update_history=True,
)
)
def prepare(self):
"""Prepare self.data by asking about new version etc."""
if not utils.sanity_check(self.vcs):
logger.critical("Sanity check failed.")
sys.exit(1)
if not utils.check_recommended_files(self.data, self.vcs):
logger.debug("Recommended files check failed.")
sys.exit(1)
# Grab current version.
self._grab_version(initial=True)
# Grab current history.
# It seems useful to do this even when we will not update the history.
self._grab_history()
if self.data["update_history"]:
# Print changelog for this release.
print(
"Changelog entries for version {}:\n".format(self.data["new_version"])
)
print(self.data.get("history_last_release"))
# Grab and set new version.
self._grab_version()
if self.data["update_history"]:
# Look for unwanted 'Nothing changed yet' in latest header.
self._check_nothing_changed()
# Look for required text under the latest header.
self._check_required()
def execute(self):
"""Make the changes and offer a commit"""
if self.data["update_history"]:
self._change_header()
self._write_version()
if self.data["update_history"]:
self._write_history()
self._diff_and_commit()
def _grab_version(self, initial=False):
"""Grab the version.
When initial is False, ask the user for a non-development
version. When initial is True, grab the current suggestion.
"""
original_version = self.vcs.version
logger.debug("Extracted version: %s", original_version)
if not original_version:
logger.critical("No version found.")
sys.exit(1)
suggestion = utils.cleanup_version(original_version)
new_version = None
if not initial:
while new_version is None:
new_version = utils.ask_version("Enter version", default=suggestion)
if not is_canonical(new_version):
logger.warning(
f"'{new_version}' is not a canonical Python package version."
)
question = "Do you want to use this version anyway?"
if not utils.ask(question):
# Set to None: we will ask to enter a new version.
new_version = None
if not new_version:
new_version = suggestion
self.data["original_version"] = original_version
self.data["new_version"] = new_version
def datacheck(data):
"""Entrypoint: ensure that the data dict is fully documented"""
utils.is_data_documented(data, documentation=DATA)
def main():
utils.parse_options()
utils.configure_logging()
prereleaser = Prereleaser()
prereleaser.run()


# ===== end of zest/releaser/prerelease.py (package: zest.releaser) =====
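The optional `pep440` dependency provides `is_canonical`; without it, the dummy above accepts every version string, so the warning prompt in `_grab_version` never fires. PEP 440's appendix defines the canonical form as a single regex, reproduced here as a reference sketch:

```python
import re

# Canonical public-version regex from the PEP 440 specification.
_CANONICAL = re.compile(
    r"^([1-9][0-9]*!)?(0|[1-9][0-9]*)(\.(0|[1-9][0-9]*))*"
    r"((a|b|rc)(0|[1-9][0-9]*))?(\.post(0|[1-9][0-9]*))?"
    r"(\.dev(0|[1-9][0-9]*))?$"
)


def is_canonical(version):
    return _CANONICAL.match(version) is not None
```

Pre-releases like `1.0rc1` and dev markers like `2.1.dev0` pass, while spellings such as `1.0.RC1` or zero-padded `01.0` do not.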
from build import ProjectBuilder
from colorama import Fore
from urllib import request
from urllib.error import HTTPError
import logging
import os
import requests
import sys
try:
from twine.cli import dispatch as twine_dispatch
except ImportError:
print("twine.cli.dispatch apparently cannot be imported anymore")
print("See https://github.com/zestsoftware/zest.releaser/pull/309/")
print("Try a newer zest.releaser or an older twine (and warn us ")
print("by reacting in that pull request, please).")
raise
from zest.releaser import baserelease
from zest.releaser import pypi
from zest.releaser import utils
from zest.releaser.utils import execute_command
# Documentation for self.data. You get runtime warnings when something is in
# self.data that is not in this list. Embarrassment-driven documentation!
DATA = baserelease.DATA.copy()
DATA.update(
{
"tag_already_exists": "Internal detail, don't touch this :-)",
"tagdir": """Directory where the tag checkout is placed (*if* a tag
checkout has been made)""",
"tagworkingdir": """Working directory inside the tag checkout. This is
the same, except when you make a release from within a sub directory.
We then make sure you end up in the same relative directory after a
checkout is done.""",
"version": "Version we're releasing",
"tag": "Tag we're releasing",
"tag-message": "Commit message for the tag",
"tag-signing": "Sign tag using gpg or pgp",
}
)
logger = logging.getLogger(__name__)
def package_in_pypi(package):
"""Check whether the package is registered on pypi"""
url = "https://pypi.org/simple/%s" % package
try:
request.urlopen(url)
return True
except HTTPError as e:
logger.debug("Package not found on pypi: %s", e)
return False
def _project_builder_runner(cmd, cwd=None, extra_environ=None):
"""Run the build command and format warnings and errors.
It runs the build command in a subprocess.
extra_environ will contain for example:
{'PEP517_BUILD_BACKEND': 'setuptools.build_meta:__legacy__'}
"""
utils.show_interesting_lines(
execute_command(cmd, cwd=cwd, extra_environ=extra_environ)
)
class Releaser(baserelease.Basereleaser):
"""Release the project by tagging it and optionally uploading to pypi."""
def __init__(self, vcs=None):
baserelease.Basereleaser.__init__(self, vcs=vcs)
# Prepare some defaults for potential overriding.
self.data.update(
dict(
# Nothing yet
)
)
def prepare(self):
"""Collect some data needed for releasing"""
self._grab_version()
tag = self.zest_releaser_config.tag_format(self.data["version"])
self.data["tag"] = tag
self.data["tag-message"] = self.zest_releaser_config.tag_message(
self.data["version"]
)
self.data["tag-signing"] = self.zest_releaser_config.tag_signing()
self.data["tag_already_exists"] = self.vcs.tag_exists(tag)
def execute(self):
"""Do the actual releasing"""
self._info_if_tag_already_exists()
self._make_tag()
self._release()
def _info_if_tag_already_exists(self):
if self.data["tag_already_exists"]:
# Safety feature.
version = self.data["version"]
tag = self.data["tag"]
q = "There is already a tag %s, show " "if there are differences?" % version
if utils.ask(q):
diff_command = self.vcs.cmd_diff_last_commit_against_tag(tag)
print(utils.format_command(diff_command))
print(execute_command(diff_command))
def _make_tag(self):
version = self.data["version"]
tag = self.data["tag"]
if self.data["tag_already_exists"]:
return
cmds = self.vcs.cmd_create_tag(
tag, self.data["tag-message"], self.data["tag-signing"]
)
assert isinstance(cmds, (list, tuple)) # transitional guard
if not isinstance(cmds[0], (list, tuple)):
cmds = [cmds]
if len(cmds) == 1:
print("Tag needed to proceed, you can use the following command:")
for cmd in cmds:
print(utils.format_command(cmd))
if utils.ask("Run this command"):
print(execute_command(cmd))
else:
# all commands are needed in order to proceed normally
print(
"Please create a tag %s for %s yourself and rerun." % (tag, version)
)
sys.exit(1)
if not self.vcs.tag_exists(tag):
print(f"\nFailed to create tag {tag}!")
sys.exit(1)
def _upload_distributions(self, package):
# See if creating an sdist (and maybe a wheel) actually works.
# Also, this makes the sdist (and wheel) available for plugins.
# And for twine, who will just upload the created files.
logger.info(
"Making a source distribution of a fresh tag checkout (in %s).",
self.data["tagworkingdir"],
)
builder = ProjectBuilder(srcdir=".", runner=_project_builder_runner)
builder.build("sdist", "./dist/")
if self.zest_releaser_config.create_wheel():
logger.info(
"Making a wheel of a fresh tag checkout (in %s).",
self.data["tagworkingdir"],
)
builder.build("wheel", "./dist/")
if not self.pypiconfig.is_pypi_configured():
logger.error(
"You must have a properly configured %s file in "
"your home dir to upload to a Python package index.",
pypi.DIST_CONFIG_FILE,
)
if utils.ask("Do you want to continue without uploading?", default=False):
return
sys.exit(1)
# Run extra entry point
self._run_hooks("before_upload")
# Get list of all files to upload.
files_in_dist = sorted(
os.path.join("dist", filename) for filename in os.listdir("dist")
)
register = self.zest_releaser_config.register_package()
# If TWINE_REPOSITORY_URL is set, use it.
if self.pypiconfig.twine_repository_url():
if not self._ask_upload(
package, self.pypiconfig.twine_repository_url(), register
):
return
if register:
self._retry_twine("register", None, files_in_dist[:1])
self._retry_twine("upload", None, files_in_dist)
# Only upload to the server specified in the environment
return
# Upload to the repository in the environment or .pypirc
servers = self.pypiconfig.distutils_servers()
for server in servers:
if not self._ask_upload(package, server, register):
continue
if register:
logger.info("Registering...")
# We only need the first file, it has all the needed info
self._retry_twine("register", server, files_in_dist[:1])
self._retry_twine("upload", server, files_in_dist)
def _ask_upload(self, package, server, register):
"""Ask if the package should be registered and/or uploaded.
Args:
package (str): The name of the package.
server (str): The distutils server name or URL.
register (bool): Whether or not the package should be registered.
"""
default = True
exact = False
if server == "pypi" and not package_in_pypi(package):
logger.info("This package does NOT exist yet on PyPI.")
# We are not yet on pypi. To avoid an 'Oops...,
# sorry!' when registering and uploading an internal
# package we default to False here.
default = False
exact = True
question = "Upload"
if register:
question = "Register and upload"
return utils.ask(f"{question} to {server}", default=default, exact=exact)
def _retry_twine(self, twine_command, server, filenames):
"""Attempt to execute a Twine command.
Args:
twine_command: The Twine command to use (eg. register, upload).
server: The distutils server name from a `.pypirc` config file.
If this is `None` the TWINE_REPOSITORY_URL environment variable
will be used instead of a distutils server name.
filenames: A list of files which will be uploaded.
"""
twine_args = (twine_command,)
if server is not None:
twine_args += ("-r", server)
if twine_command == "register":
pass
elif twine_command == "upload":
twine_args += ("--skip-existing",)
else:
print(Fore.RED + "Unknown twine command: %s" % twine_command)
sys.exit(1)
twine_args += tuple(filenames)
try:
twine_dispatch(twine_args)
return
except requests.HTTPError as e:
# Something went wrong. Close repository.
response = e.response
# Some errors reported by PyPI after register or upload may be
# fine. The register command is not really needed anymore with the
# new PyPI. See https://github.com/pypa/twine/issues/200
# This might change, but for now the register command fails.
if (
twine_command == "register"
and response.reason == "This API is no longer supported, "
"instead simply upload the file."
):
return
# Show the error.
print(Fore.RED + "Response status code: %s" % response.status_code)
print(Fore.RED + "Reason: %s" % response.reason)
print(Fore.RED + "There were errors or warnings.")
logger.exception("Package %s has failed.", twine_command)
retry = utils.retry_yes_no(["twine", twine_command])
if retry:
logger.info("Retrying.")
return self._retry_twine(twine_command, server, filenames)
def _release(self):
"""Upload the release, when desired"""
# Does the user normally want a real release? We are
# interested in getting a sane default answer here, so you can
# override it in the exceptional case but just hit Enter in
# the usual case.
main_files = os.listdir(self.data["workingdir"])
if not {"setup.py", "setup.cfg", "pyproject.toml"}.intersection(main_files):
# No setup.py, setup.cfg, or pyproject.toml, so this is no
# python package, so at least a pypi release is not useful.
# Expected case: this is a buildout directory.
default_answer = False
else:
default_answer = self.zest_releaser_config.want_release()
if not utils.ask(
"Check out the tag (for tweaks or pypi/distutils " "server upload)",
default=default_answer,
):
return
package = self.vcs.name
tag = self.data["tag"]
logger.info("Doing a checkout...")
self.vcs.checkout_from_tag(tag)
# ^^^ This changes directory to a temp folder.
self.data["tagdir"] = os.path.realpath(os.getcwd())
logger.info("Tag checkout placed in %s", self.data["tagdir"])
if self.vcs.relative_path_in_repo:
# We were in a sub directory of the repo when we started
# the release, so we go to the same relative sub
# directory.
tagworkingdir = os.path.realpath(
os.path.join(os.getcwd(), self.vcs.relative_path_in_repo)
)
os.chdir(tagworkingdir)
self.data["tagworkingdir"] = tagworkingdir
logger.info(
"Changing to sub directory in tag checkout: %s",
self.data["tagworkingdir"],
)
else:
# The normal case.
self.data["tagworkingdir"] = self.data["tagdir"]
# Possibly fix setup.cfg.
if self.setup_cfg.has_bad_commands():
logger.info("This is not advisable for a release.")
if utils.ask(
"Fix %s (and commit to tag if possible)"
% self.setup_cfg.config_filename,
default=True,
):
# Fix the setup.cfg in the current working directory
# so the current release works well.
self.setup_cfg.fix_config()
# Run extra entry point
self._run_hooks("after_checkout")
if any(
filename in os.listdir(self.data["tagworkingdir"])
for filename in ["setup.py", "pyproject.toml"]
):
self._upload_distributions(package)
# Make sure we are in the expected directory again.
os.chdir(self.vcs.workingdir)
def datacheck(data):
"""Entrypoint: ensure that the data dict is fully documented"""
utils.is_data_documented(data, documentation=DATA)
def main():
utils.parse_options()
utils.configure_logging()
releaser = Releaser()
releaser.run()
tagdir = releaser.data.get("tagdir")
if tagdir:
logger.info("Reminder: tag checkout is in %s", tagdir) | zest.releaser | /zest.releaser-9.0.0a2-py3-none-any.whl/zest/releaser/release.py | release.py |
from zest.releaser import baserelease
from zest.releaser import utils
import logging
import sys
logger = logging.getLogger(__name__)
HISTORY_HEADER = "%(new_version)s (unreleased)"
COMMIT_MSG = "Back to development: %(new_version)s"
DEV_VERSION_TEMPLATE = "%(new_version)s%(development_marker)s"
# Documentation for self.data. You get runtime warnings when something is in
# self.data that is not in this list. Embarrassment-driven documentation!
DATA = baserelease.DATA.copy()
DATA.update(
{
"breaking": "True if we handle a breaking (major) change",
"dev_version": "New version with development marker (so 1.1.dev0)",
"dev_version_template": "Template for development version number",
"development_marker": "String to be appended to version after postrelease",
"feature": "True if we handle a feature (minor) change",
"final": "True if we handle a final release",
"new_version": "New version, without development marker (so 1.1)",
}
)
class Postreleaser(baserelease.Basereleaser):
"""Post-release tasks like resetting version number.
self.data holds data that can optionally be changed by plugins.
"""
def __init__(self, vcs=None, breaking=False, feature=False, final=False):
baserelease.Basereleaser.__init__(self, vcs=vcs)
# Prepare some defaults for potential overriding.
self.data.update(
dict(
breaking=breaking,
commit_msg=COMMIT_MSG,
feature=feature,
final=final,
dev_version_template=DEV_VERSION_TEMPLATE,
development_marker=self.zest_releaser_config.development_marker(),
history_header=HISTORY_HEADER,
update_history=True,
)
)
def prepare(self):
"""Prepare self.data by asking about new dev version"""
if not utils.sanity_check(self.vcs):
logger.critical("Sanity check failed.")
sys.exit(1)
self._ask_for_new_dev_version()
self._grab_history()
def execute(self):
"""Make the changes and offer a commit"""
self._write_version()
if self.data["update_history"]:
self._change_header(add=True)
self._write_history()
self._diff_and_commit()
self._push()
def _ask_for_new_dev_version(self):
"""Ask for and store a new dev version string."""
current = self.vcs.version
if not current:
logger.critical("No version found.")
sys.exit(1)
# Clean it up to a non-development version.
current = utils.cleanup_version(current)
params = dict(
breaking=self.data["breaking"],
feature=self.data["feature"],
final=self.data["final"],
less_zeroes=self.zest_releaser_config.less_zeroes(),
levels=self.zest_releaser_config.version_levels(),
dev_marker=self.zest_releaser_config.development_marker(),
)
suggestion = utils.suggest_version(current, **params)
print("Current version is %s" % current)
q = (
"Enter new development version "
"('%(development_marker)s' will be appended)" % self.data
)
version = utils.ask_version(q, default=suggestion)
if not version:
version = suggestion
if not version:
logger.error("No version entered.")
sys.exit(1)
self.data["new_version"] = version
dev_version = self.data["dev_version_template"] % self.data
self.data["dev_version"] = dev_version
logger.info("New version string is %s", dev_version)
def _write_version(self):
"""Update the version in vcs"""
self.vcs.version = self.data["dev_version"]
def datacheck(data):
"""Entrypoint: ensure that the data dict is fully documented"""
utils.is_data_documented(data, documentation=DATA)
def main():
parser = utils.base_option_parser()
parser.add_argument(
"--feature",
action="store_true",
dest="feature",
default=False,
help="Bump for feature release (increase minor version)",
)
parser.add_argument(
"--breaking",
action="store_true",
dest="breaking",
default=False,
help="Bump for breaking release (increase major version)",
)
parser.add_argument(
"--final",
action="store_true",
dest="final",
default=False,
help="Bump for final release (remove alpha/beta/rc from version)",
)
options = utils.parse_options(parser)
# How many options are enabled?
if len(list(filter(None, [options.breaking, options.feature, options.final]))) > 1:
print("ERROR: Only enable one option of breaking/feature/final.")
sys.exit(1)
utils.configure_logging()
postreleaser = Postreleaser(
breaking=options.breaking,
feature=options.feature,
final=options.final,
)
postreleaser.run() | zest.releaser | /zest.releaser-9.0.0a2-py3-none-any.whl/zest/releaser/postrelease.py | postrelease.py |
from argparse import ArgumentParser
from colorama import Fore
from pkg_resources import parse_version
import logging
import os
import pkg_resources
import re
import shlex
import subprocess
import sys
import textwrap
import tokenize
logger = logging.getLogger(__name__)
WRONG_IN_VERSION = ["svn", "dev", "("]
AUTO_RESPONSE = False
VERBOSE = False
def fs_to_text(fs_name):
if not isinstance(fs_name, str):
fs_name = fs_name.decode(sys.getfilesystemencoding(), "surrogateescape")
return fs_name
class CommandException(Exception):
"""Exception for when a command fails."""
def loglevel():
"""Return DEBUG when -v is specified, INFO otherwise"""
if VERBOSE:
return logging.DEBUG
return logging.INFO
def splitlines_with_trailing(content):
"""Return .splitlines() lines, but with a trailing newline if needed"""
lines = content.splitlines()
if content.endswith("\n"):
lines.append("")
return lines
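A quick illustration of why the trailing empty string matters: `str.splitlines()` silently drops the final newline, so the sentinel line keeps `"\n".join(...)` lossless. This is a standalone sketch of the helper above.

```python
# Round-trip property of splitlines_with_trailing: an empty sentinel
# line is appended so joining the lines reproduces the original text.
content = "first\nsecond\n"
lines = content.splitlines()      # ['first', 'second']
if content.endswith("\n"):
    lines.append("")              # ['first', 'second', '']
print("\n".join(lines) == content)  # True
```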
def write_text_file(filename, contents, encoding=None):
with open(filename, "w", encoding=encoding) as f:
f.write(contents)
def read_text_file(filename, encoding=None):
"""Return lines and encoding of the file
Unless specified manually, we have no way of knowing what text
encoding this file may be in.
The standard Python 'open' method uses the default system encoding
to read text files in Python 3 or falls back to utf-8.
1. If encoding is specified, we use that encoding.
2. Otherwise we try to detect the encoding using tokenize.
"""
if encoding is not None:
# The simple case.
logger.debug(
"Decoding file %s from encoding %s from argument.", filename, encoding
)
with open(filename, "rb", encoding=encoding) as filehandler:
data = filehandler.read()
return splitlines_with_trailing(data), encoding
# tokenize first detects the encoding (looking for encoding hints
# or an UTF-8 BOM) and opens the file using this encoding.
# See https://docs.python.org/3/library/tokenize.html
with tokenize.open(filename) as filehandler:
data = filehandler.read()
encoding = filehandler.encoding
logger.debug("Detected encoding of %s with tokenize: %s", filename, encoding)
return splitlines_with_trailing(data), encoding
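The tokenize-based detection can be tried on its own: `tokenize.detect_encoding` reads a PEP 263 coding cookie (or a UTF-8 BOM) from the first lines of a byte stream. A small standalone demonstration:

```python
import io
import tokenize

# A file body declaring a latin-1 coding cookie (PEP 263).
data = b"# -*- coding: latin-1 -*-\nx = 1\n"

# detect_encoding() wants a readline callable over raw bytes.
encoding, first_lines = tokenize.detect_encoding(io.BytesIO(data).readline)
print(encoding)  # iso-8859-1 (the "latin-1" alias is normalized)
```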
def strip_version(version):
"""Strip the version of all whitespace."""
return version.strip().replace(" ", "")
def cleanup_version(version):
"""Check if the version looks like a development version."""
for w in WRONG_IN_VERSION:
if version.find(w) != -1:
logger.debug("Version indicates development: %s.", version)
version = version[: version.find(w)].strip()
logger.debug("Removing debug indicators: '%s'", version)
version = version.rstrip(".") # 1.0.dev0 -> 1.0. -> 1.0
return version
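Exercising the cleanup rules above (a standalone copy of the function, for illustration only):

```python
WRONG_IN_VERSION = ["svn", "dev", "("]

def cleanup_version(version):
    # Cut the version at the first development indicator and tidy up.
    for w in WRONG_IN_VERSION:
        if version.find(w) != -1:
            version = version[: version.find(w)].strip()
            version = version.rstrip(".")  # 1.0.dev0 -> 1.0. -> 1.0
    return version

print(cleanup_version("1.0.dev0"))          # 1.0
print(cleanup_version("2.3 (unreleased)"))  # 2.3
print(cleanup_version("1.4"))               # 1.4
```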
def strip_last_number(value):
"""Remove last number from a value.
This is mostly for markers like ``.dev0``, where this would
return ``.dev``.
"""
if not value:
return value
match = re.search(r"\d+$", value)
if not match:
return value
return value[: -len(match.group())]
def suggest_version(
current,
feature=False,
breaking=False,
less_zeroes=False,
levels=0,
dev_marker=".dev0",
final=False,
):
"""Suggest new version.
Try to make sure that the suggestion for next version after
1.1.19 is not 1.1.110, but 1.1.20.
- feature: increase minor version, 1.2.3 -> 1.3.
- breaking: increase major version, 1.2.3 -> 2 (well, 2.0)
- final: remove a/b/rc, 6.0.0rc1 -> 6.0.0
- less_zeroes: instead of 2.0.0, suggest 2.0.
Only makes sense in combination with feature or breaking.
- levels: number of levels to aim for. 3 would give: 1.2.3.
levels=0 would mean: do not change the number of levels.
"""
# How many options are enabled?
if len(list(filter(None, [breaking, feature, final]))) > 1:
print("ERROR: Only enable one option of breaking/feature/final.")
sys.exit(1)
dev = ""
base_dev_marker = strip_last_number(dev_marker)
if base_dev_marker in current:
index = current.find(base_dev_marker)
# Put the standard development marker back at the end.
dev = dev_marker
current = current[:index]
# Split in first and last part, where last part is one integer and the
# first part can contain more integers plus dots.
current_split = current.split(".")
original_levels = len(current_split)
try:
[int(x) for x in current_split]
except ValueError:
# probably a/b in the version.
pass
else:
# With levels=3, we prefer major.minor.patch as version. Add zeroes
# where needed. We don't subtract: if version is 1.2.3.4.5, we are not
# going to suggest to drop a few numbers.
if levels:
while len(current_split) < levels:
current_split.append("0")
if breaking:
target = 0
elif feature:
if len(current_split) > 1:
target = 1
else:
# When the version is 1, a feature release is the same as a
# breaking release.
target = 0
else:
target = -1
first = ".".join(current_split[:target])
last = current_split[target]
try:
last = int(last) + 1
suggestion = ".".join([char for char in (first, str(last)) if char])
except ValueError:
if target != -1:
# Something like 1.2rc1 where we want a feature bump. This gets
# too tricky.
return
if final:
parsed_version = parse_version(current)
if not parsed_version.pre:
logger.warning(
"Version is not a pre version, so we cannot "
"calculate a suggestion for the final version."
)
return
suggestion = parsed_version.base_version
else:
# Fall back on simply updating the last character when it is
# an integer.
try:
last = int(current[target]) + 1
suggestion = current[:target] + str(last)
except (ValueError, IndexError):
logger.warning(
"Version does not end with a number, so we can't "
"calculate a suggestion for a next version."
)
return
# Maybe add a few zeroes: turn 2 into 2.0.0 if 3 levels is the goal.
goal = max(original_levels, levels)
length = len(suggestion.split("."))
if less_zeroes and goal > 2:
# Adding zeroes is okay, but the user prefers not to overdo it. If the
# goal is 3 levels, and the current suggestion is 1.3, then that is
# fine. If the current suggestion is 2, then don't increase the zeroes
# all the way to 2.0.0, but stop at 2.0.
goal = 2
missing = goal - length
if missing > 0:
suggestion += ".0" * missing
return suggestion + dev
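The core bump behaviour can be sketched with a much simpler re-implementation. This is an assumption-laden illustration only: it ignores dev markers, pre-release suffixes, the `final` option and the `less_zeroes` logic that the real function handles.

```python
def bump(version, feature=False, breaking=False):
    # Simplified numeric bump: pick the level to increase, then
    # reset every level after it to zero (1.2.3 -> 1.3.0 for feature).
    parts = [int(p) for p in version.split(".")]
    if breaking:
        index = 0
    elif feature:
        # A feature bump on a single-number version is a breaking bump.
        index = 1 if len(parts) > 1 else 0
    else:
        index = len(parts) - 1
    parts[index] += 1
    parts[index + 1:] = [0] * (len(parts) - index - 1)
    return ".".join(str(p) for p in parts)

print(bump("1.2.3"))                 # 1.2.4
print(bump("1.2.3", feature=True))   # 1.3.0
print(bump("1.2.3", breaking=True))  # 2.0.0
```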
def base_option_parser():
parser = ArgumentParser()
parser.add_argument(
"--no-input",
action="store_true",
dest="auto_response",
default=False,
help="Don't ask questions, just use the default values",
)
parser.add_argument(
"-v",
"--verbose",
action="store_true",
dest="verbose",
default=False,
help="Verbose mode",
)
return parser
def parse_options(parser=None):
global AUTO_RESPONSE
global VERBOSE
if parser is None:
parser = base_option_parser()
options = parser.parse_args()
AUTO_RESPONSE = options.auto_response
VERBOSE = options.verbose
return options
# Hack for testing, see get_input()
TESTMODE = False
class AnswerBook:
def __init__(self):
self.answers = []
def set_answers(self, answers=None):
if answers is None:
answers = []
self.answers = answers
def get_next_answer(self):
if self.answers:
return self.answers.pop(0)
# Accept the default.
return ""
test_answer_book = AnswerBook()
def get_input(question):
if not TESTMODE:
# Normal operation.
result = input(question)
return result.strip()
# Testing means no interactive input. Get it from the test answer book.
print("Question: %s" % question)
answer = test_answer_book.get_next_answer()
if answer == "":
print("Our reply: <ENTER>")
else:
print("Our reply: %s" % answer)
return answer
def ask_version(question, default=None):
if AUTO_RESPONSE:
if default is None:
msg = (
"We cannot determine a default version, but "
"we're running in --no-input mode. The original "
"question: %s"
)
msg = msg % question
raise RuntimeError(msg)
logger.info(question)
logger.info("Auto-responding '%s'.", default)
return default
if default:
question += " [%s]: " % default
else:
question += ": "
while True:
input_value = get_input(question)
if input_value:
if input_value.lower() in ("y", "n"):
# Please read the question.
print("y/n not accepted as version.")
continue
return input_value
if default:
return default
def ask(question, default=True, exact=False):
"""Ask the question in y/n form and return True/False.
If you don't want a default 'yes', set default to None (or to False if you
want a default 'no').
With exact=True, we want to get a literal 'yes' or 'no', at least
when it does not match the default.
"""
if AUTO_RESPONSE:
if default is None:
msg = (
"The question '%s' requires a manual answer, but "
"we're running in --no-input mode."
)
msg = msg % question
raise RuntimeError(msg)
logger.info(question)
logger.info("Auto-responding '%s'.", "yes" if default else "no")
return default
while True:
yn = "y/n"
if default is True:
yn = "Y/n"
if default is False:
yn = "y/N"
q = question + " (%s)? " % yn
input_value = get_input(q)
if input_value:
answer = input_value
else:
answer = ""
if not answer and default is not None:
return default
if exact and answer.lower() not in ("yes", "no"):
print("Please explicitly answer yes/no in full " "(or accept the default)")
continue
if answer:
answer = answer[0].lower()
if answer == "y":
return True
if answer == "n":
return False
# We really want an answer.
print("Please explicitly answer y/n")
continue
def fix_rst_heading(heading, below):
"""If the 'below' line looks like a reST line, give it the correct length.
This allows for different characters being used as header lines.
"""
if len(below) == 0:
return below
first = below[0]
if first not in "-=`~":
return below
if not len(below) == len([char for char in below if char == first]):
# The line is not uniformly the same character
return below
below = first * len(heading)
return below
def extract_headings_from_history(history_lines):
"""Return list of dicts with version-like headers.
We check for patterns like '2.10 (unreleased)', so with either
'unreleased' or a date between parenthesis as that's the format we're
using. Just fix up your first heading and you should be set.
As an alternative, we support an alternative format used by some
zope/plone paster templates: '2.10 - unreleased' or '2.10 ~ unreleased'
Note that new headers that zest.releaser sets are in our preferred
form (so 'version (date)').
"""
pattern = re.compile(
r"""
(?P<version>.+) # Version string
\( # Opening (
(?P<date>.+) # Date
\) # Closing )
\W*$ # Possible whitespace at end of line.
""",
re.VERBOSE,
)
alt_pattern = re.compile(
r"""
^ # Start of line
(?P<version>.+) # Version string
\ [-~]\ # space dash/tilde space
(?P<date>.+) # Date
\W*$ # Possible whitespace at end of line.
""",
re.VERBOSE,
)
headings = []
line_number = 0
for line in history_lines:
match = pattern.search(line)
alt_match = alt_pattern.search(line)
if match:
result = {
"line": line_number,
"version": match.group("version").strip(),
"date": match.group("date".strip()),
}
headings.append(result)
logger.debug("Found heading: '%s'", result)
if alt_match:
result = {
"line": line_number,
"version": alt_match.group("version").strip(),
"date": alt_match.group("date".strip()),
}
headings.append(result)
logger.debug("Found alternative heading: '%s'", result)
line_number += 1
return headings
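The two heading formats can be checked in isolation. These are condensed equivalents of the verbose patterns above (written without `re.VERBOSE` for brevity):

```python
import re

# '2.10 (unreleased)' style and '2.8 - 2023-01-15' style headings.
pattern = re.compile(r"(?P<version>.+)\((?P<date>.+)\)\W*$")
alt_pattern = re.compile(r"^(?P<version>.+) [-~] (?P<date>.+)\W*$")

history = [
    "2.10 (unreleased)",
    "2.9 (2023-05-01)",
    "2.8 - 2023-01-15",
]
for number, line in enumerate(history):
    match = pattern.search(line) or alt_pattern.search(line)
    if match:
        print(number, match.group("version").strip(), match.group("date").strip())
```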
def show_interesting_lines(result):
"""Just print the first and last five lines of (pypi) output.
But: when there are errors or warnings, print everything and ask
the user if she wants to continue.
"""
if Fore.RED in result:
# warnings/errors, print complete result.
print(result)
if not ask(
"There were errors or warnings. Are you sure you want to continue?",
default=False,
):
sys.exit(1)
# User has seen everything and wants to continue.
return
# No errors or warnings. Show first and last lines.
lines = result.split("\n")
if len(lines) < 11:
for line in lines:
print(line)
return
print("Showing first few lines...")
for line in lines[:5]:
print(line)
print("...")
print("Showing last few lines...")
for line in lines[-5:]:
print(line)
def setup_py(*rest_of_cmdline):
"""Return 'python setup.py' command."""
for unsafe in ["upload", "register"]:
if unsafe in rest_of_cmdline:
logger.error("Must not use setup.py %s. Use twine instead", unsafe)
sys.exit(1)
return [sys.executable, "setup.py"] + list(rest_of_cmdline)
def is_data_documented(data, documentation=None):
"""check that the self.data dict is fully documented"""
if documentation is None:
documentation = {}
if TESTMODE:
# Hack for testing to prove entry point is being called.
print("Checking data dict")
undocumented = [
key for key in data if key not in documentation and not key.startswith("_")
]
if undocumented:
print("Internal detail: key(s) %s are not documented" % undocumented)
def resolve_name(name):
"""Resolve a name like ``module.object`` to an object and return it.
This function supports packages and attributes without depth limitation:
``package.package.module.class.class.function.attr`` is valid input.
However, looking up builtins is not directly supported: use
``builtins.name``.
Raises ImportError if importing the module fails or if one requested
attribute is not found.
"""
if "." not in name:
# shortcut
__import__(name)
return sys.modules[name]
# FIXME clean up this code!
parts = name.split(".")
cursor = len(parts)
module_name = parts[:cursor]
ret = ""
while cursor > 0:
try:
ret = __import__(".".join(module_name))
break
except ImportError:
cursor -= 1
module_name = parts[:cursor]
if ret == "":
raise ImportError(parts[0])
for part in parts[1:]:
try:
ret = getattr(ret, part)
except AttributeError:
raise ImportError(part)
return ret
def run_hooks(zest_releaser_config, which_releaser, when, data):
"""Run all release hooks for the given release step, including
project-specific hooks from setup.cfg, and globally installed entry-points.
which_releaser can be prereleaser, releaser, postreleaser.
when can be before, middle, after.
"""
hook_group = f"{which_releaser}.{when}"
config = zest_releaser_config.config
if config is not None and config.get(hook_group):
# Multiple hooks may be specified in setup.cfg or .pypirc;
# each one is separated by whitespace (including newlines).
if zest_releaser_config.hooks_filename in ["setup.py", "setup.cfg", ".pypirc"]:
hook_names = config.get(hook_group).split()
# in pyproject.toml, a list is passed with the hooks
elif zest_releaser_config.hooks_filename == "pyproject.toml":
hook_names = config.get(hook_group)
else:
hook_names = []
hooks = []
# The following code is adapted from the 'packaging' package being
# developed for Python's stdlib:
# add project directory to sys.path, to allow hooks to be
# distributed with the project
# an optional package_dir option adds support for source layouts where
# Python packages are not directly in the root of the source
config_dir = os.path.dirname(zest_releaser_config.hooks_filename)
sys.path.insert(0, config_dir)
package_dir = config.get("hook_package_dir")
if package_dir:
package_dir = os.path.join(config_dir, package_dir)
sys.path.insert(0, package_dir)
try:
for hook_name in hook_names:
# Resolve the hook or fail with ImportError.
hooks.append(resolve_name(hook_name))
for hook in hooks:
hook(data)
finally:
sys.path.pop(0)
if package_dir:
sys.path.pop(0)
run_entry_points(which_releaser, when, data)
def run_entry_points(which_releaser, when, data):
"""Run the requested entry points.
which_releaser can be prereleaser, releaser, postreleaser.
when can be before, middle, after.
"""
group = f"zest.releaser.{which_releaser}.{when}"
for entrypoint in pkg_resources.iter_entry_points(group=group):
# Grab the function that is the actual plugin.
plugin = entrypoint.load()
# Feed the data dict to the plugin.
plugin(data)
# Lines ending up in stderr that are only warnings, not errors.
# We only check the start of lines. Should be lowercase.
KNOWN_WARNINGS = [
# Not a real error.
"warn",
# A warning from distutils like this:
# no previously-included directories found matching...
# distutils is basically warning that a previous distutils run has
# done its job properly while reading the package manifest.
"no previously-included",
# This is from bdist_wheel displaying a warning by setuptools that
# it will not include the __init__.py of a namespace package. See
# issue 108.
"skipping installation of",
]
# Make them lowercase just to be sure.
KNOWN_WARNINGS = [w.lower() for w in KNOWN_WARNINGS]
def format_command(command):
"""Return command list formatted as string.
THIS IS INSECURE! DO NOT USE except for directly printing the result.
Do NOT pass this to subprocess.popen/run.
See also: https://docs.python.org/3/library/shlex.html#shlex.quote
"""
args = [shlex.quote(arg) for arg in command]
return " ".join(args)
def _execute_command(command, cwd=None, extra_environ=None):
"""Execute a command, returning stdout, plus maybe parts of stderr."""
# Enforce the command to be a list of arguments.
assert isinstance(command, (list, tuple))
logger.debug("Running command: '%s'", format_command(command))
env = dict(os.environ, PYTHONPATH=os.pathsep.join(sys.path))
if extra_environ is not None:
env.update(extra_environ)
# By default we show errors, of course.
show_stderr = True
if command[0].startswith(sys.executable):
# For several Python commands, we do not want to see the stderr:
# if we include it for `python setup.py --version`, then the version
# may contain all kinds of warnings.
show_stderr = False
# But we really DO want to see the stderr for some other Python commands,
# otherwise for example a failed upload would not even show up in the output.
for flag in ("upload", "register"):
if flag in command:
show_stderr = True
break
process_kwargs = {
"stdin": subprocess.PIPE,
"stdout": subprocess.PIPE,
"stderr": subprocess.PIPE,
"cwd": cwd,
"env": env,
"text": True,
}
process = subprocess.run(command, **process_kwargs)
if process.returncode or show_stderr or "Traceback" in process.stderr:
# Some error occurred
return process.stdout + get_errors(process.stderr)
# Only return the stdout. Stderr only contains possible
# weird/confusing warnings that might trip up extraction of version
# numbers and so.
if process.stderr:
logger.debug(
"Stderr of running command '%s':\n%s",
format_command(process.args),
process.stderr,
)
return process.stdout
def get_errors(stderr_output):
# Some error occurred. Return the relevant output.
# print(Fore.RED + stderr_output)
stderr_output = stderr_output.strip()
if not stderr_output:
return ""
# Make sure every error line is marked red. The stderr
# output also catches some warnings though. It would be
# really irritating if we start treating a line like this
# as an error: warning: no previously-included files
# matching '*.pyc' found anywhere in distribution. Same
# for empty lines. So try to be smart about it.
errors = []
for line in stderr_output.split("\n"):
line = line.strip()
if not line:
# Keep it in the errors, but do not mark it with a color.
errors.append(line)
continue
for known in KNOWN_WARNINGS:
if line.lower().startswith(known):
# Not a real error, so mark it as a warning.
errors.append(Fore.MAGENTA + line)
break
else:
# Not found in known warnings, so mark it as an error.
errors.append(Fore.RED + line)
return "\n".join(errors)
def execute_command(
command, allow_retry=False, fail_message="", cwd=None, extra_environ=None
):
"""Run the command and possibly retry it.
When allow_retry is False, we simply call the base
_execute_command and return the result.
When allow_retry is True, a few things change.
We print interesting lines. When all is right, this will be the
first and last few lines, otherwise the full standard output plus
error output.
When we discover errors, we give three options:
- Abort
- Retry
- Continue
There is an error when there is a red color in the output.
It might be a warning, but we cannot detect the distinction.
"""
result = _execute_command(command, cwd=cwd, extra_environ=extra_environ)
if not allow_retry:
return result
if Fore.RED not in result:
show_interesting_lines(result)
return result
# There are warnings or errors. Print the complete result.
print(result)
print(Fore.RED + "There were errors or warnings.")
if fail_message:
print(Fore.RED + fail_message)
retry = retry_yes_no(command)
if retry:
logger.info("Retrying command: '%s'", format_command(command))
return execute_command(command, allow_retry=True)
# Accept the error, continue with the program.
return result
def execute_commands(commands, allow_retry=False, fail_message=""):
assert isinstance(commands, (list, tuple))
if not isinstance(commands[0], (list, tuple)):
commands = [commands]
result = []
for cmd in commands:
assert isinstance(cmd, (list, tuple))
result.append(
execute_command(cmd, allow_retry=allow_retry, fail_message=fail_message)
)
return "\n".join(result)
def retry_yes_no(command):
"""Ask the user to maybe retry a command."""
explanation = """
You have these options for retrying (first character is enough):
Yes: Retry. Do this if it looks like a temporary Internet or PyPI outage.
You can also first edit $HOME/.pypirc and then retry in
case of a credentials problem.
No: Do not retry, but continue with the rest of the process.
Quit: Stop completely. Note that the postrelease step has not
been run yet, you need to do that manually.
?: Show this help."""
explanation = textwrap.dedent(explanation)
question = "Retry this command? [Yes/no/quit/?]"
if AUTO_RESPONSE:
msg = (
"The question '%s' requires a manual answer, but "
"we're running in --no-input mode."
)
msg = msg % question
raise RuntimeError(msg)
while True:
input_value = get_input(question)
if not input_value:
# Default: yes, retry the command.
input_value = "y"
if input_value:
input_value = input_value.lower()
if input_value == "y" or input_value == "yes":
logger.info("Retrying command: '%s'", format_command(command))
return True
if input_value == "n" or input_value == "no":
# Accept the error, continue with the program.
return False
if input_value == "q" or input_value == "quit":
raise CommandException("Command failed: '%s'" % format_command(command))
# We could print the help/explanation only if the input is
# '?', or maybe 'h', but if the user input has any other
# content, it makes sense to explain the options anyway.
print(explanation)
def get_last_tag(vcs, allow_missing=False):
"""Get last tag number, compared to current version.
Note: when this cannot get a proper tag for some reason, it may exit
the program completely. When no tags are found and allow_missing is
True, we return None.
"""
version = vcs.version
if not version:
logger.critical("No version detected, so we can't do anything.")
sys.exit(1)
available_tags = vcs.available_tags()
if not available_tags:
if allow_missing:
logger.debug("No tags found.")
return
logger.critical("No tags found, so we can't do anything.")
sys.exit(1)
# Mostly nicked from zest.stabilizer.
# We seek a tag that's the same or less than the version as determined
# by setuptools' version parsing. A direct match is obviously
# right. The 'less' approach handles development eggs that have
# already been switched back to development.
# Note: if parsing the current version fails, there is nothing we can do:
# there is no sane way of knowing which version is smaller than an unparsable
# version, so we just break hard.
parsed_version = parse_version(version)
found = parsed_found = None
for tag in available_tags:
try:
parsed_tag = parse_version(tag)
except Exception:
# I don't want to import this specific exception,
# because it sounds unstable:
# pkg_resources.extern.packaging.version.InvalidVersion
logger.debug("Could not parse version: %s", tag)
continue
if parsed_tag == parsed_version:
found = tag
logger.debug("Found exact match: %s", found)
break
if parsed_tag >= parsed_version:
# too new tag, not interesting
continue
if found is not None and parsed_tag <= parsed_found:
# we already have a better match
continue
logger.debug("Found possible lower match: %s", tag)
found = tag
parsed_found = parsed_tag
return found
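The "same or lower" tag search can be sketched over plain dotted-integer versions. This is a simplified stand-in: the real code uses setuptools' full version parsing, which also understands pre-releases and dev markers.

```python
def last_matching_tag(current, tags):
    # Simplified "exact match, else highest lower tag" search.
    def parse(v):
        return tuple(int(x) for x in v.split("."))

    cur = parse(current)
    best = None
    for tag in tags:
        try:
            parsed = parse(tag)
        except ValueError:
            continue  # unparsable tag: skip it, like the real code does
        if parsed == cur:
            return tag  # exact match wins immediately
        if parsed > cur:
            continue  # newer than the current version: not interesting
        if best is None or parsed > parse(best):
            best = tag
    return best

print(last_matching_tag("1.3", ["1.0", "1.2", "2.0", "junk"]))  # 1.2
print(last_matching_tag("2.0", ["1.0", "1.2", "2.0"]))          # 2.0
```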
def sanity_check(vcs):
"""Do sanity check before making changes
Check that we are not on a tag and/or do not have local changes.
Returns True when all is fine.
"""
if not vcs.is_clean_checkout():
q = (
"This is NOT a clean checkout. You are on a tag or you have "
"local changes.\n"
"Are you sure you want to continue?"
)
if not ask(q, default=False):
return False
return True
def check_recommended_files(data, vcs):
"""Do check for recommended files.
Returns True when all is fine.
"""
main_files = os.listdir(data["workingdir"])
if "setup.py" not in main_files and "setup.cfg" not in main_files:
# Not a python package. We have no recommendations.
return True
if "MANIFEST.in" not in main_files and "MANIFEST" not in main_files:
q = """This package is missing a MANIFEST.in file. This file is
recommended. See http://docs.python.org/distutils/sourcedist.html for
more info. Sample contents:
recursive-include main_directory *
recursive-include docs *
include *
global-exclude *.pyc
You may want to quit and fix this.
"""
if not vcs.is_setuptools_helper_package_installed():
q += "Installing %s may help too.\n" % vcs.setuptools_helper_package
# We could ask, but simply printing it is nicer. Well, okay,
# let's avoid some broken eggs on PyPI, per
# https://github.com/zestsoftware/zest.releaser/issues/10
q += "Do you want to continue with the release?"
if not ask(q, default=False):
return False
print(q)
return True
def configure_logging():
logging.addLevelName(
logging.WARNING, Fore.MAGENTA + logging.getLevelName(logging.WARNING)
)
logging.addLevelName(logging.ERROR, Fore.RED + logging.getLevelName(logging.ERROR))
logging.basicConfig(level=loglevel(), format="%(levelname)s: %(message)s")
def get_list_item(lines):
"""Get most used list item from text.
Meaning: probably a dash, maybe a star.
"""
unordered_list = []
for line in lines:
# Possibly there is leading white space, strip it.
stripped = line.strip()
# Look for lines starting with one character and a space.
if len(stripped) < 3:
continue
if stripped[1] != " ":
continue
prefix = stripped[0]
# Restore stripped whitespace.
white = line.find(prefix)
unordered_list.append("{}{}".format(" " * white, prefix))
# Get sane default.
best = "-"
count = 0
# Start counting.
for key in set(unordered_list):
new_count = unordered_list.count(key)
if new_count > count:
best = key
count = new_count
# Return the best one.
return best
def history_format(config_value, history_file):
"""Decide what is the history/changelog format."""
default = "rst"
history_format = config_value
if not history_format:
history_file = history_file or ""
ext = history_file.split(".")[-1].lower()
history_format = "md" if ext in ["md", "markdown"] else default
return history_format
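The format decision above in compact form (standalone copy for illustration): explicit configuration wins, otherwise the file extension decides, with reStructuredText as the fallback.

```python
def history_format(config_value, history_file):
    # Explicit configuration wins; otherwise guess from the extension.
    if config_value:
        return config_value
    ext = (history_file or "").split(".")[-1].lower()
    return "md" if ext in ["md", "markdown"] else "rst"

print(history_format(None, "CHANGES.md"))    # md
print(history_format(None, "HISTORY.txt"))   # rst
print(history_format("md", "CHANGES.rst"))   # md
```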
def string_to_bool(value):
"""Reimplementation of configparser.ConfigParser.getboolean()"""
if value.isalpha():
value = value.lower()
if value in ["1", "yes", "true", "on"]:
return True
elif value in ["0", "no", "false", "off"]:
return False
else:
raise ValueError(f"Cannot convert string '{value}' to a bool")
def extract_zestreleaser_configparser(config, config_filename):
if not config:
return None
try:
result = dict(config["zest.releaser"].items())
except KeyError:
logger.debug(f"No [zest.releaser] section found in the {config_filename}")
return None
boolean_keys = [
"release",
"create-wheel",
"no-input",
"register",
"push-changes",
"less-zeroes",
"tag-signing",
"run-pre-commit",
]
integer_keys = [
"version-levels",
]
for key, value in result.items():
if key in boolean_keys:
result[key] = string_to_bool(value)
if key in integer_keys:
result[key] = int(value)
return result
# End of zest/releaser/utils.py -- zest/releaser/git.py follows.
from zest.releaser.utils import execute_command
from zest.releaser.utils import fs_to_text
from zest.releaser.vcs import BaseVersionControl
import logging
import os.path
import sys
import tempfile
logger = logging.getLogger(__name__)
class Git(BaseVersionControl):
"""Command proxy for Git"""
internal_filename = ".git"
setuptools_helper_package = "setuptools-git"
def is_setuptools_helper_package_installed(self):
# The package is setuptools-git with a dash, the module is
# setuptools_git with an underscore. Thanks.
try:
__import__("setuptools_git")
except ImportError:
return False
return True
@property
def name(self):
package_name = self._extract_name()
if package_name:
return package_name
# No python package name? With git we can probably only fall back to the directory
# name as there's no svn-url with a usable name in it.
dir_name = os.path.basename(os.getcwd())
dir_name = fs_to_text(dir_name)
return dir_name
def available_tags(self):
tag_info = execute_command(["git", "tag"])
tags = [line for line in tag_info.split("\n") if line]
logger.debug("Available tags: '%s'", ", ".join(tags))
return tags
def prepare_checkout_dir(self, prefix):
# Watch out: some git versions can't clone into an existing
# directory, even when it is empty.
temp = tempfile.mkdtemp(prefix=prefix)
cwd = os.getcwd()
os.chdir(temp)
cmd = ["git", "clone", "--depth", "1", self.reporoot, "gitclone"]
logger.debug(execute_command(cmd))
clonedir = os.path.join(temp, "gitclone")
os.chdir(clonedir)
cmd = ["git", "submodule", "update", "--init", "--recursive"]
logger.debug(execute_command(cmd))
os.chdir(cwd)
return clonedir
def tag_url(self, version):
# this doesn't apply to Git, so we just return the
# version name given ...
return version
def cmd_diff(self):
return ["git", "diff"]
def cmd_commit(self, message):
parts = ["git", "commit", "-a", "-m", message]
if not self.zest_releaser_config.run_pre_commit():
parts.append("-n")
return parts
def cmd_diff_last_commit_against_tag(self, version):
return ["git", "diff", version]
def cmd_log_since_tag(self, version):
"""Return log since a tagged version till the last commit of
the working copy.
"""
return ["git", "log", "%s..HEAD" % version]
def cmd_create_tag(self, version, message, sign=False):
cmd = ["git", "tag", version, "-m", message]
if sign:
cmd.append("--sign")
return cmd
def cmd_checkout_from_tag(self, version, checkout_dir):
if os.path.realpath(os.getcwd()) != os.path.realpath(checkout_dir):
# Specific to git: we need to be in that directory for the command
# to work.
logger.warning("We haven't been chdir'ed to %s", checkout_dir)
sys.exit(1)
return [
["git", "checkout", version],
["git", "submodule", "update", "--init", "--recursive"],
]
def is_clean_checkout(self):
"""Is this a clean checkout?"""
head = execute_command(["git", "symbolic-ref", "--quiet", "HEAD"])
# This returns something like 'refs/heads/maurits-warn-on-tag'
# or nothing. Nothing would be bad as that indicates a
# detached head: likely a tag checkout
if not head:
# Greetings from Nearly Headless Nick.
return False
if execute_command(["git", "status", "--short", "--untracked-files=no"]):
# Uncommitted changes in files that are tracked.
return False
return True
def push_commands(self):
"""Push changes to the server."""
return [["git", "push"], ["git", "push", "--tags"]]
def list_files(self):
"""List files in version control."""
return execute_command(["git", "ls-files"]).splitlines()
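The argv lists returned by the methods above are handed to a runner such as `execute_command`. Here is a minimal, self-contained sketch of how `cmd_commit` builds its list; the `run_command` helper is an assumption for illustration, not zest.releaser's actual API:

```python
import subprocess

def build_commit_command(message, run_pre_commit=False):
    # Mirrors cmd_commit() above: "-n" (git's --no-verify) is appended
    # when pre-commit hooks should be skipped.
    parts = ["git", "commit", "-a", "-m", message]
    if not run_pre_commit:
        parts.append("-n")
    return parts

def run_command(argv):
    # Hypothetical runner standing in for execute_command().
    return subprocess.run(argv, capture_output=True, text=True).stdout

print(build_commit_command("Preparing release 1.0"))
# → ['git', 'commit', '-a', '-m', 'Preparing release 1.0', '-n']
```

Because each method only returns a list, the command construction can be unit-tested without touching a real git repository.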
from .utils import extract_zestreleaser_configparser
from configparser import ConfigParser
from configparser import NoOptionError
from configparser import NoSectionError
import logging
import os
import pkg_resources
import sys
try:
# Python 3.11+
import tomllib
except ImportError:
# Python 3.10-
import tomli as tomllib
try:
pkg_resources.get_distribution("wheel")
except pkg_resources.DistributionNotFound:
USE_WHEEL = False
else:
USE_WHEEL = True
DIST_CONFIG_FILE = ".pypirc"
SETUP_CONFIG_FILE = "setup.cfg"
PYPROJECTTOML_CONFIG_FILE = "pyproject.toml"
DEFAULT_REPOSITORY = "https://upload.pypi.org/legacy/"
logger = logging.getLogger(__name__)
class BaseConfig:
"""Base config class with a few helper methods."""
def __init__(self):
self.config = None
def _get_boolean(self, section, key, default=False):
"""Get a boolean from the config.
Standard config rules apply, so you can use upper or lower or
mixed case and specify 0, false, no or off for boolean False,
and 1, on, true or yes for boolean True.
"""
result = default
if self.config is not None:
try:
result = self.config.getboolean(section, key)
except (NoSectionError, NoOptionError, ValueError):
return result
return result
def _get_text(self, section, key, default=None, raw=False):
"""Get a text from the config."""
result = default
if self.config is not None:
try:
result = self.config.get(section, key, raw=raw)
except (NoSectionError, NoOptionError, ValueError):
return result
return result
class SetupConfig(BaseConfig):
"""Wrapper around the setup.cfg file if available.
One reason is to cleanup setup.cfg from these settings::
[egg_info]
tag_build = dev
tag_svn_revision = true
Another is for optional zest.releaser-specific settings::
[zest.releaser]
python-file-with-version = reinout/maurits.py
"""
config_filename = SETUP_CONFIG_FILE
def __init__(self):
"""Grab the configuration (overridable for test purposes)"""
# If there is a setup.cfg in the package, parse it
if not os.path.exists(self.config_filename):
self.config = None
return
self.config = ConfigParser(interpolation=None)
self.config.read(self.config_filename)
def has_bad_commands(self):
if self.config is None:
return False
if not self.config.has_section("egg_info"):
# bail out early as the main section is not there
return False
bad = False
# Check 1.
if self.config.has_option("egg_info", "tag_build"):
# Might still be empty.
value = self._get_text("egg_info", "tag_build")
if value:
logger.warning(
"%s has [egg_info] tag_build set to '%s'",
self.config_filename,
value,
)
bad = True
# Check 2.
if self.config.has_option("egg_info", "tag_svn_revision"):
if self.config.getboolean("egg_info", "tag_svn_revision"):
value = self._get_text("egg_info", "tag_svn_revision")
logger.warning(
"%s has [egg_info] tag_svn_revision set to '%s'",
self.config_filename,
value,
)
bad = True
return bad
def fix_config(self):
if not self.has_bad_commands():
logger.warning("Cannot fix already fine %s.", self.config_filename)
return
if self.config.has_option("egg_info", "tag_build"):
self.config.set("egg_info", "tag_build", "")
if self.config.has_option("egg_info", "tag_svn_revision"):
self.config.set("egg_info", "tag_svn_revision", "false")
new_setup = open(self.config_filename, "w")
try:
self.config.write(new_setup)
finally:
new_setup.close()
logger.info("New setup.cfg contents:")
with open(self.config_filename) as config_file:
print("".join(config_file.readlines()))
def zest_releaser_config(self):
return extract_zestreleaser_configparser(self.config, self.config_filename)
class PypiConfig(BaseConfig):
"""Wrapper around the pypi config file.
Contains functions which return information about
the pypi configuration.
"""
def __init__(self, config_filename=DIST_CONFIG_FILE):
"""Grab the PyPI configuration.
This is .pypirc in the home directory. It is overridable for
test purposes.
"""
self.config_filename = config_filename
self.reload()
def reload(self):
"""Load the config.
Do the initial load of the config.
Or reload it in case of problems: this is needed when a pypi
upload fails, you edit the .pypirc file to fix the account
settings, and tell release to retry the command.
"""
self._read_configfile()
def zest_releaser_config(self):
return extract_zestreleaser_configparser(self.config, self.config_filename)
def _read_configfile(self):
"""Read the PyPI config file and store it (when valid)."""
config_filename = self.config_filename
if not os.path.exists(config_filename) and not os.path.isabs(config_filename):
# When filename is .pypirc, we look in ~/.pypirc
config_filename = os.path.join(os.path.expanduser("~"), config_filename)
if not os.path.exists(config_filename):
self.config = None
return
self.config = ConfigParser(interpolation=None)
self.config.read(config_filename)
def twine_repository(self):
"""Gets the repository from Twine environment variables."""
return os.getenv("TWINE_REPOSITORY")
def twine_repository_url(self):
"""Gets the repository URL from Twine environment variables."""
return os.getenv("TWINE_REPOSITORY_URL")
def is_pypi_configured(self):
"""Determine if we're configured to publish to 1+ PyPi server.
PyPi is considered to be 'configued' if the TWINE_REPOSITORY_URL is set,
or if we have a config which contains at least 1 PyPi server.
"""
servers = len(self.distutils_servers()) > 0
twine_url = self.twine_repository_url() is not None
return any((servers, twine_url))
def distutils_servers(self):
"""Return a list of known distutils servers.
If the config has an old pypi config, remove the default pypi
server from the list.
"""
twine_repository = self.twine_repository()
if twine_repository and self.config:
# If there is no section we can't upload there
if self.config.has_section(twine_repository):
return [twine_repository]
else:
return []
# If we don't have a config we can't continue
if not self.config:
return []
try:
index_servers = self._get_text(
"distutils", "index-servers", default=""
).split()
except (NoSectionError, NoOptionError):
index_servers = []
if not index_servers:
# If no distutils index-servers have been given, 'pypi' should be
# the default. This is what twine does.
if self.config.has_option("server-login", "username"):
# We have a username, so upload to pypi should work fine, even
# when no explicit pypi section is in the file.
return ["pypi"]
# https://github.com/zestsoftware/zest.releaser/issues/199
index_servers = ["pypi"]
# The servers all need to have a section in the config file.
return [server for server in index_servers if self.config.has_section(server)]
class PyprojectTomlConfig(BaseConfig):
"""Wrapper around the pyproject.toml file if available.
This is for optional zest.releaser-specific settings::
[tool.zest-releaser]
python-file-with-version = "reinout/maurits.py"
"""
config_filename = PYPROJECTTOML_CONFIG_FILE
def __init__(self):
"""Grab the configuration (overridable for test purposes)"""
# If there is a pyproject.toml in the package, parse it
if not os.path.exists(self.config_filename):
self.config = None
return
with open(self.config_filename, "rb") as tomlconfig:
self.config = tomllib.load(tomlconfig)
def zest_releaser_config(self):
if self.config is None:
return None
try:
result = self.config["tool"]["zest-releaser"]
except KeyError:
logger.debug(
f"No [tool.zest-releaser] section found in the {self.config_filename}"
)
return None
return result
class ZestReleaserConfig:
hooks_filename = None
def load_configs(self, pypirc_config_filename=DIST_CONFIG_FILE):
"""Load configs from several files.
The order is this:
- ~/.pypirc
- setup.cfg
- pyproject.toml
A later config file overwrites keys from an earlier config file.
I think this order makes the most sense.
Example: extra-message = [ci skip]
What I expect, is:
* Most packages won't have this setting.
* If you make releases for lots of packages, you probably set this in
your global ~/.pypirc.
* A few individual packages will explicitly set this.
They will expect this to have the effect that the extra message is
added to commits, regardless of who makes a release.
So this individual package setting should win.
* Finally, pyproject.toml is newer than setup.cfg, so it makes sense
that this file has the last say.
"""
setup_config = SetupConfig()
pypi_config = PypiConfig(config_filename=pypirc_config_filename)
pyproject_config = PyprojectTomlConfig()
combined_config = {}
config_files = [pypi_config]
if not self.omit_package_config_in_test:
config_files.extend([setup_config, pyproject_config])
for config in config_files:
if config.zest_releaser_config() is not None:
zest_config = config.zest_releaser_config()
assert isinstance(zest_config, dict)
combined_config.update(zest_config)
# store which config file contained entrypoint hooks
if any(
[
x
for x in zest_config.keys()
if x.lower().startswith(
("prereleaser.", "releaser.", "postreleaser.")
)
]
):
self.hooks_filename = config.config_filename
self.config = combined_config
def __init__(
self, pypirc_config_filename=DIST_CONFIG_FILE, omit_package_config_in_test=False
):
self.omit_package_config_in_test = omit_package_config_in_test
self.load_configs(pypirc_config_filename=pypirc_config_filename)
def want_release(self):
"""Does the user normally want to release this package.
Some colleagues find it irritating to have to remember to
answer the question "Check out the tag (for tweaks or
pypi/distutils server upload)" with the non-default 'no' when
in 99 percent of the cases they just make a release specific
for a customer, so they always answer 'no' here. This is
where an extra config option comes in handy: you can influence
the default answer so you can just keep hitting 'Enter' until
zest.releaser is done.
Either in your ~/.pypirc or in a setup.cfg or pyproject.toml in a specific
package, add this when you want the default answer to this
question to be 'no':
[zest.releaser]
release = no
The default when this option has not been set is True.
Standard config rules apply, so you can use upper or lower or
mixed case and specify 0, false, no or off for boolean False,
and 1, on, true or yes for boolean True.
"""
return self.config.get("release", True)
def extra_message(self):
"""Return extra text to be added to commit messages.
This can for example be used to skip CI builds. This at least
works for Travis. See
http://docs.travis-ci.com/user/how-to-skip-a-build/
Enable this mode by adding a ``extra-message`` option, either in the
package you want to release, or in your ~/.pypirc::
[zest.releaser]
extra-message = [ci skip]
"""
return self.config.get("extra-message")
def prefix_message(self):
"""Return extra text to be added before the commit message.
This can for example be used to follow internal policies on commit messages.
Enable this mode by adding a ``prefix-message`` option, either in the
package you want to release, or in your ~/.pypirc::
[zest.releaser]
prefix-message = [TAG]
"""
return self.config.get("prefix-message")
def history_file(self):
"""Return path of history file.
Usually zest.releaser can find the correct one on its own.
But sometimes it may not find anything, or it finds multiple
and selects the wrong one.
Configure this by adding a ``history-file`` option, either in the
package you want to release, or in your ~/.pypirc::
[zest.releaser]
history-file = deep/down/historie.doc
"""
# we were using an underscore at first
result = self.config.get("history_file")
# but if they're both defined, the hyphenated key takes precedence
result = self.config.get("history-file", result)
return result
def python_file_with_version(self):
"""Return Python filename with ``__version__`` marker, if configured.
Enable this by adding a ``python-file-with-version`` option::
[zest.releaser]
python-file-with-version = reinout/maurits.py
Return None when nothing has been configured.
"""
return self.config.get("python-file-with-version")
def history_format(self):
"""Return the format to be used for Changelog files.
Configure this by adding an ``history_format`` option, either in the
package you want to release, or in your ~/.pypirc, and using ``rst`` for
Restructured Text and ``md`` for Markdown::
[zest.releaser]
history_format = md
"""
return self.config.get("history_format", "")
def create_wheel(self):
"""Should we create a Python wheel for this package?
This is next to the standard source distribution that we always create
when releasing a Python package.
Changed in version 8.0.0a2: we ALWAYS create a wheel,
unless this is explicitly switched off.
The `wheel` package must be installed though, which is in our
'recommended' extra.
To switch this OFF, either in your ~/.pypirc or in a setup.cfg in
a specific package, add this:
[zest.releaser]
create-wheel = no
"""
if not USE_WHEEL:
# If the wheel package is not available, we obviously
# cannot create wheels.
return False
return self.config.get("create-wheel", True)
def register_package(self):
"""Should we try to register this package with a package server?
For the standard Python Package Index (PyPI), registering a
package is no longer needed: this is done automatically when
uploading a distribution for a package. In fact, trying to
register may fail. See
https://github.com/zestsoftware/zest.releaser/issues/191
So by default zest.releaser will no longer register a package.
But you may be using your own package server, and registering
may be wanted or even required there. In this case
you will need to turn on the register function.
In your setup.cfg or ~/.pypirc, use the following to ensure that
register is called on the package server:
[zest.releaser]
register = yes
Note that if you have specified multiple package servers, this
option is used for all of them. There is no way to register and
upload to server A, and only upload to server B.
"""
return self.config.get("register", False)
def no_input(self):
"""Return whether the user wants to run in no-input mode.
Enable this mode by adding a ``no-input`` option::
[zest.releaser]
no-input = yes
The default when this option has not been set is False.
"""
return self.config.get("no-input", False)
def development_marker(self):
"""Return development marker to be appended in postrelease.
Override the default ``.dev0`` in ~/.pypirc or setup.cfg using
a ``development-marker`` option::
[zest.releaser]
development-marker = .dev1
Returns default of ``.dev0`` when nothing has been configured.
"""
return self.config.get("development-marker", ".dev0")
def push_changes(self):
"""Return whether the user wants to push the changes to the remote.
Configure this mode by adding a ``push-changes`` option::
[zest.releaser]
push-changes = no
The default when this option has not been set is True.
"""
return self.config.get("push-changes", True)
def less_zeroes(self):
"""Return whether the user prefers less zeroes at the end of a version.
Configure this mode by adding a ``less-zeroes`` option::
[zest.releaser]
less-zeroes = yes
The default when this option has not been set is False.
When set to true:
- Instead of 1.3.0 we will suggest 1.3.
- Instead of 2.0.0 we will suggest 2.0.
This only makes sense for the bumpversion command.
In the postrelease command we read this option too,
but with the current logic it has no effect there.
"""
return self.config.get("less-zeroes", False)
def version_levels(self):
"""How many levels does the user prefer in a version number?
Configure this mode by adding a ``version-levels`` option::
[zest.releaser]
version-levels = 3
The default when this option has not been set is 0, which means:
no preference, so use the length of the current number.
This means when suggesting a next version after 1.2:
- with levels=0 we will suggest 1.3: no change
- with levels=1 we will still suggest 1.3, as we will not
use this to remove numbers, only to add them
- with levels=2 we will suggest 1.3
- with levels=3 we will suggest 1.2.1
If the current version number has more levels, we keep them.
So next version for 1.2.3.4 with levels=1 is 1.2.3.5.
Tweaking version-levels and less-zeroes should give you the
version number strategy that you prefer.
"""
default = 0
result = self.config.get("version-levels", default)
if result < 0:
return default
return result
_tag_format_deprecated_message = "\n".join(
line.strip()
for line in """
`tag-format` contains deprecated `%%(version)s` format. Please change to:
[zest.releaser]
tag-format = %s
""".strip().splitlines()
)
def tag_format(self, version):
"""Return the formatted tag that should be used in the release.
Configure it in ~/.pypirc or setup.cfg using a ``tag-format`` option::
[zest.releaser]
tag-format = v{version}
``tag-format`` must contain exactly one formatting instruction: for the
``version`` key.
Accepts also ``%(version)s`` format for backward compatibility.
The default format, when nothing has been configured, is ``{version}``.
"""
default_fmt = "{version}"
fmt = self.config.get("tag-format", default_fmt)
if "{version}" in fmt:
return fmt.format(version=version)
# BBB:
if "%(version)s" in fmt:
proposed_fmt = fmt.replace("%(version)s", "{version}")
print(self._tag_format_deprecated_message % proposed_fmt)
return fmt % {"version": version}
print("{version} needs to be part of 'tag-format': %s" % fmt)
sys.exit(1)
def tag_message(self, version):
"""Return the commit message to be used when tagging.
Configure it in ~/.pypirc or setup.cfg using a ``tag-message``
option::
[zest.releaser]
tag-message = Creating v{version} tag.
``tag-message`` must contain exactly one formatting
instruction: for the ``version`` key.
The default format is ``Tagging {version}``.
"""
default_fmt = "Tagging {version}"
fmt = self.config.get("tag-message", default_fmt)
if "{version}" not in fmt:
print("{version} needs to be part of 'tag-message': '%s'" % fmt)
sys.exit(1)
return fmt.format(version=version)
def tag_signing(self):
"""Return whether the tag should be signed.
Configure it in ~/.pypirc or setup.cfg using a ``tag-signing`` option::
[zest.releaser]
tag-signing = yes
``tag-signing`` must contain exactly one word which will be
converted to a boolean. Currently accepted (case
insensitively) are: 0, false, no, off for False, and 1, true, yes,
on for True.
The default when this option has not been set is False.
"""
return self.config.get("tag-signing", False)
def date_format(self):
"""Return the string format for the date used in the changelog.
Override the default ``%Y-%m-%d`` in ~/.pypirc or setup.cfg using
a ``date-format`` option::
[zest.releaser]
date-format = %%B %%e, %%Y
Note: the % signs should be doubled for compatibility with other tools
(i.e. pip) that parse setup.cfg using the interpolating ConfigParser.
Returns default of ``%Y-%m-%d`` when nothing has been configured.
"""
default = "%Y-%m-%d"
try:
result = self.config["date-format"].replace("%%", "%")
except (KeyError, ValueError):
return default
return result
def run_pre_commit(self):
"""Return whether we should run pre commit hooks.
At least in git you have pre commit hooks.
These may interfere with releasing:
zest.releaser changes your setup.py, a pre commit hook
runs black or isort and gives an error, so the commit is cancelled.
By default (since version 7.3.0) we do not run pre commit hooks.
Configure it in ~/.pypirc or setup.cfg using a ``run-pre-commit`` option::
[zest.releaser]
run-pre-commit = yes
``run-pre-commit`` must contain exactly one word which will be
converted to a boolean. Currently accepted (case
insensitively) are: 0, false, no, off for False, and 1, true, yes,
on for True.
The default when this option has not been set is False.
"""
return self.config.get("run-pre-commit", False)
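The precedence that `load_configs` documents can be pictured with plain dicts standing in for the parsed files; the option values below are made up for illustration. Later files overwrite keys from earlier ones:

```python
# Merge order from ZestReleaserConfig.load_configs():
# ~/.pypirc first, then setup.cfg, then pyproject.toml.
pypirc = {"extra-message": "[ci skip]", "release": True}
setup_cfg = {"release": False}
pyproject_toml = {"extra-message": "[skip ci]"}

combined = {}
for zest_config in (pypirc, setup_cfg, pyproject_toml):
    # Each later config file overwrites keys from the earlier ones.
    combined.update(zest_config)

print(combined)
# → {'extra-message': '[skip ci]', 'release': False}
```

So a per-package setting in setup.cfg or pyproject.toml always beats the global ~/.pypirc, and pyproject.toml has the last say.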
from zest.releaser import baserelease
from zest.releaser import utils
import logging
import sys
logger = logging.getLogger(__name__)
COMMIT_MSG = ""
# Documentation for self.data. You get runtime warnings when something is in
# self.data that is not in this list. Embarrassment-driven documentation!
DATA = baserelease.DATA.copy()
DATA.update(
{
"commit_msg": (
"Message template used when committing. "
"Default: same as the message passed on the command line."
),
"message": "The message we want to add",
}
)
class AddChangelogEntry(baserelease.Basereleaser):
"""Add a changelog entry.
self.data holds data that can optionally be changed by plugins.
"""
def __init__(self, vcs=None, message=""):
baserelease.Basereleaser.__init__(self, vcs=vcs)
# Prepare some defaults for potential overriding.
self.data.update(
dict(
commit_msg=COMMIT_MSG,
message=message.strip(),
)
)
def prepare(self):
"""Prepare self.data by asking about new dev version"""
if not utils.sanity_check(self.vcs):
logger.critical("Sanity check failed.")
sys.exit(1)
self._grab_history()
self._get_message()
def execute(self):
"""Make the changes and offer a commit"""
self._remove_nothing_changed()
self._insert_changelog_entry(self.data["message"])
self._write_history()
self._diff_and_commit()
def _remove_nothing_changed(self):
"""Remove nothing_changed_yet line from history lines"""
nothing_changed = self.data["nothing_changed_yet"]
if nothing_changed in self.data["history_last_release"]:
nc_pos = self.data["history_lines"].index(nothing_changed)
if nc_pos == self.data["history_insert_line_here"]:
self.data["history_lines"] = (
self.data["history_lines"][:nc_pos]
+ self.data["history_lines"][nc_pos + 2 :]
)
def _get_message(self):
"""Get changelog message and commit message."""
message = self.data["message"]
while not message:
q = "What is the changelog message? "
message = utils.get_input(q)
self.data["message"] = message
if not self.data["commit_msg"]:
# The commit message does %-replacement, so escape any %'s.
message = message.replace("%", "%%")
self.data["commit_msg"] = message
def datacheck(data):
"""Entrypoint: ensure that the data dict is fully documented"""
utils.is_data_documented(data, documentation=DATA)
def main():
parser = utils.base_option_parser()
parser.add_argument("message", help="Text of changelog entry")
options = utils.parse_options(parser)
utils.configure_logging()
addchangelogentry = AddChangelogEntry(message=utils.fs_to_text(options.message))
addchangelogentry.run()
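The `%`-escaping in `_get_message` exists because the stored commit message template is later `%`-interpolated elsewhere in zest.releaser; a literal percent sign would otherwise break that step. A quick illustration:

```python
message = "Improve 50% of the tests"
# _get_message() escapes the percent sign before storing commit_msg:
commit_msg = message.replace("%", "%%")
# Later %-interpolation is now safe; "%%" collapses back to "%".
final = commit_msg % {}
print(final)
# → Improve 50% of the tests
```

Without the escape, `"Improve 50% of the tests" % {}` would raise a `ValueError` because `% o` looks like an (unsupported) conversion specifier.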
zest.releaser is copyright (C) 2008-2012 Zest Software
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston,
MA 02111-1307 USA.
Changelog
=========
1.3 (2012-09-12)
----------------
- Moved to github.
[maurits]
1.2 (2010-10-19)
----------------
- Added MANIFEST.in file so the released package will contain ``.mo``
files (at least when using ``zest.releaser`` in combination with
``zest.pocompile``).
[maurits]
- When context.show_social_viewlet does not work, try
context.getField('show_social_viewlet').get(context)
as somehow the first only works when you have called getField once.
Tested with archetypes.schemaextender 1.6 and 2.0.3.
[maurits]
- Added config.py to ease overriding the
default value for the show_social_viewlet field (False)
and the fallback value for when the field does not exist for the
current object (False).
[maurits]
1.1 (2010-10-18)
----------------
- Explicitly load the zcml of the archetypes.schemaextender package so
you do not need to add this yourself to your buildout config on
Plone 3.2 or earlier.
[maurits]
1.0 (2010-10-18)
----------------
- Initial release. [maurits]
Introduction
============
This is yet another social bookmarking viewlet based on
http://www.addthis.com/
Why a new one and not for example `collective.addthis`_? Well,
probably just because it is so easy to generate the javascript with
the services we choose, and register this as a viewlet. We did that
for our own `Zest Software`_ site and a client wanted the same, but
then with a checkbox per page to turn it on or off.
Features
========
- This gives you a viewlet near the bottom of the page with links to
share this on LinkedIn, Twitter or Google; you can share on some other
sites in a popup; plus a print button.
- Also, you get an extra boolean field ``show_social_viewlet`` on the
edit page (the Settings tab) of content types (using
``archetypes.schemaextender``). When this field is checked, the viewlet
is shown. By default the field is not checked, so the viewlet is
not shown.
- The extra field and the viewlet are only available when
you have actually installed this Add-On in your Plone Site (this is
done using plone.browserlayer). So when your Zope instance has more
than one Plone Site, the viewlet is only used in the site where you
install it.
Configuration
=============
There is no configuration in the UI. If you want to override the
default value and fallback value for showing the viewlet you may want
to look at ``config.py`` and do a monkey patch on the values there.
If you want to change the links that are shown, you should just
override the viewlet template, which is probably easiest using
`z3c.jbot`_.
Compatibility
=============
``zest.social`` has been tested with Plone 3.3. and Plone 4.0, using
`archetypes.schemaextender`_ 1.6 and 2.0.3.
.. _`collective.addthis`: http://pypi.python.org/pypi/collective.addthis
.. _`archetypes.schemaextender`: http://pypi.python.org/pypi/archetypes.schemaextender
.. _`z3c.jbot`: http://pypi.python.org/pypi/z3c.jbot
.. _`Zest Software`: http://zestsoftware.nl
import os, shutil, sys, tempfile, urllib2
from optparse import OptionParser
tmpeggs = tempfile.mkdtemp()
is_jython = sys.platform.startswith('java')
# parsing arguments
parser = OptionParser(
'This is a custom version of the zc.buildout %prog script. It is '
'intended to meet a temporary need if you encounter problems with '
'the zc.buildout 1.5 release.')
parser.add_option("-v", "--version", dest="version", default='1.4.4',
help='Use a specific zc.buildout version. *This '
'bootstrap script defaults to '
'1.4.4, unlike usual buildout bootstrap scripts.*')
parser.add_option("-d", "--distribute",
action="store_true", dest="distribute", default=False,
help="Use Disribute rather than Setuptools.")
parser.add_option("-c", None, action="store", dest="config_file",
help=("Specify the path to the buildout configuration "
"file to be used."))
options, args = parser.parse_args()
# if -c was provided, we push it back into args for buildout's main function
if options.config_file is not None:
args += ['-c', options.config_file]
if options.version is not None:
VERSION = '==%s' % options.version
else:
VERSION = ''
USE_DISTRIBUTE = options.distribute
args = args + ['bootstrap']
to_reload = False
try:
import pkg_resources
if not hasattr(pkg_resources, '_distribute'):
to_reload = True
raise ImportError
except ImportError:
ez = {}
if USE_DISTRIBUTE:
exec urllib2.urlopen('http://python-distribute.org/distribute_setup.py'
).read() in ez
ez['use_setuptools'](to_dir=tmpeggs, download_delay=0, no_fake=True)
else:
exec urllib2.urlopen('http://peak.telecommunity.com/dist/ez_setup.py'
).read() in ez
ez['use_setuptools'](to_dir=tmpeggs, download_delay=0)
if to_reload:
reload(pkg_resources)
else:
import pkg_resources
if sys.platform == 'win32':
def quote(c):
if ' ' in c:
return '"%s"' % c # work around spawn lamosity on windows
else:
return c
else:
def quote(c):
return c
ws = pkg_resources.working_set
if USE_DISTRIBUTE:
requirement = 'distribute'
else:
requirement = 'setuptools'
env = dict(os.environ,
PYTHONPATH=
ws.find(pkg_resources.Requirement.parse(requirement)).location
)
cmd = [quote(sys.executable),
'-c',
quote('from setuptools.command.easy_install import main; main()'),
'-mqNxd',
quote(tmpeggs)]
if 'bootstrap-testing-find-links' in os.environ:
cmd.extend(['-f', os.environ['bootstrap-testing-find-links']])
cmd.append('zc.buildout' + VERSION)
if is_jython:
import subprocess
exitcode = subprocess.Popen(cmd, env=env).wait()
else: # Windows prefers this, apparently; otherwise we would prefer subprocess
exitcode = os.spawnle(*([os.P_WAIT, sys.executable] + cmd + [env]))
assert exitcode == 0
ws.add_entry(tmpeggs)
ws.require('zc.buildout' + VERSION)
import zc.buildout.buildout
zc.buildout.buildout.main(args)
shutil.rmtree(tmpeggs)
.. contents::
..
Contents
Introduction
============
When copying and pasting an object in Plone, the workflow state of the
newly pasted object is set to the initial state. Sometimes you want
to keep the original state. This is what ``zest.specialpaste`` does.
Use case
========
You use Plone to store some information about clients in a folder.
You have created a standard folder with a few sub folders and several
documents, images and files that you use as a template for new
clients. For new clients some of these objects should already be
published. You have set this up correctly in the template or sample
folder. You copy this folder, go to a new location and use the
'Special paste' action from ``zest.specialpaste`` to paste the objects
and let the review state of the new objects be the same as their
originals.
Compatibility
=============
Tested on Plone 4.0 and 4.1. Currently it does not work on Plone 3.3;
that surprises me, so it might be fixable.
Installation
============
- Add ``zest.specialpaste`` to the ``eggs`` of your buildout (and to
the ``zcml`` too if you are on Plone 3.2 or earlier, but it does not
work there currently). Rerun the buildout.
- Install Zest Special Paste in the Add-on Products control panel.
This adds a 'Special paste' action on objects and registers a
browser layer that makes our ``@@special-paste`` browser view available.
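For reference, a minimal buildout sketch could look like this (the part
layout here is illustrative, not copied from a real site)::

  [buildout]
  parts =
      instance

  [instance]
  recipe = plone.recipe.zope2instance
  eggs =
      Plone
      zest.specialpaste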
Future ideas
============
- We can add a form in between where you can specify what should be
special for the paste. When selecting no options it should do the
same as the standard paste action.
- Allow keeping the original owner.
- Take over local roles.
- Make compatible with Plone 3.3 as well.
| zest.specialpaste | /zest.specialpaste-1.2.zip/zest.specialpaste-1.2/README.txt | README.txt |
import os, shutil, sys, tempfile, urllib2
from optparse import OptionParser
tmpeggs = tempfile.mkdtemp()
is_jython = sys.platform.startswith('java')
# parsing arguments
parser = OptionParser(
'This is a custom version of the zc.buildout %prog script. It is '
'intended to meet a temporary need if you encounter problems with '
'the zc.buildout 1.5 release.')
parser.add_option("-v", "--version", dest="version", default='1.4.4',
help='Use a specific zc.buildout version. *This '
'bootstrap script defaults to '
                       '1.4.4, unlike usual buildout bootstrap scripts.*')
parser.add_option("-d", "--distribute",
action="store_true", dest="distribute", default=False,
                  help="Use Distribute rather than Setuptools.")
parser.add_option("-c", None, action="store", dest="config_file",
help=("Specify the path to the buildout configuration "
"file to be used."))
options, args = parser.parse_args()
# if -c was provided, we push it back into args for buildout's main function
if options.config_file is not None:
args += ['-c', options.config_file]
if options.version is not None:
VERSION = '==%s' % options.version
else:
VERSION = ''
USE_DISTRIBUTE = options.distribute
args = args + ['bootstrap']
to_reload = False
try:
import pkg_resources
if not hasattr(pkg_resources, '_distribute'):
to_reload = True
raise ImportError
except ImportError:
ez = {}
if USE_DISTRIBUTE:
exec urllib2.urlopen('http://python-distribute.org/distribute_setup.py'
).read() in ez
ez['use_setuptools'](to_dir=tmpeggs, download_delay=0, no_fake=True)
else:
exec urllib2.urlopen('http://peak.telecommunity.com/dist/ez_setup.py'
).read() in ez
ez['use_setuptools'](to_dir=tmpeggs, download_delay=0)
if to_reload:
reload(pkg_resources)
else:
import pkg_resources
if sys.platform == 'win32':
def quote(c):
if ' ' in c:
return '"%s"' % c # work around spawn lamosity on windows
else:
return c
else:
    def quote(c):
return c
ws = pkg_resources.working_set
if USE_DISTRIBUTE:
requirement = 'distribute'
else:
requirement = 'setuptools'
env = dict(os.environ,
PYTHONPATH=
ws.find(pkg_resources.Requirement.parse(requirement)).location
)
cmd = [quote(sys.executable),
'-c',
quote('from setuptools.command.easy_install import main; main()'),
'-mqNxd',
quote(tmpeggs)]
if 'bootstrap-testing-find-links' in os.environ:
cmd.extend(['-f', os.environ['bootstrap-testing-find-links']])
cmd.append('zc.buildout' + VERSION)
if is_jython:
import subprocess
exitcode = subprocess.Popen(cmd, env=env).wait()
else: # Windows prefers this, apparently; otherwise we would prefer subprocess
exitcode = os.spawnle(*([os.P_WAIT, sys.executable] + cmd + [env]))
assert exitcode == 0
ws.add_entry(tmpeggs)
ws.require('zc.buildout' + VERSION)
import zc.buildout.buildout
zc.buildout.buildout.main(args)
shutil.rmtree(tmpeggs)

| zest.specialpaste | /zest.specialpaste-1.2.zip/zest.specialpaste-1.2/bootstrap.py | bootstrap.py |
import logging
from Products.CMFCore.utils import getToolByName
from Products.CMFCore.WorkflowCore import WorkflowException
from zope.annotation.interfaces import IAnnotations
from zest.specialpaste.interfaces import ISpecialPasteInProgress
logger = logging.getLogger(__name__)
ANNO_KEY = 'zest.specialpaste.original'
_marker = object()
def update_copied_objects_list(object, event):
"""Update the list of which objects have been copied.
Note that the new object does not yet have an acquisition chain,
so we cannot really do much here yet. We are only interested in
the old object now.
When copying a single item:
- object is copy of item
- event.object is copy of item
- event.original is original item
Both copies have not been added to an acquisition context yet.
When copying a folder that has sub folders with content, like
    folder/sub/doc, and pasting it to the same location so the original
and pasted folders are at the same level, this event is also fired
with:
- object is copy of doc, with physical path copy_of_folder/sub/doc
- event.object is copy of folder, with physical path copy_of_folder
- event.original is the original folder
- sub is nowhere to be seen...
Luckily we can use physical paths in that case.
"""
request = event.original.REQUEST
if not ISpecialPasteInProgress.providedBy(request):
return
annotations = IAnnotations(object, None)
if annotations is None:
# Annotations on this object are not supported. This happens
# e.g. for SyndicationInformation, ATSimpleStringCriterion,
# and WorkflowPolicyConfig, so it is quite normal.
return
if object is event.object:
original = event.original
else:
# Use the path minus the top level folder, as that may be
# copy_of_folder.
path = '/'.join(object.getPhysicalPath()[1:])
try:
original = event.original.restrictedTraverse(path)
except:
logger.error("Could not get original %s from parent %r", path,
event.original)
raise
annotations[ANNO_KEY] = original.getPhysicalPath()
logger.debug("Annotation set: %r", '/'.join(original.getPhysicalPath()))
def update_cloned_object(object, event):
"""Update the cloned object.
Now the new (cloned) object has an acquisition chain and we can
start doing interesting things to it, based on the info of the old
object.
"""
request = object.REQUEST
if not ISpecialPasteInProgress.providedBy(request):
return
annotations = IAnnotations(object, None)
if annotations is None:
logger.debug("No annotations.")
return
original_path = annotations.get(ANNO_KEY, None)
if not original_path:
logger.debug("No original found.")
return
logger.debug("Original found: %r", original_path)
# We could delete our annotation, but it does not hurt to keep it
# and it may hurt to remove it when others write subscribers that
# depend on it.
#
# del annotations[ANNO_KEY]
original_object = object.restrictedTraverse('/'.join(original_path))
wf_tool = getToolByName(object, 'portal_workflow')
wfs = wf_tool.getWorkflowsFor(original_object)
if wfs is None:
return
for wf in wfs:
if not wf.isInfoSupported(original_object, 'review_state'):
continue
original_state = wf.getInfoFor(original_object, 'review_state',
_marker)
if original_state is _marker:
continue
# We need to store a real status on the new object.
former_status = wf_tool.getStatusOf(wf.id, original_object)
if former_status is None:
former_status = {}
# Use a copy for good measure
status = former_status.copy()
# We could fire a BeforeTransitionEvent and an
# AfterTransitionEvent, but that does not seem wise, as we do
# not want to treat this as a transition at all.
try:
wf_tool.setStatusOf(wf.id, object, status)
except WorkflowException:
logger.warn("WorkflowException when setting review state of "
"cloned object %r to %s.", object, original_state)
else:
logger.debug("Setting review state of cloned "
"object %r to %s.", object, original_state)
# Update role to permission assignments.
wf.updateRoleMappingsFor(object)
# Update the catalog, especially the review_state.
# object.reindexObjectSecurity() does not help though.
    # Note: idxs must be a list; a bare string would be iterated
    # character by character by the catalog.
    object.reindexObject(idxs=['review_state'])

| zest.specialpaste | /zest.specialpaste-1.2.zip/zest.specialpaste-1.2/zest/specialpaste/eventhandlers.py | eventhandlers.py |
Zest buildout stabilizer
========================
**Goal of this product**: zest.stabilizer helps move the trunk checkouts in
your development buildout to tag checkouts in your production buildout. It
detects the latest tag and changes stable.cfg accordingly.
It is at the moment quite `Zest software <http://zestsoftware.nl>`_ specific
in the sense that it is hardcoded to two assumptions/requirements that are
true for us.
Requirement 1: split buildout configs
-------------------------------------
At Zest software, we've settled on a specific buildout.cfg setup that
separates the buildout.cfg into five files:
unstable.cfg
Trunk checkouts, development eggs, development settings.
stable.cfg
Tag checkouts, released eggs. No development products.
devel.cfg/preview.cfg/production.cfg
Symlinked as production.cfg. The parts of the configuration that differ on
development laptops, the preview and the production system. Port numbers,
varnish installation, etc. Devel extends unstable, preview and production
extend stable.
zest.stabilizer thus moves the trunk checkouts in unstable.cfg to tag
checkouts in stable.cfg.
Requirement 2: infrae.subversion instead of svn:externals
---------------------------------------------------------
Our internal policy is to keep as much configuration in the buildout
config. So we've switched from svn:externals in ``src/`` to
infrae.subversion. We extended infrae.subversion to support development eggs
and to support placement in a different directory from the default
``parts/[partname]/``.
Zest.stabilizer expects a specific name ("ourpackages"). Such a part looks
like this::
[ourpackages]
recipe = infrae.subversion >= 1.4
urls =
https://svn.vanrees.org/svn/reinout/anker/anker.theme/trunk anker.theme
http://codespeak.net/svn/z3/deliverance/trunk Deliverance
as_eggs = true
location = src
What zest.stabilizer does
-------------------------
When you run ``stabilize``, zest.stabilizer does the following:
* Detect the ``[ourpackages]`` section in unstable.cfg and read in the urls.
* Remove "trunk" from each url and add "tags" and look up the available tags in
svn.
* Grab the highest number for each.
* Remove existing ``[ourpackages]`` in stable.cfg if it exists.
* Add ``[ourpackages]`` part into stable.cfg with those highest available tag
checkouts in it.
* Show the "svn diff" and ask you whether to commit the change.
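The resulting part written to ``stable.cfg`` looks roughly like this (the
URL is taken from the example above; the tag number is of course made up)::

  [ourpackages]
  recipe = infrae.subversion >= 1.4
  urls =
      https://svn.vanrees.org/svn/reinout/anker/anker.theme/tags/1.2 anker.theme
  as_eggs = true
  location = src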
Helper command: ``needrelease``
-------------------------------
Before stabilization, often a set of products first needs to be released. If
you have multiple packages, it is a chore to check all the svn logs to see if
there's a change since the last release.
Run ``needrelease`` and you'll get the last svn log message of every detected package.
Installation
------------
Installation is a simple ``easy_install zest.stabilizer``.
zest.stabilizer requires zest.releaser, which is installed automatically as a
dependency. Wow, more goodies!
Included programs
-----------------
Two programs are installed globally:
* ``unstable_fixup`` which currently only assists with moving ``src/*``
development eggs to an infrae.subversion part. At the end it prints
instructions for further work that you have to do manually.
* ``stabilize`` which takes the infrae.subversion part of ``unstable.cfg``
and finds out the latest tags for each of those development packages. It
then adds a similar part to ``stable.cfg``.
The development version of zest.stabilizer can be found at
https://svn.plone.org/svn/collective/zest.stabilizer/trunk .
| zest.stabilizer | /zest.stabilizer-1.4.tar.gz/zest.stabilizer-1.4/zest/stabilizer/README.txt | README.txt |
import difflib
import logging
import sys
import os
from pkg_resources import parse_version
from commands import getoutput
from zest.releaser import utils
import zest.releaser.choose
import buildoututils
logger = logging.getLogger('stabilize')
PARTNAME = 'ourpackages'
def check_for_files():
"""Make sure stable.cfg and unstable.cfg are present"""
required_files = ['stable.cfg', 'unstable.cfg']
available = os.listdir('.')
for required in required_files:
if required not in available:
logger.critical("Required file %s not found.", required)
sys.exit()
logger.debug("stable.cfg and unstable.cfg found.")
def development_eggs():
"""Return list of development egg directories from unstable.cfg.
Zest assumption: we've placed them in src/*
"""
unstable_lines = open('unstable.cfg').read().split('\n')
part = buildoututils.extract_parts(unstable_lines,
partname=PARTNAME)
if not part:
logger.error("No [%s] part, unstable.cfg isn't up-to-date.",
PARTNAME)
return
(start,
end,
specs,
url_section) = buildoututils.extract_option(
unstable_lines,
'urls',
startline=part['start'],
endline=part['end'])
development_eggs = []
for spec in specs:
url, name = spec.split()
development_eggs.append('src/%s' % name)
return development_eggs
def determine_tags(directories):
"""Return desired tags for all development eggs"""
results = []
start_dir = os.path.abspath('.')
for directory in directories:
logger.debug("Determining tag for %s...", directory)
os.chdir(directory)
vcs = zest.releaser.choose.version_control()
version = vcs.version
logger.debug("Current version is %r.", version)
available_tags = vcs.available_tags()
# We seek a tag that's the same or less than the version as determined
# by setuptools' version parsing. A direct match is obviously
# right. The 'less' approach handles development eggs that have
# already been switched back to development.
available_tags.reverse()
found = available_tags[0]
parsed_version = parse_version(version)
for tag in available_tags:
parsed_tag = parse_version(tag)
parsed_found = parse_version(found)
if parsed_tag == parsed_version:
found = tag
logger.debug("Found exact match: %s", found)
break
if (parsed_tag >= parsed_found and
parsed_tag < parsed_version):
logger.debug("Found possible lower match: %s", tag)
found = tag
name = vcs.name
full_tag = vcs.tag_url(found)
        logger.debug("Picked tag %r for %s (tag name %r).",
                     full_tag, name, found)
results.append((full_tag, name))
os.chdir(start_dir)
return results
def url_list():
"""Return version lines"""
stable_lines = open('stable.cfg').read().split('\n')
part = buildoututils.extract_parts(stable_lines, partname=PARTNAME)
if not part:
return []
(start,
end,
specs,
url_section) = buildoututils.extract_option(
stable_lines,
'urls',
startline=part['start'],
endline=part['end'])
# http://somethinglong/tags/xyz something
interesting = [spec.split('tags/')[1] for spec in specs]
logger.debug("Version lines: %s", interesting)
return interesting
def remove_old_part():
"""If PARTNAME already exists, remove it"""
stable_lines = open('stable.cfg').read().split('\n')
old_part = buildoututils.extract_parts(stable_lines, partname=PARTNAME)
if old_part:
del stable_lines[old_part['start']:old_part['end']]
contents = '\n'.join(stable_lines)
open('stable.cfg', 'w').write(contents)
logger.debug("New stable.cfg written: old part removed.")
def add_new_part(tags, filename='stable.cfg'):
"""Add PARTNAME part"""
new = ['[%s]' % PARTNAME]
new.append('recipe = infrae.subversion >= 1.4')
checkouts = []
for tag, name in tags:
checkouts.append('%s %s' % (tag, name))
lines = buildoututils.format_option('urls', checkouts)
new += lines
new.append('as_eggs = true')
new.append('location = src')
new.append('')
new.append('')
lines = open(filename).read().splitlines()
first = buildoututils.extract_parts(lines)[0]
insertion_point = first['end']
lines[insertion_point:insertion_point] = new
contents = '\n'.join(lines)
# Add newline at end of file:
contents += '\n'
open(filename, 'w').write(contents)
logger.debug("New %s written. Added %s", filename, new)
def check_stable():
"""Check whether common tasks are handled"""
logger.info("Make sure part %s is actually called.", PARTNAME)
logger.info("Make sure ${%s:eggs} is actually included.", PARTNAME)
def insert_msg_into_history(msg):
vcs = zest.releaser.choose.version_control()
filename = vcs.history_file()
if not filename:
return
lines = open(filename).read().splitlines()
headings = utils.extract_headings_from_history(lines)
if not headings:
return
# Hardcoded zest assumptions.
target_line = headings[0]['line'] + 3
if len(lines) < target_line:
return
if 'nothing changed yet' in lines[target_line].lower():
del lines[target_line]
# Msg formatting:
msg = msg[:]
msg = [' %s' % line for line in msg]
msg += ['']
firstline = msg[0]
firstline = '-' + firstline[1:]
msg[0] = firstline
lines[target_line:target_line] = msg
contents = '\n'.join(lines)
# Add newline at end of file:
contents += '\n'
open(filename, 'w').write(contents)
logger.debug("Written change to history file")
def main():
logging.basicConfig(level=utils.loglevel(),
format="%(levelname)s: %(message)s")
check_for_files()
directories = development_eggs()
if not directories:
sys.exit()
old_situation = url_list()
tags = determine_tags(directories)
remove_old_part()
add_new_part(tags)
new_situation = url_list()
diff = list(difflib.ndiff(old_situation, new_situation))
logger.debug("Diff: %s", diff)
check_stable()
# XXX The diff is too ugly to put in the history file or the
# commit message.
msg = ["Stabilized buildout to most recent svn tags of our packages:"]
msg += diff
insert_msg_into_history(msg)
msg = '\n'.join(msg)
# show diff, offer commit
vcs = zest.releaser.choose.version_control()
diff_cmd = vcs.cmd_diff()
diff = getoutput(diff_cmd)
logger.info("The '%s':\n\n%s\n" % (diff_cmd, diff))
if utils.ask("OK to commit this"):
commit_cmd = vcs.cmd_commit(msg)
commit = getoutput(commit_cmd)
        logger.info(commit)

| zest.stabilizer | /zest.stabilizer-1.4.tar.gz/zest.stabilizer-1.4/zest/stabilizer/stabilize.py | stabilize.py |
import logging
import re
logger = logging.getLogger('utils')
WRONG_IN_VERSION = ['svn', 'dev', '(']
def extract_option(buildout_lines, option_name, startline=None, endline=None):
"""Return info on an option (like 'develop=...').
Return start/end line numbers, actual option lines and a list of
options.
"""
pattern = re.compile(r"""
^%s # Line that starts with the option name
\W*= # Followed by an '=' with possible whitespace before it.
""" % option_name, re.VERBOSE)
line_number = 0
first_line = None
if startline is not None:
logger.debug("Searching in specific lines: %s",
'\n'.join(buildout_lines[startline:endline]))
for line in buildout_lines:
if startline is not None:
if line_number < startline or line_number > endline:
line_number += 1
continue
match = pattern.search(line)
if match:
logger.debug("Matching %s line found: %r", option_name, line)
start = line_number
first_line = line
break
line_number += 1
if not first_line:
logger.error("'%s = ....' line not found.", option_name)
return (None, None, None, None)
    option_values = [first_line.split('=')[1]]
    # Fall back to the end of the file in case the option is the last
    # thing in the buildout config.
    end = len(buildout_lines)
    for line in buildout_lines[start + 1:]:
        line_number += 1
        if ('=' in line or '[' in line):
            if '#egg=' not in line:
                end = line_number
                break
        option_values.append(line)
option_values = [item.strip() for item in option_values
if item.strip()]
logger.info("Found option values: %r.", option_values)
option_section = buildout_lines[start:end]
return (start,
end,
option_values,
option_section)
def extract_parts(buildout_lines, partname=None):
"""Return info on the parts (like [development]) in the buildout.
Return list of dicts with start/end line numbers, part name and the
actual part lines.
"""
results = []
pattern = re.compile(r"""
^\[ # A '[' at the start of a line
\S+ # Followed by one continuous word
\] # Closing ']'
\W*$ # Followed by optional whitespace and nothing more.
""", re.VERBOSE)
line_number = 0
start = None
name = None
# TODO below
for line in buildout_lines:
match = pattern.search(line)
if match:
# Handle previous part if we already have some data.
if start is not None:
results.append(dict(start=start,
end=line_number,
name=name))
logger.debug("Matching line found: %r", line)
line = line.strip()
line = line.replace('[', '')
line = line.replace(']', '')
name = line
start = line_number
line_number += 1
# Handle last part
results.append(dict(start=start,
end=line_number,
name=name))
if partname is not None:
for result in results:
if result['name'] == partname:
return result
return None
return results
def format_option(name, options):
"""Return lines with formatted option."""
lines = ['%s =' % name]
for option in options:
if option.startswith('#'):
# Comments must start in the first column, so don't indent them.
lines.append(option)
else:
lines.append(' %s' % option)
lines.append('')
    return lines

| zest.stabilizer | /zest.stabilizer-1.4.tar.gz/zest.stabilizer-1.4/zest/stabilizer/buildoututils.py | buildoututils.py |
zest.zodbupdate
===============
zodbupdate rename dictionary and dexterity patch for Plone 5.2 projects.
See `post on community.plone.org <https://community.plone.org/t/zodbverify-porting-plone-with-zopedb-to-python3/8806/13>`_.
And the `Plone ZODB Python 3 migration docs <https://docs.plone.org/manage/upgrading/version_specific_migration/upgrade_zodb_to_python3.html>`_.
Quick usage
-----------
In a simplified ``buildout.cfg``::
[buildout]
parts =
instance
zodbupdate
[instance]
recipe = plone.recipe.zope2instance
eggs =
Plone
zodbverify
[zodbupdate]
recipe = zc.recipe.egg
scripts = zodbupdate
eggs =
zodbupdate
zest.zodbupdate
${instance:eggs}
Run ``bin/buildout`` and then ``bin/zodbupdate -f var/filestorage/Data.fs``.
Use case and process
--------------------
You want to migrate your Plone 5.2 database from Python 2.7 to Python 3.
You use the zodbverify and zodbupdate tools for this.
When you first run ``bin/zodbverify`` or ``bin/instance zodbverify``, you may see warnings and exceptions.
It may warn about problems that zodbupdate will fix.
So the idea is now:
1. First with Python 2.7, run ``bin/zodbupdate -f var/filestorage/Data.fs``
   So *no* Python 3 conversion options yet!
This will detect and apply several explicit and implicit rename rules.
2. Then run ``bin/instance zodbverify``.
If this still gives warnings or exceptions,
you may need to define more rules and apply them with zodbupdate.
3. When all is well, on Python 3 run::
bin/zodbupdate --convert-py3 --file=var/filestorage/Data.fs --encoding utf8
4. For good measure, on Python 3 run ``bin/instance zodbverify``.
When this works fine on a copy of your production database,
you could choose to save some downtime and only do step 3 on your production database.
But please check this process again on a copy of your database.
Rename rules
------------
zodbverify may give warnings and exceptions like these::
Warning: Missing factory for Products.ResourceRegistries.interfaces.settings IResourceRegistriesSettings
Warning: Missing factory for App.interfaces IPersistentExtra
Warning: Missing factory for App.interfaces IUndoSupport
...
Found X records that could not be loaded.
Exceptions and how often they happened:
ImportError: No module named ResourceRegistries.interfaces.settings: 8
AttributeError: 'module' object has no attribute 'IPersistentExtra': 4508
For each, you need to check if it can be safely replaced by something else,
or if this points to a real problem: maybe a previously installed add-on is missing.
In this case, these interfaces seem no longer needed.
Easiest is to replace them with a basic Interface.
Maybe there are better ways to clean these up, but so far so good.
You fix these with renames in an entrypoint using zodbupdate.
See https://github.com/zopefoundation/zodbupdate#pre-defined-rename-rules
The current package defines such an entrypoint.
Here is the rename dictionary from the `master branch <https://github.com/zestsoftware/zest.zodbupdate/blob/master/src/zest/zodbupdate/renames.py>`_.
The warnings and exceptions mentioned above are handled here.
Each version of this package may have different contents.
Note that I have seen several warnings that are not handled but that seem innocent.
I choose to ignore them.
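Such an entrypoint boils down to a module with a ``renames`` dictionary.
A sketch of what rules for the warnings above could look like (the rules
actually shipped in this package may differ)::

    renames = {
        "Products.ResourceRegistries.interfaces.settings IResourceRegistriesSettings":
            "zope.interface Interface",
        "App.interfaces IPersistentExtra": "zope.interface Interface",
        "App.interfaces IUndoSupport": "zope.interface Interface",
    }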
These are some warnings because of a missing ``webdav`` (removed in Zope 4.0, reintroduced in 4.3)::
Warning: Missing factory for webdav.interfaces IDAVResource
Warning: Missing factory for webdav.interfaces IFTPAccess
Warning: Missing factory for webdav.interfaces IDAVCollection
Dynamic dexterity schemas
-------------------------
A special case that ``bin/zodbupdate`` and ``bin/zodbverify`` may bump into, is::
AttributeError: Cannot find dynamic object factory for module plone.dexterity.schema.generated: 58
Warning: Missing factory for plone.dexterity.schema.generated Plone_0_Image
Warning: Missing factory for plone.dexterity.schema.generated Plone_0_Document
Warning: Missing factory for plone.dexterity.schema.generated Site2_0_News_1_Item
Warning: Missing factory for plone.dexterity.schema.generated Site3_0_Document
This is because no zcml is loaded by these scripts.
So this utility from ``plone.dexterity/configure.zcml`` is not registered::
<utility
factory=".schema.SchemaModuleFactory"
name="plone.dexterity.schema.generated"
/>
This utility implements ``plone.alterego.interfaces.IDynamicObjectFactory``.
This is responsible for generating schemas on the fly.
So we register this utility in Python code.
Note that in normal use (``bin/instance``) this would result in a double registration,
but the second one is simply ignored by zope.interface, because it is the same.
Also, when you have zodbverify in the instance eggs and you call ``bin/instance zodbverify``,
you will not get this error, because then zcml is loaded, and no special handling is needed.
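The registration itself is small.  A sketch of what effectively happens on
import (guarded, because ``plone.dexterity`` may not always be available)::

    try:
        from plone.dexterity.schema import SchemaModuleFactory
        from zope.component import getGlobalSiteManager
    except ImportError:
        pass
    else:
        getGlobalSiteManager().registerUtility(
            SchemaModuleFactory(), name="plone.dexterity.schema.generated")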
Package structure
-----------------
- This package only has an ``__init__.py`` file.
- It has the rename dictionary pointed to by the entrypoint in our ``setup.cfg``.
- It is only loaded when running ``bin/zodbupdate``, because this is the only code that looks for the entrypoint.
- As a side effect, when the entrypoint is loaded we also register the dexterity utility when available.
This code is executed simply because it also is in the ``__init__.py`` file.
| zest.zodbupdate | /zest.zodbupdate-1.0.0b2.tar.gz/zest.zodbupdate-1.0.0b2/README.rst | README.rst |
Zester
=========================
Zester is a library that makes it easier to develop Python clients for websites without APIs.
No lxml, no XPath, just javascript.
Let's make a client library for `Hacker News <http://news.ycombinator.com/>`_ by saving the following code in a file named hnclient.py::
from zester import MultipleClient, Attribute
class HNClient(MultipleClient):
url = "http://news.ycombinator.com/"
title = Attribute(selector="$('.title a')", modifier="$(el).html()")
    link = Attribute(selector="$('.title a')", modifier="$(el).attr('href')")
points = Attribute(selector="$('.subtext span')", modifier="$(el).html().replace(' points', '')")
Now, let's use the client we just made. Open a python shell::
>>> from hnclient import HNClient
>>> client = HNClient()
>>> stories = client.process()
>>> stories[0]
HNClientResponse(points=u'200', link=u'http://daltoncaldwell.com/what-twitter-could-have-been', title=u'What Twitter could have been')
>>> print stories[0].title
What Twitter could have been
>>> print stories[0].link
http://daltoncaldwell.com/what-twitter-could-have-been
>>> print stories[0].points
    200
We subclassed MultipleClient there because we were planning on returning multiple results. If we wanted to make a client for something like `Weather.gov <http://weather.gov>`_ that returned a single result, we could do something like this::
from zester import SingleClient, Attribute
class WeatherClient(SingleClient):
url = "http://forecast.weather.gov/MapClick.php?lat={lat}&lon={lng}"
temperature = Attribute(selector="$('.myforecast-current-lrg').html()")
humidity = Attribute(selector="$('.current-conditions-detail li').contents()[1]")
heat_index = Attribute(selector="$('.current-conditions-detail li').contents()[11]")
def __init__(self, lat, lng, *args, **kwargs):
super(WeatherClient, self).__init__(*args, **kwargs)
self.url = self.url.format(lat=lat, lng=lng)
This also demonstrates how you can accept arguments::
>>> from weather_client import WeatherClient
>>> client = WeatherClient(lat=40.7143528, lng=-74.0059731)
>>> curr_weather = client.process()
>>> curr_weather
WeatherClientResponse(heat_index=u'82\xb0F (28\xb0C)', temperature=u'80\xb0F', humidity=u'58%')
>>> print curr_weather.temperature
80°F
>>> print curr_weather.humidity
58%
>>> print curr_weather.heat_index
82°F (28°C)
Installation
------------
Zester is dependent upon `Ghost.py <http://jeanphix.me/Ghost.py/>`_. You must install Ghost.py before installing Zester; Ghost.py will in turn require either PyQt or PySide.
After Ghost.py is installed, to install zester: ::
$ pip install zester
| zester | /zester-0.0.3.tar.gz/zester-0.0.3/README.rst | README.rst |
import json
from urllib import request
import parse_ingredient.ingredient
class Error(Exception):
pass
class ZestfulServerError(Error):
pass
class InsufficientQuotaError(Error):
pass
_DEMO_ENDPOINT_URL = 'https://sandbox.zestfuldata.com'
_RAPID_API_DOMAIN = 'zestful.p.rapidapi.com'
_RAPID_API_URL = 'https://' + _RAPID_API_DOMAIN
class Client:
def __init__(self, endpoint_url=None, rapid_api_key=None):
self._rapid_api_key = rapid_api_key
if endpoint_url:
self._endpoint_url = endpoint_url
elif self._rapid_api_key:
self._endpoint_url = _RAPID_API_URL
else:
self._endpoint_url = _DEMO_ENDPOINT_URL
def parse_ingredient(self, ingredient):
results_raw = self._send_request([ingredient])
_check_quota(results_raw)
if results_raw['error']:
raise ZestfulServerError('failed to parse ingredient: %s' %
results_raw['error'])
if len(results_raw['results']) != 1:
raise ValueError(
'Unexpected response from server. Expected 1 result, got %d' %
len(results_raw['results']))
result_raw = results_raw['results'][0]
if result_raw['error']:
raise ZestfulServerError('failed to parse ingredient: %s' %
result_raw['error'])
return parse_ingredient.ingredient.ParsedIngredient(
confidence=result_raw['confidence'],
product=result_raw['ingredientParsed']['product'],
product_size_modifier=result_raw['ingredientParsed']
['productSizeModifier'],
quantity=result_raw['ingredientParsed']['quantity'],
unit=result_raw['ingredientParsed']['unit'],
preparation_notes=result_raw['ingredientParsed']
['preparationNotes'],
usda_info=_parse_usda_info(
result_raw['ingredientParsed']['usdaInfo']))
def parse_ingredients(self, ingredients):
results_raw = self._send_request(ingredients)
_check_quota(results_raw)
if results_raw['error']:
raise ZestfulServerError('failed to parse ingredients: %s' %
results_raw['error'])
results = []
for result_raw in results_raw['results']:
results.append(
parse_ingredient.ingredient.ParsedIngredientEntry(
error=result_raw['error'],
raw=result_raw['ingredientRaw'],
parsed=parse_ingredient.ingredient.ParsedIngredient(
confidence=result_raw['confidence'],
product=result_raw['ingredientParsed']['product'],
product_size_modifier=result_raw['ingredientParsed']
['productSizeModifier'],
quantity=result_raw['ingredientParsed']['quantity'],
unit=result_raw['ingredientParsed']['unit'],
preparation_notes=result_raw['ingredientParsed']
['preparationNotes'],
usda_info=_parse_usda_info(
result_raw['ingredientParsed']['usdaInfo']))))
return parse_ingredient.ingredient.ParsedIngredients(
ingredients=results)
def _send_request(self, ingredients):
req = request.Request(self._endpoint_url + '/parseIngredients',
method='POST')
body = json.dumps({'ingredients': ingredients}).encode('utf-8')
req.add_header('Content-Type', 'application/json')
req.add_header('Content-Length', len(body))
if self._rapid_api_key:
req.add_header('x-rapidapi-key', self._rapid_api_key)
req.add_header('x-rapidapi-host', _RAPID_API_DOMAIN)
with request.urlopen(req, data=body) as response:
return json.loads(response.read().decode('utf-8'))
def _parse_usda_info(usda_info_raw):
if not usda_info_raw:
return None
return parse_ingredient.ingredient.UsdaInfo(
category=usda_info_raw['category'],
description=usda_info_raw['description'],
fdc_id=usda_info_raw['fdcId'],
match_method=usda_info_raw['matchMethod'])
def _check_quota(server_response):
if not server_response['error']:
return
server_error = server_response['error']
if 'insufficient quota' in server_error.lower():
raise InsufficientQuotaError(
'You have insufficient quota to complete this request. '
'To continue parsing ingredients, purchase a Zestful plan from '
'https://zestfuldata.com') | zestful-parse-ingredient | /zestful_parse_ingredient-0.0.7-py3-none-any.whl/parse_ingredient/internal/client.py | client.py |
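The methods above unpack a fixed JSON shape returned by the Zestful server. A self-contained sketch of that unpacking, using a simplified stand-in class and a hypothetical sample response (not real server output, and not the package's actual `ParsedIngredient` type):

```python
# Minimal stand-in mirroring the unpacking done in parse_ingredient() above.
# The sample response below is hypothetical, not real Zestful output.
from dataclasses import dataclass


@dataclass
class ParsedIngredient:
    confidence: float
    product: str
    quantity: float
    unit: str


def unpack(results_raw):
    # Same order of checks as above: top-level error, then per-result error.
    if results_raw['error']:
        raise RuntimeError(results_raw['error'])
    result = results_raw['results'][0]
    if result['error']:
        raise RuntimeError(result['error'])
    parsed = result['ingredientParsed']
    return ParsedIngredient(
        confidence=result['confidence'],
        product=parsed['product'],
        quantity=parsed['quantity'],
        unit=parsed['unit'])


sample = {
    'error': None,
    'results': [{
        'error': None,
        'confidence': 0.97,
        'ingredientParsed': {
            'product': 'flour',
            'quantity': 2.0,
            'unit': 'cup',
        },
    }],
}
ing = unpack(sample)
print(ing.product, ing.quantity, ing.unit)
```

The real client performs the same mapping but also pulls `productSizeModifier`, `preparationNotes`, and the optional USDA info.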
Changelog
=========
.. NOTE: You should *NOT* be adding new change log entries to this file, this
file is managed by towncrier. You *may* edit previous change logs to
fix problems like typo corrections or such.
To add a new change log entry, please see the notes from the ``pip`` project at
https://pip.pypa.io/en/latest/development/#adding-a-news-entry
.. towncrier release notes start
1.3.0 (2022-04-19)
------------------
New features:
- Use the ``build`` subcommand for ``towncrier`` to build the changelog.
Fixes compatibility with ``towncrier`` 21.9.0 or later.
Requires ``towncrier`` 19.9.0 or later.
[mcflugen] (`Issue #22 <https://github.com/collective/zestreleaser.towncrier/issues/22>`_)
- For parsing, use ``tomli`` when on Python 3, ``toml`` on Python 2.
Same as ``towncrier`` did until recently.
[maurits] (`Issue #23 <https://github.com/collective/zestreleaser.towncrier/issues/23>`_)
1.2.0 (2019-03-05)
------------------
New features:
- Use 'python -m towncrier' when the script is not easily findable.
Still check the directory of the fullrelease script first.
No longer check the PATH.
[maurits] (`Issue #17 <https://github.com/collective/zestreleaser.towncrier/issues/17>`_)
Bug fixes:
- Do not run sanity checks or run draft during postrelease. [maurits] (`Issue #16 <https://github.com/collective/zestreleaser.towncrier/issues/16>`_)
1.1.0 (2019-03-05)
------------------
New features:
- Rerelease 1.0.3 as 1.1.0, as it contains new features. (`Issue #9 <https://github.com/collective/zestreleaser.towncrier/issues/9>`_)
1.0.3 (2019-03-05)
------------------
New features:
- Report on sanity of newsfragments: do they have the correct extensions?
Is at least one found?
Show dry-run (draft) of what towncrier would do.
[maurits] (`Issue #9 <https://github.com/collective/zestreleaser.towncrier/issues/9>`_)
- Handle multiple news entries per issue/type pair. [maurits] (`Issue #14 <https://github.com/collective/zestreleaser.towncrier/issues/14>`_)
1.0.2 (2019-03-04)
------------------
Bug fixes:
- Fixed finding towncrier when sys.argv is messed up. [maurits] (`Issue #6 <https://github.com/collective/zestreleaser.towncrier/issues/6>`_)
1.0.1 (2019-02-20)
------------------
Bug fixes:
- Tell bumpversion to not update the history. [maurits] (`Issue #10
<https://github.com/collective/zestreleaser.towncrier/issues/10>`_)
1.0.0 (2019-02-06)
------------------
New features:
- Warn and ask when towncrier is wanted but not found. [maurits] (`Issue #7
<https://github.com/collective/zestreleaser.towncrier/issues/7>`_)
1.0.0b3 (2018-05-17)
--------------------
New features:
- Require towncrier 18.5.0 so we don't need a package name in the config.
[maurits] (`Issue #3
<https://github.com/collective/zestreleaser.towncrier/issues/3>`_)
Bug fixes:
- First look for ``towncrier`` next to the ``full/prerelease`` script. Then
fall back to looking on the ``PATH``. [maurits] (`Issue #4
<https://github.com/collective/zestreleaser.towncrier/issues/4>`_)
1.0.0b2 (2018-05-16)
--------------------
Bug fixes:
- Do not fail when pyproject.toml file is not there. [maurits] (`Issue #2
<https://github.com/collective/zestreleaser.towncrier/issues/2>`_)
1.0.0b1 (2018-05-15)
--------------------
New features:
- First release. [maurits] (`Issue #1
<https://github.com/collective/zestreleaser.towncrier/issues/1>`_)
| zestreleaser.towncrier | /zestreleaser.towncrier-1.3.0.tar.gz/zestreleaser.towncrier-1.3.0/CHANGES.rst | CHANGES.rst |
.. This README is meant for consumption by humans and pypi. Pypi can render rst files so please do not use Sphinx features.
If you want to learn more about writing documentation, please check out: http://docs.plone.org/about/documentation_styleguide.html
This text does not appear on pypi or github. It is a comment.
zestreleaser.towncrier
======================
This calls `towncrier <https://github.com/hawkowl/towncrier>`_ when releasing a package with `zest.releaser <http://zestreleaser.readthedocs.io/en/latest/>`_.
``towncrier`` updates your history file (like ``CHANGES.rst``) based on news snippets.
This is for example `used by pip <https://pip.pypa.io/en/latest/development/#adding-a-news-entry>`_.
The plugin will call ``towncrier build --version <package version> --yes``.
You can get a preview of the result yourself by calling ``towncrier build --version 1.2.3 --draft``.
The ``towncrier`` command should be on your ``PATH``.
The plugin can also find it when it is in the same directory as the ``fullrelease`` script (or ``prerelease/postrelease``).
Installation
------------
Install ``zestreleaser.towncrier`` with ``pip``::
$ pip install zestreleaser.towncrier
Then you can run ``fullrelease`` like you would normally do when releasing a package.
Contribute
----------
- Issue Tracker: https://github.com/collective/zestreleaser.towncrier/issues
- Source Code: https://github.com/collective/zestreleaser.towncrier
Support
-------
If you are having problems, please let us know by filing an `issue <https://github.com/collective/zestreleaser.towncrier/issues>`_.
License
-------
The project is licensed under the GPL.
| zestreleaser.towncrier | /zestreleaser.towncrier-1.3.0.tar.gz/zestreleaser.towncrier-1.3.0/README.rst | README.rst |
zestreleaser.towncrier Copyright 2018, Maurits van Rees
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston,
MA 02111-1307 USA.
| zestreleaser.towncrier | /zestreleaser.towncrier-1.3.0.tar.gz/zestreleaser.towncrier-1.3.0/LICENSE.rst | LICENSE.rst |
import io
import logging
import os
import sys
from copy import deepcopy
from textwrap import dedent
from zest.releaser import utils
try:
# We prefer tomli, as pip and towncrier use it.
import tomli
toml = None
except ImportError:
# But tomli is not available on Python 2.
tomli = None
import toml
logger = logging.getLogger(__name__)
TOWNCRIER_MARKER = "_towncrier_applicable"
TOWNCRIER_CONFIG_FILE = "pyproject.toml"
def _towncrier_executable():
"""Find the towncrier executable.
We return a list, with either [path/to/towncrier] or
[python, -m, towncrier]
"""
# First try to find towncrier in the same directory as full/prerelease.
# That is most likely to be the version that we want.
script = sys.argv[0]
if script == "setup.py":
# Problem caused by 'pyroma' zest.releaser plugin. See
# https://github.com/collective/zestreleaser.towncrier/issues/6
# Try the Python executable instead, which should work in case
# zest.releaser was installed in a virtualenv.
script = sys.executable
releaser_path = os.path.abspath(script)
releaser_dir = os.path.split(releaser_path)[0]
path = os.path.join(releaser_dir, "towncrier")
if os.path.isfile(path):
return [path]
# It might be a symbolic link. Follow it and try again.
releaser_path = os.path.realpath(releaser_path)
releaser_dir = os.path.split(releaser_path)[0]
path = os.path.join(releaser_dir, "towncrier")
if os.path.isfile(path):
return [path]
# towncrier is in our install_requires, so it is available as module,
# and since 18.6.0 it supports calling with 'python -m towncrier'.
# (Note: you will want 19.2.0+ on Python 2.7.)
return [sys.executable, "-m", "towncrier"]
def _load_config():
if tomli is not None:
with io.open(TOWNCRIER_CONFIG_FILE, "rb") as conffile:
return tomli.load(conffile)
with io.open(TOWNCRIER_CONFIG_FILE, "r", encoding="utf8", newline="") as conffile:
return toml.load(conffile)
def _is_towncrier_wanted():
if not os.path.exists(TOWNCRIER_CONFIG_FILE):
return
full_config = _load_config()
try:
config = full_config["tool"]["towncrier"]
except KeyError:
return
return True
def _report_newsfragments_sanity():
"""Report on the sanity of the newsfragments.
I hope this is not too specific to the pyproject.toml config
that I am used to.
"""
full_config = _load_config()
config = full_config["tool"]["towncrier"]
if "type" in config:
types = [entry.get("directory") for entry in config["type"]]
else:
# towncrier._settings._default_types seems too private to depend on,
# so hardcopy it.
types = ["feature", "bugfix", "doc", "removal", "misc"]
# Where are the snippets stored?
directory = config.get("directory")
if not directory:
# Look for "newsfragments" directory.
# We could look in the config file, but I wonder if that may change,
# so simply look for a certain directory name.
fragment_directory = "newsfragments"
for dirpath, dirnames, filenames in os.walk("."):
if dirpath.startswith(os.path.join(".", ".")):
# for example ./.git
continue
if fragment_directory in dirnames:
directory = os.path.join(dirpath, fragment_directory)
break
if not directory:
# Either towncrier won't work, or our logic is off.
print("WARNING: could not find newsfragments directory " "for towncrier.")
return
problems = []
correct = []
for filename in os.listdir(directory):
if filename.startswith("."):
continue
# filename can be like 42.bugfix or 42.bugfix.1
filename_parts = filename.split(".")
if 2 <= len(filename_parts) <= 3:
# In both cases, we take item 1.
ext = filename_parts[1]
if ext in types:
correct.append(filename)
continue
problems.append(filename)
print(
"Found {0} towncrier newsfragments with recognized extension.".format(
len(correct)
)
)
if problems:
print(
dedent(
"""
WARNING: According to the pyproject.toml file,
towncrier accepts news snippets with these extensions:
{0}
Problem: the {1} directory contains files with other extensions,
which will be ignored:
{2}
""".format(
", ".join(types), directory, ", ".join(problems)
)
)
)
if not utils.ask("Do you want to continue anyway?", default=False):
sys.exit(1)
if len(correct) == 0:
print(
dedent(
"""
WARNING: No towncrier newsfragments found.
The changelog will not contain anything interesting.
"""
)
)
if not utils.ask("Do you want to continue anyway?", default=False):
sys.exit(1)
def check_towncrier(data, check_sanity=True, do_draft=True):
"""Check if towncrier can and should be run.
This is a zest.releaser entrypoint that is called during
prerelease, postrelease, and bumpversion.
Not in all cases are all checks useful, so there are some options.
For example, postrelease should not complain that there are no
news fragments.
"""
if TOWNCRIER_MARKER in data:
# We have already been called.
return data[TOWNCRIER_MARKER]
if not data.get("update_history", True):
# Someone has instructed zest.releaser to not update the history,
# and it was not us, because our marker was not set,
# so we should not update the history either.
logger.debug("update_history is already False, so towncrier will not be run.")
data[TOWNCRIER_MARKER] = False
return False
# Check if towncrier should be applied.
result = _is_towncrier_wanted()
if not result:
logger.debug("towncrier is not wanted.")
else:
result = _towncrier_executable()
if result:
logger.debug("towncrier should be run.")
# zest.releaser should not update the history.
# towncrier will do that.
data["update_history"] = False
if check_sanity:
_report_newsfragments_sanity()
if do_draft:
# Do a draft.
cmd = deepcopy(result)
cmd.extend(
[
"build",
"--draft",
"--version",
data.get("new_version", "t.b.d."),
"--yes",
]
)
# We would like to pass ['--package', 'package name'] as well,
# but that is not yet in a release of towncrier.
logger.info(
"Doing dry-run of towncrier to see what would be changed: %s",
utils.format_command(cmd),
)
print(utils.execute_command(cmd))
else:
print(
dedent(
"""
According to the pyproject.toml file,
towncrier is used to update the changelog.
The problem is: we cannot find the towncrier executable.
Please make sure it is on your PATH."""
)
)
if not utils.ask("Do you want to continue anyway?", default=False):
sys.exit(1)
data[TOWNCRIER_MARKER] = result
return result
def post_check_towncrier(data):
"""Entrypoint for postrelease."""
return check_towncrier(data, check_sanity=False, do_draft=False)
def call_towncrier(data):
"""Entrypoint: run towncrier when available and configured."""
# check_towncrier will either give a path to towncrier, or False.
path = check_towncrier(data)
if not path:
return
# path is a list
cmd = deepcopy(path)
cmd.extend(["build", "--version", data["new_version"], "--yes"])
# We would like to pass ['--package', 'package name'] as well,
# but that is not yet in a release of towncrier.
logger.info("Running command to update news: %s", utils.format_command(cmd))
print(utils.execute_command(cmd))
# towncrier stages the changes with git,
# which BTW means that our plugin requires git.
logger.info("The staged git changes are:")
print(utils.execute_command(["git", "diff", "--cached"]))
logger.info(
"towncrier has finished updating the history file "
"and has staged the above changes in git."
) | zestreleaser.towncrier | /zestreleaser.towncrier-1.3.0.tar.gz/zestreleaser.towncrier-1.3.0/src/zestreleaser/towncrier/__init__.py | __init__.py |
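``check_towncrier()`` caches its verdict in the shared ``data`` dict under ``TOWNCRIER_MARKER`` so that the expensive checks run only once across the prerelease/postrelease entry points. A stripped-down sketch of that memoization pattern (the check function here is a stub, not the real plugin logic):

```python
# Stripped-down memoization pattern used by check_towncrier() above:
# the first call stores its verdict under a marker key in the shared
# zest.releaser `data` dict; later calls return the cached verdict.
MARKER = '_towncrier_applicable'

calls = {'count': 0}


def expensive_check():
    calls['count'] += 1
    return ['towncrier']  # pretend we located the executable


def check(data):
    if MARKER in data:
        # We have already been called for this release.
        return data[MARKER]
    result = expensive_check()
    data[MARKER] = result
    return result


data = {}
first = check(data)
second = check(data)
print(first, second, calls['count'])
```

Because the marker lives in the ``data`` dict that zest.releaser passes to every entry point, the cache is scoped to a single release run rather than to the process.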
import itertools
from .card import Card
from .deck import Deck
from .lookup import LookupTable
class Evaluator(object):
"""
Evaluates hand strengths using a variant of Cactus Kev's algorithm:
http://suffe.cool/poker/evaluator.html
I make considerable optimizations in terms of speed and memory usage,
in fact the lookup table generation can be done in under a second and
consequent evaluations are very fast. Won't beat C, but very fast as
all calculations are done with bit arithmetic and table lookups.
"""
def __init__(self):
self.table = LookupTable()
self.hand_size_map = {
5: self._five,
6: self._six,
7: self._seven
}
def evaluate(self, cards, board):
"""
This is the function that the user calls to get a hand rank.
Supports an empty board and other partial inputs; no input
validation is done, to save cycles.
"""
all_cards = cards + board
return self.hand_size_map[len(all_cards)](all_cards)
def _five(self, cards):
"""
Performs an evaluation given cards in integer form, mapping them to
a rank in the range [1, 7462], with lower ranks being more powerful.
Variant of Cactus Kev's 5 card evaluator, though I saved a lot of memory
space using a hash table and condensing some of the calculations.
"""
# if flush
if cards[0] & cards[1] & cards[2] & cards[3] & cards[4] & 0xF000:
handOR = (cards[0] | cards[1] | cards[2] | cards[3] | cards[4]) >> 16
prime = Card.prime_product_from_rankbits(handOR)
return self.table.flush_lookup[prime]
# otherwise
else:
prime = Card.prime_product_from_hand(cards)
return self.table.unsuited_lookup[prime]
def _six(self, cards):
"""
Performs _five() on all (6 choose 5) = 6 subsets
of 5 cards in the set of 6 to determine the best ranking,
and returns this ranking.
"""
minimum = LookupTable.MAX_HIGH_CARD
all5cardcombos = itertools.combinations(cards, 5)
for combo in all5cardcombos:
score = self._five(combo)
if score < minimum:
minimum = score
return minimum
def _seven(self, cards):
"""
Performs _five() on all (7 choose 5) = 21 subsets
of 5 cards in the set of 7 to determine the best ranking,
and returns this ranking.
"""
minimum = LookupTable.MAX_HIGH_CARD
all5cardcombos = itertools.combinations(cards, 5)
for combo in all5cardcombos:
score = self._five(combo)
if score < minimum:
minimum = score
return minimum
def get_rank_class(self, hr):
"""
Returns the class of hand given the hand hand_rank
returned from evaluate.
"""
if hr >= 0 and hr <= LookupTable.MAX_STRAIGHT_FLUSH:
return LookupTable.MAX_TO_RANK_CLASS[LookupTable.MAX_STRAIGHT_FLUSH]
elif hr <= LookupTable.MAX_FOUR_OF_A_KIND:
return LookupTable.MAX_TO_RANK_CLASS[LookupTable.MAX_FOUR_OF_A_KIND]
elif hr <= LookupTable.MAX_FULL_HOUSE:
return LookupTable.MAX_TO_RANK_CLASS[LookupTable.MAX_FULL_HOUSE]
elif hr <= LookupTable.MAX_FLUSH:
return LookupTable.MAX_TO_RANK_CLASS[LookupTable.MAX_FLUSH]
elif hr <= LookupTable.MAX_STRAIGHT:
return LookupTable.MAX_TO_RANK_CLASS[LookupTable.MAX_STRAIGHT]
elif hr <= LookupTable.MAX_THREE_OF_A_KIND:
return LookupTable.MAX_TO_RANK_CLASS[LookupTable.MAX_THREE_OF_A_KIND]
elif hr <= LookupTable.MAX_TWO_PAIR:
return LookupTable.MAX_TO_RANK_CLASS[LookupTable.MAX_TWO_PAIR]
elif hr <= LookupTable.MAX_PAIR:
return LookupTable.MAX_TO_RANK_CLASS[LookupTable.MAX_PAIR]
elif hr <= LookupTable.MAX_HIGH_CARD:
return LookupTable.MAX_TO_RANK_CLASS[LookupTable.MAX_HIGH_CARD]
else:
raise Exception("Invalid hand rank, cannot return rank class")
def class_to_string(self, class_int):
"""
Converts the integer class hand score into a human-readable string.
"""
return LookupTable.RANK_CLASS_TO_STRING[class_int]
def get_five_card_rank_percentage(self, hand_rank):
"""
Scales the hand rank score to the [0.0, 1.0] range.
"""
return float(hand_rank) / float(LookupTable.MAX_HIGH_CARD)
def hand_summary(self, board, hands):
"""
Gives a summary of the hand with ranks as time proceeds.
Requires that the board is in chronological order for the
analysis to make sense.
"""
assert len(board) == 5, "Invalid board length"
for hand in hands:
assert len(hand) == 2, "Invalid hand length"
line_length = 10
stages = ["FLOP", "TURN", "RIVER"]
for i in range(len(stages)):
line = "=" * line_length
print("{} {} {}".format(line,stages[i],line))
best_rank = 7463 # rank one worse than worst hand
winners = []
for player, hand in enumerate(hands):
# evaluate current board position
rank = self.evaluate(hand, board[:(i + 3)])
rank_class = self.get_rank_class(rank)
class_string = self.class_to_string(rank_class)
percentage = 1.0 - self.get_five_card_rank_percentage(rank) # higher better here
print("Player {} hand = {}, percentage rank among all hands = {}".format(player + 1, class_string, percentage))
# detect winner
if rank == best_rank:
winners.append(player)
best_rank = rank
elif rank < best_rank:
winners = [player]
best_rank = rank
# if we're not on the river
if i != stages.index("RIVER"):
if len(winners) == 1:
print("Player {} hand is currently winning.\n".format(winners[0] + 1))
else:
print("Players {} are tied for the lead.\n".format([x + 1 for x in winners]))
# otherwise on all other streets
else:
hand_result = self.class_to_string(self.get_rank_class(self.evaluate(hands[winners[0]], board)))
print()
print("{} HAND OVER {}".format(line, line))
if len(winners) == 1:
print("Player {} is the winner with a {}\n".format(winners[0] + 1, hand_result))
else:
print("Players {} tied for the win with a {}\n".format([x + 1 for x in winners],hand_result)) | zesty-poker | /zesty_poker-1.10.0.tar.gz/zesty_poker-1.10.0/zesty_poker/evaluator.py | evaluator.py |
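``_five()`` above detects a flush by AND-ing the suit nibble (bits 12-15) of all five cards. A self-contained check of that trick, encoding cards with the same bit layout so there is no dependency on this package:

```python
# Cards use the layout from Card.new(): suit nibble at bits 12-15,
# rank nibble at bits 8-11, bitrank starting at bit 16, prime in the low bits.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]


def encode(rank_int, suit_int):
    return ((1 << rank_int << 16) | (suit_int << 12)
            | (rank_int << 8) | PRIMES[rank_int])


SPADES, HEARTS = 1, 2
flush_hand = [encode(r, SPADES) for r in (0, 2, 4, 6, 8)]
mixed_hand = [encode(r, SPADES) for r in (0, 2, 4, 6)] + [encode(8, HEARTS)]


def is_flush(cards):
    # Same test as in Evaluator._five(): AND all cards, mask the suit nibble.
    return bool(cards[0] & cards[1] & cards[2] & cards[3] & cards[4] & 0xF000)


print(is_flush(flush_hand), is_flush(mixed_hand))
```

Because exactly one suit bit is set per card, the AND of five cards keeps a suit bit only when all five share that suit, which is what makes the single mask sufficient.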
class Card:
"""
Static class that handles cards. We represent cards as 32-bit integers, so
there is no object instantiation - they are just ints. Most of the bits are
used, and have a specific meaning. See below:
Card:
bitrank suit rank prime
+--------+--------+--------+--------+
|xxxbbbbb|bbbbbbbb|cdhsrrrr|xxpppppp|
+--------+--------+--------+--------+
1) p = prime number of rank (deuce=2,trey=3,four=5,...,ace=41)
2) r = rank of card (deuce=0,trey=1,four=2,five=3,...,ace=12)
3) cdhs = suit of card (bit turned on based on suit of card)
4) b = bit turned on depending on rank of card
5) x = unused
This representation will allow us to do very important things like:
- Make a unique prime product for each hand
- Detect flushes
- Detect straights
and is also quite performant.
"""
# the basics
STR_RANKS = '23456789TJQKA'
INT_RANKS = range(13)
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]
# conversion from string => int
CHAR_RANK_TO_INT_RANK = dict(zip(list(STR_RANKS), INT_RANKS))
CHAR_SUIT_TO_INT_SUIT = {
's': 1, # spades
'h': 2, # hearts
'd': 4, # diamonds
'c': 8, # clubs
}
INT_SUIT_TO_CHAR_SUIT = 'xshxdxxxc'
# for pretty printing
PRETTY_SUITS = {
1: chr(9824), # spades
2: chr(9829), # hearts
4: chr(9830), # diamonds
8: chr(9827) # clubs
}
# hearts and diamonds
PRETTY_REDS = [2, 4]
@staticmethod
def new(string):
"""
Converts Card string to binary integer representation of card, inspired by:
http://www.suffecool.net/poker/evaluator.html
"""
rank_char = string[0]
suit_char = string[1]
rank_int = Card.CHAR_RANK_TO_INT_RANK[rank_char]
suit_int = Card.CHAR_SUIT_TO_INT_SUIT[suit_char]
rank_prime = Card.PRIMES[rank_int]
bitrank = 1 << rank_int << 16
suit = suit_int << 12
rank = rank_int << 8
return bitrank | suit | rank | rank_prime
@staticmethod
def int_to_str(card_int):
rank_int = Card.get_rank_int(card_int)
suit_int = Card.get_suit_int(card_int)
return Card.STR_RANKS[rank_int] + Card.INT_SUIT_TO_CHAR_SUIT[suit_int]
@staticmethod
def get_rank_int(card_int):
return (card_int >> 8) & 0xF
@staticmethod
def get_suit_int(card_int):
return (card_int >> 12) & 0xF
@staticmethod
def get_bitrank_int(card_int):
return (card_int >> 16) & 0x1FFF
@staticmethod
def get_prime(card_int):
return card_int & 0x3F
@staticmethod
def hand_to_binary(card_strs):
"""
Expects a list of cards as strings and returns a list
of integers of same length corresponding to those strings.
"""
bhand = []
for c in card_strs:
bhand.append(Card.new(c))
return bhand
@staticmethod
def prime_product_from_hand(card_ints):
"""
Expects a list of cards in integer form.
"""
product = 1
for c in card_ints:
product *= (c & 0xFF)
return product
@staticmethod
def prime_product_from_rankbits(rankbits):
"""
Returns the prime product using the bitrank (b)
bits of the hand. Each 1 in the sequence is converted
to the correct prime and multiplied in.
Params:
rankbits = a single 32-bit (only 13-bits set) integer representing
the ranks of 5 _different_ ranked cards
(5 of 13 bits are set)
Primarily used for evaluating flushes and straights,
two occasions where we know the ranks are *ALL* different.
Assumes that the input is in form (set bits):
rankbits
+--------+--------+
|xxxbbbbb|bbbbbbbb|
+--------+--------+
"""
product = 1
for i in Card.INT_RANKS:
# if the ith bit is set
if rankbits & (1 << i):
product *= Card.PRIMES[i]
return product
@staticmethod
def int_to_binary(card_int):
"""
For debugging purposes. Displays the binary number as a
human readable string in groups of four digits.
"""
bstr = bin(card_int)[2:][::-1] # chop off the 0b and THEN reverse string
output = list("".join(["0000" + "\t"] * 7) + "0000")
for i in range(len(bstr)):
output[i + int(i/4)] = bstr[i]
# output the string to console
output.reverse()
return "".join(output)
@staticmethod
def int_to_pretty_str(card_int):
"""
Prints a single card
"""
color = False
try:
from termcolor import colored
# for mac, linux: http://pypi.python.org/pypi/termcolor
# can use for windows: http://pypi.python.org/pypi/colorama
color = True
except ImportError:
pass
# suit and rank
suit_int = Card.get_suit_int(card_int)
rank_int = Card.get_rank_int(card_int)
# if we need to color red
s = Card.PRETTY_SUITS[suit_int]
if color and suit_int in Card.PRETTY_REDS:
s = colored(s, "red")
r = Card.STR_RANKS[rank_int]
return "[{}{}]".format(r,s)
@staticmethod
def print_pretty_card(card_int):
"""
Expects a single integer as input
"""
return Card.int_to_pretty_str(card_int)
@staticmethod
def print_pretty_cards(card_ints):
"""
Expects a list of cards in integer form.
"""
output = " "
for i in range(len(card_ints)):
c = card_ints[i]
if i != len(card_ints) - 1:
output += str(Card.int_to_pretty_str(c)) + ","
else:
output += str(Card.int_to_pretty_str(c)) + " "
char_map = {
'♠': 's',
'♥': 'h',
'♦': 'd',
'♣': 'c'
}
for i in char_map.keys():
output = output.replace(i, char_map[i])
return output | zesty-poker | /zesty_poker-1.10.0.tar.gz/zesty_poker-1.10.0/zesty_poker/card.py | card.py |
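The class docstring above packs prime, rank, suit, and bitrank into one integer. A quick round trip of that layout for the ace of spades, built by hand with the same shifts and read back with the same masks that `get_rank_int`, `get_suit_int`, `get_bitrank_int`, and `get_prime` use:

```python
# Build the ace of spades manually, then read each field back with the
# masks used by Card.get_rank_int / get_suit_int / get_bitrank_int / get_prime.
rank_int = 12          # ace (deuce=0 ... ace=12)
suit_int = 1           # spades
prime = 41             # prime assigned to the ace

card = (1 << rank_int << 16) | (suit_int << 12) | (rank_int << 8) | prime

assert (card >> 8) & 0xF == rank_int       # rank field
assert (card >> 12) & 0xF == suit_int      # suit field
assert card & 0x3F == prime                # prime field
assert (card >> 16) & 0x1FFF == 1 << 12    # bitrank field
print(hex(card))
```

The resulting integer is `0x10001C29`: bit 28 is the ace's bitrank, `0x1000` the spade bit, `0xC00` the rank nibble, and `0x29` (41) the prime.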
import itertools
from .card import Card
class LookupTable(object):
"""
Number of Distinct Hand Values:
Straight Flush 10
Four of a Kind 156 [(13 choose 2) * (2 choose 1)]
Full Houses 156 [(13 choose 2) * (2 choose 1)]
Flush 1277 [(13 choose 5) - 10 straight flushes]
Straight 10
Three of a Kind 858 [(13 choose 3) * (3 choose 1)]
Two Pair 858 [(13 choose 3) * (3 choose 2)]
One Pair 2860 [(13 choose 4) * (4 choose 1)]
High Card + 1277 [(13 choose 5) - 10 straights]
-------------------------
TOTAL 7462
Here we create a lookup table which maps:
5 card hand's unique prime product => rank in range [1, 7462]
Examples:
* Royal flush (best hand possible) => 1
* 7-5-4-3-2 unsuited (worst hand possible) => 7462
"""
MAX_STRAIGHT_FLUSH = 10
MAX_FOUR_OF_A_KIND = 166
MAX_FULL_HOUSE = 322
MAX_FLUSH = 1599
MAX_STRAIGHT = 1609
MAX_THREE_OF_A_KIND = 2467
MAX_TWO_PAIR = 3325
MAX_PAIR = 6185
MAX_HIGH_CARD = 7462
MAX_TO_RANK_CLASS = {
MAX_STRAIGHT_FLUSH: 1,
MAX_FOUR_OF_A_KIND: 2,
MAX_FULL_HOUSE: 3,
MAX_FLUSH: 4,
MAX_STRAIGHT: 5,
MAX_THREE_OF_A_KIND: 6,
MAX_TWO_PAIR: 7,
MAX_PAIR: 8,
MAX_HIGH_CARD: 9
}
RANK_CLASS_TO_STRING = {
1: "Straight Flush",
2: "Four of a Kind",
3: "Full House",
4: "Flush",
5: "Straight",
6: "Three of a Kind",
7: "Two Pair",
8: "Pair",
9: "High Card"
}
def __init__(self):
"""
Calculates lookup tables
"""
# create dictionaries
self.flush_lookup = {}
self.unsuited_lookup = {}
# create the lookup table in piecewise fashion
# this will call straights and high cards method,
# we reuse some of the bit sequences
self.flushes()
self.multiples()
def flushes(self):
"""
Straight flushes and flushes.
Lookup is done on 13 bit integer (2^13 > 7462):
xxxbbbbb bbbbbbbb => integer hand index
"""
# straight flushes in rank order
straight_flushes = [
7936, # int('0b1111100000000', 2), # royal flush
3968, # int('0b111110000000', 2),
1984, # int('0b11111000000', 2),
992, # int('0b1111100000', 2),
496, # int('0b111110000', 2),
248, # int('0b11111000', 2),
124, # int('0b1111100', 2),
62, # int('0b111110', 2),
31, # int('0b11111', 2),
4111 # int('0b1000000001111', 2) # 5 high
]
# now we'll dynamically generate all the other
# flushes (including straight flushes)
flushes = []
gen = self.get_lexographically_next_bit_sequence(int('0b11111', 2))
# 1277 = number of high cards
# 1277 + len(str_flushes) is number of hands with all cards unique rank
for i in range(1277 + len(straight_flushes) - 1): # we also iterate over SFs
# pull the next flush pattern from our generator
f = next(gen)
# if this flush matches perfectly any
# straight flush, do not add it
notSF = True
for sf in straight_flushes:
# if f XOR sf == 0, then bit pattern
# is same, and we should not add
if not f ^ sf:
notSF = False
if notSF:
flushes.append(f)
# we started from the lowest straight pattern, now we want to start ranking from
# the most powerful hands, so we reverse
flushes.reverse()
# now add to the lookup map:
# start with straight flushes and the rank of 1
# since it is the best hand in poker
# rank 1 = Royal Flush!
rank = 1
for sf in straight_flushes:
prime_product = Card.prime_product_from_rankbits(sf)
self.flush_lookup[prime_product] = rank
rank += 1
# we start the counting for flushes on max full house, which
# is the worst rank that a full house can have (2,2,2,3,3)
rank = LookupTable.MAX_FULL_HOUSE + 1
for f in flushes:
prime_product = Card.prime_product_from_rankbits(f)
self.flush_lookup[prime_product] = rank
rank += 1
# we can reuse these bit sequences for straights
# and high cards since they are inherently related
# and differ only by context
self.straight_and_highcards(straight_flushes, flushes)
def straight_and_highcards(self, straights, highcards):
"""
Unique five card sets. Straights and highcards.
Reuses bit sequences from flush calculations.
"""
rank = LookupTable.MAX_FLUSH + 1
for s in straights:
prime_product = Card.prime_product_from_rankbits(s)
self.unsuited_lookup[prime_product] = rank
rank += 1
rank = LookupTable.MAX_PAIR + 1
for h in highcards:
prime_product = Card.prime_product_from_rankbits(h)
self.unsuited_lookup[prime_product] = rank
rank += 1
def multiples(self):
"""
Pair, Two Pair, Three of a Kind, Full House, and 4 of a Kind.
"""
backwards_ranks = list(range(len(Card.INT_RANKS) - 1, -1, -1))
# 1) Four of a Kind
rank = LookupTable.MAX_STRAIGHT_FLUSH + 1
# for each choice of a set of four rank
for i in backwards_ranks:
# and for each possible kicker rank
kickers = backwards_ranks[:]
kickers.remove(i)
for k in kickers:
product = Card.PRIMES[i]**4 * Card.PRIMES[k]
self.unsuited_lookup[product] = rank
rank += 1
# 2) Full House
rank = LookupTable.MAX_FOUR_OF_A_KIND + 1
# for each three of a kind
for i in backwards_ranks:
# and for each choice of pair rank
pairranks = backwards_ranks[:]
pairranks.remove(i)
for pr in pairranks:
product = Card.PRIMES[i]**3 * Card.PRIMES[pr]**2
self.unsuited_lookup[product] = rank
rank += 1
# 3) Three of a Kind
rank = LookupTable.MAX_STRAIGHT + 1
# pick three of one rank
for r in backwards_ranks:
kickers = backwards_ranks[:]
kickers.remove(r)
gen = itertools.combinations(kickers, 2)
for kickers in gen:
c1, c2 = kickers
product = Card.PRIMES[r]**3 * Card.PRIMES[c1] * Card.PRIMES[c2]
self.unsuited_lookup[product] = rank
rank += 1
# 4) Two Pair
rank = LookupTable.MAX_THREE_OF_A_KIND + 1
tpgen = itertools.combinations(backwards_ranks, 2)
for tp in tpgen:
pair1, pair2 = tp
kickers = backwards_ranks[:]
kickers.remove(pair1)
kickers.remove(pair2)
for kicker in kickers:
product = Card.PRIMES[pair1]**2 * Card.PRIMES[pair2]**2 * Card.PRIMES[kicker]
self.unsuited_lookup[product] = rank
rank += 1
# 5) Pair
rank = LookupTable.MAX_TWO_PAIR + 1
# choose a pair
for pairrank in backwards_ranks:
kickers = backwards_ranks[:]
kickers.remove(pairrank)
kgen = itertools.combinations(kickers, 3)
for kickers in kgen:
k1, k2, k3 = kickers
product = Card.PRIMES[pairrank]**2 * Card.PRIMES[k1] \
* Card.PRIMES[k2] * Card.PRIMES[k3]
self.unsuited_lookup[product] = rank
rank += 1
def write_table_to_disk(self, table, filepath):
"""
Writes lookup table to disk
"""
with open(filepath, 'w') as f:
for prime_prod, rank in table.items():
f.write(str(prime_prod) + "," + str(rank) + '\n')
def get_lexographically_next_bit_sequence(self, bits):
"""
Bit hack from here:
http://www-graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
Generator even does this in poker order rank
so no need to sort when done! Perfect.
"""
t = int((bits | (bits - 1))) + 1
next = t | ((((t & -t) // (bits & -bits)) >> 1) - 1)
yield next
while True:
t = (next | (next - 1)) + 1
next = t | ((((t & -t) // (next & -next)) >> 1) - 1)
yield next | zesty-poker | /zesty_poker-1.10.0.tar.gz/zesty_poker-1.10.0/zesty_poker/lookup.py | lookup.py |
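`get_lexographically_next_bit_sequence()` enumerates, in increasing order, every bit pattern with the same popcount as its seed. The table sizes in the class docstring rely on that enumeration covering all C(13, 5) = 1287 five-rank patterns (1277 high cards plus 10 straights). A standalone check of both facts:

```python
def next_perm(bits):
    # Same bit hack as get_lexographically_next_bit_sequence() above.
    t = (bits | (bits - 1)) + 1
    return t | ((((t & -t) // (bits & -bits)) >> 1) - 1)


# Enumerate every 13-bit pattern with exactly five bits set,
# starting from the lowest one (0b11111).
patterns = [0b11111]
while True:
    nxt = next_perm(patterns[-1])
    if nxt >= 1 << 13:
        break
    patterns.append(nxt)

print(len(patterns))  # C(13, 5) = 1287 = 1277 high cards + 10 straights
```

This also confirms why the flush loop above runs `1277 + len(straight_flushes) - 1` times after the seed: the seed pattern accounts for the remaining iteration.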
import uuid as uuid_gen
from typing import Dict
from zesty.models.hf_interface import *
from zesty.models.disk_mon import FileSystem
class ZBSActionData:
mount_path = None
device = None
filesystem = None
fs_type = None
is_partition = False
partition_number = None
LV = None
VG = None
lvm_path = None
chunk_size = None
def __init__(self, mount_path=None,
device=None,
filesystem=None,
fs_type=None,
is_partition=False,
partition_number=None,
LV=None,
VG=None,
lvm_path=None,
chunk_size=None):
self.mount_path = mount_path
self.filesystem = filesystem
self.fs_type = fs_type
self.device = device
self.is_partition = is_partition
self.partition_number = partition_number
self.LV = LV
self.VG = VG
self.lvm_path = lvm_path
self.chunk_size = chunk_size
def serialize(self):
return self.__dict__
def set_data(self, json):
self.mount_path = json.get('mount_path')
self.filesystem = json.get('filesystem')
self.fs_type = json.get('fs_type')
self.device = json.get('device')
self.is_partition = json.get('is_partition', False)
self.partition_number = json.get('partition_number', '')
self.LV = json.get('LV', '')
self.VG = json.get('VG', '')
self.lvm_path = json.get('lvm_path', '')
self.chunk_size = json.get('chunk_size', 0)
return self
class ZBSAgentReceiver:
"""
The ZBSAgentReceiver (the Receiver class in the Command pattern) contains the important business logic.
It knows how to perform any kind of action sent by the ZBS Backend.
ZBSAgentReceiver is abstract; the concrete implementations are per OS.
"""
@abstractmethod
def do_nothing(self, data: ZBSActionData) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'do_nothing' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def extend_fs(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'extend_fs' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def add_disk(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'add_disk' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def balance_fs(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'balance_fs' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def remove_disk(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'remove_disk' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def balance_ebs_structure(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'balance_ebs_structure' is abstract, please implement a concrete per OS receiver")
class SpecialInstructions(ISpecialInstructions):
"""
Constructor for special instructions with optional parameters:
* dev_id: identify the device for the filesystem to which the action is attached
* size: specify the capacity for a new device or the additional capacity when extending a device
* sub_actions: when an action implements multiple actions, specify a dictionary:
-- { int(specifies action priorities): list(actions that can be run in parallel) }
-- Actions in a list keyed to a higher order cannot start until all Actions of lower orders complete
"""
def __init__(self, dev_id: str = None, size: int = None, sub_actions: Dict[int, Dict[str, IActionHF]] = None):
self.dev_id = dev_id
self.size = size
self.sub_actions = sub_actions
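A hedged illustration of the `sub_actions` shape documented above: integer priorities map to dicts of `{action_id: action}`, and everything under a lower priority must complete before a higher priority starts. Plain strings stand in for real `IActionHF` objects here.

```python
# Hypothetical sub_actions payload; strings stand in for IActionHF objects.
sub_actions = {
    1: {"uuid-a": "ExtendFileSystemAction", "uuid-b": "AddDiskAction"},
    2: {"uuid-c": "RemoveDiskAction"},
}
for priority in sorted(sub_actions):
    # actions within one priority level may run in parallel
    print(priority, sorted(sub_actions[priority].values()))
```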
class ZBSAction(IActionHF):
"""
Base command class
Delegates the business logic to the receiver
There are receivers per OS (Linux and Windows for now)
"""
TYPE_FIELD_NAME = "type"
DATA_FIELD_NAME = "data"
STATUS_FIELD_NAME = "status"
UUID_FIELD_NAME = "uuid"
SPECIAL_INSTRUCTIONS_FIELD_NAME = "_ZBSAction__special_instructions"
__uuid = None
__status: IActionHF.Status = IActionHF.Status.NEW
__special_instructions: SpecialInstructions
def __init__(self, receiver: ZBSAgentReceiver = None, data: ZBSActionData = None, uuid: str = None):
self.receiver = receiver
self.data = data
if uuid is not None:
self.__uuid = uuid
else:
self.__uuid = str(uuid_gen.uuid4())
def __repr__(self):
return "<Action Type: {} | Action Status: {} | SpecialInstructions: {}>".format(self.get_action_type(), self.get_status(), self.get_special_instructions().__dict__)
def set_data(self, data: ZBSActionData):
self.data = data
def set_receiver(self, receiver: ZBSAgentReceiver):
self.receiver = receiver
def serialize(self):
result = self.__dict__
result[ZBSAction.TYPE_FIELD_NAME] = self.get_action_type()
result[ZBSAction.DATA_FIELD_NAME] = self.data.serialize() if self.data is not None else None
result[ZBSAction.STATUS_FIELD_NAME] = self.get_status().name
result[ZBSAction.UUID_FIELD_NAME] = self.get_action_id()
if hasattr(self, '_ZBSAction__special_instructions'):
result[
ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME] = self.get_special_instructions().__dict__ if self.__special_instructions is not None else None
return result
# ActionHF interface implementation
def get_action_id(self) -> str:
return self.__uuid
def get_action_type(self) -> str:
return str(type(self).__name__)
def get_status(self) -> IActionHF.Status:
return self.__status
def set_status(self, status: IActionHF.Status):
self.__status = status
def get_special_instructions(self) -> SpecialInstructions:
return self.__special_instructions
def set_special_instructions(self, special_instructions: SpecialInstructions):
self.__special_instructions = special_instructions
@staticmethod
def deserialize_type(json):
return json[ZBSAction.TYPE_FIELD_NAME]
@staticmethod
def deserialize_data(json):
return ZBSActionData().set_data(json[ZBSAction.DATA_FIELD_NAME])
@staticmethod
def deserialize_uuid(serialized_action):
return serialized_action.get(ZBSAction.UUID_FIELD_NAME)
@staticmethod
def deserialize_status(serialized_action):
return serialized_action.get(ZBSAction.STATUS_FIELD_NAME)
@staticmethod
def deserialize_special_instructions(serialized_action):
if not isinstance(serialized_action, dict):
serialized_action = serialized_action.serialize()
special_instructions = SpecialInstructions(
dev_id=serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME, {}).get('dev_id'),
size=serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME, {}).get('size'),
sub_actions=serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME, {}).get('sub_actions'),
)
for key, val in serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME, {}).items():
if key not in ['dev_id', 'size', 'sub_actions']:
setattr(special_instructions, key, val)
return special_instructions
@staticmethod
def deserialize_action(serialized_action):
action_type = ZBSAction.deserialize_type(serialized_action)
action_data = ZBSAction.deserialize_data(serialized_action) if serialized_action.get(
ZBSAction.DATA_FIELD_NAME) is not None else None
action_uuid = ZBSAction.deserialize_uuid(serialized_action)
action_status = ZBSAction.deserialize_status(serialized_action)
action_to_perform = ZBSActionFactory.create_action(action_type, action_uuid)
action_to_perform.set_data(action_data)
action_to_perform.set_status(IActionHF.Status[action_status])
if ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME in serialized_action:
special_instructions = ZBSAction.deserialize_special_instructions(serialized_action)
action_to_perform.set_special_instructions(special_instructions)
return action_to_perform
@abstractmethod
def execute(self):
raise NotImplementedError("BaseAction is abstract, please implement a concrete action")
class FileSystemHF(IFileSystemHF):
def __init__(self, fs_id: str, fs_usage: int, devices: Dict[str, FileSystem.BlockDevice], existing_actions: Dict[str, ZBSAction]):
self.fs_id = fs_id
self.fs_usage = fs_usage
self.devices = devices
self.existing_actions = existing_actions
def get_fs_id(self) -> str:
return self.fs_id
def get_usage(self) -> int:
return self.fs_usage
def get_devices(self) -> Dict[str, FileSystem.BlockDevice]:
return self.devices
def get_existing_actions(self) -> Dict[str, ZBSAction]:
return self.existing_actions
class DoNothingAction(ZBSAction):
"""
Do nothing action
"""
def execute(self):
print("Do nothing || Action ID : {}".format(self.get_action_id()))
class Factory:
def create(self, uuid): return DoNothingAction(uuid=uuid)
class ExtendFileSystemAction(ZBSAction):
"""
Extend File System Action.
"""
def execute(self, fs):
try:
return self.receiver.extend_fs(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return ExtendFileSystemAction(uuid=uuid)
class AddDiskAction(ZBSAction):
"""
Add Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.add_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return AddDiskAction(uuid=uuid)
class RemoveDiskAction(ZBSAction):
"""
Remove Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.remove_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return RemoveDiskAction(uuid=uuid)
class BalanceFileSystemAction(ZBSAction):
"""
Balance File System Action.
"""
def execute(self):
try:
self.receiver.balance_fs(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceFileSystemAction(uuid=uuid)
class BalanceEBSStructureAction(ZBSAction):
"""
Balance EBS structure Action.
"""
def execute(self):
try:
self.receiver.extend_fs(self.data, self.get_action_id())
self.receiver.remove_disk(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceEBSStructureAction(uuid=uuid)
class ZBSActionFactory:
actions = {}
@staticmethod
def create_action(action_type, uuid=None):
if action_type not in ZBSActionFactory.actions:
action_class = globals()[action_type]
ZBSActionFactory.actions[action_type] = action_class.Factory()
return ZBSActionFactory.actions[action_type].create(uuid) | zesty.zbs-api-1621 | /zesty.zbs-api-1621-1.0.2021.4.25.1619359700.tar.gz/zesty.zbs-api-1621-1.0.2021.4.25.1619359700/zesty/actions.py | actions.py
import config as cfg
import requests
import json
"""
USAGE:
First you have to init factory with base settings
factory = RequestFactory(stage=${STAGE}, version=${VERSION}, api_key=${API_KEY})
Then need to create request instance depend on the type of the request you want to send
metrics_request = factory.create_request("Metrics")
Pass the data to set_data function
metrics_request.set_data(
agent_version,
overview,
plugins
)
Then send it to the BackEnd and receive the response
response = metrics_request.send()
"""
class RequestFactory:
requests = {}
stage = None
version = None
api_key = None
def __init__(self, stage, version, api_key):
self.stage = stage
self.version = version
self.api_key = api_key
def create_request(self, request_type):
if request_type not in RequestFactory.requests:
RequestFactory.requests[request_type] = globals()[request_type].Factory(self.stage, self.version, self.api_key)
return RequestFactory.requests[request_type].create()
class Request(object):
stage = None
version = None
api_key = None
prefix = None
api_base = None
def __init__(self, stage, version, api_key):
self.stage = stage
self.version = version
self.api_key = api_key
self.prefix = ""
if self.stage == 'staging':
self.prefix = "-staging"
self.api_base = "https://api{}.cloudvisor.io".format(self.prefix)
def send(self):
res = requests.post(
self.build_url(),
data=json.dumps(self.message, separators=(',', ':')),
headers={"Cache-Control": "no-cache", "Pragma": "no-cache", "x-api-key": self.api_key}
)
return self.Response(res)
class Metrics(Request):
message = {}
def build_url(self):
return '{}{}'.format(self.api_base, cfg.post_metrics_ep)
def set_data(self, agent_version, overview, plugins):
self.message = {
"agent": {
"version": agent_version
},
"overview": overview,
"plugins": plugins
}
class Response:
raw_data = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, k, v)
class Factory:
stage = None
version = None
api_key = None
def __init__(self, stage, version, api_key):
self.stage = stage
self.version = version
self.api_key = api_key
def create(self): return Metrics(stage=self.stage, version=self.version, api_key=self.api_key)
class NotifyException(Request):
message = {}
def build_url(self):
return '{}{}'.format(self.api_base, cfg.notify_exception_ep)
def set_data(self, account_id, instance_id, exception, msg):
self.message = {
"exception": exception,
"message": msg,
"instance_id": instance_id,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, k, v)
class Factory:
stage = None
version = None
api_key = None
def __init__(self, stage, version, api_key):
self.stage = stage
self.version = version
self.api_key = api_key
def create(self): return NotifyException(stage=self.stage, version=self.version, api_key=self.api_key)
class FsResizeCompleted(Request):
message = {}
def build_url(self):
return '{}{}'.format(self.api_base, cfg.fs_resize_completed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
def __init__(self, stage, version, api_key):
self.stage = stage
self.version = version
self.api_key = api_key
def create(self): return FsResizeCompleted(stage=self.stage, version=self.version, api_key=self.api_key)
class FsResizeFailed(Request):
message = {}
def build_url(self):
return '{}{}'.format(self.api_base, cfg.resize_failed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, error, resize_steps, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"error": error,
"resize_steps": resize_steps,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
def __init__(self, stage, version, api_key):
self.stage = stage
self.version = version
self.api_key = api_key
def create(self): return FsResizeFailed(stage=self.stage, version=self.version, api_key=self.api_key) | zesty.zbs-api-1621 | /zesty.zbs-api-1621-1.0.2021.4.25.1619359700.tar.gz/zesty.zbs-api-1621-1.0.2021.4.25.1619359700/zesty/protocol.py | protocol.py |
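A hedged sketch of the request construction done by `Request.send` above, without performing a network call; the endpoint URL and API key are made up for illustration.

```python
# Build the kwargs that Request.send passes to requests.post (no I/O here).
import json

def build_post_kwargs(url, message, api_key):
    return {
        "url": url,
        # compact separators, matching the serialization used by send()
        "data": json.dumps(message, separators=(',', ':')),
        "headers": {"Cache-Control": "no-cache", "Pragma": "no-cache", "x-api-key": api_key},
    }

kwargs = build_post_kwargs("https://api-staging.cloudvisor.io/metrics",
                           {"agent": {"version": "1.0"}}, "demo-key")
print(kwargs["data"])  # {"agent":{"version":"1.0"}}
```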
from abc import ABC, abstractmethod
import enum
from typing import TYPE_CHECKING, Dict
if TYPE_CHECKING:
from zesty.actions import ZBSAction
class ISpecialInstructions(ABC):
pass
class IActionHF(ABC):
class Status(enum.Enum):
NEW = 1
PENDING = 2
RUNNING = 3
CANCELED = 4
READY = 5
@abstractmethod
def get_action_id(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_id' is abstract, please implement")
@abstractmethod
def get_action_type(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_type' is abstract, please implement")
@abstractmethod
def get_status(self) -> Status:
raise NotImplementedError(
"ActionHF 'get_status' is abstract, please implement")
@abstractmethod
def set_status(self, status: Status):
raise NotImplementedError(
"ActionHF 'set_status' is abstract, please implement")
@abstractmethod
def get_special_instructions(self) -> ISpecialInstructions:
raise NotImplementedError(
"ActionHF 'get_special_instructions' is abstract, please implement")
@abstractmethod
def set_special_instructions(self, special_instructions: ISpecialInstructions):
raise NotImplementedError(
"ActionHF 'set_special_instructions' is abstract, please implement")
class IDeviceHF(ABC):
@abstractmethod
def get_dev_id(self) -> str:
raise NotImplementedError(
"DeviceHF 'get_dev_id' is abstract, please implement")
@abstractmethod
def get_size(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_size' is abstract, please implement")
@abstractmethod
def get_usage(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_usage' is abstract, please implement")
@abstractmethod
def get_unlock_ts(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_unlock_ts' is abstract, please implement")
class IFileSystemHF(ABC):
@abstractmethod
def get_fs_id(self) -> str:
raise NotImplementedError(
"FileSystemHF 'get_fs_id' is abstract, please implement")
@abstractmethod
def get_devices(self) -> Dict[str, IDeviceHF]:
raise NotImplementedError(
"FileSystemHF 'get_devices' is abstract, please implement")
@abstractmethod
def get_existing_actions(self) -> Dict[str, IActionHF]:
raise NotImplementedError(
"FileSystemHF 'get_existing_actions' is abstract, please implement") | zesty.zbs-api-1621 | /zesty.zbs-api-1621-1.0.2021.4.25.1619359700.tar.gz/zesty.zbs-api-1621-1.0.2021.4.25.1619359700/zesty/models/hf_interface.py | hf_interface.py |
from zesty.models.hf_interface import IDeviceHF
GB_IN_BYTES = 1024**3
class DiskMonitor:
def __init__(self, disk_mon_data):
self.filesystems = {}
self.unused_devices = {}
for fs in disk_mon_data.get('filesystems', {}):
self.filesystems[fs] = FileSystem(disk_mon_data.get('filesystems', {}).get(fs, {}))
for unused_dev in disk_mon_data.get('unused_devices', {}):
self.unused_devices[unused_dev] = self.UnusedDevice(disk_mon_data.get('unused_devices', {}).get(unused_dev, {}))
class UnusedDevice:
size = None
map = None
def __init__(self, block_device_data):
for k, v in block_device_data.items():
setattr(self, k, v)
class FileSystem:
def __init__(self, fs_data):
self.mount_path = fs_data.get('mount_path')
self.fs_type = fs_data.get('fs_type')
self.space = self.Usage(fs_data.get('space'))
self.inodes = self.Usage(fs_data.get('inodes'))
self.partition_number = fs_data.get('partition_number')
self.is_partition = fs_data.get('is_partition')
self.label = fs_data.get('label')
self.LV = fs_data.get('LV')
self.VG = fs_data.get('VG')
self.lvm_path = fs_data.get('lvm_path')
self.devices = {}
for dev in fs_data.get('devices', {}):
self.devices[dev] = self.BlockDevice(fs_data.get('devices', {}).get(dev, {}))
class BlockDevice(IDeviceHF):
def __init__(self, block_device_data):
# for k, v in block_device_data.items():
# exec('self.' + k + '=v')
self.size = block_device_data.get('size')
self.btrfs_dev_id = block_device_data.get('btrfs_dev_id')
self.volume_id = block_device_data.get('volume_id')
self.dev_usage = block_device_data.get('dev_usage')
self.iops_stats = block_device_data.get('iops_stats', {})
self.map = block_device_data.get('map')
self.unlock_ts = block_device_data.get('unlock_ts', 0)
self.volume_type = block_device_data.get('volume_type')
self.iops = block_device_data.get('iops')
self.created = block_device_data.get('created')
if 'parent' in block_device_data.keys():
self.parent = block_device_data.get('parent')
if self.volume_id is None:
print("ERROR : Volume ID is None!")
def __repr__(self):
return "<VolumeID: {} | Size: {:.1f} GB | Usage: {:.1f} GB>".format(self.volume_id, self.size/GB_IN_BYTES, self.dev_usage/GB_IN_BYTES)
def get_dev_id(self) -> str:
return self.volume_id
def get_size(self) -> int:
return self.size
def get_usage(self) -> int:
return self.dev_usage
def get_unlock_ts(self) -> int:
return self.unlock_ts
def set_unlock_ts(self, ts):
self.unlock_ts = ts
def get_iops_stats(self):
return self.iops_stats
def get_volume_id(self):
return self.volume_id
class Usage:
def __init__(self, usage_data):
self.total = 0
self.used = 0
self.free = 0
self.percent = 0
try:
for k, v in usage_data.items():
setattr(self, k, v)
except Exception as e:
print("usage_data is empty, returning None... || ERROR : {}".format(e)) | zesty.zbs-api-1621 | /zesty.zbs-api-1621-1.0.2021.4.25.1619359700.tar.gz/zesty.zbs-api-1621-1.0.2021.4.25.1619359700/zesty/models/disk_mon.py | disk_mon.py |
import json
import uuid
from copy import deepcopy
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, List, Optional, Union
class StepStatus(Enum):
# Note: this protocol needs to guard against deadlocks:
# e.g. a step must not get stuck because of a failed communication or similar
INIT = auto()
ACK = auto()
DONE = auto()
FAIL = auto()
class DeviceType(Enum):
STANDARD = "standard"
GP3 = "gp3"
GP2 = "gp2"
ST1 = "st1"
SC1 = "sc1"
IO1 = "io1"
IO2 = "io2"
def __str__(self):
return self.value
class InstructionMetaclass(type):
def __new__(mcs, instruction_type: str):
return globals()[instruction_type]
def __call__(cls, action_id: str, *args, **kwargs):
if not issubclass(cls, StepInstruction):
raise TypeError(f"{cls.__name__} is not of StepInstruction type")
instruction = cls(action_id, *args, **kwargs)
return instruction
@classmethod
def deserialize_instruction(mcs, instruction_val: Union[str, dict]) -> 'StepInstruction':
if isinstance(instruction_val, str):
instruction_val = json.loads(instruction_val)
instruction_type = instruction_val.pop('instruction_type')
try:
instruction = mcs(instruction_type)(**instruction_val)
except Exception as e:
raise Exception(f"Failed to create instance of the {instruction_type} class | Error: {e}")
instruction.set_values_from_dict(instruction_val)
return instruction
@dataclass
class StepInstruction:
action_id: str
step_id: Optional[str]
status: Optional[StepStatus]
def __post_init__(self) -> None:
if self.status is None:
self.status = StepStatus.INIT
if self.step_id is None:
self.step_id = str(uuid.uuid4())
self._instruction_type = None
def as_dict(self) -> dict:
dict_values = deepcopy(self.__dict__)
# serialize Enum members by name
for key, value in self.__dict__.items():
if isinstance(value, Enum):
dict_values[key] = value.name
dict_values['instruction_type'] = self.instruction_type
dict_values.pop('_instruction_type')
return dict_values
def serialize(self) -> str:
return json.dumps(self.as_dict())
@property
def instruction_type(self):
if not self._instruction_type:
self._instruction_type = type(self).__name__
return self._instruction_type
def set_values_from_dict(self, instruction_data: dict):
# Set Enum members values
enum_members = list(filter(lambda k: isinstance(getattr(self, k), Enum), self.__dict__.keys()))
for key in enum_members:
enum_cls = getattr(self, key).__class__
setattr(self, key, getattr(enum_cls, instruction_data[key]))
for key, val in instruction_data.items():
if key in enum_members or key == 'instruction_type':
continue
setattr(self, key, val)
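A self-contained sketch (with a hypothetical `Step` class) of the Enum round-trip that `as_dict` / `set_values_from_dict` implement: members serialize by `.name` and are restored by indexing the Enum class with that name.

```python
# Minimal Enum serialize/deserialize round-trip, mirroring the methods above.
import json
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    INIT = auto()
    DONE = auto()

@dataclass
class Step:
    step_id: str
    status: Status

    def as_dict(self):
        d = dict(self.__dict__)
        d["status"] = self.status.name        # Enum -> name
        return d

    def set_values_from_dict(self, data):
        self.status = Status[data["status"]]  # name -> Enum
        self.step_id = data["step_id"]

payload = json.dumps(Step("abc", Status.DONE).as_dict())
restored = Step("", Status.INIT)
restored.set_values_from_dict(json.loads(payload))
print(restored.status)  # Status.DONE
```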
# TODO: move correct/common elements here
@dataclass
class MachineInstruction(StepInstruction):
...
@dataclass
class CloudInstruction(StepInstruction):
...
@dataclass
class AddDiskCloud(CloudInstruction):
instance_id: str # Note: will need region/az but should
# be available from other source/context
dev_type: DeviceType
dev_delete_on_terminate: bool
dev_size_gb: int
dev_iops: Optional[int] = None
dev_throughput: Optional[int] = None
@dataclass
class AddDiskMachine(MachineInstruction):
dev_path: str
fs_id: str
fs_mount_path: str
volume_id: str
dev_map: Optional[str] = None
@dataclass
class ModifyDiskCloud(CloudInstruction):
dev_id: str # this is volume id
dev_type: Optional[DeviceType] = None
dev_size_gb: Optional[int] = None # Note: today this is change in size - should be final size
dev_iops: Optional[int] = None
dev_throughput: Optional[int] = None
# check if we need dev old size
def __post_init__(self) -> None:
super().__post_init__() # TODO: check if necessary for __post_init__
if self.dev_type is None \
and self.dev_size_gb is None \
and self.dev_iops is None \
and self.dev_throughput is None:
raise Exception("Must modify at least one attribute")
@dataclass
class DetachDiskCloud(CloudInstruction):
volume_id: str
@dataclass
class TakeSnapshotCloud(CloudInstruction):
dev_id: str
snapshot_id: str
@dataclass
class ExtendDiskSizeMachine(MachineInstruction):
# Note: this is a copy of AddDiskMachine
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
# dev_path: str # This is necessary for Monitored disk Extend only actions
# Probably better to have a separate payload/step
btrfs_dev_id: int
@dataclass
class ResizeDisksMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
resize_btrfs_dev_ids: Dict[str, int] # action_id, btrfs_dev_id
@dataclass
class GradualRemoveChunk:
iterations: int # Note: 0 iterations will represent delete
chunk_size_gb: int # Note: respective chunk_size for 0 iter represents pause/delete threshold
@dataclass
class RemoveDiskMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
btrfs_dev_id: int
dev_path: str
chunk_scheme: List[GradualRemoveChunk] # Note: Order matters
cutoff_gb: int
def as_dict(self) -> dict:
return {
"instruction_type": self.instruction_type,
"action_id": self.action_id,
"step_id": self.step_id,
"status": self.status.name,
"fs_id": self.fs_id,
"fs_mount_path": self.fs_mount_path,
"btrfs_dev_id": self.btrfs_dev_id,
"dev_path": self.dev_path,
"chunk_scheme": [{"iterations": chunk.iterations, "chunk_size_gb": chunk.chunk_size_gb} for chunk in
self.chunk_scheme], # Note: Order matters
"cutoff_gb": self.cutoff_gb
}
###### TODO: discuss with Osher - separate instructions, or overlap into one
@dataclass
class ModifyRemoveDisksMachine(MachineInstruction):
# will service Revert/ChangingTarget/NewResize
# because they are all correlated to the same
# instruction data - changing cutoff
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
cutoff_gb: int
revert_btrfs_dev_ids: Dict[str, int] # action_id, btrfs_dev_id
###### TODO: END discuss with Osher
@dataclass
class RemoveDiskCloud(CloudInstruction):
volume_id: str
@dataclass
class StartMigrationMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
account_id: str
action_id: str
fs_mount_path: str
volume_id: str
dev_path: str
reboot: bool
@dataclass
class KubeAddDiskMachine(AddDiskMachine):
dev_size_gb: Optional[int] = None
@dataclass
class KubeExtendDiskSizeMachine(ExtendDiskSizeMachine):
dev_size_gb: Optional[int] = None | zesty.zbs-api-k8s | /zesty.zbs_api_k8s-1.0.2023.8.30.1693381546-py3-none-any.whl/zesty/step_instructions.py | step_instructions.py |
import uuid as uuid_gen
from abc import abstractmethod
from typing import Dict
from .models.hf_interface import IActionHF, ISpecialInstructions
class ZBSActionData:
mount_path = None
device = None
filesystem = None
fs_type = None
is_partition = False
partition_number = None
LV = None
VG = None
lvm_path = None
chunk_size = None
def __init__(self, mount_path=None,
device=None,
filesystem=None,
fs_type=None,
is_partition=False,
partition_number=None,
LV=None,
VG=None,
lvm_path=None,
chunk_size=None,
dev_id=None,
dev_path=None,
parent=None,
btrfs_dev_id=None,
partition_id=None,
windows_old_size=None,
size=None,
_map=None):
self.mount_path = mount_path
self.filesystem = filesystem
self.fs_type = fs_type
self.device = device
self.is_partition = is_partition
self.partition_number = partition_number
self.LV = LV
self.VG = VG
self.lvm_path = lvm_path
self.chunk_size = chunk_size
self.dev_id = dev_id
self.dev_path = dev_path
self.parent = parent
self.btrfs_dev_id = btrfs_dev_id
self.partition_id = partition_id
self.windows_old_size = windows_old_size
self.size = size
self.map = _map
def serialize(self):
return self.__dict__
def set_data(self, json):
self.mount_path = json.get('mount_path')
self.filesystem = json.get('filesystem')
self.fs_type = json.get('fs_type')
self.device = json.get('device')
self.is_partition = json.get('is_partition', False)
self.partition_number = json.get('partition_number', '')
self.LV = json.get('LV', '')
self.VG = json.get('VG', '')
self.lvm_path = json.get('lvm_path', '')
self.chunk_size = json.get('chunk_size', 0)
self.dev_id = json.get('dev_id')
self.dev_path = json.get('dev_path')
self.parent = json.get('parent')
self.btrfs_dev_id = json.get('btrfs_dev_id')
self.partition_id = json.get('partition_id')
self.windows_old_size = json.get('windows_old_size')
self.size = json.get('size')
self.map = json.get('_map')
return self
class ZBSAgentReceiver:
"""
The ZBSAgentReceiver (the Receiver class in the Command pattern) contains the important business logic.
It knows how to perform any kind of action sent by the ZBS Backend.
ZBSAgentReceiver is abstract; the concrete implementations are per OS.
"""
@abstractmethod
def do_nothing(self, data: ZBSActionData) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'do_nothing' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def extend_fs(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'extend_fs' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def add_disk(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'add_disk' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def balance_fs(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'balance_fs' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def remove_disk(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'remove_disk' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def balance_ebs_structure(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'balance_ebs_structure' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def start_migration(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'start_migration' is abstract, please implement a concrete per OS receiver")
class SpecialInstructions(ISpecialInstructions):
"""
Constructor for special instructions with optional parameters:
* dev_id: identify the device for the filesystem to which the action is attached
* size: specify the capacity for a new device or the additional capacity when extending a device
* sub_actions: when an action implements multiple actions, specify a dictionary:
-- { int(specifies action priorities): list(actions that can be run in parallel) }
-- Actions in a list keyed to a higher order cannot start until all Actions of lower orders complete
"""
def __init__(self, dev_id: str = None, size: int = None, sub_actions: Dict[int, Dict[str, IActionHF]] = None):
self.dev_id = dev_id
self.size = size
self.sub_actions = sub_actions
def __repr__(self):
return str(self.__dict__)
class ZBSAction(IActionHF):
"""
Base command class
Delegates the business logic to the receiver
There are receivers per OS (Linux and Windows for now)
"""
TYPE_FIELD_NAME = "type"
DATA_FIELD_NAME = "data"
STATUS_FIELD_NAME = "status"
UUID_FIELD_NAME = "uuid"
SPECIAL_INSTRUCTIONS_FIELD_NAME = "_ZBSAction__special_instructions"
__uuid = None
__status: IActionHF.Status = IActionHF.Status.NEW
__special_instructions: SpecialInstructions
subclasses = {}
def __init__(self, receiver: ZBSAgentReceiver = None, data: ZBSActionData = None, uuid: str = None):
self.receiver = receiver
self.data = data
if uuid is not None:
self.__uuid = uuid
else:
self.__uuid = str(uuid_gen.uuid4())
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
cls.subclasses[cls.__name__] = cls
def __repr__(self):
# __special_instructions is only annotated on the class, so read it defensively:
# an instance whose special instructions were never set would otherwise raise AttributeError.
special_instructions = getattr(self, '_ZBSAction__special_instructions', None)
if special_instructions is not None and not isinstance(special_instructions, dict):
special_instructions = special_instructions.__dict__
repr_dict = dict(zip(['Action Type', 'Action Status', 'SpecialInstructions'],
[self.get_action_type(),
str(self.get_status().name),
special_instructions]))
return str(repr_dict)
def set_data(self, data: ZBSActionData):
self.data = data
def set_receiver(self, receiver: ZBSAgentReceiver):
self.receiver = receiver
def serialize(self):
# Copy the instance dict so serialization does not inject the extra fields into self.__dict__.
result = dict(self.__dict__)
result[ZBSAction.TYPE_FIELD_NAME] = self.get_action_type()
result[ZBSAction.DATA_FIELD_NAME] = self.data.serialize() if self.data is not None else None
result[ZBSAction.STATUS_FIELD_NAME] = self.get_status().name
result[ZBSAction.UUID_FIELD_NAME] = self.get_action_id()
if hasattr(self, '_ZBSAction__special_instructions'):
result[ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME] = (
self.get_special_instructions().__dict__ if self.__special_instructions is not None else None)
return result
# ActionHF interface implementation
def get_action_id(self) -> str:
return self.__uuid
def get_action_type(self) -> str:
return str(type(self).__name__)
def get_status(self) -> IActionHF.Status:
return self.__status
def set_status(self, status: IActionHF.Status):
self.__status = status
def get_special_instructions(self) -> SpecialInstructions:
return self.__special_instructions
def set_special_instructions(self, special_instructions: SpecialInstructions):
self.__special_instructions = special_instructions
@staticmethod
def deserialize_type(json):
return json[ZBSAction.TYPE_FIELD_NAME]
@staticmethod
def deserialize_data(json):
return ZBSActionData().set_data(json[ZBSAction.DATA_FIELD_NAME])
@staticmethod
def deserialize_uuid(serialized_action):
return serialized_action.get(ZBSAction.UUID_FIELD_NAME)
@staticmethod
def deserialize_status(serialized_action):
return serialized_action.get(ZBSAction.STATUS_FIELD_NAME)
@staticmethod
def deserialize_special_instructions(serialized_action):
if not isinstance(serialized_action, dict):
serialized_action = serialized_action.serialize()
# Fetch the field once; "or {}" also guards against an explicit None stored under the key.
raw_instructions = serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME) or {}
special_instructions = SpecialInstructions(
dev_id=raw_instructions.get('dev_id'),
size=raw_instructions.get('size'),
sub_actions=raw_instructions.get('sub_actions'),
)
for key, val in raw_instructions.items():
if key not in ('dev_id', 'size', 'sub_actions'):
setattr(special_instructions, str(key), val)
return special_instructions
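The "known fields plus passthrough" shape of deserialize_special_instructions can be shown in isolation (the class below is a minimal stand-in, not the SpecialInstructions above): named fields bind explicitly, and any unrecognized keys become plain attributes via setattr.

```python
class SpecialInstructionsSketch:
    def __init__(self, dev_id=None, size=None, sub_actions=None):
        self.dev_id = dev_id
        self.size = size
        self.sub_actions = sub_actions

raw = {"dev_id": "/dev/xvdb", "size": 10, "mount_path": "/data"}
si = SpecialInstructionsSketch(dev_id=raw.get("dev_id"), size=raw.get("size"),
                               sub_actions=raw.get("sub_actions"))
# Unknown keys (here "mount_path") are attached as attributes unchanged.
for key, val in raw.items():
    if key not in ("dev_id", "size", "sub_actions"):
        setattr(si, str(key), val)
```

This keeps the deserializer forward-compatible: new fields survive a round trip even before the class learns about them.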
@staticmethod
def deserialize_action(serialized_action):
action_type = ZBSAction.deserialize_type(serialized_action)
action_data = ZBSAction.deserialize_data(serialized_action) if serialized_action.get(
ZBSAction.DATA_FIELD_NAME) is not None else None
action_uuid = ZBSAction.deserialize_uuid(serialized_action)
action_status = ZBSAction.deserialize_status(serialized_action)
action_to_perform = ZBSActionFactory.create_action(action_type, action_uuid)
action_to_perform.set_data(action_data)
action_to_perform.set_status(IActionHF.Status[action_status])
if ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME in serialized_action:
special_instructions = ZBSAction.deserialize_special_instructions(serialized_action)
action_to_perform.set_special_instructions(special_instructions)
return action_to_perform
@abstractmethod
def execute(self):
raise NotImplementedError("BaseAction is abstract, please implement a concrete action")
class DoNothingAction(ZBSAction):
"""
Do nothing action
"""
def execute(self):
print("Do nothing || Action ID : {}".format(self.get_action_id()))
class Factory:
def create(self, uuid): return DoNothingAction(uuid=uuid)
class ExtendFileSystemAction(ZBSAction):
"""
Extend File System Action.
"""
def execute(self, fs):
try:
return self.receiver.extend_fs(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return ExtendFileSystemAction(uuid=uuid)
class AddDiskAction(ZBSAction):
"""
Add Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.add_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return AddDiskAction(uuid=uuid)
class RemoveDiskAction(ZBSAction):
"""
Remove Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.remove_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return RemoveDiskAction(uuid=uuid)
class BalanceFileSystemAction(ZBSAction):
"""
Balance File System Action.
"""
def execute(self):
try:
self.receiver.balance_fs(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceFileSystemAction(uuid=uuid)
class BalanceEBSStructureAction(ZBSAction):
"""
Balance EBS structure Action.
"""
def execute(self):
try:
self.receiver.extend_fs(self.data, self.get_action_id())
self.receiver.remove_disk(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceEBSStructureAction(uuid=uuid)
class MigrationStartAction(ZBSAction):
"""
Migration Start Action.
The purpose of this action is to handle a backend (BE) request to start a migration for a mount point
Returns: whether the migration started successfully, or the error if it failed
"""
def execute(self, account_id):
try:
return self.receiver.start_migration(self.get_special_instructions(), self.get_action_id(), account_id)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return MigrationStartAction(uuid=uuid)
class ZBSActionFactory:
actions = {}
@staticmethod
def create_action(action_type, uuid=None):
if action_type not in ZBSActionFactory.actions:
action_class = ZBSAction.subclasses.get(action_type)
if action_class:
ZBSActionFactory.actions[action_type] = action_class.Factory()
else:
raise ValueError(f'Could not find action class `{action_type}`')
return ZBSActionFactory.actions[action_type].create(uuid)
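ZBSActionFactory leans on the registry that ZBSAction.__init_subclass__ maintains. A condensed, self-contained version of that pattern (generic names, not the package's classes):

```python
class Action:
    subclasses = {}

    def __init_subclass__(cls, **kwargs):
        # Every subclass registers itself by name at class-creation time,
        # so the factory needs no hand-written type map.
        super().__init_subclass__(**kwargs)
        cls.subclasses[cls.__name__] = cls

class DoNothing(Action):
    pass

def create_action(action_type):
    action_class = Action.subclasses.get(action_type)
    if action_class is None:
        raise ValueError(f"Could not find action class `{action_type}`")
    return action_class()
```

create_action("DoNothing") resolves through the registry; unknown names fail loudly, mirroring ZBSActionFactory.create_action.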
import json
from typing import List
import requests
import config as cfg
"""
USAGE:
First, initialize the factory with the base settings:
factory = RequestFactory(stage=${STAGE}, version=${VERSION}, api_key=${API_KEY})
Then create a request instance matching the type of request you want to send:
metrics_request = factory.create_request("Metrics")
Pass the data to the set_data function:
metrics_request.set_data(
agent_version,
overview,
plugins
)
Then send it to the backend and receive the response:
response = metrics_request.send()
"""
DEFAULT_BASE_URL = "https://api{}.cloudvisor.io"
ESTABLISH_CONN_TIMEOUT = 10
RECEIVE_RESPONSE_TIMEOUT = 30
class RequestFactory:
requests = {}
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create_request(self, request_type):
if request_type not in RequestFactory.requests:
request_class = Request.subclasses.get(request_type)
if request_class:
RequestFactory.requests[request_type] = request_class.Factory(self.stage, self.version, self.api_key,
self.api_base)
else:
raise ValueError(f'Could not find request class `{request_type}`')
return RequestFactory.requests[request_type].create()
class Request:
stage = None
version = None
api_key = None
prefix = None
api_base = None
api_is_private_endpoint = False
subclasses = {}
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.prefix = ""
if self.stage == 'staging':
self.prefix = "-staging"
if api_base != DEFAULT_BASE_URL:
self.api_is_private_endpoint = True
self.api_base = api_base.format(self.prefix)
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
cls.subclasses[cls.__name__] = cls
def send(self):
res = requests.post(
self.build_url(),
data=json.dumps(self.message, separators=(',', ':')),
headers={"Cache-Control": "no-cache", "Pragma": "no-cache", "x-api-key": self.api_key},
timeout=(ESTABLISH_CONN_TIMEOUT, RECEIVE_RESPONSE_TIMEOUT)
)
return self.Response(res)
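send() serializes the message with compact separators before posting. A quick illustration of why, using only the standard-library json module (the payload values are invented for the example):

```python
import json

payload = {"agent": {"version": "1.2.3"}, "overview": {}}
compact = json.dumps(payload, separators=(',', ':'))
default = json.dumps(payload)
# separators=(',', ':') drops the spaces the default encoder emits after
# ',' and ':', shrinking the body of every request.
```

The savings are small per message but add up for bulk metric posts.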
class Metrics(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-metrics")
else:
return '{}{}'.format(self.api_base, cfg.post_metrics_ep)
def set_data(self, agent_version, overview, plugins, package_version=None, autoupdate_last_execution_time=None):
self.message = {
"agent": {
"version": agent_version,
"package_version": package_version,
"autoupdate_last_execution_time": autoupdate_last_execution_time
},
"overview": overview,
"plugins": plugins
}
class Response:
raw_data: dict = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return Metrics(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class MetricsCollection(Request):
message: List[dict] = []
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Give each instance its own list; appending to the class-level default
# would leak metrics between MetricsCollection instances.
self.message = []
def build_url(self):
if self.api_is_private_endpoint:
return f'{self.api_base}/bulk-post-metrics'
else:
return f'{self.api_base}{cfg.bulk_post_metrics_ep}'
def set_data(self, metrics: dict):
self.message.append(metrics)
def clear(self):
self.message = []
class Response:
raw_data: dict = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return MetricsCollection(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class NotifyException(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-notify-exception")
else:
return '{}{}'.format(self.api_base, cfg.notify_exception_ep)
def set_data(self, account_id, instance_id, exception, msg):
self.message = {
"exception": exception,
"message": msg,
"instance_id": instance_id,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return NotifyException(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class FsResizeCompleted(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-delete-resize-item")
else:
return '{}{}'.format(self.api_base, cfg.fs_resize_completed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return FsResizeCompleted(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class HoldingRemoveAction(Request):
message = {}
def build_url(self):
return '{}{}'.format(self.api_base, cfg.hold_remove_action_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, index, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"index": index,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return HoldingRemoveAction(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class FsResizeFailed(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-fs-resize-failed")
else:
return '{}{}'.format(self.api_base, cfg.resize_failed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, error, resize_steps, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"error": error,
"resize_steps": resize_steps,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return FsResizeFailed(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class SyncMachineActions(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-sync-machine-actions")
else:
return '{}{}'.format(self.api_base, cfg.sync_machine_actions_ep)
def set_data(self, action_id, account_id, status, fs_id):
self.message = {
'account_id': account_id,
'actions': {
action_id: {
'fs_id': fs_id,
'instruction_status': status
}
}
}
def add_data(self, action_id, status, fs_id):
self.message['actions'][action_id] = {
'fs_id': fs_id,
'instruction_status': status
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return SyncMachineActions(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class MigrationStartActionCompleted(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-migration-start-action-complete")
else:
return '{}{}'.format(self.api_base, cfg.migration_start_action_completed_ep)
def set_data(self, account_id, fs_id, action_id, mount_path, volume_id, region, cloud_vendor, dev_path, exit_code,
error):
self.message = {
"account_id": account_id,
"fs_id": fs_id,
"action_id": action_id,
"mount_path": mount_path,
"volume_id": volume_id,
"region": region,
"cloud_vendor": cloud_vendor,
"dev_path": dev_path,
"exit_code": exit_code,
"error": error
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return MigrationStartActionCompleted(stage=self.stage, version=self.version,
api_key=self.api_key,
api_base=self.api_base)
import json
import time
import traceback
from typing import Dict, Optional
from copy import deepcopy
from decimal import Decimal
from zesty.id_handler import create_zesty_id, create_zesty_filesystem_id
from ..actions import ZBSAction
from .BlockDevice import BlockDevice
from .Usage import Usage
GB_IN_BYTES = 1024**3
class FileSystem:
"""
This object interacts with DynamoDB, representing a FileSystem.
As per the data model migration ZES-2884,
these fields will remain backwards compatible (and awkward in appearance)
until the code is brought up to date.
"""
def __init__(
self,
fs_id: str,
account_id: str = None,
account_uuid: str = None,
agent_update_required: bool = None,
btrfs_version: str = None,
cloud: str = None,
cloud_vendor: str = None,
cycle_period: int = None,
delete_on_termination: bool = None,
devices: Dict[str, BlockDevice] = None,
encrypted: dict = None,
existing_actions: Dict[str, ZBSAction] = None,
expiredAt: int = None,
fs_cost: float = None,
fs_devices_to_count: int = None,
fs_size: int = None,
fs_type: str = None,
fs_usage: int = None,
has_unallocated_space: bool = None,
inodes: Dict[str, Usage] = None,
instance_id: str = None,
instance_type: str = None,
is_ephemeral: bool = None,
is_partition: bool = None,
is_zesty_disk: bool = None,
label: str = None,
last_update: int = None,
LV: str = None,
lvm_path: str = None,
mount_path: str = None,
name: str = None,
org_id: str = None,
partition_id: str = None,
partition_number: int = None,
platform: str = None,
potential_savings: float = None,
region: str = None,
resizable: bool = None,
space: Dict[str, Usage] = None,
tags: Dict[str, str] = None,
unallocated_chunk: int = None,
update_data_ts: int = 0,
VG: str = None,
wrong_fs_alert: bool = None,
zesty_disk_iops: int = None,
zesty_disk_throughput: int = None,
zesty_disk_vol_type: str = None,
max_utilization_in_72_hrs: int = None,
package_version: str = None,
autoupdate_last_execution_time: str = None,
statvfs_raw_data: Dict[str, str] = None,
pvc_id: str = None,
mount_options: list = None,
leading_device: str = None,
policies: Dict[str, dict] = None,
instance_tags: Dict[str, str] = None,
is_manageable: bool = False, #related migration
is_emr: bool = False,
target: Optional[int] = None,
iops_tps_vol_type_triggered: bool = False,
iops_tps_vol_type_change_ts: Optional[int] = None
):
# Initialize empty dict not as default arg
existing_actions = {} if existing_actions is None else existing_actions
devices = {} if devices is None else devices
inodes = {} if inodes is None else inodes
space = {} if space is None else space
tags = {} if tags is None else tags
instance_tags = {} if instance_tags is None else instance_tags
self.account_id = account_id
self.account_uuid = account_uuid
self.agent_update_required = agent_update_required
self.btrfs_version = btrfs_version
if cloud is None and cloud_vendor is None:
self.cloud = 'Amazon'
self.cloud_vendor = 'Amazon'
elif cloud:
self.cloud = cloud
self.cloud_vendor = cloud
elif cloud_vendor:
self.cloud = cloud_vendor
self.cloud_vendor = cloud_vendor
self.cycle_period = cycle_period
self.devices = self.init_devices(devices)
self.delete_on_termination = delete_on_termination
self.encrypted = encrypted
self.existing_actions = existing_actions
self.expiredAt = expiredAt
self.fs_cost = fs_cost
self.fs_devices_to_count = fs_devices_to_count
try:
self.fs_id = create_zesty_filesystem_id(
cloud=self.cloud_vendor,
fs_id=fs_id
)
except Exception:
self.fs_id = fs_id
self.fs_size = fs_size
self.fs_type = fs_type
self.fs_usage = fs_usage
self.has_unallocated_space = has_unallocated_space
self.inodes = Usage(inodes)
self.instance_id = instance_id
self.instance_type = instance_type
self.is_ephemeral = is_ephemeral
self.is_partition = is_partition
self.is_zesty_disk = is_zesty_disk
self.label = label
if last_update is None:
self.last_update = int(time.time()) - 60
else:
self.last_update = last_update
self.LV = LV
self.lvm_path = lvm_path
self.mount_path = mount_path
self.name = name
self.org_id = org_id
self.partition_id = partition_id
self.partition_number = partition_number
self.platform = platform
self.potential_savings = potential_savings
self.region = region
self.resizable = resizable
self.space = Usage(space)
self.tags = tags
self.unallocated_chunk = unallocated_chunk
self.update_data_ts = update_data_ts
self.VG = VG
self.wrong_fs_alert = wrong_fs_alert
self.zesty_disk_iops = zesty_disk_iops
self.zesty_disk_throughput = zesty_disk_throughput
self.zesty_disk_vol_type = zesty_disk_vol_type
self.max_utilization_in_72_hrs = max_utilization_in_72_hrs
self.package_version = package_version
self.autoupdate_last_execution_time = autoupdate_last_execution_time
self.statvfs_raw_data = statvfs_raw_data
self.pvc_id = pvc_id
self.mount_options = mount_options
self.leading_device = leading_device
self.policies = policies
self.instance_tags = instance_tags
self.is_manageable = is_manageable #related migration
self.is_emr = is_emr
self.target = target
self.iops_tps_vol_type_triggered = iops_tps_vol_type_triggered
self.iops_tps_vol_type_change_ts = iops_tps_vol_type_change_ts
@staticmethod
def init_devices(devices: Dict[str, BlockDevice]):
if not devices:
return {}
else:
devices = deepcopy(devices)
for dev in devices:
if isinstance(devices[dev], BlockDevice):
continue
devices[dev] = BlockDevice(
**devices.get(dev, {})
)
return devices
def as_dict(self) -> dict:
return_dict = json.loads(json.dumps(self, default=self.object_dumper))
return {k: v for k, v in return_dict.items() if v is not None}
@staticmethod
def object_dumper(obj) -> dict:
try:
return obj.__dict__
except AttributeError as e:
if isinstance(obj, Decimal):
return int(obj)
print(f"Got exception in object_dumper value: {obj} | type : {type(obj)}")
print(traceback.format_exc())
return obj
def serialize(self) -> dict:
return self.as_dict()
def __repr__(self) -> str:
return f"FileSystem:{self.fs_id}" | zesty.zbs-api-k8s | /zesty.zbs_api_k8s-1.0.2023.8.30.1693381546-py3-none-any.whl/zesty/models/FileSystem.py | FileSystem.py |
import enum
from typing import Dict, Union
from sqlalchemy import func
from sqlalchemy.orm import Session, sessionmaker, Query
from sqlalchemy.sql.elements import or_, Label
from sqlalchemy.ext.hybrid import hybrid_property
from .InstancesTags import InstancesTags
from .common_base import Base
try:
from sqlalchemy import Column, engine, case, func, cast, String, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects.postgresql import BOOLEAN, FLOAT, INTEGER, BIGINT, \
JSON, TIMESTAMP, VARCHAR
except ImportError:
raise ImportError("sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
class EbsVolumeConfig(enum.Enum):
none = 'None'
unattached = 'Unattached'
potentialZesty = 'Potential ZestyDisk'
class EbsVolume(Base):
# TODO: Move this model into our Alembic system
# when a modification of this model is needed.
__tablename__ = "disks"
volume_id = Column(VARCHAR, primary_key=True)
org_id = Column(VARCHAR, index=True)
account_uuid = Column(VARCHAR, index=True)
account_id = Column(VARCHAR, index=True)
region = Column(VARCHAR, index=True)
volume_type = Column(VARCHAR, index=True)
cloud = Column(VARCHAR, index=True)
availability_zone = Column(VARCHAR)
create_time = Column(TIMESTAMP)
encrypted = Column(BOOLEAN)
size = Column(INTEGER)
snapshot_id = Column(VARCHAR)
state = Column(VARCHAR)
iops = Column(INTEGER)
tags = Column(JSON)
attachments = Column(JSON)
attached_to = Column(JSON)
monthly_cost = Column(FLOAT, default=0)
is_unused_resource = Column(INTEGER, default=0)
unused_since = Column(VARCHAR)
agent_installed = Column(BOOLEAN, default=False)
_zbs_supported_os = Column(INTEGER)
potential_savings = Column(FLOAT, default=0)
image_id = Column(VARCHAR, nullable=True)
image_name = Column(VARCHAR, nullable=True)
# dict for custom_order_by class method
col_to_actual_sorting_col = {"instance_tags": "instance_tags_keys"}
def __init__(
self,
volume_aws_schema: Dict,
account_uuid: str = None):
if account_uuid:
self.account_uuid = account_uuid
else:
self.account_uuid = volume_aws_schema["account_uuid"]
self.volume_id = volume_aws_schema["volume_id"]
self.org_id = volume_aws_schema["org_id"]
self.account_id = volume_aws_schema["account_id"]
self.cloud = volume_aws_schema["cloud"]
self.region = volume_aws_schema["region"]
self.volume_type = volume_aws_schema["volume_type"]
self.availability_zone = volume_aws_schema["availability_zone"]
self.create_time = volume_aws_schema["create_time"]
self.encrypted = volume_aws_schema["encrypted"]
self.size = volume_aws_schema["size"]
self.snapshot_id = volume_aws_schema["snapshot_id"]
self.state = volume_aws_schema["state"]
self.iops = volume_aws_schema.get("iops", 0)
self.tags = volume_aws_schema.get("tags", {})
self.attachments = volume_aws_schema.get("attachments", [])
self.attached_to = volume_aws_schema.get("attached_to", [])
self.monthly_cost = volume_aws_schema.get("monthly_cost", 0)
self.is_unused_resource = volume_aws_schema.get(
"is_unused_resource", 0)
self.unused_since = volume_aws_schema.get("unused_since", None)
self.agent_installed = volume_aws_schema.get("agent_installed", False)
self.potential_savings = volume_aws_schema.get("potential_savings", 0)
self._zbs_supported_os = volume_aws_schema.get("_zbs_supported_os")
self.image_id = volume_aws_schema.get("ami_id")
self.image_name = volume_aws_schema.get("ami_name")
def __repr__(self):
return f"{self.__tablename__}:{self.volume_id}"
@classmethod
def instance_id_filter(cls, query: Query, value: str):
val = f'%{value}%'
query = query.filter(
case((or_(cls.attached_to == None, func.json_array_length(cls.attached_to) == 0), False),
else_=cast(cls.attached_to, String).ilike(val)))
return query
@classmethod
def instance_name_filter(cls, query: Query, value: str):
subq = query.session.query(InstancesTags.instance_name)
val = '%{}%'.format(value.replace("%", "\\%"))
query = query.filter((subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (
cls.account_id == InstancesTags.account_id))).ilike(val))
return query
@classmethod
def instance_tags_filter(cls, query: Query, value: str):
session = query.session
subq = session.query(InstancesTags.instance_tags)
python_types_to_pg = {int: BIGINT, float: FLOAT, bool: BOOLEAN}
for key_val in value:
key = key_val.get('key')
val = key_val.get('value')
if key is not None and val is not None:
if not isinstance(val, str):
query = query.filter(cast(cast(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)).op('->')(key)), String), python_types_to_pg[type(val)]) == val)
else:
val = f'%{val}%'
query = query.filter(cast(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)).op('->')(key)), String).ilike(val))
elif key is not None:
query = query.filter(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id))).op('?')(key))
elif val is not None:
if isinstance(val, str):
query = query.filter(cast(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)), String)
.regexp_replace(r'.+\: "[^"]*(' + str(val) + r')[^"]*"[,\s}].*', "\\1") == f"{val}")
else:
if isinstance(val, bool):
val = f'"{val}"'
query = query.filter(cast(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)), String)
.regexp_replace(r'.+\: (' + str(val) + r')[,\s}].*', "\\1") == f"{val}")
return query
# Custom query
@classmethod
def custom_query(cls, session: Union[Session, sessionmaker]) -> Query:
q = session.query(cls)
subq_2 = session.query(func.json_object_keys(InstancesTags.instance_tags))
subq_3 = session.query(InstancesTags.instance_tags)
instance_name_clause = "regexp_replace(cast(array((select instances_tags.instance_name from instances_tags " \
"inner join json_array_elements(disks.attached_to) as attached_to_set " \
"on instances_tags.instance_id = replace(cast(attached_to_set.value as varchar), '\"', '') " \
"and instances_tags.account_id = disks.account_id)) as varchar), '[\\{\\}\"]', '', 'g')"
q = q.add_columns(case((or_(cls.attached_to == None, func.json_array_length(cls.attached_to) == 0), ''),
else_=cast(cls.attached_to, String).regexp_replace(r'[\[\]"]', '', 'g'))
.label("instance_id"),
Label('instance_name', text(instance_name_clause)),
func.array(subq_2.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) &
(cls.account_id == InstancesTags.account_id)))
.label('instance_tags_keys'),
subq_3.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) &
(cls.account_id == InstancesTags.account_id))
.label('instance_tags'))
return q
@classmethod
def custom_order_by(cls, sorting_column: str, sorting_order: str) -> str:
actual_sorting_column = cls.col_to_actual_sorting_col.get(sorting_column, sorting_column)
return f"{actual_sorting_column} {sorting_order}"
def get_volume_id(self):
return self.volume_id
def as_dict(self):
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
@hybrid_property
def is_attached(self):
return len(self.attached_to) > 0
@is_attached.expression
def is_attached(cls):
return func.json_array_length(cls.attached_to) > 0
def create_tables(engine: engine.base.Engine) -> None:  # type: ignore
Base.metadata.create_all(engine, checkfirst=True)
import time
from enum import Enum, auto
from typing import Dict, List, Optional, Union
from uuid import UUID as _UUID
from uuid import uuid4
from sqlalchemy import INT
from sqlalchemy import Enum as sa_ENUM
from sqlalchemy.dialects.postgresql import ARRAY, UUID
from sqlalchemy.sql.schema import ForeignKey
try:
from sqlalchemy import Column, String, case, cast, engine, func, or_
from sqlalchemy.dialects.postgresql import (BIGINT, BOOLEAN, FLOAT, JSON,
TIMESTAMP, VARCHAR)
from sqlalchemy.orm import Query, Session, aliased, sessionmaker
except ImportError:
raise ImportError(
"sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
from ..actions import ZBSAction
from .BlockDevice import BlockDevice
from .common_base import Base, BaseMixin
from .Usage import Usage
class ManagedFsMixin:
fs_id = Column(VARCHAR, primary_key=True)
account_id = Column(VARCHAR, index=True, default=None)
account_uuid = Column(VARCHAR, index=True, default=None)
agent_update_required = Column(BOOLEAN, default=None)
btrfs_version = Column(VARCHAR, default=None)
cloud = Column(VARCHAR, default=None)
cloud_vendor = Column(VARCHAR, default=None)
cycle_period = Column(BIGINT, default=None)
delete_on_termination = Column(BOOLEAN, default=None)
devices = Column(JSON, default=None)
encrypted = Column(JSON, default=None)
existing_actions = Column(JSON, default=None)
expiredAt = Column(BIGINT, default=None)
fs_cost = Column(FLOAT, default=None)
fs_devices_to_count = Column(BIGINT, default=None)
fs_size = Column(BIGINT, default=None)
fs_type = Column(VARCHAR, default=None)
fs_usage = Column(BIGINT, default=None)
has_unallocated_space = Column(BOOLEAN, default=None)
inodes = Column(JSON, default=None)
instance_id = Column(VARCHAR, default=None)
instance_type = Column(VARCHAR, default=None)
is_ephemeral = Column(BOOLEAN, default=None)
is_partition = Column(BOOLEAN, default=None)
is_zesty_disk = Column(BOOLEAN, default=None)
label = Column(VARCHAR, default=None)
last_update = Column(BIGINT, default=None)
LV = Column(VARCHAR, default=None)
lvm_path = Column(VARCHAR, default=None)
mount_path = Column(VARCHAR, default=None)
name = Column(VARCHAR, default=None)
org_id = Column(VARCHAR, index=True)
partition_id = Column(VARCHAR, default=None)
partition_number = Column(BIGINT, default=None)
platform = Column(VARCHAR, default=None)
potential_savings = Column(FLOAT, default=None)
region = Column(VARCHAR, index=True)
resizable = Column(BOOLEAN, default=None)
space = Column(JSON, default=None)
tags = Column(JSON, default=None)
unallocated_chunk = Column(BIGINT, default=None)
update_data_ts = Column(BIGINT, default=0)
VG = Column(VARCHAR, default=None)
wrong_fs_alert = Column(BOOLEAN, default=None)
zesty_disk_iops = Column(BIGINT, default=None)
zesty_disk_throughput = Column(BIGINT, default=None)
zesty_disk_vol_type = Column(VARCHAR, default=None)
max_utilization_in_72_hrs = Column(BIGINT, default=None)
package_version = Column(VARCHAR, default=None)
autoupdate_last_execution_time = Column(VARCHAR, default=None)
policies = Column(JSON, default=None)
instance_tags = Column(JSON, default=None)
migration_uuid = Column(UUID(as_uuid=True), nullable=True)
is_manageable = Column(BOOLEAN, default=False)
iops_tps_vol_type_triggered = Column(BOOLEAN, default=False)
iops_tps_vol_type_change_ts = Column(BIGINT, nullable=True, default=None)
# dict for custom_order_by class method
col_to_actual_sorting_col = {
"policies": "policies_name",
"instance_tags": "instance_tags_keys"}
def __init__(
self,
fs_id: str,
account_id: str = None,
account_uuid: str = None,
agent_update_required: bool = None,
btrfs_version: str = None,
cloud: str = None,
cloud_vendor: str = None,
cycle_period: int = None,
delete_on_termination: bool = None,
devices: Dict[str, BlockDevice] = None,
encrypted: dict = None,
existing_actions: Dict[str, ZBSAction] = None,
expiredAt: int = None,
fs_cost: float = None,
fs_devices_to_count: int = None,
fs_size: int = None,
fs_type: str = None,
fs_usage: int = None,
has_unallocated_space: bool = None,
inodes: Dict[str, Usage] = None,
instance_id: str = None,
instance_type: str = None,
is_ephemeral: bool = None,
is_partition: bool = None,
is_zesty_disk: bool = None,
label: str = None,
last_update: int = None,
LV: str = None,
lvm_path: str = None,
mount_path: str = None,
name: str = None,
org_id: str = None,
partition_id: str = None,
partition_number: int = None,
platform: str = None,
potential_savings: float = None,
region: str = None,
resizable: bool = None,
space: Dict[str, Usage] = None,
tags: Dict[str, str] = None,
unallocated_chunk: int = None,
update_data_ts: int = 0,
VG: str = None,
wrong_fs_alert: bool = None,
zesty_disk_iops: int = None,
zesty_disk_throughput: int = None,
zesty_disk_vol_type: str = None,
max_utilization_in_72_hrs: int = None,
package_version: str = None,
autoupdate_last_execution_time: str = None,
statvfs_raw_data: Dict[str, str] = None, # unused to support initialization with **dict, do not remove
policies: Dict[str, dict] = None,
instance_tags: Dict[str, str] = None,
is_emr: bool = False, # unused to support initialization with **dict, do not remove
is_manageable: bool = False,
iops_tps_vol_type_triggered: bool = False,
iops_tps_vol_type_change_ts: Optional[int] = None,
**kwargs
):
self.fs_id = fs_id
self.account_id = account_id
self.account_uuid = account_uuid
self.agent_update_required = agent_update_required
self.btrfs_version = btrfs_version
if cloud is None and cloud_vendor is None:
self.cloud = 'Amazon'
self.cloud_vendor = 'Amazon'
elif cloud:
self.cloud = cloud
self.cloud_vendor = cloud
elif cloud_vendor:
self.cloud = cloud_vendor
self.cloud_vendor = cloud_vendor
self.cycle_period = cycle_period
self.delete_on_termination = delete_on_termination
self.devices = devices
if devices:
for dev in self.devices:
if isinstance(self.devices[dev], BlockDevice):
self.devices[dev] = self.devices[dev].asdict()
else:
self.devices[dev] = self.devices.get(dev, {})
self.encrypted = encrypted
self.existing_actions = existing_actions
if existing_actions:
for action in existing_actions:
self.existing_actions[action] = existing_actions[action].serialize()
self.expiredAt = expiredAt
self.fs_cost = fs_cost
self.fs_devices_to_count = fs_devices_to_count
self.fs_size = fs_size
self.fs_type = fs_type
self.fs_usage = fs_usage
self.has_unallocated_space = has_unallocated_space
self.inodes = inodes
self.instance_id = instance_id
self.instance_type = instance_type
self.is_ephemeral = is_ephemeral
self.is_partition = is_partition
self.is_zesty_disk = is_zesty_disk
self.label = label
if last_update:
self.last_update = last_update
else:
self.last_update = int(time.time()) - 60
self.LV = LV
self.lvm_path = lvm_path
self.mount_path = mount_path
self.name = name
self.org_id = org_id
self.partition_id = partition_id
self.partition_number = partition_number
self.platform = platform
self.potential_savings = potential_savings
self.region = region
self.resizable = resizable
self.space = space
self.tags = tags
self.unallocated_chunk = unallocated_chunk
self.update_data_ts = update_data_ts
self.VG = VG
self.wrong_fs_alert = wrong_fs_alert
self.zesty_disk_iops = zesty_disk_iops
self.zesty_disk_throughput = zesty_disk_throughput
self.zesty_disk_vol_type = zesty_disk_vol_type
self.max_utilization_in_72_hrs = max_utilization_in_72_hrs
self.package_version = package_version
self.autoupdate_last_execution_time = autoupdate_last_execution_time
self.policies = policies
self.instance_tags = instance_tags
self.is_manageable = is_manageable
self.iops_tps_vol_type_triggered = iops_tps_vol_type_triggered
self.iops_tps_vol_type_change_ts = iops_tps_vol_type_change_ts
def __repr__(self) -> str:
return f"{self.__tablename__}:{self.fs_id}"
def asdict(self) -> dict:
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def as_dict(self) -> dict:
return self.asdict()
# Custom filters
@classmethod
def policies_filter(cls, query: Query, value: str):
query = query.filter(
cast(cls.policies, String).contains(f'"name": "{value}"'))
return query
@classmethod
def instance_name_filter(cls, query: Query, value: str):
val = '%{}%'.format(value.replace("%", "\\%"))
query = query.filter(
case((cls.instance_tags == None, ''), else_=func.replace(cast(cls.instance_tags.op('->')('Name'), String), "\"", "")).ilike(val))
return query
# Custom query
@classmethod
def custom_query(cls, session: Union[Session, sessionmaker]) -> Query:
clsb = aliased(cls)
subq = session.query(func.json_object_keys(clsb.instance_tags))
q = session.query(cls)
q = q.add_columns(case((or_(cls.policies == None, cast(cls.policies, String) == 'null'), ''),
else_=cast(cls.policies, String).regexp_replace(r'.+"name":\s"([^"]+).+', "\\1"))
.label("policies_name"),
case((cls.instance_tags == None, ''),
else_=func.replace(cast(cls.instance_tags.op('->')('Name'), String), "\"", ""))
.label('instance_name'),
case((cast(cls.instance_tags, String) == 'null', []),
else_=func.array(subq.scalar_subquery().where(cls.fs_id == clsb.fs_id)))
.label('instance_tags_keys')
)
return q
@classmethod
def custom_order_by(cls, sorting_column: str, sorting_order: str) -> str:
actual_sorting_column = cls.col_to_actual_sorting_col.get(
sorting_column, sorting_column)
return f"{actual_sorting_column} {sorting_order}"
class ManagedFs(ManagedFsMixin, BaseMixin, Base):
__tablename__ = "managed_filesystems"
class MigrationStatus(Enum):
Active = auto()
Aborting = auto()
Aborted = auto()
Completed = auto()
Failed = auto()
class RunningMigrations(BaseMixin, Base):
__tablename__ = "active_migration"
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), nullable=False, primary_key=True)
finished_at = Column(TIMESTAMP, nullable=True)
account_id = Column(VARCHAR, default=None)
region = Column(VARCHAR(255))
reboot = Column(BOOLEAN, default=False)
# array of day numbers when reboot is allowed 0-6
days = Column(ARRAY(VARCHAR))
# timeframe from-to in %I:%M %p
from_ = Column(VARCHAR)
to = Column(VARCHAR)
status = Column(sa_ENUM(MigrationStatus), nullable=False, server_default=MigrationStatus.Active.name)
is_rebooting = Column(BOOLEAN, default=False) # TODO: can this be deleted?
snapshot_id = Column(VARCHAR(255))
snapshot_remove_after = Column(INT, nullable=True) # in days
snapshot_create_started_at = Column(TIMESTAMP, nullable=True)
snapshot_deleted_at = Column(TIMESTAMP, nullable=True)
ebs_id = Column(VARCHAR(255))
ebs_remove_after = Column(INT, nullable=True) # in days
ebs_detached_at = Column(TIMESTAMP, nullable=True)
ebs_deleted_at = Column(TIMESTAMP, nullable=True)
def __init__(
self,
fs_id: str,
migration_uuid: _UUID,
account_id: str = None,
region: str = None,
days: Optional[List[int]] = None,
from_: Optional[str] = None,
to: Optional[str] = None,
reboot: bool = False,
status: MigrationStatus = MigrationStatus.Active,
ebs_remove_after: int = 1,
snapshot_remove_after: int = 7):
self.migration_uuid = migration_uuid
self.fs_id = fs_id
self.account_id = account_id
self.region = region
self.days = days
self.from_ = from_
self.to = to
self.reboot = reboot
self.status = status
self.ebs_remove_after = ebs_remove_after
self.snapshot_remove_after = snapshot_remove_after
@staticmethod
def new_migration(
fs_id,
days: Optional[List[int]] = None,
from_: Optional[str] = None,
to: Optional[str] = None,
reboot: bool = False,
ebs_remove_after: int = 1,
snapshot_remove_after: int = 7) -> 'RunningMigrations':
# pass the optional fields by keyword so they do not shift into account_id/region
return RunningMigrations(
fs_id,
uuid4(),
days=days,
from_=from_,
to=to,
reboot=reboot,
ebs_remove_after=ebs_remove_after,
snapshot_remove_after=snapshot_remove_after,
)
class MigrationHistory(BaseMixin, Base):
__tablename__ = "migration_history"
time_start = Column(TIMESTAMP)
time_end = Column(TIMESTAMP)
status = Column(VARCHAR)
phase = Column(VARCHAR, primary_key=True)
progress = Column(FLOAT)
completed = Column(BOOLEAN)
failed = Column(BOOLEAN)
failure_reason = Column(VARCHAR)
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), ForeignKey("active_migration.migration_uuid", ondelete="CASCADE"),
nullable=False, primary_key=True, index=True)
# should be returned from the agent in seconds
estimated = Column(INT)
name = Column(VARCHAR)
weight = Column(INT)
abortable = Column(BOOLEAN)
index = Column(INT, primary_key=True)
def __init__(
self,
status: str,
phase: str,
progress: int,
eta: int,
name: str,
weight: int,
abortable: bool,
start_time: int,
end_time: int,
migration_uuid: 'UUID',
fs_id: str,
index: int):
self.status = status
self.phase = phase
self.progress = progress
self.estimated = eta
self.name = name
self.weight = weight
self.time_start = start_time
self.time_end = end_time
self.abortable = abortable
self.index = index
self.migration_uuid = migration_uuid
self.fs_id = fs_id
class WrongActionException(Exception):
pass
class MigrationActions(BaseMixin, Base):
__tablename__ = "migration_actions"
id = Column(INT, primary_key=True, autoincrement=True)
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), ForeignKey("active_migration.migration_uuid", ondelete="CASCADE"),
nullable=False)
action = Column(VARCHAR)
value = Column(VARCHAR)
allowed_actions = ['start', 'reboot', 'reboot_now', 'abort']
def __init__(self, fs_id, migration_uuid, action, value):
self.fs_id = fs_id
self.migration_uuid = migration_uuid
self.set_action(action)
self.value = value
def set_action(self, action):
if action not in self.allowed_actions:
raise WrongActionException
self.action = action
def create_tables(engine: engine.base.Engine) -> None:
Base.metadata.create_all(engine, checkfirst=True) | zesty.zbs-api-k8s | /zesty.zbs_api_k8s-1.0.2023.8.30.1693381546-py3-none-any.whl/zesty/models/ManagedFS.py | ManagedFS.py |
import json
import time
from datetime import datetime
from typing import Dict, Union
from .common_base import Base, BaseMixin
try:
from sqlalchemy import (Column, PrimaryKeyConstraint, String, case, cast,
engine, func, or_, select, text)
from sqlalchemy.dialects.postgresql import (BIGINT, BOOLEAN, FLOAT, JSON,
VARCHAR)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Query, Session, aliased, sessionmaker
except ImportError:
raise ImportError(
"sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
class InstancesTags(BaseMixin, Base):
__tablename__ = "instances_tags"
instance_id = Column(VARCHAR, primary_key=True)
account_id = Column(VARCHAR, index=True, default=None)
account_uuid = Column(VARCHAR, index=True, default=None)
instance_name = Column(VARCHAR, default=None)
instance_tags = Column(JSON, default=None)
expired_at = Column(BIGINT, default=None)
__table_args__ = (
PrimaryKeyConstraint('instance_id', name='instances_tags_pkey'),)
def __init__(
self,
instance_id: str,
account_id: str = None,
account_uuid: str = None,
instance_name: str = None,
instance_tags: dict = None,
expired_at: int = None
):
self.instance_id = instance_id
self.account_id = account_id
self.account_uuid = account_uuid
self.instance_name = instance_name
self.instance_tags = instance_tags
self.expired_at = expired_at or int(datetime.utcnow().timestamp()) + 3 * 3600
def __eq__(self, other) -> bool:
return self.__hash__() == other.__hash__()
def __hash__(self) -> int:
return hash(''.join(map(lambda c: getattr(self, c.name) or '',
filter(lambda c: c.name not in ['instance_tags', 'expired_at', 'created_at', 'updated_at'],
self.__table__.columns))) +
json.dumps(self.instance_tags))
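The `__hash__` above folds the (unhashable) tags dict in via `json.dumps` while excluding volatile columns such as `expired_at` and the timestamps, so equality reflects only identity-relevant data. A minimal, hypothetical sketch of that idea outside SQLAlchemy:

```python
import json

# Hypothetical mini-class illustrating the identity-hash pattern used by
# InstancesTags: volatile fields are excluded, and the tags dict (unhashable
# by itself) contributes through a JSON dump.
class TagRecord:
    VOLATILE = {"expired_at"}

    def __init__(self, instance_id, instance_name, instance_tags, expired_at):
        self.instance_id = instance_id
        self.instance_name = instance_name
        self.instance_tags = instance_tags
        self.expired_at = expired_at

    def __hash__(self):
        stable = [v for k, v in sorted(self.__dict__.items())
                  if k not in self.VOLATILE and k != "instance_tags"]
        return hash("".join(map(str, stable)) + json.dumps(self.instance_tags))

    def __eq__(self, other):
        return hash(self) == hash(other)
```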
def __repr__(self) -> str:
return f"{self.__tablename__}:{self.instance_id}"
def asdict(self) -> dict:
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def as_dict(self) -> dict:
return self.asdict()
def create_tables(engine: engine.base.Engine) -> None:
Base.metadata.create_all(engine, checkfirst=True) | zesty.zbs-api-k8s | /zesty.zbs_api_k8s-1.0.2023.8.30.1693381546-py3-none-any.whl/zesty/models/InstancesTags.py | InstancesTags.py |
import json
import traceback
from decimal import Decimal
from typing import Dict
from zesty.id_handler import create_zesty_id
GB_IN_BYTES = 1024**3
class BlockDevice:
def __init__(
self,
size: int,
btrfs_dev_id: str = None,
cloud_vendor: str = 'Amazon',
created: str = None,
dev_usage: int = None,
iops: int = None,
throughput: int = None,
lun: int = None,
map: str = None,
iops_stats: Dict[str, int] = None,
parent: str = None,
unlock_ts: int = 0,
volume_id: str = None,
volume_type: str = None,
device: str = None,
btrfs_size: int = None,
extendable: bool = True,
removable: bool = True
):
"""
Block Device class doc:
:param size: Size of the device in Bytes
:param btrfs_dev_id: ID of the device inside the BTRFS structure
:param cloud_vendor: Cloud vendor (AWS/Azure/GCP)
:param created: Device creation date
:param dev_usage: How much of the device is in use (in Bytes)
:param iops: Device IOPS amount
:param lun: LUN number (Only for Azure)
:param map: The mount slot of the device inside the OS
:param iops_stats: Dict with IOPS statistics
:param parent: If it's a partition so this one represent the parent device
:param unlock_ts: TS when the device will be ready to be extended again
:param volume_id: Device ID
:param volume_type: Type of the device in the cloud
:param device: Device mount slot from the cloud
:param btrfs_size: The usable size for the filesystem in bytes
:param extendable: Whether ZestyDisk Handsfree logic is allowed to extend the device
:param removable: Whether ZestyDisk Handsfree logic is allowed to remove the device from the filesystem
"""
# Init empty dict here instead of passing as default value
iops_stats = {} if iops_stats is None else iops_stats
self.size = size
self.cloud_vendor = cloud_vendor
try:
self.volume_id = create_zesty_id(
cloud=self.cloud_vendor,
resource_id=volume_id
)
except:
self.volume_id = volume_id
self.btrfs_dev_id = btrfs_dev_id
self.created = created
self.dev_usage = dev_usage
self.iops = iops
self.throughput = throughput
self.lun = lun
self.map = map
self.iops_stats = iops_stats
if device:
self.device = device
if parent:
self.parent = parent
if not unlock_ts:
self.unlock_ts = 0
else:
self.unlock_ts = unlock_ts
self.volume_type = volume_type
self.extendable = extendable
self.removable = removable
# if btrfs_size is None (e.g. Windows or an old collector), fall back to size
self.btrfs_size = btrfs_size if btrfs_size is not None else size
def as_dict(self) -> dict:
return_dict = json.loads(json.dumps(self, default=self.object_dumper))
return {k: v for k, v in return_dict.items() if v is not None}
@staticmethod
def object_dumper(obj) -> dict:
try:
return obj.__dict__
except AttributeError as e:
if isinstance(obj, Decimal):
return int(obj)
print(f"Got exception in object_dumper value: {obj} | type : {type(obj)}")
print(traceback.format_exc())
return obj
def serialize(self) -> dict:
return self.as_dict()
def __repr__(self) -> str:
return str(self.as_dict()) | zesty.zbs-api-k8s | /zesty.zbs_api_k8s-1.0.2023.8.30.1693381546-py3-none-any.whl/zesty/models/BlockDevice.py | BlockDevice.py |
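BlockDevice serializes itself by round-tripping through `json.dumps` with a `default` hook that exposes `__dict__` and coerces `Decimal` values to `int`, then drops `None`-valued keys. A self-contained sketch of that pattern (the `Disk` class is hypothetical):

```python
import json
from decimal import Decimal

# The `default` hook: fall back to the object's __dict__, and convert
# Decimal (which json cannot serialize) to int.
def object_dumper(obj):
    try:
        return obj.__dict__
    except AttributeError:
        if isinstance(obj, Decimal):
            return int(obj)
        raise

class Disk:
    def __init__(self, size, iops):
        self.size = size
        self.iops = iops
        self.parent = None

    def as_dict(self):
        raw = json.loads(json.dumps(self, default=object_dumper))
        # None-valued attributes are dropped, as in BlockDevice.as_dict
        return {k: v for k, v in raw.items() if v is not None}
```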
from zesty.id_handler import create_zesty_id
class MachineData:
def __init__(self, machine_data):
self.cloud_vendor = machine_data.get('cloud', 'Amazon')
self.os = self.OS(machine_data.get('os', {}))
self.instance = self.Instance(machine_data.get('instance', {}), self.cloud_vendor)
self.is_kubernetes_context = machine_data.get('is_kubernetes_context', False)
class OS:
def __init__(self, os_data):
self.system = os_data.get('system')
self.name = os_data.get('name')
self.processor = os_data.get('processor')
self.id = os_data.get('id')
self.os_pretty_name = os_data.get('os_pretty_name', 'N/A')
class Instance:
def __init__(self, instance_data, cloud_vendor="Amazon"):
self.accountId = instance_data.get('accountId')
self.architecture = instance_data.get('architecture')
self.availabilityZone = instance_data.get('availabilityZone')
self.billingProducts = instance_data.get('billingProducts')
self.devpayProductCodes = instance_data.get('devpayProductCodes')
self.marketplaceProductCodes = instance_data.get('marketplaceProductCodes')
self.imageId = instance_data.get('imageId')
self.instanceId = instance_data.get('instanceId')
self.instanceType = instance_data.get('instanceType')
self.kernelId = instance_data.get('kernelId')
self.pendingTime = instance_data.get('pendingTime')
self.privateIp = instance_data.get('privateIp')
self.ramdiskId = instance_data.get('ramdiskId')
self.region = instance_data.get('region')
self.version = instance_data.get('version')
try:
self.instanceId = create_zesty_id(cloud=cloud_vendor, resource_id=self.instanceId)
except Exception as e:
print(f"Failed to create ZestyID, will stay with {self.instanceId} || ERROR : {e}") | zesty.zbs-api-k8s | /zesty.zbs_api_k8s-1.0.2023.8.30.1693381546-py3-none-any.whl/zesty/models/overview.py | overview.py |
from abc import ABC, abstractmethod
import enum
from typing import TYPE_CHECKING, Dict
if TYPE_CHECKING:
from ..actions import ZBSAction
pass
class ISpecialInstructions(ABC):
pass
class IActionHF(ABC):
class Status(enum.Enum):
NEW = 1
PENDING = 2
RUNNING = 3
CANCELED = 4
READY = 5
HOLDING = 6
REVERT = 7
PAUSE = 8 # should stop
STOPPED = 9 # action stopped
@abstractmethod
def get_action_id(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_id' is abstract, please implement")
@abstractmethod
def get_action_type(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_type' is abstract, please implement")
@abstractmethod
def get_status(self) -> Status:
raise NotImplementedError(
"ActionHF 'get_status' is abstract, please implement")
@abstractmethod
def set_status(self, status: Status):
raise NotImplementedError(
"ActionHF 'set_status' is abstract, please implement")
@abstractmethod
def get_special_instructions(self) -> ISpecialInstructions:
raise NotImplementedError(
"ActionHF 'get_special_instructions' is abstract, please implement")
@abstractmethod
def set_special_instructions(self, special_instructions: ISpecialInstructions):
raise NotImplementedError(
"ActionHF 'set_special_instructions' is abstract, please implement")
class IDeviceHF(ABC):
@abstractmethod
def get_dev_id(self) -> str:
raise NotImplementedError(
"DeviceHF 'get_dev_id' is abstract, please implement")
@abstractmethod
def get_size(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_size' is abstract, please implement")
@abstractmethod
def get_usage(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_usage' is abstract, please implement")
@abstractmethod
def get_unlock_ts(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_unlock_ts' is abstract, please implement")
class IFileSystemHF(ABC):
@abstractmethod
def get_fs_id(self) -> str:
raise NotImplementedError(
"IFileSystemHF 'get_fs_id' is abstract, please implement")
@abstractmethod
def get_devices(self) -> Dict[str, IDeviceHF]:
raise NotImplementedError(
"IFileSystemHF 'get_devices' is abstract, please implement")
@abstractmethod
def get_existing_actions(self) -> Dict[str, IActionHF]:
raise NotImplementedError(
"IFileSystemHF 'get_existing_actions' is abstract, please implement") | zesty.zbs-api-k8s | /zesty.zbs_api_k8s-1.0.2023.8.30.1693381546-py3-none-any.whl/zesty/models/hf_interface.py | hf_interface.py |
## Description
Zesty Disk API
## Installation

```shell
pip install zesty.zbs-api
```

## Usage

```
from zesty.zbs-api import *
```
## Alembic and Postgres Data Models.
For instructions on how to manage the Postgres data model, and on using Alembic to automatically prepare database migrations, please refer to the `README.md` in the `alembic` folder at the root of this repository.
## SQLAlchemy Dependency
Some of the models in this package require SQLAlchemy. To keep this package light, since it is used across many places in our code base, we do not install SQLAlchemy with it. If you need any of the models that depend on SQLAlchemy, add the Postgres-Utils package to the requirements.txt of any project that depends on zbs-api.
| zesty.zbs-api-migration-history | /zesty.zbs-api-migration-history-1.0.2023.7.30.1690729447.tar.gz/zesty.zbs-api-migration-history-1.0.2023.7.30.1690729447/README.md | README.md |
import json
import uuid
from copy import deepcopy
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, List, Optional, Union
class StepStatus(Enum):
# Note: this protocol needs to guard against deadlocks:
# e.g., a step must not get stuck because of a failed communication or similar
INIT = auto()
ACK = auto()
DONE = auto()
FAIL = auto()
class DeviceType(Enum):
STANDARD = "standard"
GP3 = "gp3"
GP2 = "gp2"
ST1 = "st1"
SC1 = "sc1"
IO1 = "io1"
IO2 = "io2"
def __str__(self):
return self.value
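DeviceType is a value-backed Enum whose `__str__` returns the raw value, so members can be dropped straight into string payloads. A standalone sketch of the same pattern (the `VolumeType` name is hypothetical):

```python
from enum import Enum

# Value-backed Enum whose members render as their API string.
class VolumeType(Enum):
    GP3 = "gp3"
    IO2 = "io2"

    def __str__(self):
        return self.value
```

Members can also be looked up from the wire value, e.g. `VolumeType("io2")`.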
class InstructionMetaclass(type):
def __new__(mcs, instruction_type: str):
# Resolve the concrete StepInstruction subclass by its class name
return globals()[instruction_type]
def __call__(cls, action_id: str, *args, **kwargs):
if not issubclass(cls, StepInstruction):
raise TypeError(f"{cls.__name__} is not of StepInstruction type")
instruction = cls(action_id, *args, **kwargs)
return instruction
@classmethod
def deserialize_instruction(mcs, instruction_val: Union[str, dict]) -> 'StepInstruction':
if isinstance(instruction_val, str):
instruction_val = json.loads(instruction_val)
try:
instruction_type = instruction_val.pop('instruction_type')
instruction = mcs(instruction_type)(**instruction_val)
except Exception as e:
raise Exception(f"Failed to create instance from {instruction_type} class | Error: {e}")
instruction.set_values_from_dict(instruction_val)
return instruction
@dataclass
class StepInstruction:
action_id: str
step_id: Optional[str]
status: Optional[StepStatus]
def __post_init__(self) -> None:
if self.status is None:
self.status = StepStatus.INIT
if self.step_id is None:
self.step_id = str(uuid.uuid4())
self._instruction_type = None
def as_dict(self) -> dict:
dict_values = deepcopy(self.__dict__)
# Store Enum members by name so the dict is JSON-serializable
for key, value in filter(lambda t: isinstance(t[1], Enum), self.__dict__.items()):
dict_values[key] = value.name
dict_values['instruction_type'] = self.instruction_type
dict_values.pop('_instruction_type')
return dict_values
def serialize(self) -> str:
return json.dumps(self.as_dict())
@property
def instruction_type(self):
if not self._instruction_type:
self._instruction_type = type(self).__name__
return self._instruction_type
def set_values_from_dict(self, instruction_data: dict):
# Set Enum members values
enum_members = list(filter(lambda k: isinstance(getattr(self, k), Enum), self.__dict__.keys()))
for key in enum_members:
enum_cls = getattr(self, key).__class__
setattr(self, key, getattr(enum_cls, instruction_data[key]))
for key, val in instruction_data.items():
if key in enum_members or key == 'instruction_type':
continue
setattr(self, key, val)
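The serialize/restore machinery above stores Enum members by name and restores them by looking the name up on the attribute's current Enum class. A simplified, hypothetical round trip (not the real StepInstruction hierarchy):

```python
import json
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    INIT = auto()
    DONE = auto()

@dataclass
class Step:
    step_id: str
    status: Status = Status.INIT

    def serialize(self) -> str:
        d = dict(self.__dict__)
        # Enum members are stored by name, as in StepInstruction.as_dict
        for k, v in self.__dict__.items():
            if isinstance(v, Enum):
                d[k] = v.name
        return json.dumps(d)

    def restore(self, data: dict) -> None:
        # Restore names back into members of the attribute's Enum class
        for k, v in data.items():
            current = getattr(self, k)
            if isinstance(current, Enum):
                v = type(current)[v]
            setattr(self, k, v)
```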
# TODO: move correct/common elements here
@dataclass
class MachineInstruction(StepInstruction):
...
@dataclass
class CloudInstruction(StepInstruction):
...
@dataclass
class AddDiskCloud(CloudInstruction):
instance_id: str # Note: will need region/az but should
# be available from other source/context
dev_type: DeviceType
dev_delete_on_terminate: bool
dev_size_gb: int
dev_iops: Optional[int] = None
dev_throughput: Optional[int] = None
@dataclass
class AddDiskMachine(MachineInstruction):
dev_path: str
fs_id: str
fs_mount_path: str
volume_id: str
dev_map: Optional[str] = None
@dataclass
class ModifyDiskCloud(CloudInstruction):
dev_id: str # this is volume id
dev_type: Optional[DeviceType] = None
dev_size_gb: Optional[int] = None # Note: today this is change in size - should be final size
dev_iops: Optional[int] = None
dev_throughput: Optional[int] = None
# check if we need dev old size
def __post_init__(self) -> None:
super().__post_init__() # TODO: check if necessary for __post_init__
if self.dev_type is None \
and self.dev_size_gb is None \
and self.dev_iops is None \
and self.dev_throughput is None:
raise Exception(f"Must modify at least one attribute")
@dataclass
class DetachDiskCloud(CloudInstruction):
volume_id: str
@dataclass
class TakeSnapshotCloud(CloudInstruction):
dev_id: str
snapshot_id: str
@dataclass
class ExtendDiskSizeMachine(MachineInstruction):
# Note: this is a copy of AddDiskMachine
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
# dev_path: str # This is necessary for Monitored disk Extend only actions
# Probably better to have a separate payload/step
btrfs_dev_id: int
@dataclass
class ResizeDisksMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
resize_btrfs_dev_ids: Dict[str, int] # action_id, btrfs_dev_id
@dataclass
class GradualRemoveChunk:
iterations: int # Note: 0 iterations will represent delete
chunk_size_gb: int # Note: respective chunk_size for 0 iter represents pause/delete threshold
@dataclass
class RemoveDiskMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
btrfs_dev_id: int
dev_path: str
chunk_scheme: List[GradualRemoveChunk] # Note: Order matters
cutoff_gb: int
def as_dict(self) -> dict:
return {
"instruction_type": self.instruction_type,
"action_id": self.action_id,
"step_id": self.step_id,
"status": self.status.name,
"fs_id": self.fs_id,
"fs_mount_path": self.fs_mount_path,
"btrfs_dev_id": self.btrfs_dev_id,
"dev_path": self.dev_path,
"chunk_scheme": [{"iterations": chunk.iterations, "chunk_size_gb": chunk.chunk_size_gb} for chunk in
self.chunk_scheme], # Note: Order matters
"cutoff_gb": self.cutoff_gb
}
###### TODO: discuss with Osher - separate instructions, or overlap into one
@dataclass
class ModifyRemoveDisksMachine(MachineInstruction):
# will service Revert/ChangingTarget/NewResize
# because they are all correlated to the same
# instruction data - changing cutoff
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
cutoff_gb: int
revert_btrfs_dev_ids: Dict[str, int] # action_id, btrfs_dev_id
###### TODO: END discuss with Osher
@dataclass
class RemoveDiskCloud(CloudInstruction):
volume_id: str
@dataclass
class StartMigrationMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
account_id: str
action_id: str
fs_mount_path: str
volume_id: str
dev_path: str
reboot: bool | zesty.zbs-api-migration-history | /zesty.zbs-api-migration-history-1.0.2023.7.30.1690729447.tar.gz/zesty.zbs-api-migration-history-1.0.2023.7.30.1690729447/src/step_instructions.py | step_instructions.py |
import uuid as uuid_gen
from abc import abstractmethod
from typing import Dict
from .models.hf_interface import IActionHF, ISpecialInstructions
class ZBSActionData:
mount_path = None
device = None
filesystem = None
fs_type = None
is_partition = False
partition_number = None
LV = None
VG = None
lvm_path = None
chunk_size = None
def __init__(self, mount_path=None,
device=None,
filesystem=None,
fs_type=None,
is_partition=False,
partition_number=None,
LV=None,
VG=None,
lvm_path=None,
chunk_size=None,
dev_id=None,
dev_path=None,
parent=None,
btrfs_dev_id=None,
partition_id=None,
windows_old_size=None,
size=None,
_map=None):
self.mount_path = mount_path
self.filesystem = filesystem
self.fs_type = fs_type
self.device = device
self.is_partition = is_partition
self.partition_number = partition_number
self.LV = LV
self.VG = VG
self.lvm_path = lvm_path
self.chunk_size = chunk_size
self.dev_id = dev_id
self.dev_path = dev_path
self.parent = parent
self.btrfs_dev_id = btrfs_dev_id
self.partition_id = partition_id
self.windows_old_size = windows_old_size
self.size = size
self.map = _map
def serialize(self):
return self.__dict__
def set_data(self, json):
self.mount_path = json.get('mount_path')
self.filesystem = json.get('filesystem')
self.fs_type = json.get('fs_type')
self.device = json.get('device')
self.is_partition = json.get('is_partition', False)
self.partition_number = json.get('partition_number', '')
self.LV = json.get('LV', '')
self.VG = json.get('VG', '')
self.lvm_path = json.get('lvm_path', '')
self.chunk_size = json.get('chunk_size', 0)
self.dev_id = json.get('dev_id')
self.dev_path = json.get('dev_path')
self.parent = json.get('parent')
self.btrfs_dev_id = json.get('btrfs_dev_id')
self.partition_id = json.get('partition_id')
self.windows_old_size = json.get('windows_old_size')
self.size = json.get('size')
self.map = json.get('_map')
return self
class ZBSAgentReceiver:
"""
The ZBSAgentReceiver (Receiver class in the Command pattern) contain some important business logic.
It knows how to perform any kind of action sent by the ZBS Backend.
ZBSAgent is an abstract class, while the concrete implementations should be per OS
"""
@abstractmethod
def do_nothing(self, data: ZBSActionData) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'do_nothing' is abstract, please implement a concrete per-OS receiver")
@abstractmethod
def extend_fs(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'extend_fs' is abstract, please implement a concrete per-OS receiver")
@abstractmethod
def add_disk(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'add_disk' is abstract, please implement a concrete per-OS receiver")
@abstractmethod
def balance_fs(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'balance_fs' is abstract, please implement a concrete per-OS receiver")
@abstractmethod
def remove_disk(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'remove_disk' is abstract, please implement a concrete per-OS receiver")
@abstractmethod
def balance_ebs_structure(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'balance_ebs_structure' is abstract, please implement a concrete per-OS receiver")
@abstractmethod
def start_migration(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'start_migration' is abstract, please implement a concrete per-OS receiver")
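The receiver above is deliberately abstract; each OS supplies its own implementation. A minimal standalone sketch of what a per-OS receiver looks like — the class name `LinuxZBSAgentReceiver` and the return values are illustrative, not part of the real agent:

```python
# Illustrative per-OS receiver: in the agent this would subclass
# ZBSAgentReceiver and implement every abstract method with real OS calls.
class LinuxZBSAgentReceiver:
    def do_nothing(self, data) -> None:
        pass  # heartbeat / no-op action

    def extend_fs(self, data, action_id, account_id=None) -> str:
        # a real implementation would grow the filesystem described by `data`
        return f"extend_fs handled for action {action_id}"

result = LinuxZBSAgentReceiver().extend_fs(None, "a-1")
```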
class SpecialInstructions(ISpecialInstructions):
"""
Constructor for special instructions with optional parameters:
* dev_id: identify the device for the filesystem to which the action is attached
* size: specify the capacity for a new device or the additional capacity when extending a device
* sub_actions: when an action implements multiple actions, specify a dictionary:
-- { int(specifies action priorities): list(actions that can be run in parallel) }
-- Actions in a list keyed to a higher order cannot start until all Actions of lower orders complete
"""
def __init__(self, dev_id: str = None, size: int = None, sub_actions: Dict[int, Dict[str, IActionHF]] = None):
self.dev_id = dev_id
self.size = size
self.sub_actions = sub_actions
def __repr__(self):
return str(self.__dict__)
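To make the `sub_actions` contract concrete — priorities map to batches, actions within a batch may run in parallel, and a higher-priority batch waits for every lower one — here is a standalone sketch with hypothetical action names:

```python
# sub_actions maps priority -> batch of actions; lower priorities run first,
# and actions within a single batch are allowed to run in parallel.
sub_actions = {
    1: {"add_disk_1": "AddDiskAction"},
    2: {"extend_fs_1": "ExtendFileSystemAction", "extend_fs_2": "ExtendFileSystemAction"},
}

def execution_batches(sub_actions):
    # batches of action names in ascending priority order; a batch must
    # wait until all lower-priority batches have completed
    return [sorted(sub_actions[priority]) for priority in sorted(sub_actions)]

batches = execution_batches(sub_actions)
```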
class ZBSAction(IActionHF):
"""
Base command class
Delegates the business logic to the receiver
There are receivers per OS (Linux and Windows for now)
"""
TYPE_FIELD_NAME = "type"
DATA_FIELD_NAME = "data"
STATUS_FIELD_NAME = "status"
UUID_FIELD_NAME = "uuid"
    SPECIAL_INSTRUCTIONS_FIELD_NAME = "_ZBSAction__special_instructions"  # name-mangled key of the private attribute
__uuid = None
__status: IActionHF.Status = IActionHF.Status.NEW
__special_instructions: SpecialInstructions
subclasses = {}
def __init__(self, receiver: ZBSAgentReceiver = None, data: ZBSActionData = None, uuid: str = None):
self.receiver = receiver
self.data = data
if uuid is not None:
self.__uuid = uuid
else:
self.__uuid = str(uuid_gen.uuid4())
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
cls.subclasses[cls.__name__] = cls
def __repr__(self):
special_instructions = self.get_special_instructions() if isinstance(self.get_special_instructions(),
Dict) else self.get_special_instructions().__dict__
repr_dict = dict(zip(['Action Type', 'Action Status', 'SpecialInstructions'],
[self.get_action_type(),
str(self.get_status().name),
special_instructions]))
return str(repr_dict)
def set_data(self, data: ZBSActionData):
self.data = data
def set_receiver(self, receiver: ZBSAgentReceiver):
self.receiver = receiver
def serialize(self):
        result = dict(self.__dict__)  # copy, so serializing does not mutate the instance's __dict__
result[ZBSAction.TYPE_FIELD_NAME] = self.get_action_type()
result[ZBSAction.DATA_FIELD_NAME] = self.data.serialize() if self.data is not None else None
result[ZBSAction.STATUS_FIELD_NAME] = self.get_status().name
result[ZBSAction.UUID_FIELD_NAME] = self.get_action_id()
if hasattr(self, '_ZBSAction__special_instructions'):
result[
ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME] = self.get_special_instructions().__dict__ if self.__special_instructions is not None else None
return result
# ActionHF interface implementation
def get_action_id(self) -> str:
return self.__uuid
def get_action_type(self) -> str:
return str(type(self).__name__)
def get_status(self) -> IActionHF.Status:
return self.__status
def set_status(self, status: IActionHF.Status):
self.__status = status
def get_special_instructions(self) -> SpecialInstructions:
return self.__special_instructions
def set_special_instructions(self, special_instructions: SpecialInstructions):
self.__special_instructions = special_instructions
@staticmethod
def deserialize_type(json):
return json[ZBSAction.TYPE_FIELD_NAME]
@staticmethod
def deserialize_data(json):
return ZBSActionData().set_data(json[ZBSAction.DATA_FIELD_NAME])
@staticmethod
def deserialize_uuid(serialized_action):
return serialized_action.get(ZBSAction.UUID_FIELD_NAME)
@staticmethod
def deserialize_status(serialized_action):
return serialized_action.get(ZBSAction.STATUS_FIELD_NAME)
@staticmethod
def deserialize_special_instructions(serialized_action):
if not isinstance(serialized_action, dict):
serialized_action = serialized_action.serialize()
special_instructions = SpecialInstructions(
dev_id = serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME, {}).get('dev_id'),
size = serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME, {}).get('size'),
sub_actions = serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME, {}).get('sub_actions'),
)
for key, val in serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME, {}).items():
if key not in ['dev_id', 'size', 'sub_actions']:
setattr(special_instructions, str(key), val)
return special_instructions
@staticmethod
def deserialize_action(serialized_action):
action_type = ZBSAction.deserialize_type(serialized_action)
action_data = ZBSAction.deserialize_data(serialized_action) if serialized_action.get(
ZBSAction.DATA_FIELD_NAME) is not None else None
action_uuid = ZBSAction.deserialize_uuid(serialized_action)
action_status = ZBSAction.deserialize_status(serialized_action)
action_to_perform = ZBSActionFactory.create_action(action_type, action_uuid)
action_to_perform.set_data(action_data)
action_to_perform.set_status(IActionHF.Status[serialized_action.get('status')])
if ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME in serialized_action:
special_instructions = ZBSAction.deserialize_special_instructions(serialized_action)
action_to_perform.set_special_instructions(special_instructions)
return action_to_perform
@abstractmethod
def execute(self):
raise NotImplementedError("BaseAction is abstract, please implement a concrete action")
class DoNothingAction(ZBSAction):
"""
Do nothing action
"""
def execute(self):
print("Do nothing || Action ID : {}".format(self.get_action_id()))
class Factory:
def create(self, uuid): return DoNothingAction(uuid=uuid)
class ExtendFileSystemAction(ZBSAction):
"""
Extend File System Action.
"""
def execute(self, fs):
try:
return self.receiver.extend_fs(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return ExtendFileSystemAction(uuid=uuid)
class AddDiskAction(ZBSAction):
"""
Add Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.add_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return AddDiskAction(uuid=uuid)
class RemoveDiskAction(ZBSAction):
"""
Remove Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.remove_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return RemoveDiskAction(uuid=uuid)
class BalanceFileSystemAction(ZBSAction):
"""
Balance File System Action.
"""
def execute(self):
try:
self.receiver.balance_fs(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceFileSystemAction(uuid=uuid)
class BalanceEBSStructureAction(ZBSAction):
"""
Balance EBS structure Action.
"""
def execute(self):
try:
self.receiver.extend_fs(self.data, self.get_action_id())
self.receiver.remove_disk(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceEBSStructureAction(uuid=uuid)
class MigrationStartAction(ZBSAction):
"""
Migration Start Action.
The purpose of this action is to get a BE request to start a migration action for a mount point
Returns: if migration started successfully or failed with the error
"""
def execute(self, account_id):
try:
return self.receiver.start_migration(self.get_special_instructions(), self.get_action_id(), account_id)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return MigrationStartAction(uuid=uuid)
class ZBSActionFactory:
actions = {}
@staticmethod
def create_action(action_type, uuid=None):
if action_type not in ZBSActionFactory.actions:
action_class = ZBSAction.subclasses.get(action_type)
if action_class:
ZBSActionFactory.actions[action_type] = action_class.Factory()
else:
raise ValueError(f'Could not find action class `{action_type}`')
        return ZBSActionFactory.actions[action_type].create(uuid)

# ---------- end of src/actions.py ----------
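A self-contained sketch of the round trip the classes above implement: subclasses self-register via `__init_subclass__`, `serialize` emits a type-tagged dict, and the factory rebuilds the right subclass from it. `MiniAction` and `DoNothingMini` are illustrative stand-ins, not the real classes:

```python
import uuid as uuid_gen

class MiniAction:
    subclasses = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses[cls.__name__] = cls  # every subclass self-registers

    def __init__(self, uuid=None):
        self.uuid = uuid or str(uuid_gen.uuid4())

    def serialize(self):
        # type-tagged payload, mirroring ZBSAction's TYPE/UUID/STATUS fields
        return {"type": type(self).__name__, "uuid": self.uuid, "status": "NEW"}

    @staticmethod
    def deserialize(payload):
        return MiniAction.subclasses[payload["type"]](uuid=payload["uuid"])

class DoNothingMini(MiniAction):
    pass

original = DoNothingMini()
restored = MiniAction.deserialize(original.serialize())
```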
import json
from typing import List
import requests
import config as cfg
"""
USAGE:
First you have to init factory with base settings
factory = RequestFactory(stage=${STAGE}, version=${VERSION}, api_key=${API_KEY})
Then need to create request instance depend on the type of the request you want to send
metrics_request = factory.create_request("Metrics")
Pass the data to set_data function
metrics_request.set_data(
agent_version,
overview,
plugins
)
Then send it to the BackEnd and receive the response
response = metrics_request.send()
"""
DEFAULT_BASE_URL = "https://api{}.cloudvisor.io"
ESTABLISH_CONN_TIMEOUT = 10
RECEIVE_RESPONSE_TIMEOUT = 30
class RequestFactory:
requests = {}
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
    def create_request(self, request_type):
        if request_type not in RequestFactory.requests:
            request_class = Request.subclasses.get(request_type)
            if request_class:
                RequestFactory.requests[request_type] = request_class.Factory(self.stage, self.version, self.api_key,
                                                                              self.api_base)
            else:
                raise ValueError(f'Could not find request class `{request_type}`')
        return RequestFactory.requests[request_type].create()
class Request:
stage = None
version = None
api_key = None
prefix = None
api_base = None
api_is_private_endpoint = False
subclasses = {}
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.prefix = ""
if self.stage == 'staging':
self.prefix = "-staging"
if api_base != DEFAULT_BASE_URL:
self.api_is_private_endpoint = True
self.api_base = api_base.format(self.prefix)
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
cls.subclasses[cls.__name__] = cls
def send(self):
res = requests.post(
self.build_url(),
data=json.dumps(self.message, separators=(',', ':')),
headers={"Cache-Control": "no-cache", "Pragma": "no-cache", "x-api-key": self.api_key},
timeout=(ESTABLISH_CONN_TIMEOUT, RECEIVE_RESPONSE_TIMEOUT)
)
return self.Response(res)
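`Request.__init__` above derives the final API base from the stage and detects a private endpoint by comparing against `DEFAULT_BASE_URL`. A standalone sketch of that resolution logic (the helper name and the private endpoint URL are illustrative):

```python
DEFAULT_BASE_URL = "https://api{}.cloudvisor.io"

def resolve_api_base(stage: str, api_base: str = DEFAULT_BASE_URL):
    # staging injects a "-staging" prefix into the default host;
    # any non-default base is treated as a private endpoint
    prefix = "-staging" if stage == "staging" else ""
    is_private_endpoint = api_base != DEFAULT_BASE_URL
    return api_base.format(prefix), is_private_endpoint

prod_url, prod_private = resolve_api_base("production")
staging_url, _ = resolve_api_base("staging")
private_url, private = resolve_api_base("production", "https://vpc-endpoint.internal")
```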
class Metrics(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-metrics")
else:
return '{}{}'.format(self.api_base, cfg.post_metrics_ep)
def set_data(self, agent_version, overview, plugins, package_version=None, autoupdate_last_execution_time=None):
self.message = {
"agent": {
"version": agent_version,
"package_version": package_version,
"autoupdate_last_execution_time": autoupdate_last_execution_time
},
"overview": overview,
"plugins": plugins
}
class Response:
raw_data: dict = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return Metrics(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class MetricsCollection(Request):
    message: List[dict] = []  # class-level default; call clear() after send() so batches do not accumulate
def build_url(self):
if self.api_is_private_endpoint:
return f'{self.api_base}/bulk-post-metrics'
else:
return f'{self.api_base}{cfg.bulk_post_metrics_ep}'
def set_data(self, metrics: dict):
self.message.append(metrics)
def clear(self):
self.message = []
class Response:
raw_data: dict = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return MetricsCollection(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class NotifyException(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-notify-exception")
else:
return '{}{}'.format(self.api_base, cfg.notify_exception_ep)
def set_data(self, account_id, instance_id, exception, msg):
self.message = {
"exception": exception,
"message": msg,
"instance_id": instance_id,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return NotifyException(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class FsResizeCompleted(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-delete-resize-item")
else:
return '{}{}'.format(self.api_base, cfg.fs_resize_completed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return FsResizeCompleted(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class HoldingRemoveAction(Request):
message = {}
def build_url(self):
return '{}{}'.format(self.api_base, cfg.hold_remove_action_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, index, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"index": index,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return HoldingRemoveAction(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class FsResizeFailed(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-fs-resize-failed")
else:
return '{}{}'.format(self.api_base, cfg.resize_failed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, error, resize_steps, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"error": error,
"resize_steps": resize_steps,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return FsResizeFailed(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class SyncMachineActions(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-sync-machine-actions")
else:
return '{}{}'.format(self.api_base, cfg.sync_machine_actions_ep)
def set_data(self, action_id, account_id, status, fs_id):
self.message = {
'account_id': account_id,
'actions': {
action_id: {
'fs_id': fs_id,
'instruction_status': status
}
}
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return SyncMachineActions(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class MigrationStartActionCompleted(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-migration-start-action-complete")
else:
return '{}{}'.format(self.api_base, cfg.migration_start_action_completed_ep)
def set_data(self, account_id, fs_id, action_id, mount_path, volume_id, region, cloud_vendor, dev_path, exit_code,
error):
self.message = {
"account_id": account_id,
"fs_id": fs_id,
"action_id": action_id,
"mount_path": mount_path,
"volume_id": volume_id,
"region": region,
"cloud_vendor": cloud_vendor,
"dev_path": dev_path,
"exit_code": exit_code,
"error": error
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return MigrationStartActionCompleted(stage=self.stage, version=self.version,
api_key=self.api_key,
                                                               api_base=self.api_base)

# ---------- end of src/protocol.py ----------
import json
import time
import traceback
from typing import Dict
from copy import deepcopy
from decimal import Decimal
from zesty.id_handler import create_zesty_id, create_zesty_filesystem_id
from ..actions import ZBSAction
from .BlockDevice import BlockDevice
from .Usage import Usage
GB_IN_BYTES = 1024**3
class FileSystem:
"""
This object interacts with DynamoDB representing a FileSystem.
As per the data model migration ZES-2884,
these will be backwards compatible and awkward in appearance until
the code is brought up to date.
"""
def __init__(
self,
fs_id: str,
account_id: str = None,
account_uuid: str = None,
agent_update_required: bool = None,
btrfs_version: str = None,
cloud: str = None,
cloud_vendor: str = None,
cycle_period: int = None,
delete_on_termination: bool = None,
devices: Dict[str, BlockDevice] = None,
encrypted: dict = None,
existing_actions: Dict[str, ZBSAction] = None,
expiredAt: int = None,
fs_cost: float = None,
fs_devices_to_count: int = None,
fs_size: int = None,
fs_type: str = None,
fs_usage: int = None,
has_unallocated_space: bool = None,
inodes: Dict[str, Usage] = None,
instance_id: str = None,
instance_type: str = None,
is_ephemeral: bool = None,
is_partition: bool = None,
is_zesty_disk: bool = None,
label: str = None,
last_update: int = None,
LV: str = None,
lvm_path: str = None,
mount_path: str = None,
name: str = None,
org_id: str = None,
partition_id: str = None,
partition_number: int = None,
platform: str = None,
potential_savings: float = None,
region: str = None,
resizable: bool = None,
space: Dict[str, Usage] = None,
tags: Dict[str, str] = None,
unallocated_chunk: int = None,
update_data_ts: int = 0,
VG: str = None,
wrong_fs_alert: bool = None,
zesty_disk_iops: int = None,
zesty_disk_throughput: int = None,
zesty_disk_vol_type: str = None,
max_utilization_in_72_hrs: int = None,
package_version: str = None,
autoupdate_last_execution_time: str = None,
statvfs_raw_data: Dict[str, str] = None,
pvc_id: str = None,
mount_options: list = None,
leading_device: str = None,
policies: Dict[str, dict] = None,
instance_tags: Dict[str, str] = None,
        is_manageable: bool = False,  # related to the data model migration
is_emr: bool = False
):
# Initialize empty dict not as default arg
existing_actions = {} if existing_actions is None else existing_actions
devices = {} if devices is None else devices
inodes = {} if inodes is None else inodes
space = {} if space is None else space
tags = {} if tags is None else tags
instance_tags = {} if instance_tags is None else instance_tags
self.account_id = account_id
self.account_uuid = account_uuid
self.agent_update_required = agent_update_required
self.btrfs_version = btrfs_version
if cloud is None and cloud_vendor is None:
self.cloud = 'Amazon'
self.cloud_vendor = 'Amazon'
elif cloud:
self.cloud = cloud
self.cloud_vendor = cloud
elif cloud_vendor:
self.cloud = cloud_vendor
self.cloud_vendor = cloud_vendor
self.cycle_period = cycle_period
self.devices = self.init_devices(devices)
self.delete_on_termination = delete_on_termination
self.encrypted = encrypted
self.existing_actions = existing_actions
self.expiredAt = expiredAt
self.fs_cost = fs_cost
self.fs_devices_to_count = fs_devices_to_count
try:
self.fs_id = create_zesty_filesystem_id(
cloud=self.cloud_vendor,
fs_id=fs_id
)
except Exception as e:
self.fs_id = fs_id
self.fs_size = fs_size
self.fs_type = fs_type
self.fs_usage = fs_usage
self.has_unallocated_space = has_unallocated_space
self.inodes = Usage(inodes)
self.instance_id = instance_id
self.instance_type = instance_type
self.is_ephemeral = is_ephemeral
self.is_partition = is_partition
self.is_zesty_disk = is_zesty_disk
self.label = label
if last_update is None:
self.last_update = int(time.time()) - 60
else:
self.last_update = last_update
self.LV = LV
self.lvm_path = lvm_path
self.mount_path = mount_path
self.name = name
self.org_id = org_id
self.partition_id = partition_id
self.partition_number = partition_number
self.platform = platform
self.potential_savings = potential_savings
self.region = region
self.resizable = resizable
self.space = Usage(space)
self.tags = tags
self.unallocated_chunk = unallocated_chunk
self.update_data_ts = update_data_ts
self.VG = VG
self.wrong_fs_alert = wrong_fs_alert
self.zesty_disk_iops = zesty_disk_iops
self.zesty_disk_throughput = zesty_disk_throughput
self.zesty_disk_vol_type = zesty_disk_vol_type
self.max_utilization_in_72_hrs = max_utilization_in_72_hrs
self.package_version = package_version
self.autoupdate_last_execution_time = autoupdate_last_execution_time
self.statvfs_raw_data = statvfs_raw_data
self.pvc_id = pvc_id
self.mount_options = mount_options
self.leading_device = leading_device
self.policies = policies
self.instance_tags = instance_tags
        self.is_manageable = is_manageable  # related to the data model migration
self.is_emr = is_emr
@staticmethod
def init_devices(devices: Dict[str, BlockDevice]):
if not devices:
return {}
else:
devices = deepcopy(devices)
for dev in devices:
if isinstance(devices[dev], BlockDevice):
continue
devices[dev] = BlockDevice(
**devices.get(dev, {})
)
return devices
def as_dict(self) -> dict:
return_dict = json.loads(json.dumps(self, default=self.object_dumper))
return {k: v for k, v in return_dict.items() if v is not None}
@staticmethod
def object_dumper(obj) -> dict:
try:
return obj.__dict__
except AttributeError as e:
if isinstance(obj, Decimal):
return int(obj)
print(f"Got exception in object_dumper value: {obj} | type : {type(obj)}")
print(traceback.format_exc())
return obj
def serialize(self) -> dict:
return self.as_dict()
def __repr__(self) -> str:
return f"FileSystem:{self.fs_id}" | zesty.zbs-api-migration-history | /zesty.zbs-api-migration-history-1.0.2023.7.30.1690729447.tar.gz/zesty.zbs-api-migration-history-1.0.2023.7.30.1690729447/src/models/FileSystem.py | FileSystem.py |
from typing import Dict, Union
from sqlalchemy.orm import Session, sessionmaker, Query
from sqlalchemy.sql.elements import or_, Label
from .InstancesTags import InstancesTags
try:
from sqlalchemy import Column, engine, case, func, cast, String, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects.postgresql import BOOLEAN, FLOAT, INTEGER, BIGINT, \
JSON, TIMESTAMP, VARCHAR
except ImportError:
raise ImportError("sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
Base = declarative_base()
class EbsVolume(Base):
# TODO: Move this model into our Alembic system
# when a modification of this model is needed.
__tablename__ = "disks"
volume_id = Column(VARCHAR, primary_key=True)
org_id = Column(VARCHAR, index=True)
account_uuid = Column(VARCHAR, index=True)
account_id = Column(VARCHAR, index=True)
region = Column(VARCHAR, index=True)
volume_type = Column(VARCHAR, index=True)
cloud = Column(VARCHAR, index=True)
availability_zone = Column(VARCHAR)
create_time = Column(TIMESTAMP)
encrypted = Column(BOOLEAN)
size = Column(INTEGER)
snapshot_id = Column(VARCHAR)
state = Column(VARCHAR)
iops = Column(INTEGER)
tags = Column(JSON)
attachments = Column(JSON)
attached_to = Column(JSON)
monthly_cost = Column(FLOAT, default=0)
is_unused_resource = Column(INTEGER, default=0)
unused_since = Column(VARCHAR)
agent_installed = Column(BOOLEAN, default=False)
_zbs_supported_os = Column(INTEGER)
potential_savings = Column(FLOAT, default=0)
# dict for custom_order_by class method
col_to_actual_sorting_col = {"instance_tags": "instance_tags_keys"}
def __init__(
self,
volume_aws_schema: Dict,
account_uuid: str = None):
if account_uuid:
self.account_uuid = account_uuid
else:
self.account_uuid = volume_aws_schema["account_uuid"]
self.volume_id = volume_aws_schema["volume_id"]
self.org_id = volume_aws_schema["org_id"]
self.account_id = volume_aws_schema["account_id"]
self.cloud = volume_aws_schema["cloud"]
self.region = volume_aws_schema["region"]
self.volume_type = volume_aws_schema["volume_type"]
self.availability_zone = volume_aws_schema["availability_zone"]
self.create_time = volume_aws_schema["create_time"]
self.encrypted = volume_aws_schema["encrypted"]
self.size = volume_aws_schema["size"]
self.snapshot_id = volume_aws_schema["snapshot_id"]
self.state = volume_aws_schema["state"]
self.iops = volume_aws_schema.get("iops", 0)
self.tags = volume_aws_schema.get("tags", {})
self.attachments = volume_aws_schema.get("attachments", [])
self.attached_to = volume_aws_schema.get("attached_to", [])
self.monthly_cost = volume_aws_schema.get("monthly_cost", 0)
self.is_unused_resource = volume_aws_schema.get(
"is_unused_resource", 0)
self.unused_since = volume_aws_schema.get("unused_since", None)
self.agent_installed = volume_aws_schema.get("agent_installed", False)
self._zbs_supported_os = volume_aws_schema.get("_zbs_supported_os")
self.potential_savings = volume_aws_schema.get("potential_savings", 0)
def __repr__(self):
return f"{self.__tablename__}:{self.volume_id}"
@classmethod
def instance_id_filter(cls, query: Query, value: str):
val = f'%{value}%'
query = query.filter(
case((or_(cls.attached_to == None, func.json_array_length(cls.attached_to) == 0), False),
else_=cast(cls.attached_to, String).ilike(val)))
return query
@classmethod
def instance_name_filter(cls, query: Query, value: str):
subq = query.session.query(InstancesTags.instance_name)
val = '%{}%'.format(value.replace("%", "\\%"))
query = query.filter((subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (
cls.account_id == InstancesTags.account_id))).ilike(val))
return query
@classmethod
def instance_tags_filter(cls, query: Query, value: str):
session = query.session
subq = session.query(InstancesTags.instance_tags)
python_types_to_pg = {int: BIGINT, float: FLOAT, bool: BOOLEAN}
for key_val in value:
key = key_val.get('key')
val = key_val.get('value')
if key is not None and val is not None:
if not isinstance(val, str):
query = query.filter(cast(cast(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)).op('->')(key)), String), python_types_to_pg[type(val)]) == val)
else:
val = f'%{val}%'
query = query.filter(cast(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)).op('->')(key)), String).ilike(val))
elif key is not None:
query = query.filter(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id))).op('?')(key))
elif val is not None:
if isinstance(val, str):
query = query.filter(cast(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)), String)
.regexp_replace(r'.+\: "[^"]*(' + str(val) + r')[^"]*"[,\s}].*', "\\1") == f"{val}")
else:
if isinstance(val, bool):
val = f'"{val}"'
query = query.filter(cast(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)), String)
.regexp_replace(r'.+\: (' + str(val) + r')[,\s}].*', "\\1") == f"{val}")
return query
# Custom query
@classmethod
def custom_query(cls, session: Union[Session, sessionmaker]) -> Query:
q = session.query(cls)
subq_2 = session.query(func.json_object_keys(InstancesTags.instance_tags))
subq_3 = session.query(InstancesTags.instance_tags)
instance_name_clause = "regexp_replace(cast(array((select instances_tags.instance_name from instances_tags " \
"inner join json_array_elements(disks.attached_to) as attached_to_set " \
"on instances_tags.instance_id = replace(cast(attached_to_set.value as varchar), '\"', '') " \
"and instances_tags.account_id = disks.account_id)) as varchar), '[\\{\\}\"]', '', 'g')"
q = q.add_columns(case((or_(cls.attached_to == None, func.json_array_length(cls.attached_to) == 0), ''),
else_=cast(cls.attached_to, String).regexp_replace(r'[\[\]"]', '', 'g'))
.label("instance_id"),
Label('instance_name', text(instance_name_clause)),
func.array(subq_2.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) &
(cls.account_id == InstancesTags.account_id)))
.label('instance_tags_keys'),
subq_3.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) &
(cls.account_id == InstancesTags.account_id))
.label('instance_tags'))
return q
@classmethod
def custom_order_by(cls, sorting_column: str, sorting_order: str) -> str:
actual_sorting_column = cls.col_to_actual_sorting_col.get(sorting_column, sorting_column)
return f"{actual_sorting_column} {sorting_order}"
def get_volume_id(self):
return self.volume_id
def as_dict(self):
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def create_tables(engine: engine.base.Engine) -> None: #type: ignore
Base.metadata.create_all(engine, checkfirst=True)

# ==== end of src/models/EbsVolume.py (package zesty.zbs-api-migration-history) ====
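The `custom_order_by` classmethod above resolves UI-facing column names to the actual sorting columns through the `col_to_actual_sorting_col` lookup, falling back to the requested name. A minimal standalone sketch of that resolution (the mapping values are copied from the model; the function name is illustrative):

```python
# Sketch of the sorting-column resolution used by custom_order_by.
COL_TO_ACTUAL_SORTING_COL = {
    "policies": "policies_name",
    "instance_tags": "instance_tags_keys",
}

def build_order_by(sorting_column: str, sorting_order: str) -> str:
    # Fall back to the requested column when no override is registered.
    actual = COL_TO_ACTUAL_SORTING_COL.get(sorting_column, sorting_column)
    return f"{actual} {sorting_order}"
```

The returned string is handed to the query layer as a raw `ORDER BY` fragment, which is why the mapping must only ever contain trusted column names.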
import time
from enum import Enum, auto
from typing import Dict, List, Optional, Union
from uuid import UUID as _UUID
from uuid import uuid4
from sqlalchemy import Enum as sa_ENUM
from sqlalchemy import INT
from sqlalchemy.sql.schema import ForeignKey
from sqlalchemy.dialects.postgresql import ARRAY, UUID
try:
from sqlalchemy import Column, String, case, cast, engine, func, or_
from sqlalchemy.dialects.postgresql import (BIGINT, BOOLEAN, FLOAT, JSON,
TIMESTAMP, VARCHAR)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Query, Session, aliased, sessionmaker
except ImportError:
raise ImportError(
"sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
from ..actions import ZBSAction
from .BlockDevice import BlockDevice
from .Usage import Usage
Base = declarative_base()
class BaseMixin:
"""Base for database models with some default fields"""
created_at = Column(TIMESTAMP, server_default=func.now())
updated_at = Column(TIMESTAMP, onupdate=func.now())
class ManagedFsMixin:
fs_id = Column(VARCHAR, primary_key=True)
account_id = Column(VARCHAR, index=True, default=None)
account_uuid = Column(VARCHAR, index=True, default=None)
agent_update_required = Column(BOOLEAN, default=None)
btrfs_version = Column(VARCHAR, default=None)
cloud = Column(VARCHAR, default=None)
cloud_vendor = Column(VARCHAR, default=None)
cycle_period = Column(BIGINT, default=None)
delete_on_termination = Column(BOOLEAN, default=None)
devices = Column(JSON, default=None)
encrypted = Column(JSON, default=None)
existing_actions = Column(JSON, default=None)
expiredAt = Column(BIGINT, default=None)
fs_cost = Column(FLOAT, default=None)
fs_devices_to_count = Column(BIGINT, default=None)
fs_size = Column(BIGINT, default=None)
fs_type = Column(VARCHAR, default=None)
fs_usage = Column(BIGINT, default=None)
has_unallocated_space = Column(BOOLEAN, default=None)
inodes = Column(JSON, default=None)
instance_id = Column(VARCHAR, default=None)
instance_type = Column(VARCHAR, default=None)
is_ephemeral = Column(BOOLEAN, default=None)
is_partition = Column(BOOLEAN, default=None)
is_zesty_disk = Column(BOOLEAN, default=None)
label = Column(VARCHAR, default=None)
last_update = Column(BIGINT, default=None)
LV = Column(VARCHAR, default=None)
lvm_path = Column(VARCHAR, default=None)
mount_path = Column(VARCHAR, default=None)
name = Column(VARCHAR, default=None)
org_id = Column(VARCHAR, index=True)
partition_id = Column(VARCHAR, default=None)
partition_number = Column(BIGINT, default=None)
platform = Column(VARCHAR, default=None)
potential_savings = Column(FLOAT, default=None)
region = Column(VARCHAR, index=True)
resizable = Column(BOOLEAN, default=None)
space = Column(JSON, default=None)
tags = Column(JSON, default=None)
unallocated_chunk = Column(BIGINT, default=None)
update_data_ts = Column(BIGINT, default=0)
VG = Column(VARCHAR, default=None)
wrong_fs_alert = Column(BOOLEAN, default=None)
zesty_disk_iops = Column(BIGINT, default=None)
zesty_disk_throughput = Column(BIGINT, default=None)
zesty_disk_vol_type = Column(VARCHAR, default=None)
max_utilization_in_72_hrs = Column(BIGINT, default=None)
package_version = Column(VARCHAR, default=None)
autoupdate_last_execution_time = Column(VARCHAR, default=None)
policies = Column(JSON, default=None)
instance_tags = Column(JSON, default=None)
migration_uuid = Column(UUID(as_uuid=True), nullable=True)
is_manageable = Column(BOOLEAN, default=False)
# dict for custom_order_by class method
col_to_actual_sorting_col = {
"policies": "policies_name",
"instance_tags": "instance_tags_keys"}
def __init__(
self,
fs_id: str,
account_id: str = None,
account_uuid: str = None,
agent_update_required: bool = None,
btrfs_version: str = None,
cloud: str = None,
cloud_vendor: str = None,
cycle_period: int = None,
delete_on_termination: bool = None,
devices: Dict[str, BlockDevice] = None,
encrypted: dict = None,
existing_actions: Dict[str, ZBSAction] = None,
expiredAt: int = None,
fs_cost: float = None,
fs_devices_to_count: int = None,
fs_size: int = None,
fs_type: str = None,
fs_usage: int = None,
has_unallocated_space: bool = None,
inodes: Dict[str, Usage] = None,
instance_id: str = None,
instance_type: str = None,
is_ephemeral: bool = None,
is_partition: bool = None,
is_zesty_disk: bool = None,
label: str = None,
last_update: int = None,
LV: str = None,
lvm_path: str = None,
mount_path: str = None,
name: str = None,
org_id: str = None,
partition_id: str = None,
partition_number: int = None,
platform: str = None,
potential_savings: float = None,
region: str = None,
resizable: bool = None,
space: Dict[str, Usage] = None,
tags: Dict[str, str] = None,
unallocated_chunk: int = None,
update_data_ts: int = 0,
VG: str = None,
wrong_fs_alert: bool = None,
zesty_disk_iops: int = None,
zesty_disk_throughput: int = None,
zesty_disk_vol_type: str = None,
max_utilization_in_72_hrs: int = None,
package_version: str = None,
autoupdate_last_execution_time: str = None,
statvfs_raw_data: Dict[str, str] = None, # unused to support initialization with **dict, do not remove
policies: Dict[str, dict] = None,
instance_tags: Dict[str, str] = None,
is_emr: bool = False, # unused to support initialization with **dict, do not remove
is_manageable: bool = False,
**kwargs
):
self.fs_id = fs_id
self.account_id = account_id
self.account_uuid = account_uuid
self.agent_update_required = agent_update_required
self.btrfs_version = btrfs_version
if cloud is None and cloud_vendor is None:
self.cloud = 'Amazon'
self.cloud_vendor = 'Amazon'
elif cloud:
self.cloud = cloud
self.cloud_vendor = cloud
elif cloud_vendor:
self.cloud = cloud_vendor
self.cloud_vendor = cloud_vendor
self.cycle_period = cycle_period
self.delete_on_termination = delete_on_termination
self.devices = devices
if devices:
for dev in self.devices:
if isinstance(self.devices[dev], BlockDevice):
self.devices[dev] = self.devices[dev].asdict()
else:
self.devices[dev] = self.devices.get(dev, {})
self.encrypted = encrypted
if existing_actions:
# serialize each ZBSAction before storing it on the instance
self.existing_actions = {
action: existing_actions[action].serialize()
for action in existing_actions}
self.expiredAt = expiredAt
self.fs_cost = fs_cost
self.fs_devices_to_count = fs_devices_to_count
self.fs_size = fs_size
self.fs_type = fs_type
self.fs_usage = fs_usage
self.has_unallocated_space = has_unallocated_space
self.inodes = inodes
self.instance_id = instance_id
self.instance_type = instance_type
self.is_ephemeral = is_ephemeral
self.is_partition = is_partition
self.is_zesty_disk = is_zesty_disk
self.label = label
if last_update:
self.last_update = last_update
else:
self.last_update = int(time.time()) - 60
self.LV = LV
self.lvm_path = lvm_path
self.mount_path = mount_path
self.name = name
self.org_id = org_id
self.partition_id = partition_id
self.partition_number = partition_number
self.platform = platform
self.potential_savings = potential_savings
self.region = region
self.resizable = resizable
self.space = space
self.tags = tags
self.unallocated_chunk = unallocated_chunk
self.update_data_ts = update_data_ts
self.VG = VG
self.wrong_fs_alert = wrong_fs_alert
self.zesty_disk_iops = zesty_disk_iops
self.zesty_disk_throughput = zesty_disk_throughput
self.zesty_disk_vol_type = zesty_disk_vol_type
self.max_utilization_in_72_hrs = max_utilization_in_72_hrs
self.package_version = package_version
self.autoupdate_last_execution_time = autoupdate_last_execution_time
self.policies = policies
self.instance_tags = instance_tags
self.is_manageable = is_manageable
def __repr__(self) -> str:
return f"{self.__tablename__}:{self.fs_id}"
def asdict(self) -> dict:
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def as_dict(self) -> dict:
return self.asdict()
# Custom filters
@classmethod
def policies_filter(cls, query: Query, value: str):
query = query.filter(
cast(cls.policies, String).contains(f'"name": "{value}"'))
return query
@classmethod
def instance_name_filter(cls, query: Query, value: str):
val = '%{}%'.format(value.replace("%", "\\%"))
query = query.filter(
case((cls.instance_tags == None, ''),
else_=func.replace(cast(cls.instance_tags.op('->')('Name'), String), "\"", "")).ilike(val))
return query
# Custom query
@classmethod
def custom_query(cls, session: Union[Session, sessionmaker]) -> Query:
clsb = aliased(cls)
subq = session.query(func.json_object_keys(clsb.instance_tags))
q = session.query(cls)
q = q.add_columns(case((or_(cls.policies == None, cast(cls.policies, String) == 'null'), ''),
else_=cast(cls.policies, String).regexp_replace(r'.+"name":\s"([^"]+).+', "\\1"))
.label("policies_name"),
case((cls.instance_tags == None, ''),
else_=func.replace(cast(cls.instance_tags.op('->')('Name'), String), "\"", ""))
.label('instance_name'),
case((cast(cls.instance_tags, String) == 'null', []),
else_=func.array(subq.scalar_subquery().where(cls.fs_id == clsb.fs_id)))
.label('instance_tags_keys')
)
return q
@classmethod
def custom_order_by(cls, sorting_column: str, sorting_order: str) -> str:
actual_sorting_column = cls.col_to_actual_sorting_col.get(
sorting_column, sorting_column)
return f"{actual_sorting_column} {sorting_order}"
class ManagedFs(ManagedFsMixin, BaseMixin, Base):
__tablename__ = "managed_filesystems"
class MigrationStatus(Enum):
Active = auto()
Aborting = auto()
Aborted = auto()
Completed = auto()
Failed = auto()
class RunningMigrations(BaseMixin, Base):
__tablename__ = "active_migration"
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), nullable=False, primary_key=True)
finished_at = Column(TIMESTAMP, nullable=True)
account_id = Column(VARCHAR, default=None)
region = Column(VARCHAR(255))
reboot = Column(BOOLEAN, default=False)
# array of day numbers when reboot is allowed 0-6
days = Column(ARRAY(VARCHAR))
# timeframe from-to in %I:%M %p
from_ = Column(VARCHAR)
to = Column(VARCHAR)
status = Column(sa_ENUM(MigrationStatus), nullable=False, server_default=MigrationStatus.Active.name)
is_rebooting = Column(BOOLEAN, default=False) # TODO: can this be deleted?
snapshot_id = Column(VARCHAR(255))
snapshot_remove_after = Column(INT, nullable=True) # in days
snapshot_create_started_at = Column(TIMESTAMP, nullable=True)
snapshot_deleted_at = Column(TIMESTAMP, nullable=True)
ebs_id = Column(VARCHAR(255))
ebs_remove_after = Column(INT, nullable=True) # in days
ebs_detached_at = Column(TIMESTAMP, nullable=True)
ebs_deleted_at = Column(TIMESTAMP, nullable=True)
def __init__(
self,
fs_id: str,
migration_uuid: _UUID,
account_id: str = None,
region: str = None,
days: Optional[List[int]] = None,
from_: Optional[str] = None,
to: Optional[str] = None,
reboot: bool = False,
status: MigrationStatus = MigrationStatus.Active,
ebs_remove_after: int = 1,
snapshot_remove_after: int = 7):
self.migration_uuid = migration_uuid
self.fs_id = fs_id
self.account_id = account_id
self.region = region
self.days = days
self.from_ = from_
self.to = to
self.reboot = reboot
self.status = status
self.ebs_remove_after = ebs_remove_after
self.snapshot_remove_after = snapshot_remove_after
class MigrationHistoryStatus(Enum):
Pending = auto()
Running = auto()
PendingRestart = auto()
Aborting = auto()
Aborted = auto()
Completed = auto()
Failed = auto()
class MigrationHistoryPhase(Enum):
phase: str
human_friendly_phase: str
index: int
abortable: bool
weight: int
def __new__(cls, phase: str, human_friendly_phase: str, index: int, abortable: bool, weight: int):
entry = object.__new__(cls)
entry.phase = phase
entry.human_friendly_phase = human_friendly_phase
entry.index = index
entry.abortable = abortable
entry.weight = weight
return entry
INITIALIZE = "", "Initializing migration", 0, True, 5
CREATE_SNAPSHOT = "", "Creating instance snapshot", 1, True, 5
CREATE_NEW_FILESYSTEM = "", "Creating destination filesystem", 2, True, 5
PENDING_AGENT_COMM = "", "Awaiting agent communication", 3, True, 5
AGENT_PREPARING_EZSWITCH = "PrepareRestart", "Preparing EZswitch", 10, True, 5
AGENT_FIRST_RESTART = "PendingRestart", "Pending restart", 11, True, 5
AGENT_DATA_MIGRATION = "EZSwitch", "Copying data", 12, False, 50
AGENT_POST_DATA_MIGRATION = "PostMigration", "Preparing for finalization", 13, False, 5
AGENT_SECOND_RESTART = "PendingRestart", "Pending restart", 14, False, 5
AGENT_CLEANUP = "Cleanup", "Finalizing", 15, False, 5
DETACH_OLD_FILESYSTEM = "", "Detaching source filesystem", 20, False, 5
@staticmethod
def from_phase(phase: str, index: int = 0):
if not phase:
return None
# special case to distinguish between first and second restart
if phase == "PendingRestart":
return MigrationHistoryPhase.AGENT_FIRST_RESTART if index <= 1 \
else MigrationHistoryPhase.AGENT_SECOND_RESTART
for mp in MigrationHistoryPhase:
if mp.phase == phase:
return mp
return None
@staticmethod
def from_name(name: str):
if not name:
return None
for mp in MigrationHistoryPhase:
if mp.name == name:
return mp
return None
@staticmethod
def get_abortable_phases_names():
return [phase.name for phase in MigrationHistoryPhase if phase.abortable]
class MigrationHistory(BaseMixin, Base):
__tablename__ = "migration_history"
migration_uuid = Column(UUID(as_uuid=True), ForeignKey("active_migration.migration_uuid", ondelete="CASCADE"),
nullable=False, primary_key=True, index=True)
time_start = Column(TIMESTAMP)
time_end = Column(TIMESTAMP)
status = Column(sa_ENUM(MigrationHistoryStatus), nullable=False, server_default=MigrationHistoryStatus.Pending.name)
phase = Column(VARCHAR, primary_key=True)
progress = Column(FLOAT) # this is used only to determine which status is the latest in case of a race condition
failure_reason = Column(VARCHAR)
# should be returned from the agent in seconds
estimated = Column(INT)
def __init__(
self,
migration_uuid: 'UUID',
start_time: Optional[int],
end_time: Optional[int],
migration_status: MigrationHistoryStatus,
migration_phase: MigrationHistoryPhase,
progress: int,
estimated: int):
self.migration_uuid = migration_uuid
self.time_start = start_time
self.time_end = end_time
self.status = migration_status
self.phase = migration_phase.name
self.progress = progress
self.estimated = estimated
class WrongActionException(Exception):
pass
class MigrationActions(BaseMixin, Base):
__tablename__ = "migration_actions"
id = Column(INT, primary_key=True, autoincrement=True)
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), ForeignKey("active_migration.migration_uuid", ondelete="CASCADE"),
nullable=False)
action = Column(VARCHAR)
value = Column(VARCHAR)
allowed_actions = ['start', 'reboot', 'reboot_now', 'abort']
def __init__(self, fs_id, migration_uuid, action, value):
self.fs_id = fs_id
self.migration_uuid = migration_uuid
self.set_action(action)
self.value = value
def set_action(self, action):
if action not in self.allowed_actions:
raise WrongActionException
self.action = action
def create_tables(engine: engine.base.Engine) -> None:
Base.metadata.create_all(engine, checkfirst=True)

# ==== end of src/models/ManagedFS.py (package zesty.zbs-api-migration-history) ====
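The `ManagedFsMixin` constructor above keeps `cloud` and `cloud_vendor` in sync: whichever is supplied wins, and both default to `'Amazon'` when neither is given. A self-contained sketch of that normalization (the function name is illustrative, not part of the library):

```python
from typing import Optional, Tuple

def normalize_cloud(cloud: Optional[str], cloud_vendor: Optional[str]) -> Tuple[Optional[str], Optional[str]]:
    """Mirror the ManagedFsMixin behaviour: both fields always end up equal,
    defaulting to 'Amazon' only when neither value is provided."""
    if cloud is None and cloud_vendor is None:
        return "Amazon", "Amazon"
    value = cloud if cloud else cloud_vendor
    return value, value
```

Keeping both columns populated with the same value lets older consumers that still read `cloud` keep working while newer code reads `cloud_vendor`.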
import json
import time
from datetime import datetime
from typing import Dict, Union
from .ManagedFS import Base, BaseMixin
try:
from sqlalchemy import (Column, PrimaryKeyConstraint, String, case, cast,
engine, func, or_, select, text)
from sqlalchemy.dialects.postgresql import (BIGINT, BOOLEAN, FLOAT, JSON,
VARCHAR)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Query, Session, aliased, sessionmaker
except ImportError:
raise ImportError(
"sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
class InstancesTags(BaseMixin, Base):
__tablename__ = "instances_tags"
instance_id = Column(VARCHAR, primary_key=True)
account_id = Column(VARCHAR, index=True, default=None)
account_uuid = Column(VARCHAR, index=True, default=None)
instance_name = Column(VARCHAR, default=None)
instance_tags = Column(JSON, default=None)
expired_at = Column(BIGINT, default=None)
__table_args__ = (
PrimaryKeyConstraint('instance_id', name='instances_tags_pkey'),)
def __init__(
self,
instance_id: str,
account_id: str = None,
account_uuid: str = None,
instance_name: str = None,
instance_tags: dict = None,
expired_at: int = None
):
self.instance_id = instance_id
self.account_id = account_id
self.account_uuid = account_uuid
self.instance_name = instance_name
self.instance_tags = instance_tags
self.expired_at = expired_at or int(datetime.utcnow().timestamp()) + 3 * 3600
def __eq__(self, other) -> bool:
return self.__hash__() == other.__hash__()
def __hash__(self) -> int:
return hash(''.join(map(lambda c: getattr(self, c.name) or '',
filter(lambda c: c.name not in ['instance_tags', 'expired_at', 'created_at', 'updated_at'],
self.__table__.columns))) +
json.dumps(self.instance_tags))
def __repr__(self) -> str:
return f"{self.__tablename__}:{self.instance_id}"
def asdict(self) -> dict:
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def as_dict(self) -> dict:
return self.asdict()
def create_tables(engine: engine.base.Engine) -> None:
Base.metadata.create_all(engine, checkfirst=True)

# ==== end of src/models/InstancesTags.py (package zesty.zbs-api-migration-history) ====
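`InstancesTags` derives equality from a hash over its stable columns plus a JSON dump of the tags, deliberately skipping volatile fields such as `expired_at`, `created_at`, and `updated_at`. A simplified standalone version of that idea (plain dicts instead of ORM rows; `sort_keys=True` is an assumption added here for determinism):

```python
import json

# Fields excluded from the identity hash, matching the model's filter.
VOLATILE_FIELDS = {"instance_tags", "expired_at", "created_at", "updated_at"}

def row_hash(row: dict) -> int:
    # Concatenate the stable string fields, then append the tags as JSON
    # so rows with identical tag content hash identically.
    stable = "".join(row.get(k) or "" for k in sorted(row) if k not in VOLATILE_FIELDS)
    return hash(stable + json.dumps(row.get("instance_tags") or {}, sort_keys=True))
```

Because `expired_at` is excluded, refreshing a row's TTL does not change its identity, so re-ingested tag snapshots compare equal to the stored ones.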
import json
import traceback
from decimal import Decimal
from typing import Dict
from zesty.id_handler import create_zesty_id
GB_IN_BYTES = 1024**3
class BlockDevice:
def __init__(
self,
size: int,
btrfs_dev_id: str = None,
cloud_vendor: str = 'Amazon',
created: str = None,
dev_usage: int = None,
iops: int = None,
throughput: int = None,
lun: int = None,
map: str = None,
iops_stats: Dict[str, int] = None,
parent: str = None,
unlock_ts: int = 0,
volume_id: str = None,
volume_type: str = None,
device: str = None,
btrfs_size: int = None,
extendable: bool = True,
removable: bool = True
):
"""
Block Device class doc:
:param size: Size of the device in Bytes
:param btrfs_dev_id: ID of the device inside the BTRFS structure
:param cloud_vendor: Cloud vendor (AWS/Azure/GCP)
:param created: Device creation date
:param dev_usage: How much of the device is in use (in Bytes)
:param iops: Device IOPS amount
:param lun: LUN number (Only for Azure)
:param map: The mount slot of the device inside the OS
:param iops_stats: Dict with IOPS statistics
:param parent: If it's a partition so this one represent the parent device
:param unlock_ts: TS when the device will be ready to be extended again
:param volume_id: Device ID
:param volume_type: Type of the device in the cloud
:param device: Device mount slot from the cloud
:param btrfs_size: The usable size for the filesystem in bytes
:param extendable: Whether ZestyDisk Handsfree logic is allowed to extend the device
:param removable: Whether ZestyDisk Handsfree logic is allowed to remove the device from the filesystem
"""
# Init empty dict here instead of passing as default value
iops_stats = {} if iops_stats is None else iops_stats
self.size = size
self.cloud_vendor = cloud_vendor
try:
self.volume_id = create_zesty_id(
cloud=self.cloud_vendor,
resource_id=volume_id
)
except Exception:  # fall back to the raw volume id if normalization fails
self.volume_id = volume_id
self.btrfs_dev_id = btrfs_dev_id
self.created = created
self.dev_usage = dev_usage
self.iops = iops
self.throughput = throughput
self.lun = lun
self.map = map
self.iops_stats = iops_stats
if device:
self.device = device
if parent:
self.parent = parent
if not unlock_ts:
self.unlock_ts = 0
else:
self.unlock_ts = unlock_ts
self.volume_type = volume_type
self.btrfs_size = btrfs_size
self.extendable = extendable
self.removable = removable
def as_dict(self) -> dict:
return_dict = json.loads(json.dumps(self, default=self.object_dumper))
return {k: v for k, v in return_dict.items() if v is not None}
@staticmethod
def object_dumper(obj) -> dict:
try:
return obj.__dict__
except AttributeError as e:
if isinstance(obj, Decimal):
return int(obj)
print(f"Got exception in object_dumper value: {obj} | type : {type(obj)}")
print(traceback.format_exc())
return obj
def serialize(self) -> dict:
return self.as_dict()
def __repr__(self) -> str:
return str(self.as_dict())

# ==== end of src/models/BlockDevice.py (package zesty.zbs-api-migration-history) ====
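`BlockDevice.as_dict` round-trips the object through `json.dumps` with a custom `default` handler so nested objects serialize via `__dict__` and `Decimal` values degrade to `int`, then drops `None` entries. A minimal self-contained sketch of that pattern (the `Dev` class is a stand-in, not the real model):

```python
import json
from decimal import Decimal

def object_dumper(obj):
    # Prefer the object's __dict__; fall back to int for Decimal values.
    try:
        return obj.__dict__
    except AttributeError:
        if isinstance(obj, Decimal):
            return int(obj)
        raise

class Dev:
    def __init__(self):
        self.size = Decimal(10)   # Decimal, e.g. from a DynamoDB-style store
        self.name = "vol-1"
        self.parent = None        # None values are stripped from the output

def as_clean_dict(obj) -> dict:
    data = json.loads(json.dumps(obj, default=object_dumper))
    return {k: v for k, v in data.items() if v is not None}
```

The serialize-then-parse round trip is slower than building the dict directly, but it handles arbitrarily nested attribute objects without per-field code.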
from abc import ABC, abstractmethod
import enum
from typing import TYPE_CHECKING, Dict
if TYPE_CHECKING:
from ..actions import ZBSAction
class ISpecialInstructions(ABC):
pass
class IActionHF(ABC):
class Status(enum.Enum):
NEW = 1
PENDING = 2
RUNNING = 3
CANCELED = 4
READY = 5
HOLDING = 6
REVERT = 7
PAUSE = 8 # should stop
STOPPED = 9 # action stopped
@abstractmethod
def get_action_id(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_id' is abstract, please implement")
@abstractmethod
def get_action_type(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_type' is abstract, please implement")
@abstractmethod
def get_status(self) -> Status:
raise NotImplementedError(
"ActionHF 'get_status' is abstract, please implement")
@abstractmethod
def set_status(self, status: Status):
raise NotImplementedError(
"ActionHF 'set_status' is abstract, please implement")
@abstractmethod
def get_special_instructions(self) -> ISpecialInstructions:
raise NotImplementedError(
"ActionHF 'get_special_instructions' is abstract, please implement")
@abstractmethod
def set_special_instructions(self, special_instructions: ISpecialInstructions):
raise NotImplementedError(
"ActionHF 'set_special_instructions' is abstract, please implement")
class IDeviceHF(ABC):
@abstractmethod
def get_dev_id(self) -> str:
raise NotImplementedError(
"DeviceHF 'get_dev_id' is abstract, please implement")
@abstractmethod
def get_size(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_size' is abstract, please implement")
@abstractmethod
def get_usage(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_usage' is abstract, please implement")
@abstractmethod
def get_unlock_ts(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_unlock_ts' is abstract, please implement")
class IFileSystemHF(ABC):
@abstractmethod
def get_fs_id(self) -> str:
raise NotImplementedError(
"IFileSystemHF 'get_fs_id' is abstract, please implement")
@abstractmethod
def get_devices(self) -> Dict[str, IDeviceHF]:
raise NotImplementedError(
"IFileSystemHF 'get_devices' is abstract, please implement")
@abstractmethod
def get_existing_actions(self) -> Dict[str, IActionHF]:
raise NotImplementedError(
"IFileSystemHF 'get_existing_actions' is abstract, please implement")

# ==== end of src/models/hf_interface.py (package zesty.zbs-api-migration-history) ====
import json
import uuid
from copy import deepcopy
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, List, Optional, Union
class StepStatus(Enum):
# Note: this protocol needs to consider deadlocks:
# e.g., a step must not get stuck because of a failed communication or otherwise
INIT = auto()
ACK = auto()
DONE = auto()
FAIL = auto()
class DeviceType(Enum):
STANDARD = "standard"
GP3 = "gp3"
GP2 = "gp2"
ST1 = "st1"
SC1 = "sc1"
IO1 = "io1"
IO2 = "io2"
def __str__(self):
return self.value
class InstructionMetaclass(type):
def __new__(mcs, instruction_type: str):
return globals()[instruction_type]
def __call__(cls, action_id: str, *args, **kwargs):
if not issubclass(cls, StepInstruction):
raise TypeError(f"{cls.__name__} is not of StepInstruction type")
instruction = cls(action_id, *args, **kwargs)
return instruction
@classmethod
def deserialize_instruction(mcs, instruction_val: Union[str, dict]) -> 'StepInstruction':
if isinstance(instruction_val, str):
instruction_val = json.loads(instruction_val)
instruction_type = instruction_val.pop('instruction_type', None)
if instruction_type is None:
raise Exception("Instruction payload is missing the 'instruction_type' field")
try:
instruction = mcs(instruction_type)(**instruction_val)
except Exception as e:
raise Exception(f"Failed to create instance from {instruction_type} class | Error: {e}")
instruction.set_values_from_dict(instruction_val)
return instruction
@dataclass
class StepInstruction:
action_id: str
step_id: Optional[str]
status: Optional[StepStatus]
def __post_init__(self) -> None:
if self.status is None:
self.status = StepStatus.INIT
if self.step_id is None:
self.step_id = str(uuid.uuid4())
self._instruction_type = None
def as_dict(self) -> dict:
dict_values = deepcopy(self.__dict__)
# replace Enum members with their names so the dict is JSON-serializable
for key, value in filter(lambda t: isinstance(t[1], Enum), self.__dict__.items()):
dict_values[key] = value.name
dict_values['instruction_type'] = self.instruction_type
dict_values.pop('_instruction_type')
return dict_values
def serialize(self) -> str:
return json.dumps(self.as_dict())
@property
def instruction_type(self):
if not self._instruction_type:
self._instruction_type = type(self).__name__
return self._instruction_type
def set_values_from_dict(self, instruction_data: dict):
# Set Enum members values
enum_members = list(filter(lambda k: isinstance(getattr(self, k), Enum), self.__dict__.keys()))
for key in enum_members:
enum_cls = getattr(self, key).__class__
setattr(self, key, getattr(enum_cls, instruction_data[key]))
for key, val in instruction_data.items():
if key in enum_members or key == 'instruction_type':
continue
setattr(self, key, val)
# TODO: move correct/common elements here
@dataclass
class MachineInstruction(StepInstruction):
...
@dataclass
class CloudInstruction(StepInstruction):
...
@dataclass
class AddDiskCloud(CloudInstruction):
instance_id: str # Note: will need region/az but should
# be available from other source/context
dev_type: DeviceType
dev_delete_on_terminate: bool
dev_size_gb: int
dev_iops: Optional[int] = None
dev_throughput: Optional[int] = None
@dataclass
class AddDiskMachine(MachineInstruction):
dev_path: str
fs_id: str
fs_mount_path: str
volume_id: str
dev_map: Optional[str] = None
@dataclass
class ModifyDiskCloud(CloudInstruction):
dev_id: str # this is volume id
dev_type: Optional[DeviceType] = None
dev_size_gb: Optional[int] = None # Note: today this is change in size - should be final size
dev_iops: Optional[int] = None
dev_throughput: Optional[int] = None
# check if we need dev old size
def __post_init__(self) -> None:
super().__post_init__() # TODO: check if necessary for __post_init__
if self.dev_type is None \
and self.dev_size_gb is None \
and self.dev_iops is None \
and self.dev_throughput is None:
raise Exception("Must modify at least one attribute")
@dataclass
class DetachDiskCloud(CloudInstruction):
volume_id: str
@dataclass
class TakeSnapshotCloud(CloudInstruction):
dev_id: str
snapshot_id: str
@dataclass
class ExtendDiskSizeMachine(MachineInstruction):
# Note: this is a copy of AddDiskMachine
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
# dev_path: str # This is necessary for Monitored disk Extend only actions
# Probably better to have a separate payload/step
btrfs_dev_id: int
@dataclass
class ResizeDisksMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
resize_btrfs_dev_ids: Dict[str, int] # action_id, btrfs_dev_id
@dataclass
class GradualRemoveChunk:
iterations: int # Note: 0 iterations will represent delete
chunk_size_gb: int # Note: respective chunk_size for 0 iter represents pause/delete threshold
@dataclass
class RemoveDiskMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
btrfs_dev_id: int
dev_path: str
chunk_scheme: List[GradualRemoveChunk] # Note: Order matters
cutoff_gb: int
def as_dict(self) -> dict:
return {
"instruction_type": self.instruction_type,
"action_id": self.action_id,
"step_id": self.step_id,
"status": self.status.name,
"fs_id": self.fs_id,
"fs_mount_path": self.fs_mount_path,
"btrfs_dev_id": self.btrfs_dev_id,
"dev_path": self.dev_path,
"chunk_scheme": [{"iterations": chunk.iterations, "chunk_size_gb": chunk.chunk_size_gb} for chunk in
self.chunk_scheme], # Note: Order matters
"cutoff_gb": self.cutoff_gb
}
###### TODO: discuss with Osher - separate instructions, or overlap into one
@dataclass
class ModifyRemoveDisksMachine(MachineInstruction):
# will service Revert/ChangingTarget/NewResize
# because they are all correlated to the same
# instruction data - changing cutoff
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
cutoff_gb: int
revert_btrfs_dev_ids: Dict[str, int] # action_id, btrfs_dev_id
###### TODO: END discuss with Osher
@dataclass
class RemoveDiskCloud(CloudInstruction):
volume_id: str
@dataclass
class StartMigrationMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
account_id: str
action_id: str
fs_mount_path: str
volume_id: str
dev_path: str
reboot: bool | zesty.zbs-api-securonix-agent-baseurl-debug | /zesty.zbs_api_securonix_agent_baseurl_debug-2.0.2023.7.26.1690405274-py3-none-any.whl/zesty/step_instructions.py | step_instructions.py |
import uuid as uuid_gen
from abc import abstractmethod
from typing import Dict
from .models.hf_interface import IActionHF, ISpecialInstructions
class ZBSActionData:
mount_path = None
device = None
filesystem = None
fs_type = None
is_partition = False
partition_number = None
LV = None
VG = None
lvm_path = None
chunk_size = None
def __init__(self, mount_path=None,
device=None,
filesystem=None,
fs_type=None,
is_partition=False,
partition_number=None,
LV=None,
VG=None,
lvm_path=None,
chunk_size=None,
dev_id=None,
dev_path=None,
parent=None,
btrfs_dev_id=None,
partition_id=None,
windows_old_size=None,
size=None,
_map=None):
self.mount_path = mount_path
self.filesystem = filesystem
self.fs_type = fs_type
self.device = device
self.is_partition = is_partition
self.partition_number = partition_number
self.LV = LV
self.VG = VG
self.lvm_path = lvm_path
self.chunk_size = chunk_size
self.dev_id = dev_id
self.dev_path = dev_path
self.parent = parent
self.btrfs_dev_id = btrfs_dev_id
self.partition_id = partition_id
self.windows_old_size = windows_old_size
self.size = size
self.map = _map
def serialize(self):
return self.__dict__
def set_data(self, json):
self.mount_path = json.get('mount_path')
self.filesystem = json.get('filesystem')
self.fs_type = json.get('fs_type')
self.device = json.get('device')
self.is_partition = json.get('is_partition', False)
self.partition_number = json.get('partition_number', '')
self.LV = json.get('LV', '')
self.VG = json.get('VG', '')
self.lvm_path = json.get('lvm_path', '')
self.chunk_size = json.get('chunk_size', 0)
self.dev_id = json.get('dev_id')
self.dev_path = json.get('dev_path')
self.parent = json.get('parent')
self.btrfs_dev_id = json.get('btrfs_dev_id')
self.partition_id = json.get('partition_id')
self.windows_old_size = json.get('windows_old_size')
self.size = json.get('size')
self.map = json.get('_map')
return self
class ZBSAgentReceiver:
"""
The ZBSAgentReceiver (Receiver class in the Command pattern) contains the important business logic.
It knows how to perform any kind of action sent by the ZBS Backend.
ZBSAgentReceiver is an abstract class, while the concrete implementations should be per OS
"""
@abstractmethod
def do_nothing(self, data: ZBSActionData) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'do_nothing' is abstract, please implement a concrete per OD receiver")
@abstractmethod
def extend_fs(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'extend_fs' is abstract, please implement a concrete per OD receiver")
@abstractmethod
def add_disk(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'add_disk' is abstract, please implement a concrete per OD receiver")
@abstractmethod
def balance_fs(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'balance_fs' is abstract, please implement a concrete per OD receiver")
@abstractmethod
def remove_disk(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'remove_disk' is abstract, please implement a concrete per OD receiver")
@abstractmethod
def balance_ebs_structure(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'balance_ebs_structure' is abstract, please implement a concrete per OD receiver")
@abstractmethod
def start_migration(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
"ZBSAgentReceiver 'start_migration' is abstract, please implement a concrete per OD receiver")
class SpecialInstructions(ISpecialInstructions):
"""
Constructor for special instructions with optional parameters:
* dev_id: identify the device for the filesystem to which the action is attached
* size: specify the capacity for a new device or the additional capacity when extending a device
* sub_actions: when an action comprises multiple sub-actions, specify a dictionary:
-- { int(action priority): dict(actions that can be run in parallel) }
-- Actions keyed to a higher order cannot start until all actions of lower orders complete
"""
def __init__(self, dev_id: str = None, size: int = None, sub_actions: Dict[int, Dict[str, IActionHF]] = None):
self.dev_id = dev_id
self.size = size
self.sub_actions = sub_actions
def __repr__(self):
return str(self.__dict__)
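The priority semantics of `sub_actions` described in the docstring above can be exercised with a small self-contained sketch: keys are integer priority orders, and each order's actions may run in parallel but must all finish before the next order starts (executed sequentially here for simplicity; the callables are stand-ins for real `IActionHF` objects, not part of the zesty API):

```python
def run_sub_actions(sub_actions):
    # Execute actions grouped by integer priority: lower orders run to
    # completion before higher orders begin. Values are {name: callable}.
    results = []
    for order in sorted(sub_actions):
        for name, action in sub_actions[order].items():
            results.append((order, name, action()))
    return results

sub_actions = {
    1: {"extend": lambda: "extended"},
    0: {"add_disk": lambda: "added"},
}
# run_sub_actions(sub_actions) runs "add_disk" (order 0) before "extend" (order 1)
```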
class ZBSAction(IActionHF):
"""
Base command class
Delegates the business logic to the receiver
There are receivers per OS (Linux and Windows for now)
"""
TYPE_FIELD_NAME = "type"
DATA_FIELD_NAME = "data"
STATUS_FIELD_NAME = "status"
UUID_FIELD_NAME = "uuid"
SPECIAL_INSTRUCTIONS_FIELD_NAME = "_ZBSAction__special_instructions"
__uuid = None
__status: IActionHF.Status = IActionHF.Status.NEW
__special_instructions: SpecialInstructions
subclasses = {}
def __init__(self, receiver: ZBSAgentReceiver = None, data: ZBSActionData = None, uuid: str = None):
self.receiver = receiver
self.data = data
if uuid is not None:
self.__uuid = uuid
else:
self.__uuid = str(uuid_gen.uuid4())
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
cls.subclasses[cls.__name__] = cls
def __repr__(self):
special_instructions = self.get_special_instructions()
if not isinstance(special_instructions, Dict):
special_instructions = special_instructions.__dict__
repr_dict = dict(zip(['Action Type', 'Action Status', 'SpecialInstructions'],
[self.get_action_type(),
str(self.get_status().name),
special_instructions]))
return str(repr_dict)
def set_data(self, data: ZBSActionData):
self.data = data
def set_receiver(self, receiver: ZBSAgentReceiver):
self.receiver = receiver
def serialize(self):
result = self.__dict__
result[ZBSAction.TYPE_FIELD_NAME] = self.get_action_type()
result[ZBSAction.DATA_FIELD_NAME] = self.data.serialize() if self.data is not None else None
result[ZBSAction.STATUS_FIELD_NAME] = self.get_status().name
result[ZBSAction.UUID_FIELD_NAME] = self.get_action_id()
if hasattr(self, '_ZBSAction__special_instructions'):
result[
ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME] = self.get_special_instructions().__dict__ if self.__special_instructions is not None else None
return result
# ActionHF interface implementation
def get_action_id(self) -> str:
return self.__uuid
def get_action_type(self) -> str:
return str(type(self).__name__)
def get_status(self) -> IActionHF.Status:
return self.__status
def set_status(self, status: IActionHF.Status):
self.__status = status
def get_special_instructions(self) -> SpecialInstructions:
return self.__special_instructions
def set_special_instructions(self, special_instructions: SpecialInstructions):
self.__special_instructions = special_instructions
@staticmethod
def deserialize_type(json):
return json[ZBSAction.TYPE_FIELD_NAME]
@staticmethod
def deserialize_data(json):
return ZBSActionData().set_data(json[ZBSAction.DATA_FIELD_NAME])
@staticmethod
def deserialize_uuid(serialized_action):
return serialized_action.get(ZBSAction.UUID_FIELD_NAME)
@staticmethod
def deserialize_status(serialized_action):
return serialized_action.get(ZBSAction.STATUS_FIELD_NAME)
@staticmethod
def deserialize_special_instructions(serialized_action):
if not isinstance(serialized_action, dict):
serialized_action = serialized_action.serialize()
si_json = serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME) or {}
special_instructions = SpecialInstructions(
dev_id=si_json.get('dev_id'),
size=si_json.get('size'),
sub_actions=si_json.get('sub_actions'),
)
for key, val in si_json.items():
if key not in ['dev_id', 'size', 'sub_actions']:
setattr(special_instructions, str(key), val)
return special_instructions
@staticmethod
def deserialize_action(serialized_action):
action_type = ZBSAction.deserialize_type(serialized_action)
action_data = ZBSAction.deserialize_data(serialized_action) if serialized_action.get(
ZBSAction.DATA_FIELD_NAME) is not None else None
action_uuid = ZBSAction.deserialize_uuid(serialized_action)
action_status = ZBSAction.deserialize_status(serialized_action)
action_to_perform = ZBSActionFactory.create_action(action_type, action_uuid)
action_to_perform.set_data(action_data)
action_to_perform.set_status(IActionHF.Status[serialized_action.get('status')])
if ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME in serialized_action:
special_instructions = ZBSAction.deserialize_special_instructions(serialized_action)
action_to_perform.set_special_instructions(special_instructions)
return action_to_perform
@abstractmethod
def execute(self):
raise NotImplementedError("BaseAction is abstract, please implement a concrete action")
class DoNothingAction(ZBSAction):
"""
Do nothing action
"""
def execute(self):
print("Do nothing || Action ID : {}".format(self.get_action_id()))
class Factory:
def create(self, uuid): return DoNothingAction(uuid=uuid)
class ExtendFileSystemAction(ZBSAction):
"""
Extend File System Action.
"""
def execute(self, fs):
try:
return self.receiver.extend_fs(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return ExtendFileSystemAction(uuid=uuid)
class AddDiskAction(ZBSAction):
"""
Add Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.add_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return AddDiskAction(uuid=uuid)
class RemoveDiskAction(ZBSAction):
"""
Remove Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.remove_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return RemoveDiskAction(uuid=uuid)
class BalanceFileSystemAction(ZBSAction):
"""
Balance File System Action.
"""
def execute(self):
try:
self.receiver.balance_fs(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceFileSystemAction(uuid=uuid)
class BalanceEBSStructureAction(ZBSAction):
"""
Balance EBS structure Action.
"""
def execute(self):
try:
self.receiver.extend_fs(self.data, self.get_action_id())
self.receiver.remove_disk(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceEBSStructureAction(uuid=uuid)
class MigrationStartAction(ZBSAction):
"""
Migration Start Action.
The purpose of this action is to get a BE request to start a migration action for a mount point
Returns: if migration started successfully or failed with the error
"""
def execute(self, account_id):
try:
return self.receiver.start_migration(self.get_special_instructions(), self.get_action_id(), account_id)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return MigrationStartAction(uuid=uuid)
class ZBSActionFactory:
actions = {}
@staticmethod
def create_action(action_type, uuid=None):
if action_type not in ZBSActionFactory.actions:
action_class = ZBSAction.subclasses.get(action_type)
if action_class:
ZBSActionFactory.actions[action_type] = action_class.Factory()
else:
raise ValueError(f'Could not find action class `{action_type}`')
return ZBSActionFactory.actions[action_type].create(uuid) | zesty.zbs-api-securonix-agent-baseurl-debug | /zesty.zbs_api_securonix_agent_baseurl_debug-2.0.2023.7.26.1690405274-py3-none-any.whl/zesty/actions.py | actions.py |
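For reference, the subclass-registry pattern used by `ZBSAction` and `ZBSActionFactory` — subclasses self-register via `__init_subclass__`, and the factory lazily instantiates each action type's nested `Factory` helper — reduces to this self-contained sketch (class names here are illustrative, not the real zesty classes):

```python
class Action:
    subclasses = {}

    def __init__(self, uuid=None):
        self.uuid = uuid

    def __init_subclass__(cls, **kwargs):
        # Every direct or indirect subclass registers itself by name.
        super().__init_subclass__(**kwargs)
        cls.subclasses[cls.__name__] = cls


class DoNothing(Action):
    class Factory:
        def create(self, uuid):
            return DoNothing(uuid=uuid)


class ActionFactory:
    _factories = {}

    @staticmethod
    def create_action(action_type, uuid=None):
        # Cache one Factory instance per action type, then delegate creation.
        if action_type not in ActionFactory._factories:
            action_class = Action.subclasses.get(action_type)
            if action_class is None:
                raise ValueError(f"Could not find action class `{action_type}`")
            ActionFactory._factories[action_type] = action_class.Factory()
        return ActionFactory._factories[action_type].create(uuid)


action = ActionFactory.create_action("DoNothing", uuid="abc-123")
```

The nested `Factory` classes are not `Action` subclasses, so they never pollute the registry; only concrete actions appear in `Action.subclasses`.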
import json
import logging
from typing import List
import requests
import config as cfg
"""
USAGE:
First, initialize the factory with the base settings:
factory = RequestFactory(stage=${STAGE}, version=${VERSION}, api_key=${API_KEY})
Then create a request instance depending on the type of request you want to send:
metrics_request = factory.create_request("Metrics")
Pass the data to the set_data function:
metrics_request.set_data(
agent_version,
overview,
plugins
)
Then send it to the BackEnd and receive the response
response = metrics_request.send()
"""
DEFAULT_BASE_URL = "https://api{}.cloudvisor.io"
ESTABLISH_CONN_TIMEOUT = 10
RECEIVE_RESPONSE_TIMEOUT = 30
class RequestFactory:
requests = {}
stage = None
version = None
api_key = None
api_base = None
logger = None
def __init__(self, logger:logging.Logger, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
self.logger = logger
logger.debug(f'Creating RequestFactory with stage {stage} and api_base {api_base}')
def create_request(self, request_type):
if self.logger:
self.logger.debug(f'Create request typed {request_type} with stage {self.stage} and api_base {self.api_base}')
if request_type not in RequestFactory.requests:
request_class = Request.subclasses.get(request_type)
if request_class:
RequestFactory.requests[request_type] = request_class.Factory(self.logger, self.stage, self.version,
self.api_key, self.api_base)
else:
raise ValueError(f'Could not find request class `{request_type}`')
return RequestFactory.requests[request_type].create()
class Request:
stage = None
version = None
api_key = None
prefix = None
api_base = None
api_is_private_endpoint = False
subclasses = {}
def __init__(self, logger:logging.Logger, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.prefix = ""
if self.stage == 'staging':
self.prefix = "-staging"
if api_base != DEFAULT_BASE_URL:
self.api_is_private_endpoint = True
self.api_base = api_base.format(self.prefix)
logger.debug(f"Create request with base_url {api_base}, "
f"api_is_private_endpoint = {self.api_is_private_endpoint}")
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
cls.subclasses[cls.__name__] = cls
def send(self, logger = None):
print(vars(self))
url = self.build_url()
if logger:
logger.debug(f"send post request with url {url}")
else:
print(f"send post request with url {url}")
res = requests.post(
url,
data=json.dumps(self.message, separators=(',', ':')),
headers={"Cache-Control": "no-cache", "Pragma": "no-cache", "x-api-key": self.api_key},
timeout=(ESTABLISH_CONN_TIMEOUT, RECEIVE_RESPONSE_TIMEOUT)
)
return self.Response(res)
class Metrics(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-metrics")
else:
return '{}{}'.format(self.api_base, cfg.post_metrics_ep)
def set_data(self, agent_version, overview, plugins, package_version=None, autoupdate_last_execution_time=None):
self.message = {
"agent": {
"version": agent_version,
"package_version": package_version,
"autoupdate_last_execution_time": autoupdate_last_execution_time
},
"overview": overview,
"plugins": plugins
}
class Response:
raw_data: dict = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, logger:logging.Logger, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
self.logger = logger
def create(self): return Metrics(logger=self.logger, stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class MetricsCollection(Request):
message: List[dict] = []
def build_url(self):
if self.api_is_private_endpoint:
return f'{self.api_base}/bulk-post-metrics'
else:
return f'{self.api_base}{cfg.bulk_post_metrics_ep}'
def set_data(self, metrics: dict):
self.message.append(metrics)
def clear(self):
self.message = []
class Response:
raw_data: dict = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, logger:logging.Logger, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
self.logger = logger
def create(self): return MetricsCollection(logger=self.logger, stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class NotifyException(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-notify-exception")
else:
return '{}{}'.format(self.api_base, cfg.notify_exception_ep)
def set_data(self, account_id, instance_id, exception, msg):
self.message = {
"exception": exception,
"message": msg,
"instance_id": instance_id,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, logger:logging.Logger, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
self.logger = logger
def create(self): return NotifyException(logger=self.logger, stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class FsResizeCompleted(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-delete-resize-item")
else:
return '{}{}'.format(self.api_base, cfg.fs_resize_completed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, logger:logging.Logger, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
self.logger = logger
def create(self): return FsResizeCompleted(logger=self.logger, stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class HoldingRemoveAction(Request):
message = {}
def build_url(self):
return '{}{}'.format(self.api_base, cfg.hold_remove_action_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, index, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"index": index,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, logger:logging.Logger,stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
self.logger = logger
def create(self): return HoldingRemoveAction(logger=self.logger, stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class FsResizeFailed(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-fs-resize-failed")
else:
return '{}{}'.format(self.api_base, cfg.resize_failed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, error, resize_steps, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"error": error,
"resize_steps": resize_steps,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, logger:logging.Logger,stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
self.logger = logger
def create(self): return FsResizeFailed(logger=self.logger, stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class SyncMachineActions(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-sync-machine-actions")
else:
return '{}{}'.format(self.api_base, cfg.sync_machine_actions_ep)
def set_data(self, action_id, account_id, status, fs_id):
self.message = {
'account_id': account_id,
'actions': {
action_id: {
'fs_id': fs_id,
'instruction_status': status
}
}
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, logger:logging.Logger, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
self.logger = logger
def create(self): return SyncMachineActions(logger=self.logger, stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class MigrationStartActionCompleted(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-migration-start-action-complete")
else:
return '{}{}'.format(self.api_base, cfg.migration_start_action_completed_ep)
def set_data(self, account_id, fs_id, action_id, mount_path, volume_id, region, cloud_vendor, dev_path, exit_code,
error):
self.message = {
"account_id": account_id,
"fs_id": fs_id,
"action_id": action_id,
"mount_path": mount_path,
"volume_id": volume_id,
"region": region,
"cloud_vendor": cloud_vendor,
"dev_path": dev_path,
"exit_code": exit_code,
"error": error
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, logger:logging.Logger,stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
self.logger = logger
def create(self): return MigrationStartActionCompleted(logger=self.logger, stage=self.stage, version=self.version,
api_key=self.api_key,
api_base=self.api_base) | zesty.zbs-api-securonix-agent-baseurl-debug | /zesty.zbs_api_securonix_agent_baseurl_debug-2.0.2023.7.26.1690405274-py3-none-any.whl/zesty/protocol.py | protocol.py |
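The endpoint selection shared by these request classes (staging prefix vs. custom private endpoint, cf. `Request.__init__` and the `build_url` implementations) can be sketched standalone. The `/post-metrics` path below is an assumption standing in for `cfg.post_metrics_ep`; no HTTP request is made:

```python
DEFAULT_BASE_URL = "https://api{}.cloudvisor.io"


def build_metrics_url(stage: str, api_base: str = DEFAULT_BASE_URL,
                      endpoint: str = "/post-metrics") -> str:
    # "staging" inserts a "-staging" prefix into the default host; any other
    # stage leaves it empty. A custom api_base has no "{}" placeholder, so
    # str.format leaves it unchanged (the private-endpoint case above).
    prefix = "-staging" if stage == "staging" else ""
    return api_base.format(prefix) + endpoint
```

For example, `build_metrics_url("staging")` yields the staging host while a custom `api_base` passes through untouched.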
import json
import time
import traceback
from typing import Dict
from copy import deepcopy
from decimal import Decimal
from zesty.id_handler import create_zesty_id, create_zesty_filesystem_id
from ..actions import ZBSAction
from .BlockDevice import BlockDevice
from .Usage import Usage
GB_IN_BYTES = 1024**3
class FileSystem:
"""
This object interacts with DynamoDB representing a FileSystem.
As per the data model migration ZES-2884,
these will be backwards compatible and awkward in appearance until
the code is brought up to date.
"""
def __init__(
self,
fs_id: str,
account_id: str = None,
account_uuid: str = None,
agent_update_required: bool = None,
btrfs_version: str = None,
cloud: str = None,
cloud_vendor: str = None,
cycle_period: int = None,
delete_on_termination: bool = None,
devices: Dict[str, BlockDevice] = None,
encrypted: dict = None,
existing_actions: Dict[str, ZBSAction] = None,
expiredAt: int = None,
fs_cost: float = None,
fs_devices_to_count: int = None,
fs_size: int = None,
fs_type: str = None,
fs_usage: int = None,
has_unallocated_space: bool = None,
inodes: Dict[str, Usage] = None,
instance_id: str = None,
instance_type: str = None,
is_ephemeral: bool = None,
is_partition: bool = None,
is_zesty_disk: bool = None,
label: str = None,
last_update: int = None,
LV: str = None,
lvm_path: str = None,
mount_path: str = None,
name: str = None,
org_id: str = None,
partition_id: str = None,
partition_number: int = None,
platform: str = None,
potential_savings: float = None,
region: str = None,
resizable: bool = None,
space: Dict[str, Usage] = None,
tags: Dict[str, str] = None,
unallocated_chunk: int = None,
update_data_ts: int = 0,
VG: str = None,
wrong_fs_alert: bool = None,
zesty_disk_iops: int = None,
zesty_disk_throughput: int = None,
zesty_disk_vol_type: str = None,
max_utilization_in_72_hrs: int = None,
package_version: str = None,
autoupdate_last_execution_time: str = None,
statvfs_raw_data: Dict[str, str] = None,
pvc_id: str = None,
mount_options: list = None,
leading_device: str = None,
policies: Dict[str, dict] = None,
instance_tags: Dict[str, str] = None,
is_manageable: bool = False, #related migration
is_emr: bool = False
):
# Initialize empty dict not as default arg
existing_actions = {} if existing_actions is None else existing_actions
devices = {} if devices is None else devices
inodes = {} if inodes is None else inodes
space = {} if space is None else space
tags = {} if tags is None else tags
instance_tags = {} if instance_tags is None else instance_tags
self.account_id = account_id
self.account_uuid = account_uuid
self.agent_update_required = agent_update_required
self.btrfs_version = btrfs_version
if cloud is None and cloud_vendor is None:
self.cloud = 'Amazon'
self.cloud_vendor = 'Amazon'
elif cloud:
self.cloud = cloud
self.cloud_vendor = cloud
elif cloud_vendor:
self.cloud = cloud_vendor
self.cloud_vendor = cloud_vendor
self.cycle_period = cycle_period
self.devices = self.init_devices(devices)
self.delete_on_termination = delete_on_termination
self.encrypted = encrypted
self.existing_actions = existing_actions
self.expiredAt = expiredAt
self.fs_cost = fs_cost
self.fs_devices_to_count = fs_devices_to_count
try:
self.fs_id = create_zesty_filesystem_id(
cloud=self.cloud_vendor,
fs_id=fs_id
)
except Exception as e:
self.fs_id = fs_id
self.fs_size = fs_size
self.fs_type = fs_type
self.fs_usage = fs_usage
self.has_unallocated_space = has_unallocated_space
self.inodes = Usage(inodes)
self.instance_id = instance_id
self.instance_type = instance_type
self.is_ephemeral = is_ephemeral
self.is_partition = is_partition
self.is_zesty_disk = is_zesty_disk
self.label = label
if last_update is None:
self.last_update = int(time.time()) - 60
else:
self.last_update = last_update
self.LV = LV
self.lvm_path = lvm_path
self.mount_path = mount_path
self.name = name
self.org_id = org_id
self.partition_id = partition_id
self.partition_number = partition_number
self.platform = platform
self.potential_savings = potential_savings
self.region = region
self.resizable = resizable
self.space = Usage(space)
self.tags = tags
self.unallocated_chunk = unallocated_chunk
self.update_data_ts = update_data_ts
self.VG = VG
self.wrong_fs_alert = wrong_fs_alert
self.zesty_disk_iops = zesty_disk_iops
self.zesty_disk_throughput = zesty_disk_throughput
self.zesty_disk_vol_type = zesty_disk_vol_type
self.max_utilization_in_72_hrs = max_utilization_in_72_hrs
self.package_version = package_version
self.autoupdate_last_execution_time = autoupdate_last_execution_time
self.statvfs_raw_data = statvfs_raw_data
self.pvc_id = pvc_id
self.mount_options = mount_options
self.leading_device = leading_device
self.policies = policies
self.instance_tags = instance_tags
self.is_manageable = is_manageable #related migration
self.is_emr = is_emr
@staticmethod
def init_devices(devices: Dict[str, BlockDevice]):
if not devices:
return {}
else:
devices = deepcopy(devices)
for dev in devices:
if isinstance(devices[dev], BlockDevice):
continue
devices[dev] = BlockDevice(
**devices.get(dev, {})
)
return devices
def as_dict(self) -> dict:
return_dict = json.loads(json.dumps(self, default=self.object_dumper))
return {k: v for k, v in return_dict.items() if v is not None}
@staticmethod
def object_dumper(obj) -> dict:
try:
return obj.__dict__
except AttributeError as e:
if isinstance(obj, Decimal):
return int(obj)
print(f"Got exception in object_dumper value: {obj} | type : {type(obj)}")
print(traceback.format_exc())
return obj
def serialize(self) -> dict:
return self.as_dict()
def __repr__(self) -> str:
return f"FileSystem:{self.fs_id}" | zesty.zbs-api-securonix-agent-baseurl-debug | /zesty.zbs_api_securonix_agent_baseurl_debug-2.0.2023.7.26.1690405274-py3-none-any.whl/zesty/models/FileSystem.py | FileSystem.py |
from typing import Dict, Union
from sqlalchemy.orm import Session, sessionmaker, Query
from sqlalchemy.sql.elements import or_, Label
from .InstancesTags import InstancesTags
try:
from sqlalchemy import Column, engine, case, func, cast, String, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects.postgresql import BOOLEAN, FLOAT, INTEGER, BIGINT, \
JSON, TIMESTAMP, VARCHAR
except ImportError:
raise ImportError("sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
Base = declarative_base()
class EbsVolume(Base):
# TODO: Move this model into our Alembic system
# when a modification of this model is needed.
__tablename__ = "disks"
volume_id = Column(VARCHAR, primary_key=True)
org_id = Column(VARCHAR, index=True)
account_uuid = Column(VARCHAR, index=True)
account_id = Column(VARCHAR, index=True)
region = Column(VARCHAR, index=True)
volume_type = Column(VARCHAR, index=True)
cloud = Column(VARCHAR, index=True)
availability_zone = Column(VARCHAR)
create_time = Column(TIMESTAMP)
encrypted = Column(BOOLEAN)
size = Column(INTEGER)
snapshot_id = Column(VARCHAR)
state = Column(VARCHAR)
iops = Column(INTEGER)
tags = Column(JSON)
attachments = Column(JSON)
attached_to = Column(JSON)
monthly_cost = Column(FLOAT, default=0)
is_unused_resource = Column(INTEGER, default=0)
unused_since = Column(VARCHAR)
agent_installed = Column(BOOLEAN, default=False)
_zbs_supported_os = Column(INTEGER)
potential_savings = Column(FLOAT, default=0)
# dict for custom_order_by class method
col_to_actual_sorting_col = {"instance_tags": "instance_tags_keys"}
def __init__(
self,
volume_aws_schema: Dict,
account_uuid: str = None):
if account_uuid:
self.account_uuid = account_uuid
else:
self.account_uuid = volume_aws_schema["account_uuid"]
self.volume_id = volume_aws_schema["volume_id"]
self.org_id = volume_aws_schema["org_id"]
self.account_id = volume_aws_schema["account_id"]
self.cloud = volume_aws_schema["cloud"]
self.region = volume_aws_schema["region"]
self.volume_type = volume_aws_schema["volume_type"]
self.availability_zone = volume_aws_schema["availability_zone"]
self.create_time = volume_aws_schema["create_time"]
self.encrypted = volume_aws_schema["encrypted"]
self.size = volume_aws_schema["size"]
self.snapshot_id = volume_aws_schema["snapshot_id"]
self.state = volume_aws_schema["state"]
self.iops = volume_aws_schema.get("iops", 0)
self.tags = volume_aws_schema.get("tags", {})
self.attachments = volume_aws_schema.get("attachments", [])
self.attached_to = volume_aws_schema.get("attached_to", [])
self.monthly_cost = volume_aws_schema.get("monthly_cost", 0)
self.is_unused_resource = volume_aws_schema.get(
"is_unused_resource", 0)
self.unused_since = volume_aws_schema.get("unused_since", None)
self.agent_installed = volume_aws_schema.get("agent_installed", False)
self._zbs_supported_os = volume_aws_schema.get("_zbs_supported_os")
self.potential_savings = volume_aws_schema.get("potential_savings", 0)
def __repr__(self):
return f"{self.__tablename__}:{self.volume_id}"
@classmethod
def instance_id_filter(cls, query: Query, value: str):
val = f'%{value}%'
query = query.filter(
case((or_(cls.attached_to == None, func.json_array_length(cls.attached_to) == 0), False),
else_=cast(cls.attached_to, String).ilike(val)))
return query
@classmethod
def instance_name_filter(cls, query: Query, value: str):
subq = query.session.query(InstancesTags.instance_name)
val = '%{}%'.format(value.replace("%", "\\%"))
query = query.filter((subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (
cls.account_id == InstancesTags.account_id))).ilike(val))
return query
@classmethod
def instance_tags_filter(cls, query: Query, value: str):
session = query.session
subq = session.query(InstancesTags.instance_tags)
python_types_to_pg = {int: BIGINT, float: FLOAT, bool: BOOLEAN}
for key_val in value:
key = key_val.get('key')
val = key_val.get('value')
if key is not None and val is not None:
if not isinstance(val, str):
query = query.filter(cast(cast(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)).op('->')(key)), String), python_types_to_pg[type(val)]) == val)
else:
val = f'%{val}%'
query = query.filter(cast(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)).op('->')(key)), String).ilike(val))
elif key is not None:
query = query.filter(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id))).op('?')(key))
elif val is not None:
if isinstance(val, str):
query = query.filter(cast(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)), String)
.regexp_replace(r'.+\: "[^"]*(' + str(val) + r')[^"]*"[,\s}].*', "\\1") == f"{val}")
else:
if isinstance(val, bool):
val = f'"{val}"'
query = query.filter(cast(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)), String)
.regexp_replace(r'.+\: (' + str(val) + r')[,\s}].*', "\\1") == f"{val}")
return query
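The `regexp_replace` comparisons above collapse the serialized tags JSON down to a captured substring. The same pattern shape can be exercised with plain-Python regular expressions; a minimal sketch, assuming a hypothetical serialized tag payload (`re.escape` is added here for safety and is not part of the original filter):

```python
import re

# Hypothetical JSON text, as the tags column would render when cast to String.
tags_text = '{"Name": "web-server-1", "env": "production"}'

val = "prod"
# Same shape as the filter's pattern: match a quoted tag value containing `val`
# and collapse the whole string to the captured substring.
pattern = r'.+\: "[^"]*(' + re.escape(val) + r')[^"]*"[,\s}].*'
result = re.sub(pattern, r"\1", tags_text)
print(result)  # -> prod
```

The filter then compares that collapsed value against `f"{val}"`, so a row matches only when some tag value contains the substring.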
# Custom query
@classmethod
def custom_query(cls, session: Union[Session, sessionmaker]) -> Query:
q = session.query(cls)
subq_2 = session.query(func.json_object_keys(InstancesTags.instance_tags))
subq_3 = session.query(InstancesTags.instance_tags)
instance_name_clause = "regexp_replace(cast(array((select instances_tags.instance_name from instances_tags " \
"inner join json_array_elements(disks.attached_to) as attached_to_set " \
"on instances_tags.instance_id = replace(cast(attached_to_set.value as varchar), '\"', '') " \
"and instances_tags.account_id = disks.account_id)) as varchar), '[\\{\\}\"]', '', 'g')"
q = q.add_columns(case((or_(cls.attached_to == None, func.json_array_length(cls.attached_to) == 0), ''),
else_=cast(cls.attached_to, String).regexp_replace(r'[\[\]"]', '', 'g'))
.label("instance_id"),
Label('instance_name', text(instance_name_clause)),
func.array(subq_2.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) &
(cls.account_id == InstancesTags.account_id)))
.label('instance_tags_keys'),
subq_3.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) &
(cls.account_id == InstancesTags.account_id))
.label('instance_tags'))
return q
@classmethod
def custom_order_by(cls, sorting_column: str, sorting_order: str) -> str:
actual_sorting_column = cls.col_to_actual_sorting_col.get(sorting_column, sorting_column)
return f"{actual_sorting_column} {sorting_order}"
def get_volume_id(self):
return self.volume_id
def as_dict(self):
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def create_tables(engine: engine.base.Engine) -> None:  # type: ignore
    Base.metadata.create_all(engine, checkfirst=True)

# --- end of zesty/models/EbsVolume.py ---
import time
from enum import Enum, auto
from typing import Dict, List, Optional, Union
from uuid import UUID as _UUID
from uuid import uuid4
from sqlalchemy import Enum as sa_ENUM
from sqlalchemy import INT
from sqlalchemy.sql.schema import ForeignKey
from sqlalchemy.dialects.postgresql import ARRAY, UUID
try:
from sqlalchemy import Column, String, case, cast, engine, func, or_
from sqlalchemy.dialects.postgresql import (BIGINT, BOOLEAN, FLOAT, JSON,
TIMESTAMP, VARCHAR)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Query, Session, aliased, sessionmaker
except ImportError:
raise ImportError(
"sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
from ..actions import ZBSAction
from .BlockDevice import BlockDevice
from .Usage import Usage
Base = declarative_base()
class BaseMixin:
"""Base for database models with some default fields"""
created_at = Column(TIMESTAMP, server_default=func.now())
updated_at = Column(TIMESTAMP, onupdate=func.now())
class ManagedFsMixin:
fs_id = Column(VARCHAR, primary_key=True)
account_id = Column(VARCHAR, index=True, default=None)
account_uuid = Column(VARCHAR, index=True, default=None)
agent_update_required = Column(BOOLEAN, default=None)
btrfs_version = Column(VARCHAR, default=None)
cloud = Column(VARCHAR, default=None)
cloud_vendor = Column(VARCHAR, default=None)
cycle_period = Column(BIGINT, default=None)
delete_on_termination = Column(BOOLEAN, default=None)
devices = Column(JSON, default=None)
encrypted = Column(JSON, default=None)
existing_actions = Column(JSON, default=None)
expiredAt = Column(BIGINT, default=None)
fs_cost = Column(FLOAT, default=None)
fs_devices_to_count = Column(BIGINT, default=None)
fs_size = Column(BIGINT, default=None)
fs_type = Column(VARCHAR, default=None)
fs_usage = Column(BIGINT, default=None)
has_unallocated_space = Column(BOOLEAN, default=None)
inodes = Column(JSON, default=None)
instance_id = Column(VARCHAR, default=None)
instance_type = Column(VARCHAR, default=None)
is_ephemeral = Column(BOOLEAN, default=None)
is_partition = Column(BOOLEAN, default=None)
is_zesty_disk = Column(BOOLEAN, default=None)
label = Column(VARCHAR, default=None)
last_update = Column(BIGINT, default=None)
LV = Column(VARCHAR, default=None)
lvm_path = Column(VARCHAR, default=None)
mount_path = Column(VARCHAR, default=None)
name = Column(VARCHAR, default=None)
org_id = Column(VARCHAR, index=True)
partition_id = Column(VARCHAR, default=None)
partition_number = Column(BIGINT, default=None)
platform = Column(VARCHAR, default=None)
potential_savings = Column(FLOAT, default=None)
region = Column(VARCHAR, index=True)
resizable = Column(BOOLEAN, default=None)
space = Column(JSON, default=None)
tags = Column(JSON, default=None)
unallocated_chunk = Column(BIGINT, default=None)
update_data_ts = Column(BIGINT, default=0)
VG = Column(VARCHAR, default=None)
wrong_fs_alert = Column(BOOLEAN, default=None)
zesty_disk_iops = Column(BIGINT, default=None)
zesty_disk_throughput = Column(BIGINT, default=None)
zesty_disk_vol_type = Column(VARCHAR, default=None)
max_utilization_in_72_hrs = Column(BIGINT, default=None)
package_version = Column(VARCHAR, default=None)
autoupdate_last_execution_time = Column(VARCHAR, default=None)
policies = Column(JSON, default=None)
instance_tags = Column(JSON, default=None)
migration_uuid = Column(UUID(as_uuid=True), nullable=True)
is_manageable = Column(BOOLEAN, default=False)
# dict for custom_order_by class method
col_to_actual_sorting_col = {
"policies": "policies_name",
"instance_tags": "instance_tags_keys"}
def __init__(
self,
fs_id: str,
account_id: str = None,
account_uuid: str = None,
agent_update_required: bool = None,
btrfs_version: str = None,
cloud: str = None,
cloud_vendor: str = None,
cycle_period: int = None,
delete_on_termination: bool = None,
devices: Dict[str, BlockDevice] = None,
encrypted: dict = None,
existing_actions: Dict[str, ZBSAction] = None,
expiredAt: int = None,
fs_cost: float = None,
fs_devices_to_count: int = None,
fs_size: int = None,
fs_type: str = None,
fs_usage: int = None,
has_unallocated_space: bool = None,
inodes: Dict[str, Usage] = None,
instance_id: str = None,
instance_type: str = None,
is_ephemeral: bool = None,
is_partition: bool = None,
is_zesty_disk: bool = None,
label: str = None,
last_update: int = None,
LV: str = None,
lvm_path: str = None,
mount_path: str = None,
name: str = None,
org_id: str = None,
partition_id: str = None,
partition_number: int = None,
platform: str = None,
potential_savings: float = None,
region: str = None,
resizable: bool = None,
space: Dict[str, Usage] = None,
tags: Dict[str, str] = None,
unallocated_chunk: int = None,
update_data_ts: int = 0,
VG: str = None,
wrong_fs_alert: bool = None,
zesty_disk_iops: int = None,
zesty_disk_throughput: int = None,
zesty_disk_vol_type: str = None,
max_utilization_in_72_hrs: int = None,
package_version: str = None,
autoupdate_last_execution_time: str = None,
statvfs_raw_data: Dict[str, str] = None, # unused to support initialization with **dict, do not remove
policies: Dict[str, dict] = None,
instance_tags: Dict[str, str] = None,
is_emr: bool = False, # unused to support initialization with **dict, do not remove
is_manageable: bool = False,
**kwargs
):
self.fs_id = fs_id
self.account_id = account_id
self.account_uuid = account_uuid
self.agent_update_required = agent_update_required
self.btrfs_version = btrfs_version
if cloud is None and cloud_vendor is None:
self.cloud = 'Amazon'
self.cloud_vendor = 'Amazon'
elif cloud:
self.cloud = cloud
self.cloud_vendor = cloud
elif cloud_vendor:
self.cloud = cloud_vendor
self.cloud_vendor = cloud_vendor
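The `cloud` / `cloud_vendor` branching above keeps both columns in sync, with `'Amazon'` as the fallback when neither is supplied. A stdlib sketch of the same rule (the returned tuple is `(cloud, cloud_vendor)`):

```python
# Sketch of the normalization: either argument populates both fields,
# `cloud` wins when both are given, and 'Amazon' is the default.
def normalize_cloud(cloud=None, cloud_vendor=None):
    if cloud is None and cloud_vendor is None:
        return "Amazon", "Amazon"
    value = cloud or cloud_vendor
    return value, value

print(normalize_cloud())                      # -> ('Amazon', 'Amazon')
print(normalize_cloud(cloud_vendor="Azure"))  # -> ('Azure', 'Azure')
```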
self.cycle_period = cycle_period
self.delete_on_termination = delete_on_termination
self.devices = devices
if devices:
for dev in self.devices:
if isinstance(self.devices[dev], BlockDevice):
self.devices[dev] = self.devices[dev].asdict()
else:
self.devices[dev] = self.devices.get(dev, {})
self.encrypted = encrypted
self.existing_actions = existing_actions
if existing_actions:
    for action in existing_actions:
        self.existing_actions[action] = existing_actions[action].serialize()
self.expiredAt = expiredAt
self.fs_cost = fs_cost
self.fs_devices_to_count = fs_devices_to_count
self.fs_size = fs_size
self.fs_type = fs_type
self.fs_usage = fs_usage
self.has_unallocated_space = has_unallocated_space
self.inodes = inodes
self.instance_id = instance_id
self.instance_type = instance_type
self.is_ephemeral = is_ephemeral
self.is_partition = is_partition
self.is_zesty_disk = is_zesty_disk
self.label = label
if last_update:
self.last_update = last_update
else:
self.last_update = int(time.time()) - 60
self.LV = LV
self.lvm_path = lvm_path
self.mount_path = mount_path
self.name = name
self.org_id = org_id
self.partition_id = partition_id
self.partition_number = partition_number
self.platform = platform
self.potential_savings = potential_savings
self.region = region
self.resizable = resizable
self.space = space
self.tags = tags
self.unallocated_chunk = unallocated_chunk
self.update_data_ts = update_data_ts
self.VG = VG
self.wrong_fs_alert = wrong_fs_alert
self.zesty_disk_iops = zesty_disk_iops
self.zesty_disk_throughput = zesty_disk_throughput
self.zesty_disk_vol_type = zesty_disk_vol_type
self.max_utilization_in_72_hrs = max_utilization_in_72_hrs
self.package_version = package_version
self.autoupdate_last_execution_time = autoupdate_last_execution_time
self.policies = policies
self.instance_tags = instance_tags
self.is_manageable = is_manageable
def __repr__(self) -> str:
return f"{self.__tablename__}:{self.fs_id}"
def asdict(self) -> dict:
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def as_dict(self) -> dict:
return self.asdict()
# Custom filters
@classmethod
def policies_filter(cls, query: Query, value: str):
query = query.filter(
cast(cls.policies, String).contains(f'"name": "{value}"'))
return query
@classmethod
def instance_name_filter(cls, query: Query, value: str):
val = '%{}%'.format(value.replace("%", "\\%"))
query = query.filter(
case((cls.instance_tags == None, ''), else_=func.replace(cast(cls.instance_tags.op('->')('Name'), String), "\"", "")).ilike(val))
return query
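The filter above escapes literal `%` in the user input before wrapping it for an ILIKE comparison, so a `%` typed by the user matches itself instead of acting as a wildcard. A small sketch of the pattern builder:

```python
# Escape user-supplied '%' so it is treated literally inside the ILIKE pattern,
# then wrap with wildcards for a contains-style match.
def build_ilike_pattern(value: str) -> str:
    return '%{}%'.format(value.replace("%", "\\%"))

print(build_ilike_pattern("web-1"))  # -> %web-1%
print(build_ilike_pattern("100%"))   # -> %100\%%
```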
# Custom query
@classmethod
def custom_query(cls, session: Union[Session, sessionmaker]) -> Query:
clsb = aliased(cls)
subq = session.query(func.json_object_keys(clsb.instance_tags))
q = session.query(cls)
q = q.add_columns(case((or_(cls.policies == None, cast(cls.policies, String) == 'null'), ''),
else_=cast(cls.policies, String).regexp_replace(r'.+"name":\s"([^"]+).+', "\\1"))
.label("policies_name"),
case((cls.instance_tags == None, ''),
else_=func.replace(cast(cls.instance_tags.op('->')('Name'), String), "\"", ""))
.label('instance_name'),
case((cast(cls.instance_tags, String) == 'null', []),
else_=func.array(subq.scalar_subquery().where(cls.fs_id == clsb.fs_id)))
.label('instance_tags_keys')
)
return q
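The `policies_name` expression above relies on a single capture group to pull the first policy name out of the serialized JSON. The same pattern can be exercised with Python's `re` on a hypothetical policies payload:

```python
import re

# Hypothetical policies JSON as it would look after cast(..., String);
# the same capture pattern custom_query uses to surface the policy name.
policies_text = '[{"name": "shrink-nightly", "enabled": true}]'
policies_name = re.sub(r'.+"name":\s"([^"]+).+', r"\1", policies_text)
print(policies_name)  # -> shrink-nightly
```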
@classmethod
def custom_order_by(cls, sorting_column: str, sorting_order: str) -> str:
actual_sorting_column = cls.col_to_actual_sorting_col.get(
sorting_column, sorting_column)
return f"{actual_sorting_column} {sorting_order}"
class ManagedFs(ManagedFsMixin, BaseMixin, Base):
__tablename__ = "managed_filesystems"
class MigrationStatus(Enum):
Active = auto()
Aborting = auto()
Aborted = auto()
Completed = auto()
Failed = auto()
class RunningMigrations(BaseMixin, Base):
__tablename__ = "active_migration"
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), nullable=False, primary_key=True)
finished_at = Column(TIMESTAMP, nullable=True)
account_id = Column(VARCHAR, default=None)
region = Column(VARCHAR(255))
reboot = Column(BOOLEAN, default=False)
# array of day numbers when reboot is allowed 0-6
days = Column(ARRAY(VARCHAR))
# timeframe from-to in %I:%M %p
from_ = Column(VARCHAR)
to = Column(VARCHAR)
status = Column(sa_ENUM(MigrationStatus), nullable=False, server_default=MigrationStatus.Active.name)
is_rebooting = Column(BOOLEAN, default=False) # TODO: can this be deleted?
snapshot_id = Column(VARCHAR(255))
snapshot_remove_after = Column(INT, nullable=True) # in days
snapshot_create_started_at = Column(TIMESTAMP, nullable=True)
snapshot_deleted_at = Column(TIMESTAMP, nullable=True)
ebs_id = Column(VARCHAR(255))
ebs_remove_after = Column(INT, nullable=True) # in days
ebs_detached_at = Column(TIMESTAMP, nullable=True)
ebs_deleted_at = Column(TIMESTAMP, nullable=True)
def __init__(
self,
fs_id: str,
migration_uuid: _UUID,
account_id: str = None,
region: str = None,
days: Optional[List[int]] = None,
from_: Optional[str] = None,
to: Optional[str] = None,
reboot: bool = False,
status: MigrationStatus = MigrationStatus.Active,
ebs_remove_after: int = 1,
snapshot_remove_after: int = 7):
self.migration_uuid = migration_uuid
self.fs_id = fs_id
self.account_id = account_id
self.region = region
self.days = days
self.from_ = from_
self.to = to
self.reboot = reboot
self.status = status
self.ebs_remove_after = ebs_remove_after
self.snapshot_remove_after = snapshot_remove_after
@staticmethod
def new_migration(
fs_id,
days: Optional[List[int]] = None,
from_: Optional[str] = None,
to: Optional[str] = None,
reboot: bool = False,
ebs_remove_after: int = 1,
snapshot_remove_after: int = 7) -> 'RunningMigrations':
return RunningMigrations(
    fs_id,
    uuid4(),
    days=days,
    from_=from_,
    to=to,
    reboot=reboot,
    ebs_remove_after=ebs_remove_after,
    snapshot_remove_after=snapshot_remove_after,
)
class MigrationHistory(BaseMixin, Base):
__tablename__ = "migration_history"
time_start = Column(TIMESTAMP)
time_end = Column(TIMESTAMP)
status = Column(VARCHAR)
phase = Column(VARCHAR, primary_key=True)
progress = Column(FLOAT)
completed = Column(BOOLEAN)
failed = Column(BOOLEAN)
failure_reason = Column(VARCHAR)
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), ForeignKey("active_migration.migration_uuid", ondelete="CASCADE"),
nullable=False, primary_key=True, index=True)
# should be returned from the agent in seconds
estimated = Column(INT)
name = Column(VARCHAR)
weight = Column(INT)
abortable = Column(BOOLEAN)
index = Column(INT, primary_key=True)
def __init__(
self,
status: str,
phase: str,
progress: int,
eta: int,
name: str,
weight: int,
abortable: bool,
start_time: int,
end_time: int,
migration_uuid: 'UUID',
fs_id: str,
index: int):
self.status = status
self.phase = phase
self.progress = progress
self.estimated = eta
self.name = name
self.weight = weight
self.time_start = start_time
self.time_end = end_time
self.abortable = abortable
self.index = index
self.migration_uuid = migration_uuid
self.fs_id = fs_id
class WrongActionException(Exception):
pass
class MigrationActions(BaseMixin, Base):
__tablename__ = "migration_actions"
id = Column(INT, primary_key=True, autoincrement=True)
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), ForeignKey("active_migration.migration_uuid", ondelete="CASCADE"),
nullable=False)
action = Column(VARCHAR)
value = Column(VARCHAR)
allowed_actions = ['start', 'reboot', 'reboot_now', 'abort']
def __init__(self, fs_id, migration_uuid, action, value):
self.fs_id = fs_id
self.migration_uuid = migration_uuid
self.set_action(action)
self.value = value
def set_action(self, action):
if action not in self.allowed_actions:
raise WrongActionException
self.action = action
def create_tables(engine: engine.base.Engine) -> None:
Base.metadata.create_all(engine, checkfirst=True)

# --- end of zesty/models/ManagedFS.py ---
import json
import time
from datetime import datetime
from typing import Dict, Union
from .ManagedFS import Base, BaseMixin
try:
from sqlalchemy import (Column, PrimaryKeyConstraint, String, case, cast,
engine, func, or_, select, text)
from sqlalchemy.dialects.postgresql import (BIGINT, BOOLEAN, FLOAT, JSON,
VARCHAR)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Query, Session, aliased, sessionmaker
except ImportError:
raise ImportError(
"sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
class InstancesTags(BaseMixin, Base):
__tablename__ = "instances_tags"
instance_id = Column(VARCHAR, primary_key=True)
account_id = Column(VARCHAR, index=True, default=None)
account_uuid = Column(VARCHAR, index=True, default=None)
instance_name = Column(VARCHAR, default=None)
instance_tags = Column(JSON, default=None)
expired_at = Column(BIGINT, default=None)
__table_args__ = (
PrimaryKeyConstraint('instance_id', name='instances_tags_pkey'),)
def __init__(
self,
instance_id: str,
account_id: str = None,
account_uuid: str = None,
instance_name: str = None,
instance_tags: dict = None,
expired_at: int = None
):
self.instance_id = instance_id
self.account_id = account_id
self.account_uuid = account_uuid
self.instance_name = instance_name
self.instance_tags = instance_tags
self.expired_at = expired_at or int(datetime.utcnow().timestamp()) + 3 * 3600
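The `expired_at` default above leans on operator precedence: `or` binds looser than `+`, so the fallback is `now + 3 hours`, used only when `expired_at` is falsy. A sketch with a fixed, hypothetical timestamp:

```python
# `or` binds looser than `+`, so this reads as: expired_at or (now + 3 * 3600).
def default_expiry(expired_at=None, now=1_700_000_000):
    return expired_at or now + 3 * 3600

print(default_expiry(123))  # -> 123
print(default_expiry())     # -> 1700010800
```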
def __eq__(self, other) -> bool:
return self.__hash__() == other.__hash__()
def __hash__(self) -> int:
return hash(''.join(map(lambda c: getattr(self, c.name) or '',
filter(lambda c: c.name not in ['instance_tags', 'expired_at', 'created_at', 'updated_at'],
self.__table__.columns))) +
json.dumps(self.instance_tags))
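The `__eq__` / `__hash__` pair above builds identity from the scalar columns plus a JSON dump of the tags dict, since dicts are unhashable. A stdlib stand-in with made-up field names shows the idea (`sort_keys` is added here for stability and is not in the original):

```python
import json

# Equality is delegated to a hash built from the scalar fields plus a JSON
# dump of the (unhashable) tags dict.
class TagsRecord:
    def __init__(self, instance_id, account_id, instance_tags):
        self.instance_id = instance_id
        self.account_id = account_id
        self.instance_tags = instance_tags or {}

    def __eq__(self, other):
        return self.__hash__() == other.__hash__()

    def __hash__(self):
        scalar = (self.instance_id or '') + (self.account_id or '')
        return hash(scalar + json.dumps(self.instance_tags, sort_keys=True))

a = TagsRecord("i-1", "acc", {"env": "dev"})
b = TagsRecord("i-1", "acc", {"env": "dev"})
print(a == b)  # -> True
```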
def __repr__(self) -> str:
return f"{self.__tablename__}:{self.instance_id}"
def asdict(self) -> dict:
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def as_dict(self) -> dict:
return self.asdict()
def create_tables(engine: engine.base.Engine) -> None:
Base.metadata.create_all(engine, checkfirst=True)

# --- end of zesty/models/InstancesTags.py ---
import json
import traceback
from decimal import Decimal
from typing import Dict
from zesty.id_handler import create_zesty_id
GB_IN_BYTES = 1024**3
class BlockDevice:
def __init__(
self,
size: int,
btrfs_dev_id: str = None,
cloud_vendor: str = 'Amazon',
created: str = None,
dev_usage: int = None,
iops: int = None,
throughput: int = None,
lun: int = None,
map: str = None,
iops_stats: Dict[str, int] = None,
parent: str = None,
unlock_ts: int = 0,
volume_id: str = None,
volume_type: str = None,
device: str = None,
btrfs_size: int = None,
extendable: bool = True,
removable: bool = True
):
"""
Block Device class doc:
:param size: Size of the device in Bytes
:param btrfs_dev_id: ID of the device inside the BTRFS structure
:param cloud_vendor: Cloud vendor (AWS/Azure/GCP)
:param created: Device creation date
:param dev_usage: How much of the device is in use (in Bytes)
:param iops: Device IOPS amount
:param lun: LUN number (Only for Azure)
:param map: The mount slot of the device inside the OS
:param iops_stats: Dict with IOPS statistics
:param parent: If it's a partition so this one represent the parent device
:param unlock_ts: TS when the device will be ready to be extended again
:param volume_id: Device ID
:param volume_type: Type of the device in the cloud
:param device: Device mount slot from the cloud
:param btrfs_size: The usable size for the filesystem in bytes
:param extendable: Whether ZestyDisk Handsfree logic is allowed to extend the device
:param removable: Whether ZestyDisk Handsfree logic is allowed to remove the device from the filesystem
"""
# Init empty dict here instead of passing as default value
iops_stats = {} if iops_stats is None else iops_stats
self.size = size
self.cloud_vendor = cloud_vendor
try:
self.volume_id = create_zesty_id(
cloud=self.cloud_vendor,
resource_id=volume_id
)
except Exception:
self.volume_id = volume_id
self.btrfs_dev_id = btrfs_dev_id
self.created = created
self.dev_usage = dev_usage
self.iops = iops
self.throughput = throughput
self.lun = lun
self.map = map
self.iops_stats = iops_stats
if device:
self.device = device
if parent:
self.parent = parent
if not unlock_ts:
self.unlock_ts = 0
else:
self.unlock_ts = unlock_ts
self.volume_type = volume_type
self.btrfs_size = btrfs_size
self.extendable = extendable
self.removable = removable
def as_dict(self) -> dict:
return_dict = json.loads(json.dumps(self, default=self.object_dumper))
return {k: v for k, v in return_dict.items() if v is not None}
@staticmethod
def object_dumper(obj) -> dict:
try:
return obj.__dict__
except AttributeError as e:
if isinstance(obj, Decimal):
return int(obj)
print(f"Got exception in object_dumper value: {obj} | type : {type(obj)}")
print(traceback.format_exc())
return obj
def serialize(self) -> dict:
return self.as_dict()
def __repr__(self) -> str:
return str(self.as_dict())

# --- end of zesty/models/BlockDevice.py ---
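`object_dumper` above is a `json.dumps(default=...)` hook: it walks arbitrary objects via `__dict__` and downcasts `Decimal` values (as returned by e.g. DynamoDB) to plain ints. A self-contained illustration with a hypothetical class:

```python
import json
from decimal import Decimal

# Hypothetical object with a Decimal attribute, to exercise the hook.
class Point:
    def __init__(self):
        self.x = Decimal(4)
        self.label = "dev"

def object_dumper(obj):
    try:
        return obj.__dict__          # serialize plain objects by attribute dict
    except AttributeError:
        if isinstance(obj, Decimal):
            return int(obj)          # Decimal has no __dict__; downcast to int
        raise

print(json.dumps(Point(), default=object_dumper))  # -> {"x": 4, "label": "dev"}
```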
from abc import ABC, abstractmethod
import enum
from typing import TYPE_CHECKING, Dict
if TYPE_CHECKING:
from ..actions import ZBSAction
class ISpecialInstructions(ABC):
pass
class IActionHF(ABC):
class Status(enum.Enum):
NEW = 1
PENDING = 2
RUNNING = 3
CANCELED = 4
READY = 5
HOLDING = 6
REVERT = 7
PAUSE = 8 # should stop
STOPPED = 9 # action stopped
@abstractmethod
def get_action_id(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_id' is abstract, please implement")
@abstractmethod
def get_action_type(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_type' is abstract, please implement")
@abstractmethod
def get_status(self) -> Status:
raise NotImplementedError(
"ActionHF 'get_status' is abstract, please implement")
@abstractmethod
def set_status(self, status: Status):
raise NotImplementedError(
"ActionHF 'set_status' is abstract, please implement")
@abstractmethod
def get_special_instructions(self) -> ISpecialInstructions:
raise NotImplementedError(
"ActionHF 'get_special_instructions' is abstract, please implement")
@abstractmethod
def set_special_instructions(self, special_instructions: ISpecialInstructions):
raise NotImplementedError(
"ActionHF 'set_special_instructions' is abstract, please implement")
class IDeviceHF(ABC):
@abstractmethod
def get_dev_id(self) -> str:
raise NotImplementedError(
"DeviceHF 'get_dev_id' is abstract, please implement")
@abstractmethod
def get_size(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_size' is abstract, please implement")
@abstractmethod
def get_usage(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_usage' is abstract, please implement")
@abstractmethod
def get_unlock_ts(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_unlock_ts' is abstract, please implement")
class IFileSystemHF(ABC):
@abstractmethod
def get_fs_id(self) -> str:
raise NotImplementedError(
"IFileSystemHF 'get_fs_id' is abstract, please implement")
@abstractmethod
def get_devices(self) -> Dict[str, IDeviceHF]:
raise NotImplementedError(
"IFileSystemHF 'get_devices' is abstract, please implement")
@abstractmethod
def get_existing_actions(self) -> Dict[str, IActionHF]:
raise NotImplementedError(
"IFileSystemHF 'get_existing_actions' is abstract, please implement") | zesty.zbs-api-securonix-agent-baseurl-debug | /zesty.zbs_api_securonix_agent_baseurl_debug-2.0.2023.7.26.1690405274-py3-none-any.whl/zesty/models/hf_interface.py | hf_interface.py |
import json
import uuid
from copy import deepcopy
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, List, Optional, Union
class StepStatus(Enum):
# Note: this protocol needs to guard against deadlocks:
# for example, a step must never get stuck because of a failed communication or a lost message
INIT = auto()
ACK = auto()
DONE = auto()
FAIL = auto()
class DeviceType(Enum):
STANDARD = "standard"
GP3 = "gp3"
GP2 = "gp2"
ST1 = "st1"
SC1 = "sc1"
IO1 = "io1"
IO2 = "io2"
def __str__(self):
return self.value
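Overriding `__str__` on the value-backed enum above lets device types render as their raw API strings (e.g. `"gp3"`) wherever they are interpolated. A minimal sketch:

```python
from enum import Enum

# Value-backed enum whose str() yields the raw cloud API string.
class DeviceType(Enum):
    GP3 = "gp3"
    IO2 = "io2"

    def __str__(self):
        return self.value

print(str(DeviceType.GP3))  # -> gp3
```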
class InstructionMetaclass(type):
def __new__(mcs, instruction_type: str):
return globals()[instruction_type]
def __call__(cls, action_id: str, *args, **kwargs):
if not issubclass(cls, StepInstruction):
raise TypeError(f"{cls.__name__} is not of StepInstruction type")
instruction = cls(action_id, *args, **kwargs)
return instruction
@classmethod
def deserialize_instruction(mcs, instruction_val: Union[str, dict]) -> 'StepInstruction':
if isinstance(instruction_val, str):
instruction_val = json.loads(instruction_val)
instruction_type = instruction_val.pop('instruction_type')
try:
    instruction = mcs(instruction_type)(**instruction_val)
except Exception as e:
    raise Exception(f"Failed to create instance from {instruction_type} class | Error: {e}")
instruction.set_values_from_dict(instruction_val)
return instruction
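`InstructionMetaclass` resolves a class object from its name via `globals()`. A plain-function sketch of the same dynamic dispatch, using hypothetical stub classes in place of the real instruction types:

```python
# Stub instruction classes standing in for the real ones.
class AddDiskCloud:
    pass

class RemoveDiskCloud:
    pass

def lookup(instruction_type: str):
    # Same trick the metaclass uses: resolve the class by name from globals().
    return globals()[instruction_type]

print(lookup("AddDiskCloud") is AddDiskCloud)  # -> True
```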
@dataclass
class StepInstruction:
action_id: str
step_id: Optional[str]
status: Optional[StepStatus]
def __post_init__(self) -> None:
if self.status is None:
self.status = StepStatus.INIT
if self.step_id is None:
self.step_id = str(uuid.uuid4())
self._instruction_type = None
def as_dict(self) -> dict:
dict_values = deepcopy(self.__dict__)
# Enum members
for key, value in filter(lambda t: isinstance(t[1], Enum), self.__dict__.items()):
    dict_values[key] = value.name
dict_values['instruction_type'] = self.instruction_type
dict_values.pop('_instruction_type')
return dict_values
def serialize(self) -> str:
return json.dumps(self.as_dict())
@property
def instruction_type(self):
if not self._instruction_type:
self._instruction_type = type(self).__name__
return self._instruction_type
def set_values_from_dict(self, instruction_data: dict):
# Set Enum members values
enum_members = list(filter(lambda k: isinstance(getattr(self, k), Enum), self.__dict__.keys()))
for key in enum_members:
enum_cls = getattr(self, key).__class__
setattr(self, key, getattr(enum_cls, instruction_data[key]))
for key, val in instruction_data.items():
if key in enum_members or key == 'instruction_type':
continue
setattr(self, key, val)
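The `as_dict` / `set_values_from_dict` pair above serializes `Enum` members by `.name` and restores them by member lookup on the enum class. A compact, self-contained round-trip (class and field names are illustrative):

```python
import json
import uuid
from dataclasses import dataclass
from enum import Enum, auto

class StepStatus(Enum):
    INIT = auto()
    DONE = auto()

@dataclass
class Step:
    step_id: str
    status: StepStatus

    def as_dict(self):
        # Enum members are stored by name, keeping the payload JSON-friendly.
        return {"step_id": self.step_id, "status": self.status.name}

    @classmethod
    def from_dict(cls, data):
        # Restore the member via name lookup on the Enum class.
        return cls(step_id=data["step_id"], status=StepStatus[data["status"]])

step = Step(str(uuid.uuid4()), StepStatus.INIT)
restored = Step.from_dict(json.loads(json.dumps(step.as_dict())))
print(restored.status is StepStatus.INIT)  # -> True
```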
# TODO: move correct/common elements here
@dataclass
class MachineInstruction(StepInstruction):
...
@dataclass
class CloudInstruction(StepInstruction):
...
@dataclass
class AddDiskCloud(CloudInstruction):
instance_id: str # Note: will need region/az but should
# be available from other source/context
dev_type: DeviceType
dev_delete_on_terminate: bool
dev_size_gb: int
dev_iops: Optional[int] = None
dev_throughput: Optional[int] = None
@dataclass
class AddDiskMachine(MachineInstruction):
dev_path: str
fs_id: str
fs_mount_path: str
volume_id: str
dev_map: Optional[str] = None
@dataclass
class ModifyDiskCloud(CloudInstruction):
dev_id: str # this is volume id
dev_type: Optional[DeviceType] = None
dev_size_gb: Optional[int] = None # Note: today this is change in size - should be final size
dev_iops: Optional[int] = None
dev_throughput: Optional[int] = None
# check if we need dev old size
def __post_init__(self) -> None:
super().__post_init__() # TODO: check if necessary for __post_init__
if self.dev_type is None \
and self.dev_size_gb is None \
and self.dev_iops is None \
and self.dev_throughput is None:
raise Exception("Must modify at least one attribute")
@dataclass
class DetachDiskCloud(CloudInstruction):
volume_id: str
@dataclass
class TakeSnapshotCloud(CloudInstruction):
dev_id: str
snapshot_id: str
@dataclass
class ExtendDiskSizeMachine(MachineInstruction):
# Note: this is a copy of AddDiskMachine
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
# dev_path: str # This is necessary for Monitored disk Extend only actions
# Probably better to have a separate payload/step
btrfs_dev_id: int
@dataclass
class ResizeDisksMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
resize_btrfs_dev_ids: Dict[str, int] # action_id, btrfs_dev_id
@dataclass
class GradualRemoveChunk:
iterations: int # Note: 0 iterations will represent delete
chunk_size_gb: int # Note: respective chunk_size for 0 iter represents pause/delete threshold
@dataclass
class RemoveDiskMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
btrfs_dev_id: int
dev_path: str
chunk_scheme: List[GradualRemoveChunk] # Note: Order matters
cutoff_gb: int
def as_dict(self) -> dict:
return {
"instruction_type": self.instruction_type,
"action_id": self.action_id,
"step_id": self.step_id,
"status": self.status.name,
"fs_id": self.fs_id,
"fs_mount_path": self.fs_mount_path,
"btrfs_dev_id": self.btrfs_dev_id,
"dev_path": self.dev_path,
"chunk_scheme": [{"iterations": chunk.iterations, "chunk_size_gb": chunk.chunk_size_gb} for chunk in
self.chunk_scheme], # Note: Order matters
"cutoff_gb": self.cutoff_gb
}
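The `chunk_scheme` encoding above is an ordered list where an entry with `iterations == 0` marks the final delete step, and its `chunk_size_gb` doubles as the pause/delete threshold. A sketch of building and serializing such a plan (sizes are illustrative):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GradualRemoveChunk:
    iterations: int     # 0 iterations represents delete
    chunk_size_gb: int  # for the 0-iteration entry: pause/delete threshold

# Order matters: shrink in 10 GiB steps three times, then delete below 5 GiB.
scheme: List[GradualRemoveChunk] = [
    GradualRemoveChunk(iterations=3, chunk_size_gb=10),
    GradualRemoveChunk(iterations=0, chunk_size_gb=5),
]
serialized = [
    {"iterations": c.iterations, "chunk_size_gb": c.chunk_size_gb} for c in scheme
]
print(serialized)
```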
###### TODO: discuss with Osher - separate instructions, or overlap into one
@dataclass
class ModifyRemoveDisksMachine(MachineInstruction):
# will service Revert/ChangingTarget/NewResize
# because they are all correlated to the same
# instruction data - changing cutoff
fs_id: str # will there be ability to grab this from context?
fs_mount_path: str
cutoff_gb: int
revert_btrfs_dev_ids: Dict[str, int] # action_id, btrfs_dev_id
###### TODO: END discuss with Osher
@dataclass
class RemoveDiskCloud(CloudInstruction):
volume_id: str
@dataclass
class StartMigrationMachine(MachineInstruction):
fs_id: str # will there be ability to grab this from context?
account_id: str
action_id: str
fs_mount_path: str
volume_id: str
dev_path: str
reboot: bool | zesty.zbs-api-securonix | /zesty.zbs_api_securonix-1.0.2023.8.29.1693298608-py3-none-any.whl/zesty/step_instructions.py | step_instructions.py |
import uuid as uuid_gen
from abc import abstractmethod
from typing import Dict
from .models.hf_interface import IActionHF, ISpecialInstructions
class ZBSActionData:
mount_path = None
device = None
filesystem = None
fs_type = None
is_partition = False
partition_number = None
LV = None
VG = None
lvm_path = None
chunk_size = None
def __init__(self, mount_path=None,
device=None,
filesystem=None,
fs_type=None,
is_partition=False,
partition_number=None,
LV=None,
VG=None,
lvm_path=None,
chunk_size=None,
dev_id=None,
dev_path=None,
parent=None,
btrfs_dev_id=None,
partition_id=None,
windows_old_size=None,
size=None,
_map=None):
self.mount_path = mount_path
self.filesystem = filesystem
self.fs_type = fs_type
self.device = device
self.is_partition = is_partition
self.partition_number = partition_number
self.LV = LV
self.VG = VG
self.lvm_path = lvm_path
self.chunk_size = chunk_size
self.dev_id = dev_id
self.dev_path = dev_path
self.parent = parent
self.btrfs_dev_id = btrfs_dev_id
self.partition_id = partition_id
self.windows_old_size = windows_old_size
self.size = size
self.map = _map
def serialize(self):
return self.__dict__
def set_data(self, json):
self.mount_path = json.get('mount_path')
self.filesystem = json.get('filesystem')
self.fs_type = json.get('fs_type')
self.device = json.get('device')
self.is_partition = json.get('is_partition', False)
self.partition_number = json.get('partition_number', '')
self.LV = json.get('LV', '')
self.VG = json.get('VG', '')
self.lvm_path = json.get('lvm_path', '')
self.chunk_size = json.get('chunk_size', 0)
self.dev_id = json.get('dev_id')
self.dev_path = json.get('dev_path')
self.parent = json.get('parent')
self.btrfs_dev_id = json.get('btrfs_dev_id')
self.partition_id = json.get('partition_id')
self.windows_old_size = json.get('windows_old_size')
self.size = json.get('size')
self.map = json.get('_map')
return self
class ZBSAgentReceiver:
    """
    The ZBSAgentReceiver (the Receiver class in the Command pattern) contains the important business logic.
    It knows how to perform any kind of action sent by the ZBS Backend.
    ZBSAgentReceiver is abstract; the concrete implementations are per OS.
    """
@abstractmethod
def do_nothing(self, data: ZBSActionData) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'do_nothing' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def extend_fs(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'extend_fs' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def add_disk(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'add_disk' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def balance_fs(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'balance_fs' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def remove_disk(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'remove_disk' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def balance_ebs_structure(self, data: ZBSActionData, action_id) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'balance_ebs_structure' is abstract, please implement a concrete per OS receiver")
@abstractmethod
def start_migration(self, data: ZBSActionData, action_id, account_id=None) -> None:
raise NotImplementedError(
            "ZBSAgentReceiver 'start_migration' is abstract, please implement a concrete per OS receiver")
class SpecialInstructions(ISpecialInstructions):
"""
Constructor for special instructions with optional parameters:
* dev_id: identify the device for the filesystem to which the action is attached
* size: specify the capacity for a new device or the additional capacity when extending a device
* sub_actions: when an action implements multiple actions, specify a dictionary:
-- { int(specifies action priorities): list(actions that can be run in parallel) }
-- Actions in a list keyed to a higher order cannot start until all Actions of lower orders complete
"""
def __init__(self, dev_id: str = None, size: int = None, sub_actions: Dict[int, Dict[str, IActionHF]] = None):
self.dev_id = dev_id
self.size = size
self.sub_actions = sub_actions
def __repr__(self):
return str(self.__dict__)
class ZBSAction(IActionHF):
"""
Base command class
Delegates the business logic to the receiver
There are receivers per OS (Linux and Windows for now)
"""
TYPE_FIELD_NAME = "type"
DATA_FIELD_NAME = "data"
STATUS_FIELD_NAME = "status"
UUID_FIELD_NAME = "uuid"
SPECIAL_INSTRUCTIONS_FIELD_NAME = "_ZBSAction__special_instructions"
__uuid = None
__status: IActionHF.Status = IActionHF.Status.NEW
__special_instructions: SpecialInstructions
subclasses = {}
def __init__(self, receiver: ZBSAgentReceiver = None, data: ZBSActionData = None, uuid: str = None):
self.receiver = receiver
self.data = data
if uuid is not None:
self.__uuid = uuid
else:
self.__uuid = str(uuid_gen.uuid4())
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
cls.subclasses[cls.__name__] = cls
def __repr__(self):
special_instructions = self.get_special_instructions() if isinstance(self.get_special_instructions(),
Dict) else self.get_special_instructions().__dict__
repr_dict = dict(zip(['Action Type', 'Action Status', 'SpecialInstructions'],
[self.get_action_type(),
str(self.get_status().name),
special_instructions]))
return str(repr_dict)
def set_data(self, data: ZBSActionData):
self.data = data
def set_receiver(self, receiver: ZBSAgentReceiver):
self.receiver = receiver
def serialize(self):
result = self.__dict__
result[ZBSAction.TYPE_FIELD_NAME] = self.get_action_type()
result[ZBSAction.DATA_FIELD_NAME] = self.data.serialize() if self.data is not None else None
result[ZBSAction.STATUS_FIELD_NAME] = self.get_status().name
result[ZBSAction.UUID_FIELD_NAME] = self.get_action_id()
if hasattr(self, '_ZBSAction__special_instructions'):
result[
ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME] = self.get_special_instructions().__dict__ if self.__special_instructions is not None else None
return result
# ActionHF interface implementation
def get_action_id(self) -> str:
return self.__uuid
def get_action_type(self) -> str:
return str(type(self).__name__)
def get_status(self) -> IActionHF.Status:
return self.__status
def set_status(self, status: IActionHF.Status):
self.__status = status
def get_special_instructions(self) -> SpecialInstructions:
return self.__special_instructions
def set_special_instructions(self, special_instructions: SpecialInstructions):
self.__special_instructions = special_instructions
@staticmethod
def deserialize_type(json):
return json[ZBSAction.TYPE_FIELD_NAME]
@staticmethod
def deserialize_data(json):
return ZBSActionData().set_data(json[ZBSAction.DATA_FIELD_NAME])
@staticmethod
def deserialize_uuid(serialized_action):
return serialized_action.get(ZBSAction.UUID_FIELD_NAME)
@staticmethod
def deserialize_status(serialized_action):
return serialized_action.get(ZBSAction.STATUS_FIELD_NAME)
@staticmethod
def deserialize_special_instructions(serialized_action):
if not isinstance(serialized_action, dict):
serialized_action = serialized_action.serialize()
        special_instructions_json = serialized_action.get(ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME) or {}
        special_instructions = SpecialInstructions(
            dev_id=special_instructions_json.get('dev_id'),
            size=special_instructions_json.get('size'),
            sub_actions=special_instructions_json.get('sub_actions'),
        )
        for key, val in special_instructions_json.items():
            if key not in ['dev_id', 'size', 'sub_actions']:
                setattr(special_instructions, str(key), val)
return special_instructions
@staticmethod
def deserialize_action(serialized_action):
action_type = ZBSAction.deserialize_type(serialized_action)
action_data = ZBSAction.deserialize_data(serialized_action) if serialized_action.get(
ZBSAction.DATA_FIELD_NAME) is not None else None
action_uuid = ZBSAction.deserialize_uuid(serialized_action)
action_status = ZBSAction.deserialize_status(serialized_action)
action_to_perform = ZBSActionFactory.create_action(action_type, action_uuid)
action_to_perform.set_data(action_data)
action_to_perform.set_status(IActionHF.Status[serialized_action.get('status')])
if ZBSAction.SPECIAL_INSTRUCTIONS_FIELD_NAME in serialized_action:
special_instructions = ZBSAction.deserialize_special_instructions(serialized_action)
action_to_perform.set_special_instructions(special_instructions)
return action_to_perform
@abstractmethod
def execute(self):
raise NotImplementedError("BaseAction is abstract, please implement a concrete action")
class DoNothingAction(ZBSAction):
"""
Do nothing action
"""
def execute(self):
print("Do nothing || Action ID : {}".format(self.get_action_id()))
class Factory:
def create(self, uuid): return DoNothingAction(uuid=uuid)
class ExtendFileSystemAction(ZBSAction):
"""
Extend File System Action.
"""
def execute(self, fs):
try:
return self.receiver.extend_fs(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return ExtendFileSystemAction(uuid=uuid)
class AddDiskAction(ZBSAction):
"""
Add Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.add_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return AddDiskAction(uuid=uuid)
class RemoveDiskAction(ZBSAction):
"""
Remove Disk Action.
"""
def execute(self, fs):
try:
return self.receiver.remove_disk(self.get_special_instructions(), self.get_action_id(), fs)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return RemoveDiskAction(uuid=uuid)
class BalanceFileSystemAction(ZBSAction):
"""
Balance File System Action.
"""
def execute(self):
try:
self.receiver.balance_fs(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceFileSystemAction(uuid=uuid)
class BalanceEBSStructureAction(ZBSAction):
"""
Balance EBS structure Action.
"""
def execute(self):
try:
self.receiver.extend_fs(self.data, self.get_action_id())
self.receiver.remove_disk(self.data, self.get_action_id())
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return BalanceEBSStructureAction(uuid=uuid)
class MigrationStartAction(ZBSAction):
"""
Migration Start Action.
    The purpose of this action is to handle a backend request to start a migration for a mount point
    Returns: whether the migration started successfully, or the error if it failed
"""
def execute(self, account_id):
try:
return self.receiver.start_migration(self.get_special_instructions(), self.get_action_id(), account_id)
except AttributeError as ex:
print("Failed to execute command '{}': error is '{}'".format(self.get_action_type(), ex))
class Factory:
def create(self, uuid): return MigrationStartAction(uuid=uuid)
class ZBSActionFactory:
actions = {}
@staticmethod
def create_action(action_type, uuid=None):
if action_type not in ZBSActionFactory.actions:
action_class = ZBSAction.subclasses.get(action_type)
if action_class:
ZBSActionFactory.actions[action_type] = action_class.Factory()
else:
raise ValueError(f'Could not find action class `{action_type}`')
return ZBSActionFactory.actions[action_type].create(uuid) | zesty.zbs-api-securonix | /zesty.zbs_api_securonix-1.0.2023.8.29.1693298608-py3-none-any.whl/zesty/actions.py | actions.py |
import json
from typing import List
import requests
import config as cfg
"""
USAGE:
First, initialize the factory with the base settings:
factory = RequestFactory(stage=${STAGE}, version=${VERSION}, api_key=${API_KEY})
Then create a request instance for the type of request you want to send:
metrics_request = factory.create_request("Metrics")
Pass the data to the set_data function:
metrics_request.set_data(
    agent_version,
    overview,
    plugins
)
Then send it to the BackEnd and receive the response:
response = metrics_request.send()
"""
DEFAULT_BASE_URL = "https://api{}.cloudvisor.io"
ESTABLISH_CONN_TIMEOUT = 10
RECEIVE_RESPONSE_TIMEOUT = 30
class RequestFactory:
requests = {}
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
    def create_request(self, request_type):
        if request_type not in RequestFactory.requests:
            request_class = Request.subclasses.get(request_type)
            if request_class is None:
                raise ValueError(f'Could not find request class `{request_type}`')
            RequestFactory.requests[request_type] = request_class.Factory(self.stage, self.version, self.api_key,
                                                                          self.api_base)
        return RequestFactory.requests[request_type].create()
class Request:
stage = None
version = None
api_key = None
prefix = None
api_base = None
api_is_private_endpoint = False
subclasses = {}
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.prefix = ""
if self.stage == 'staging':
self.prefix = "-staging"
if api_base != DEFAULT_BASE_URL:
self.api_is_private_endpoint = True
self.api_base = api_base.format(self.prefix)
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
cls.subclasses[cls.__name__] = cls
def send(self):
res = requests.post(
self.build_url(),
data=json.dumps(self.message, separators=(',', ':')),
headers={"Cache-Control": "no-cache", "Pragma": "no-cache", "x-api-key": self.api_key},
timeout=(ESTABLISH_CONN_TIMEOUT, RECEIVE_RESPONSE_TIMEOUT)
)
return self.Response(res)
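A sketch of how `Request` derives its base URL from the stage, mirroring the prefix logic in `Request.__init__` (the `demo_*` names are illustrative, not part of the API): the staging stage injects a `-staging` prefix into the host, every other stage leaves it empty.

```python
DEMO_BASE_URL = "https://api{}.cloudvisor.io"


def demo_build_base(stage: str) -> str:
    """Resolve the backend host for a deployment stage."""
    prefix = "-staging" if stage == "staging" else ""
    return DEMO_BASE_URL.format(prefix)
```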
class Metrics(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-metrics")
else:
return '{}{}'.format(self.api_base, cfg.post_metrics_ep)
def set_data(self, agent_version, overview, plugins, package_version=None, autoupdate_last_execution_time=None):
self.message = {
"agent": {
"version": agent_version,
"package_version": package_version,
"autoupdate_last_execution_time": autoupdate_last_execution_time
},
"overview": overview,
"plugins": plugins
}
class Response:
raw_data: dict = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return Metrics(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class MetricsCollection(Request):
message: List[dict] = []
def build_url(self):
if self.api_is_private_endpoint:
return f'{self.api_base}/bulk-post-metrics'
else:
return f'{self.api_base}{cfg.bulk_post_metrics_ep}'
def set_data(self, metrics: dict):
self.message.append(metrics)
def clear(self):
self.message = []
class Response:
raw_data: dict = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return MetricsCollection(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class NotifyException(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-notify-exception")
else:
return '{}{}'.format(self.api_base, cfg.notify_exception_ep)
def set_data(self, account_id, instance_id, exception, msg):
self.message = {
"exception": exception,
"message": msg,
"instance_id": instance_id,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
for k, v in self.raw_data.items():
setattr(self, str(k), v)
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return NotifyException(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class FsResizeCompleted(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-delete-resize-item")
else:
return '{}{}'.format(self.api_base, cfg.fs_resize_completed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return FsResizeCompleted(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class HoldingRemoveAction(Request):
message = {}
def build_url(self):
return '{}{}'.format(self.api_base, cfg.hold_remove_action_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, index, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"index": index,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return HoldingRemoveAction(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class FsResizeFailed(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-fs-resize-failed")
else:
return '{}{}'.format(self.api_base, cfg.resize_failed_ep)
def set_data(self, dev_path, filesystems, action_id, exit_code, resize_output, error, resize_steps, account_id):
self.message = {
"dev_path": dev_path,
"filesystems": filesystems,
"action_id": action_id,
"exit_code": exit_code,
"resize_output": resize_output,
"error": error,
"resize_steps": resize_steps,
"account_id": account_id
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return FsResizeFailed(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class SyncMachineActions(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-sync-machine-actions")
else:
return '{}{}'.format(self.api_base, cfg.sync_machine_actions_ep)
def set_data(self, action_id, account_id, status, fs_id):
self.message = {
'account_id': account_id,
'actions': {
action_id: {
'fs_id': fs_id,
'instruction_status': status
}
}
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return SyncMachineActions(stage=self.stage, version=self.version, api_key=self.api_key,
api_base=self.api_base)
class MigrationStartActionCompleted(Request):
message = {}
def build_url(self):
if self.api_is_private_endpoint:
return '{}{}'.format(self.api_base, "/post-migration-start-action-complete")
else:
return '{}{}'.format(self.api_base, cfg.migration_start_action_completed_ep)
def set_data(self, account_id, fs_id, action_id, mount_path, volume_id, region, cloud_vendor, dev_path, exit_code,
error):
self.message = {
"account_id": account_id,
"fs_id": fs_id,
"action_id": action_id,
"mount_path": mount_path,
"volume_id": volume_id,
"region": region,
"cloud_vendor": cloud_vendor,
"dev_path": dev_path,
"exit_code": exit_code,
"error": error
}
class Response:
raw_data = None
status_code = None
success = None
message = None
def __init__(self, res):
self.status_code = res.status_code
self.raw_data = res.json()
self.success = self.raw_data.get('Success')
self.message = self.raw_data.get('message')
class Factory:
stage = None
version = None
api_key = None
api_base = None
def __init__(self, stage, version, api_key, api_base: str = DEFAULT_BASE_URL):
self.stage = stage
self.version = version
self.api_key = api_key
self.api_base = api_base
def create(self): return MigrationStartActionCompleted(stage=self.stage, version=self.version,
api_key=self.api_key,
api_base=self.api_base) | zesty.zbs-api-securonix | /zesty.zbs_api_securonix-1.0.2023.8.29.1693298608-py3-none-any.whl/zesty/protocol.py | protocol.py |
import json
import time
import traceback
from typing import Dict, Optional
from copy import deepcopy
from decimal import Decimal
from zesty.id_handler import create_zesty_id, create_zesty_filesystem_id
from ..actions import ZBSAction
from .BlockDevice import BlockDevice
from .Usage import Usage
GB_IN_BYTES = 1024**3
class FileSystem:
"""
This object interacts with DynamoDB representing a FileSystem.
As per the data model migration ZES-2884,
these will be backwards compatible and awkward in appearance until
the code is brought up to date.
"""
def __init__(
self,
fs_id: str,
account_id: str = None,
account_uuid: str = None,
agent_update_required: bool = None,
btrfs_version: str = None,
cloud: str = None,
cloud_vendor: str = None,
cycle_period: int = None,
delete_on_termination: bool = None,
devices: Dict[str, BlockDevice] = None,
encrypted: dict = None,
existing_actions: Dict[str, ZBSAction] = None,
expiredAt: int = None,
fs_cost: float = None,
fs_devices_to_count: int = None,
fs_size: int = None,
fs_type: str = None,
fs_usage: int = None,
has_unallocated_space: bool = None,
inodes: Dict[str, Usage] = None,
instance_id: str = None,
instance_type: str = None,
is_ephemeral: bool = None,
is_partition: bool = None,
is_zesty_disk: bool = None,
label: str = None,
last_update: int = None,
LV: str = None,
lvm_path: str = None,
mount_path: str = None,
name: str = None,
org_id: str = None,
partition_id: str = None,
partition_number: int = None,
platform: str = None,
potential_savings: float = None,
region: str = None,
resizable: bool = None,
space: Dict[str, Usage] = None,
tags: Dict[str, str] = None,
unallocated_chunk: int = None,
update_data_ts: int = 0,
VG: str = None,
wrong_fs_alert: bool = None,
zesty_disk_iops: int = None,
zesty_disk_throughput: int = None,
zesty_disk_vol_type: str = None,
max_utilization_in_72_hrs: int = None,
package_version: str = None,
autoupdate_last_execution_time: str = None,
statvfs_raw_data: Dict[str, str] = None,
pvc_id: str = None,
mount_options: list = None,
leading_device: str = None,
policies: Dict[str, dict] = None,
instance_tags: Dict[str, str] = None,
is_manageable: bool = False, #related migration
is_emr: bool = False,
target: Optional[int] = None,
iops_tps_vol_type_triggered: bool = False,
iops_tps_vol_type_change_ts: Optional[int] = None
):
        # Initialize empty dicts here rather than as mutable default arguments
existing_actions = {} if existing_actions is None else existing_actions
devices = {} if devices is None else devices
inodes = {} if inodes is None else inodes
space = {} if space is None else space
tags = {} if tags is None else tags
instance_tags = {} if instance_tags is None else instance_tags
self.account_id = account_id
self.account_uuid = account_uuid
self.agent_update_required = agent_update_required
self.btrfs_version = btrfs_version
if cloud is None and cloud_vendor is None:
self.cloud = 'Amazon'
self.cloud_vendor = 'Amazon'
elif cloud:
self.cloud = cloud
self.cloud_vendor = cloud
elif cloud_vendor:
self.cloud = cloud_vendor
self.cloud_vendor = cloud_vendor
self.cycle_period = cycle_period
self.devices = self.init_devices(devices)
self.delete_on_termination = delete_on_termination
self.encrypted = encrypted
self.existing_actions = existing_actions
self.expiredAt = expiredAt
self.fs_cost = fs_cost
self.fs_devices_to_count = fs_devices_to_count
try:
self.fs_id = create_zesty_filesystem_id(
cloud=self.cloud_vendor,
fs_id=fs_id
)
        except Exception:
            self.fs_id = fs_id
self.fs_size = fs_size
self.fs_type = fs_type
self.fs_usage = fs_usage
self.has_unallocated_space = has_unallocated_space
self.inodes = Usage(inodes)
self.instance_id = instance_id
self.instance_type = instance_type
self.is_ephemeral = is_ephemeral
self.is_partition = is_partition
self.is_zesty_disk = is_zesty_disk
self.label = label
if last_update is None:
self.last_update = int(time.time()) - 60
else:
self.last_update = last_update
self.LV = LV
self.lvm_path = lvm_path
self.mount_path = mount_path
self.name = name
self.org_id = org_id
self.partition_id = partition_id
self.partition_number = partition_number
self.platform = platform
self.potential_savings = potential_savings
self.region = region
self.resizable = resizable
self.space = Usage(space)
self.tags = tags
self.unallocated_chunk = unallocated_chunk
self.update_data_ts = update_data_ts
self.VG = VG
self.wrong_fs_alert = wrong_fs_alert
self.zesty_disk_iops = zesty_disk_iops
self.zesty_disk_throughput = zesty_disk_throughput
self.zesty_disk_vol_type = zesty_disk_vol_type
self.max_utilization_in_72_hrs = max_utilization_in_72_hrs
self.package_version = package_version
self.autoupdate_last_execution_time = autoupdate_last_execution_time
self.statvfs_raw_data = statvfs_raw_data
self.pvc_id = pvc_id
self.mount_options = mount_options
self.leading_device = leading_device
self.policies = policies
self.instance_tags = instance_tags
self.is_manageable = is_manageable #related migration
self.is_emr = is_emr
self.target = target
self.iops_tps_vol_type_triggered = iops_tps_vol_type_triggered
self.iops_tps_vol_type_change_ts = iops_tps_vol_type_change_ts
@staticmethod
def init_devices(devices: Dict[str, BlockDevice]):
if not devices:
return {}
else:
devices = deepcopy(devices)
for dev in devices:
if isinstance(devices[dev], BlockDevice):
continue
devices[dev] = BlockDevice(
**devices.get(dev, {})
)
return devices
def as_dict(self) -> dict:
return_dict = json.loads(json.dumps(self, default=self.object_dumper))
return {k: v for k, v in return_dict.items() if v is not None}
@staticmethod
def object_dumper(obj) -> dict:
try:
return obj.__dict__
        except AttributeError:
if isinstance(obj, Decimal):
return int(obj)
print(f"Got exception in object_dumper value: {obj} | type : {type(obj)}")
print(traceback.format_exc())
return obj
def serialize(self) -> dict:
return self.as_dict()
def __repr__(self) -> str:
return f"FileSystem:{self.fs_id}" | zesty.zbs-api-securonix | /zesty.zbs_api_securonix-1.0.2023.8.29.1693298608-py3-none-any.whl/zesty/models/FileSystem.py | FileSystem.py |
import enum
from typing import Dict, Union
from sqlalchemy import func
from sqlalchemy.orm import Session, sessionmaker, Query
from sqlalchemy.sql.elements import or_, Label
from sqlalchemy.ext.hybrid import hybrid_property
from .InstancesTags import InstancesTags
from .common_base import Base
try:
from sqlalchemy import Column, engine, case, func, cast, String, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects.postgresql import BOOLEAN, FLOAT, INTEGER, BIGINT, \
JSON, TIMESTAMP, VARCHAR
except ImportError:
raise ImportError("sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
class EbsVolumeConfig(enum.Enum):
none = 'None'
unattached = 'Unattached'
potentialZesty = 'Potential ZestyDisk'
class EbsVolume(Base):
# TODO: Move this model into our Alembic system
# when a modification of this model is needed.
__tablename__ = "disks"
volume_id = Column(VARCHAR, primary_key=True)
org_id = Column(VARCHAR, index=True)
account_uuid = Column(VARCHAR, index=True)
account_id = Column(VARCHAR, index=True)
region = Column(VARCHAR, index=True)
volume_type = Column(VARCHAR, index=True)
cloud = Column(VARCHAR, index=True)
availability_zone = Column(VARCHAR)
create_time = Column(TIMESTAMP)
encrypted = Column(BOOLEAN)
size = Column(INTEGER)
snapshot_id = Column(VARCHAR)
state = Column(VARCHAR)
iops = Column(INTEGER)
tags = Column(JSON)
attachments = Column(JSON)
attached_to = Column(JSON)
monthly_cost = Column(FLOAT, default=0)
is_unused_resource = Column(INTEGER, default=0)
unused_since = Column(VARCHAR)
agent_installed = Column(BOOLEAN, default=False)
_zbs_supported_os = Column(INTEGER)
potential_savings = Column(FLOAT, default=0)
image_id = Column(VARCHAR, nullable=True)
image_name = Column(VARCHAR, nullable=True)
# dict for custom_order_by class method
col_to_actual_sorting_col = {"instance_tags": "instance_tags_keys"}
def __init__(
self,
volume_aws_schema: Dict,
account_uuid: str = None):
if account_uuid:
self.account_uuid = account_uuid
else:
self.account_uuid = volume_aws_schema["account_uuid"]
self.volume_id = volume_aws_schema["volume_id"]
self.org_id = volume_aws_schema["org_id"]
self.account_id = volume_aws_schema["account_id"]
self.cloud = volume_aws_schema["cloud"]
self.region = volume_aws_schema["region"]
self.volume_type = volume_aws_schema["volume_type"]
self.availability_zone = volume_aws_schema["availability_zone"]
self.create_time = volume_aws_schema["create_time"]
self.encrypted = volume_aws_schema["encrypted"]
self.size = volume_aws_schema["size"]
self.snapshot_id = volume_aws_schema["snapshot_id"]
self.state = volume_aws_schema["state"]
self.iops = volume_aws_schema.get("iops", 0)
self.tags = volume_aws_schema.get("tags", {})
self.attachments = volume_aws_schema.get("attachments", [])
self.attached_to = volume_aws_schema.get("attached_to", [])
self.monthly_cost = volume_aws_schema.get("monthly_cost", 0)
self.is_unused_resource = volume_aws_schema.get(
"is_unused_resource", 0)
self.unused_since = volume_aws_schema.get("unused_since", None)
self.agent_installed = volume_aws_schema.get("agent_installed", False)
self.potential_savings = volume_aws_schema.get("potential_savings", 0)
self._zbs_supported_os = volume_aws_schema.get("_zbs_supported_os")
self.image_id = volume_aws_schema.get("ami_id")
self.image_name = volume_aws_schema.get("ami_name")
def __repr__(self):
return f"{self.__tablename__}:{self.volume_id}"
@classmethod
def instance_id_filter(cls, query: Query, value: str):
val = f'%{value}%'
query = query.filter(
case((or_(cls.attached_to == None, func.json_array_length(cls.attached_to) == 0), False),
else_=cast(cls.attached_to, String).ilike(val)))
return query
@classmethod
def instance_name_filter(cls, query: Query, value: str):
subq = query.session.query(InstancesTags.instance_name)
val = '%{}%'.format(value.replace("%", "\\%"))
query = query.filter((subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (
cls.account_id == InstancesTags.account_id))).ilike(val))
return query
@classmethod
def instance_tags_filter(cls, query: Query, value: str):
session = query.session
subq = session.query(InstancesTags.instance_tags)
python_types_to_pg = {int: BIGINT, float: FLOAT, bool: BOOLEAN}
for key_val in value:
key = key_val.get('key')
val = key_val.get('value')
if key is not None and val is not None:
if not isinstance(val, str):
query = query.filter(cast(cast(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)).op('->')(key)), String), python_types_to_pg[type(val)]) == val)
else:
val = f'%{val}%'
query = query.filter(cast(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)).op('->')(key)), String).ilike(val))
elif key is not None:
query = query.filter(func.jsonb(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id))).op('?')(key))
elif val is not None:
if isinstance(val, str):
query = query.filter(cast(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)), String)
.regexp_replace(r'.+\: "[^"]*(' + str(val) + r')[^"]*"[,\s}].*', "\\1") == f"{val}")
else:
if isinstance(val, bool):
val = f'"{val}"'
query = query.filter(cast(subq.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) & (cls.account_id == InstancesTags.account_id)), String)
.regexp_replace(r'.+\: (' + str(val) + r')[,\s}].*', "\\1") == f"{val}")
return query
# Custom query
@classmethod
def custom_query(cls, session: Union[Session, sessionmaker]) -> Query:
q = session.query(cls)
subq_2 = session.query(func.json_object_keys(InstancesTags.instance_tags))
subq_3 = session.query(InstancesTags.instance_tags)
instance_name_clause = "regexp_replace(cast(array((select instances_tags.instance_name from instances_tags " \
"inner join json_array_elements(disks.attached_to) as attached_to_set " \
"on instances_tags.instance_id = replace(cast(attached_to_set.value as varchar), '\"', '') " \
"and instances_tags.account_id = disks.account_id)) as varchar), '[\\{\\}\"]', '', 'g')"
q = q.add_columns(case((or_(cls.attached_to == None, func.json_array_length(cls.attached_to) == 0), ''),
else_=cast(cls.attached_to, String).regexp_replace(r'[\[\]"]', '', 'g'))
.label("instance_id"),
Label('instance_name', text(instance_name_clause)),
func.array(subq_2.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) &
(cls.account_id == InstancesTags.account_id)))
.label('instance_tags_keys'),
subq_3.scalar_subquery().where(
(func.jsonb(cls.attached_to).op('->>')(0) == InstancesTags.instance_id) &
(cls.account_id == InstancesTags.account_id))
.label('instance_tags'))
return q
@classmethod
def custom_order_by(cls, sorting_column: str, sorting_order: str) -> str:
actual_sorting_column = cls.col_to_actual_sorting_col.get(sorting_column, sorting_column)
return f"{actual_sorting_column} {sorting_order}"
def get_volume_id(self):
return self.volume_id
def as_dict(self):
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
@hybrid_property
def is_attached(self):
return len(self.attached_to) > 0
@is_attached.expression
def is_attached(cls):
return func.json_array_length(cls.attached_to) > 0
def create_tables(engine: engine.base.Engine) -> None:  # type: ignore
    Base.metadata.create_all(engine, checkfirst=True)

# === zesty/models/EbsVolume.py (end of file) ===
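The `custom_order_by`/`col_to_actual_sorting_col` pair above is a plain dict-fallback mapping from a requested sort column to the actual SQL sorting column; a standalone sketch of the same logic:

```python
# Map a UI-facing column name to the real sorting column; fall back to the
# name itself when no mapping exists (mirrors the classmethod above).
col_to_actual_sorting_col = {"instance_tags": "instance_tags_keys"}

def custom_order_by(sorting_column: str, sorting_order: str) -> str:
    actual = col_to_actual_sorting_col.get(sorting_column, sorting_column)
    return f"{actual} {sorting_order}"

print(custom_order_by("instance_tags", "desc"))  # instance_tags_keys desc
print(custom_order_by("region", "asc"))          # region asc
```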
import time
from enum import Enum, auto
from typing import Dict, List, Optional, Union
from uuid import UUID as _UUID
from uuid import uuid4
from sqlalchemy import INT
from sqlalchemy import Enum as sa_ENUM
from sqlalchemy.dialects.postgresql import ARRAY, UUID
from sqlalchemy.sql.schema import ForeignKey
try:
from sqlalchemy import Column, String, case, cast, engine, func, or_
from sqlalchemy.dialects.postgresql import (BIGINT, BOOLEAN, FLOAT, JSON,
TIMESTAMP, VARCHAR)
from sqlalchemy.orm import Query, Session, aliased, sessionmaker
except ImportError:
raise ImportError(
"sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
from ..actions import ZBSAction
from .BlockDevice import BlockDevice
from .common_base import Base, BaseMixin
from .Usage import Usage
class ManagedFsMixin:
fs_id = Column(VARCHAR, primary_key=True)
account_id = Column(VARCHAR, index=True, default=None)
account_uuid = Column(VARCHAR, index=True, default=None)
agent_update_required = Column(BOOLEAN, default=None)
btrfs_version = Column(VARCHAR, default=None)
cloud = Column(VARCHAR, default=None)
cloud_vendor = Column(VARCHAR, default=None)
cycle_period = Column(BIGINT, default=None)
delete_on_termination = Column(BOOLEAN, default=None)
devices = Column(JSON, default=None)
encrypted = Column(JSON, default=None)
existing_actions = Column(JSON, default=None)
expiredAt = Column(BIGINT, default=None)
fs_cost = Column(FLOAT, default=None)
fs_devices_to_count = Column(BIGINT, default=None)
fs_size = Column(BIGINT, default=None)
fs_type = Column(VARCHAR, default=None)
fs_usage = Column(BIGINT, default=None)
has_unallocated_space = Column(BOOLEAN, default=None)
inodes = Column(JSON, default=None)
instance_id = Column(VARCHAR, default=None)
instance_type = Column(VARCHAR, default=None)
is_ephemeral = Column(BOOLEAN, default=None)
is_partition = Column(BOOLEAN, default=None)
is_zesty_disk = Column(BOOLEAN, default=None)
label = Column(VARCHAR, default=None)
last_update = Column(BIGINT, default=None)
LV = Column(VARCHAR, default=None)
lvm_path = Column(VARCHAR, default=None)
mount_path = Column(VARCHAR, default=None)
name = Column(VARCHAR, default=None)
org_id = Column(VARCHAR, index=True)
partition_id = Column(VARCHAR, default=None)
partition_number = Column(BIGINT, default=None)
platform = Column(VARCHAR, default=None)
potential_savings = Column(FLOAT, default=None)
region = Column(VARCHAR, index=True)
resizable = Column(BOOLEAN, default=None)
space = Column(JSON, default=None)
tags = Column(JSON, default=None)
unallocated_chunk = Column(BIGINT, default=None)
update_data_ts = Column(BIGINT, default=0)
VG = Column(VARCHAR, default=None)
wrong_fs_alert = Column(BOOLEAN, default=None)
zesty_disk_iops = Column(BIGINT, default=None)
zesty_disk_throughput = Column(BIGINT, default=None)
zesty_disk_vol_type = Column(VARCHAR, default=None)
max_utilization_in_72_hrs = Column(BIGINT, default=None)
package_version = Column(VARCHAR, default=None)
autoupdate_last_execution_time = Column(VARCHAR, default=None)
policies = Column(JSON, default=None)
instance_tags = Column(JSON, default=None)
migration_uuid = Column(UUID(as_uuid=True), nullable=True)
is_manageable = Column(BOOLEAN, default=False)
iops_tps_vol_type_triggered = Column(BOOLEAN, default=False)
iops_tps_vol_type_change_ts = Column(BIGINT, nullable=True, default=None)
# dict for custom_order_by class method
col_to_actual_sorting_col = {
"policies": "policies_name",
"instance_tags": "instance_tags_keys"}
def __init__(
self,
fs_id: str,
account_id: str = None,
account_uuid: str = None,
agent_update_required: bool = None,
btrfs_version: str = None,
cloud: str = None,
cloud_vendor: str = None,
cycle_period: int = None,
delete_on_termination: bool = None,
devices: Dict[str, BlockDevice] = None,
encrypted: dict = None,
existing_actions: Dict[str, ZBSAction] = None,
expiredAt: int = None,
fs_cost: float = None,
fs_devices_to_count: int = None,
fs_size: int = None,
fs_type: str = None,
fs_usage: int = None,
has_unallocated_space: bool = None,
inodes: Dict[str, Usage] = None,
instance_id: str = None,
instance_type: str = None,
is_ephemeral: bool = None,
is_partition: bool = None,
is_zesty_disk: bool = None,
label: str = None,
last_update: int = None,
LV: str = None,
lvm_path: str = None,
mount_path: str = None,
name: str = None,
org_id: str = None,
partition_id: str = None,
partition_number: int = None,
platform: str = None,
potential_savings: float = None,
region: str = None,
resizable: bool = None,
space: Dict[str, Usage] = None,
tags: Dict[str, str] = None,
unallocated_chunk: int = None,
update_data_ts: int = 0,
VG: str = None,
wrong_fs_alert: bool = None,
zesty_disk_iops: int = None,
zesty_disk_throughput: int = None,
zesty_disk_vol_type: str = None,
max_utilization_in_72_hrs: int = None,
package_version: str = None,
autoupdate_last_execution_time: str = None,
statvfs_raw_data: Dict[str, str] = None, # unused to support initialization with **dict, do not remove
policies: Dict[str, dict] = None,
instance_tags: Dict[str, str] = None,
is_emr: bool = False, # unused to support initialization with **dict, do not remove
is_manageable: bool = False,
iops_tps_vol_type_triggered: bool = False,
iops_tps_vol_type_change_ts: Optional[int] = None,
**kwargs
):
self.fs_id = fs_id
self.account_id = account_id
self.account_uuid = account_uuid
self.agent_update_required = agent_update_required
self.btrfs_version = btrfs_version
if cloud is None and cloud_vendor is None:
self.cloud = 'Amazon'
self.cloud_vendor = 'Amazon'
elif cloud:
self.cloud = cloud
self.cloud_vendor = cloud
elif cloud_vendor:
self.cloud = cloud_vendor
self.cloud_vendor = cloud_vendor
self.cycle_period = cycle_period
self.delete_on_termination = delete_on_termination
self.devices = devices
if devices:
for dev in self.devices:
if isinstance(self.devices[dev], BlockDevice):
self.devices[dev] = self.devices[dev].asdict()
else:
self.devices[dev] = self.devices.get(dev, {})
self.encrypted = encrypted
        if existing_actions:
            self.existing_actions = {
                action: existing_actions[action].serialize()
                for action in existing_actions
            }
self.expiredAt = expiredAt
self.fs_cost = fs_cost
self.fs_devices_to_count = fs_devices_to_count
self.fs_size = fs_size
self.fs_type = fs_type
self.fs_usage = fs_usage
self.has_unallocated_space = has_unallocated_space
self.inodes = inodes
self.instance_id = instance_id
self.instance_type = instance_type
self.is_ephemeral = is_ephemeral
self.is_partition = is_partition
self.is_zesty_disk = is_zesty_disk
self.label = label
if last_update:
self.last_update = last_update
else:
self.last_update = int(time.time()) - 60
self.LV = LV
self.lvm_path = lvm_path
self.mount_path = mount_path
self.name = name
self.org_id = org_id
self.partition_id = partition_id
self.partition_number = partition_number
self.platform = platform
self.potential_savings = potential_savings
self.region = region
self.resizable = resizable
self.space = space
self.tags = tags
self.unallocated_chunk = unallocated_chunk
self.update_data_ts = update_data_ts
self.VG = VG
self.wrong_fs_alert = wrong_fs_alert
self.zesty_disk_iops = zesty_disk_iops
self.zesty_disk_throughput = zesty_disk_throughput
self.zesty_disk_vol_type = zesty_disk_vol_type
self.max_utilization_in_72_hrs = max_utilization_in_72_hrs
self.package_version = package_version
self.autoupdate_last_execution_time = autoupdate_last_execution_time
self.policies = policies
self.instance_tags = instance_tags
self.is_manageable = is_manageable
self.iops_tps_vol_type_triggered = iops_tps_vol_type_triggered
self.iops_tps_vol_type_change_ts = iops_tps_vol_type_change_ts
def __repr__(self) -> str:
return f"{self.__tablename__}:{self.fs_id}"
def asdict(self) -> dict:
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def as_dict(self) -> dict:
return self.asdict()
# Custom filters
@classmethod
def policies_filter(cls, query: Query, value: str):
query = query.filter(
cast(cls.policies, String).contains(f'"name": "{value}"'))
return query
@classmethod
def instance_name_filter(cls, query: Query, value: str):
val = '%{}%'.format(value.replace("%", "\\%"))
query = query.filter(
case((cls.instance_tags == None, ''), else_=func.replace(cast(cls.instance_tags.op('->')('Name'), String), "\"", "")).ilike(val))
return query
# Custom query
@classmethod
def custom_query(cls, session: Union[Session, sessionmaker]) -> Query:
clsb = aliased(cls)
subq = session.query(func.json_object_keys(clsb.instance_tags))
q = session.query(cls)
q = q.add_columns(case((or_(cls.policies == None, cast(cls.policies, String) == 'null'), ''),
else_=cast(cls.policies, String).regexp_replace(r'.+"name":\s"([^"]+).+', "\\1"))
.label("policies_name"),
case((cls.instance_tags == None, ''),
else_=func.replace(cast(cls.instance_tags.op('->')('Name'), String), "\"", ""))
.label('instance_name'),
case((cast(cls.instance_tags, String) == 'null', []),
else_=func.array(subq.scalar_subquery().where(cls.fs_id == clsb.fs_id)))
.label('instance_tags_keys')
)
return q
@classmethod
def custom_order_by(cls, sorting_column: str, sorting_order: str) -> str:
actual_sorting_column = cls.col_to_actual_sorting_col.get(
sorting_column, sorting_column)
return f"{actual_sorting_column} {sorting_order}"
class ManagedFs(ManagedFsMixin, BaseMixin, Base):
__tablename__ = "managed_filesystems"
class MigrationStatus(Enum):
Active = auto()
Aborting = auto()
Aborted = auto()
Completed = auto()
Failed = auto()
class RunningMigrations(BaseMixin, Base):
__tablename__ = "active_migration"
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), nullable=False, primary_key=True)
finished_at = Column(TIMESTAMP, nullable=True)
account_id = Column(VARCHAR, default=None)
region = Column(VARCHAR(255))
reboot = Column(BOOLEAN, default=False)
# array of day numbers when reboot is allowed 0-6
days = Column(ARRAY(VARCHAR))
# timeframe from-to in %I:%M %p
from_ = Column(VARCHAR)
to = Column(VARCHAR)
status = Column(sa_ENUM(MigrationStatus), nullable=False, server_default=MigrationStatus.Active.name)
is_rebooting = Column(BOOLEAN, default=False) # TODO: can this be deleted?
snapshot_id = Column(VARCHAR(255))
snapshot_remove_after = Column(INT, nullable=True) # in days
snapshot_create_started_at = Column(TIMESTAMP, nullable=True)
snapshot_deleted_at = Column(TIMESTAMP, nullable=True)
ebs_id = Column(VARCHAR(255))
ebs_remove_after = Column(INT, nullable=True) # in days
ebs_detached_at = Column(TIMESTAMP, nullable=True)
ebs_deleted_at = Column(TIMESTAMP, nullable=True)
def __init__(
self,
fs_id: str,
migration_uuid: _UUID,
account_id: str = None,
region: str = None,
days: Optional[List[int]] = None,
from_: Optional[str] = None,
to: Optional[str] = None,
reboot: bool = False,
status: MigrationStatus = MigrationStatus.Active,
ebs_remove_after: int = 1,
snapshot_remove_after: int = 7):
self.migration_uuid = migration_uuid
self.fs_id = fs_id
self.account_id = account_id
self.region = region
self.days = days
self.from_ = from_
self.to = to
self.reboot = reboot
self.status = status
self.ebs_remove_after = ebs_remove_after
self.snapshot_remove_after = snapshot_remove_after
@staticmethod
def new_migration(
fs_id,
days: Optional[List[int]] = None,
from_: Optional[str] = None,
to: Optional[str] = None,
reboot: bool = False,
ebs_remove_after: int = 1,
snapshot_remove_after: int = 7) -> 'RunningMigrations':
        return RunningMigrations(
            fs_id,
            uuid4(),
            days=days,
            from_=from_,
            to=to,
            reboot=reboot,
            ebs_remove_after=ebs_remove_after,
            snapshot_remove_after=snapshot_remove_after,
        )
class MigrationHistory(BaseMixin, Base):
__tablename__ = "migration_history"
time_start = Column(TIMESTAMP)
time_end = Column(TIMESTAMP)
status = Column(VARCHAR)
phase = Column(VARCHAR, primary_key=True)
progress = Column(FLOAT)
completed = Column(BOOLEAN)
failed = Column(BOOLEAN)
failure_reason = Column(VARCHAR)
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), ForeignKey("active_migration.migration_uuid", ondelete="CASCADE"),
nullable=False, primary_key=True, index=True)
# should be returned from the agent in seconds
estimated = Column(INT)
name = Column(VARCHAR)
weight = Column(INT)
abortable = Column(BOOLEAN)
index = Column(INT, primary_key=True)
def __init__(
self,
status: str,
phase: str,
progress: int,
eta: int,
name: str,
weight: int,
abortable: bool,
start_time: int,
end_time: int,
migration_uuid: 'UUID',
fs_id: str,
index: int):
self.status = status
self.phase = phase
self.progress = progress
self.estimated = eta
self.name = name
self.weight = weight
self.time_start = start_time
self.time_end = end_time
self.abortable = abortable
self.index = index
self.migration_uuid = migration_uuid
self.fs_id = fs_id
class WrongActionException(Exception):
pass
class MigrationActions(BaseMixin, Base):
__tablename__ = "migration_actions"
id = Column(INT, primary_key=True, autoincrement=True)
fs_id = Column(VARCHAR)
migration_uuid = Column(UUID(as_uuid=True), ForeignKey("active_migration.migration_uuid", ondelete="CASCADE"),
nullable=False)
action = Column(VARCHAR)
value = Column(VARCHAR)
allowed_actions = ['start', 'reboot', 'reboot_now', 'abort']
def __init__(self, fs_id, migration_uuid, action, value):
self.fs_id = fs_id
self.migration_uuid = migration_uuid
self.set_action(action)
self.value = value
def set_action(self, action):
if action not in self.allowed_actions:
raise WrongActionException
self.action = action
def create_tables(engine: engine.base.Engine) -> None:
    Base.metadata.create_all(engine, checkfirst=True)

# === zesty/models/ManagedFS.py (end of file) ===
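The `MigrationActions.set_action` guard above reduces to a simple allow-list check before the value is persisted; a standalone sketch (the function name `validate_action` is illustrative, not part of the package):

```python
# Reject any action outside the allowed set, as MigrationActions does.
class WrongActionException(Exception):
    pass

ALLOWED_ACTIONS = ('start', 'reboot', 'reboot_now', 'abort')

def validate_action(action: str) -> str:
    if action not in ALLOWED_ACTIONS:
        raise WrongActionException(f"unsupported action: {action!r}")
    return action
```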
import json
import time
from datetime import datetime
from typing import Dict, Union
from .common_base import Base, BaseMixin
try:
from sqlalchemy import (Column, PrimaryKeyConstraint, String, case, cast,
engine, func, or_, select, text)
from sqlalchemy.dialects.postgresql import (BIGINT, BOOLEAN, FLOAT, JSON,
VARCHAR)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Query, Session, aliased, sessionmaker
except ImportError:
raise ImportError(
"sqlalchemy is required by zesty.zbs-api but needs to be vendored separately. Add postgres-utils to your project's requirements that depend on zbs-api.")
class InstancesTags(BaseMixin, Base):
__tablename__ = "instances_tags"
instance_id = Column(VARCHAR, primary_key=True)
account_id = Column(VARCHAR, index=True, default=None)
account_uuid = Column(VARCHAR, index=True, default=None)
instance_name = Column(VARCHAR, default=None)
instance_tags = Column(JSON, default=None)
expired_at = Column(BIGINT, default=None)
__table_args__ = (
PrimaryKeyConstraint('instance_id', name='instances_tags_pkey'),)
def __init__(
self,
instance_id: str,
account_id: str = None,
account_uuid: str = None,
instance_name: str = None,
instance_tags: dict = None,
expired_at: int = None
):
self.instance_id = instance_id
self.account_id = account_id
self.account_uuid = account_uuid
self.instance_name = instance_name
self.instance_tags = instance_tags
self.expired_at = expired_at or int(datetime.utcnow().timestamp()) + 3 * 3600
def __eq__(self, other) -> bool:
return self.__hash__() == other.__hash__()
def __hash__(self) -> int:
return hash(''.join(map(lambda c: getattr(self, c.name) or '',
filter(lambda c: c.name not in ['instance_tags', 'expired_at', 'created_at', 'updated_at'],
self.__table__.columns))) +
json.dumps(self.instance_tags))
def __repr__(self) -> str:
return f"{self.__tablename__}:{self.instance_id}"
def asdict(self) -> dict:
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
def as_dict(self) -> dict:
return self.asdict()
def create_tables(engine: engine.base.Engine) -> None:
    Base.metadata.create_all(engine, checkfirst=True)

# === zesty/models/InstancesTags.py (end of file) ===
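`InstancesTags.__hash__` above builds a content hash from the non-volatile columns plus the tags JSON, so two rows compare equal when everything but the volatile fields matches; a standalone sketch of that fingerprint pattern over plain dicts:

```python
import json

def record_fingerprint(record: dict,
                       ignore=("instance_tags", "expired_at", "created_at", "updated_at")) -> int:
    # Concatenate scalar fields (skipping volatile ones), append the tags as
    # canonical JSON, and hash the result -- mirrors InstancesTags.__hash__.
    scalars = ''.join(str(record.get(k) or '') for k in sorted(record) if k not in ignore)
    return hash(scalars + json.dumps(record.get("instance_tags") or {}, sort_keys=True))
```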
import json
import traceback
from decimal import Decimal
from typing import Dict
from zesty.id_handler import create_zesty_id
GB_IN_BYTES = 1024**3
class BlockDevice:
def __init__(
self,
size: int,
btrfs_dev_id: str = None,
cloud_vendor: str = 'Amazon',
created: str = None,
dev_usage: int = None,
iops: int = None,
throughput: int = None,
lun: int = None,
map: str = None,
iops_stats: Dict[str, int] = None,
parent: str = None,
unlock_ts: int = 0,
volume_id: str = None,
volume_type: str = None,
device: str = None,
btrfs_size: int = None,
extendable: bool = True,
removable: bool = True
):
"""
Block Device class doc:
:param size: Size of the device in Bytes
:param btrfs_dev_id: ID of the device inside the BTRFS structure
:param cloud_vendor: Cloud vendor (AWS/Azure/GCP)
:param created: Device creation date
:param dev_usage: How much of the device is in use (in Bytes)
:param iops: Device IOPS amount
:param lun: LUN number (Only for Azure)
:param map: The mount slot of the device inside the OS
:param iops_stats: Dict with IOPS statistics
:param parent: If it's a partition so this one represent the parent device
:param unlock_ts: TS when the device will be ready to be extended again
:param volume_id: Device ID
:param volume_type: Type of the device in the cloud
:param device: Device mount slot from the cloud
:param btrfs_size: The usable size for the filesystem in bytes
:param extendable: Whether ZestyDisk Handsfree logic is allowed to extend the device
:param removable: Whether ZestyDisk Handsfree logic is allowed to remove the device from the filesystem
"""
# Init empty dict here instead of passing as default value
iops_stats = {} if iops_stats is None else iops_stats
self.size = size
self.cloud_vendor = cloud_vendor
try:
self.volume_id = create_zesty_id(
cloud=self.cloud_vendor,
resource_id=volume_id
)
except:
self.volume_id = volume_id
self.btrfs_dev_id = btrfs_dev_id
self.created = created
self.dev_usage = dev_usage
self.iops = iops
self.throughput = throughput
self.lun = lun
self.map = map
self.iops_stats = iops_stats
if device:
self.device = device
if parent:
self.parent = parent
if not unlock_ts:
self.unlock_ts = 0
else:
self.unlock_ts = unlock_ts
self.volume_type = volume_type
self.extendable = extendable
self.removable = removable
# if btrfs_size is None (e.g- windows/old collector, set btrfs_size to size)
self.btrfs_size = btrfs_size if btrfs_size is not None else size
def as_dict(self) -> dict:
return_dict = json.loads(json.dumps(self, default=self.object_dumper))
return {k: v for k, v in return_dict.items() if v is not None}
@staticmethod
def object_dumper(obj) -> dict:
try:
return obj.__dict__
except AttributeError as e:
if isinstance(obj, Decimal):
return int(obj)
print(f"Got exception in object_dumper value: {obj} | type : {type(obj)}")
print(traceback.format_exc())
return obj
def serialize(self) -> dict:
return self.as_dict()
def __repr__(self) -> str:
        return str(self.as_dict())

# === zesty/models/BlockDevice.py (end of file) ===
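The `as_dict`/`object_dumper` pair above serializes instances through `json.dumps` with a fallback for objects the encoder cannot handle; a standalone stdlib sketch of the same pattern (the `Dev` class is a stand-in, not the real `BlockDevice`, and error handling is simplified):

```python
import json
from decimal import Decimal

def object_dumper(obj):
    # Fall back to the instance __dict__; coerce Decimal values
    # (which lack __dict__) to int, as BlockDevice.object_dumper does.
    try:
        return obj.__dict__
    except AttributeError:
        if isinstance(obj, Decimal):
            return int(obj)
        raise

class Dev:
    def __init__(self):
        self.size = Decimal("100")
        self.map = "/dev/xvdb"

print(json.loads(json.dumps(Dev(), default=object_dumper)))
```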
from abc import ABC, abstractmethod
import enum
from typing import TYPE_CHECKING, Dict
if TYPE_CHECKING:
    from ..actions import ZBSAction
class ISpecialInstructions(ABC):
pass
class IActionHF(ABC):
class Status(enum.Enum):
NEW = 1
PENDING = 2
RUNNING = 3
CANCELED = 4
READY = 5
HOLDING = 6
REVERT = 7
PAUSE = 8 # should stop
STOPPED = 9 # action stopped
@abstractmethod
def get_action_id(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_id' is abstract, please implement")
@abstractmethod
def get_action_type(self) -> str:
raise NotImplementedError(
"ActionHF 'get_action_type' is abstract, please implement")
@abstractmethod
def get_status(self) -> Status:
raise NotImplementedError(
"ActionHF 'get_status' is abstract, please implement")
@abstractmethod
def set_status(self, status: Status):
raise NotImplementedError(
"ActionHF 'set_status' is abstract, please implement")
@abstractmethod
def get_special_instructions(self) -> ISpecialInstructions:
raise NotImplementedError(
"ActionHF 'get_special_instructions' is abstract, please implement")
@abstractmethod
def set_special_instructions(self, special_instructions: ISpecialInstructions):
raise NotImplementedError(
"ActionHF 'set_special_instructions' is abstract, please implement")
class IDeviceHF(ABC):
@abstractmethod
def get_dev_id(self) -> str:
raise NotImplementedError(
"DeviceHF 'get_dev_id' is abstract, please implement")
@abstractmethod
def get_size(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_size' is abstract, please implement")
@abstractmethod
def get_usage(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_usage' is abstract, please implement")
@abstractmethod
def get_unlock_ts(self) -> int:
raise NotImplementedError(
"DeviceHF 'get_unlock_ts' is abstract, please implement")
class IFileSystemHF(ABC):
@abstractmethod
def get_fs_id(self) -> str:
raise NotImplementedError(
"IFileSystemHF 'get_fs_id' is abstract, please implement")
@abstractmethod
def get_devices(self) -> Dict[str, IDeviceHF]:
raise NotImplementedError(
"IFileSystemHF 'get_devices' is abstract, please implement")
@abstractmethod
def get_existing_actions(self) -> Dict[str, IActionHF]:
raise NotImplementedError(
            "IFileSystemHF 'get_existing_actions' is abstract, please implement")

# === zesty/models/hf_interface.py (end of file) ===
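A concrete action only needs to fill in these abstract methods; a minimal standalone sketch (the trimmed interface mirror and the `ResizeAction` class are illustrative, not part of the package):

```python
import enum
from abc import ABC, abstractmethod

class Status(enum.Enum):
    NEW = 1
    RUNNING = 3

class IActionHF(ABC):  # trimmed mirror of the interface above
    @abstractmethod
    def get_action_id(self) -> str: ...
    @abstractmethod
    def get_status(self) -> "Status": ...
    @abstractmethod
    def set_status(self, status: "Status"): ...

class ResizeAction(IActionHF):  # hypothetical concrete action
    def __init__(self, action_id: str):
        self._id = action_id
        self._status = Status.NEW

    def get_action_id(self) -> str:
        return self._id

    def get_status(self) -> Status:
        return self._status

    def set_status(self, status: Status):
        self._status = status
```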
## Description
Zesty Disk API
## Installation

```shell
pip install zesty.zbs-api
```

## Usage

```python
from zesty.zbs_api import *
```
## Alembic and Postgres Data Models.
For instructions on how to manage the Postgres data model as well as using Alembic to automatically prepare database migrations, please refer to the `README.md` in the `alembic` folder which is located at the root of this repository.
## SQLAlchemy Dependency
Some of the models in this package require SQLAlchemy. To keep this package light (it is used across many places in our code base), we do not install SQLAlchemy with it. If you need any of the models that depend on SQLAlchemy, install the Postgres-Utils package as well, by adding it to the requirements.txt of any project that depends on zbs-api.
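For example, a consuming service's requirements might look like this (the `postgres-utils` requirement name is assumed from the package referenced above):

```text
# requirements.txt of a service that uses the SQLAlchemy-backed models
zesty.zbs-api
postgres-utils  # vendors SQLAlchemy for the database models
```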
<!-- zesty.zbs-api README.md (end of file) -->