code | package | path | filename
---|---|---|---|
# AdvancedVisualizations
When I was pursuing my MS, I took a phenomenal course titled Advanced Data Analysis.
As a Pythonista, I was initially disappointed that so many of the approaches covered were simply not doable in Python.
## Design Principles
1. All visuals should feel like seaborn: plots are easy to create, look good out of the box, and remain highly customizable through matplotlib.
2. The library should accumulate useful but neglected approaches and make them easier to do in Python. These approaches often seem to be out of scope for other libraries; their common thread is that they are sometimes a bit tricky to interpret, or are somewhat neglected.
3. Examples should show how to interpret the results in a few different ways, and should include cases where the technique does not work, or does not work well.
4. Sometimes, it is just difficult | AdvancedVisualizations | /AdvancedVisualizations-0.0.1.tar.gz/AdvancedVisualizations-0.0.1/README.md | README.md |
import datetime
import json
import logging
import random
import re
import requests
import urllib3
from bs4 import BeautifulSoup
from urllib3.exceptions import InsecureRequestWarning
# pylint: disable=no-member
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
# pylint: enable=no-member
logging.getLogger("urllib3").setLevel(logging.WARNING)
logging.getLogger("requests").setLevel(logging.WARNING)
urllib3.disable_warnings()
LOGGER = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
BASE_URL = "https://portail.advans-group.com/"
URLS = {
"SET_ACTIVITY": f"{BASE_URL}ActivityReport/SetActivity?Length=",
"GET_ACTIVITY": f"{BASE_URL}ActivityReport/GetActivity?Length=",
"CLOSE_ACTIVITY": f"{BASE_URL}ActivityReport/CloseMonth?Length=",
"LOGIN": f"{BASE_URL}Account/Login?ReturnUrl=/ActivityReport",
}
class AdvansERPChecker:
def __init__(self, login=None, password=None):
LOGGER.info("Opening session")
self._length = 0
self.tasks = []
self.login = login
self.password = password
self._session = None
def _login_erp_session(self):
        # Open login page and retrieve the CSRF token
self._session = requests.sessions.Session()
home_page = self._session.get(URLS["LOGIN"], verify=False, timeout=30)
soup = BeautifulSoup(home_page.text, "html.parser")
token = soup.select("input[name=__RequestVerificationToken]")[0]["value"]
# Log in
LOGGER.info("Logging in")
        credentials = {"login": self.login, "password": self.password}
now = datetime.datetime.now()
res = self._session.post(
URLS["LOGIN"],
data={
"month": now.month,
"year": now.year,
"UserName": credentials["login"],
"Password": credentials["password"],
"__RequestVerificationToken": token,
},
)
# Extract "Length" attribute (don't know why it's necessary)
soup = BeautifulSoup(res.text, "html.parser")
self._length = re.search(
r".*\?Length=([0-9]+)$",
soup.select("form[action*=GetActivity]")[0]["action"],
).group(1)
return res
def extract_activities(self):
login_res = self._login_erp_session()
tasks = list(set(re.findall(r'"affaires":(\[{.*?}\])', login_res.text)))
if tasks:
self.tasks = json.loads(tasks[0])
else:
            LOGGER.warning("Couldn't retrieve tasks from page")
self._session.close()
return self.tasks
def is_current_day_filled(self):
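        # Weekdays only: on weekends (isoweekday 6 or 7) the method skips the
        # check and falls through to the final ``return True``, so the day
        # counts as filled.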
now = datetime.datetime.now()
if now.isoweekday() < 6:
self._login_erp_session()
res = self._session.get(
URLS["GET_ACTIVITY"]
+ str(self._length)
+ "&month="
+ str(now.month - 1)
+ "&year="
+ str(now.year)
+ "&X-Requested-With=XMLHttpRequest&_="
+ str(random.randint(10000, 99999))
)
current_activities = res.json()
existing_activities = {
(acti["year"], acti["month"], acti["day"]): acti
for acti in current_activities["data"]
}
today = (now.year, now.month, now.day)
self._session.close()
            return today in existing_activities
return True
if __name__ == "__main__":
ERP_CHECKER = AdvansERPChecker("HDUPRAS", "WFMKR54QJB1q")
a = ERP_CHECKER.is_current_day_filled()
ERP_CHECKER.extract_activities()
for task in ERP_CHECKER.tasks:
print(task) | Advans-ERP-Checker | /Advans_ERP_Checker-0.1.1.tar.gz/Advans_ERP_Checker-0.1.1/advans_erp_checker.py | advans_erp_checker.py |
# <p align="center">AdventOfCodeInputReader</p>
<p align="center">A python package to get the input for Advent Of Code challenges as a list</p>
## Contents
* [Getting started](#getting-started)
* [Using AdventOfCodeInputReader](#using-adventofcodeinputreader)
* [Methods](#methods)
* [get_input()](#get_input)
* [get_input_by_day()](#get_input_by_day)
* [get_input_by_year_and_day()](#get_input_by_year_and_day)
## Getting Started
* Installation using pip (Python Package Index):
```
$ pip install AdventOfCodeInputReader
```
## Using AdventOfCodeInputReader
### Import and Initialise
The `AdventOfCodeInputReader` class (defined in `__init__.py`) has one mandatory parameter (`session_id`) and two optional parameters (`year` and `day`). To get your `session_id`, log into Advent of Code and retrieve the session cookie from your browser. The session id is required because the challenge input is different for each user.
```python
from AdventOfCodeInputReader import AdventOfCodeInputReader
# Initialize with only session_id
reader = AdventOfCodeInputReader("your_session_id")
# Initialize with session_id and year only
reader = AdventOfCodeInputReader("your_session_id",2021)
# Initialize with session_id and year and day
reader = AdventOfCodeInputReader("your_session_id",2021,1)
```
## Methods
The `AdventOfCodeInputReader` class has three methods: `get_input()`, `get_input_by_day()`, and `get_input_by_year_and_day()`. All three return a list of strings, with each line of the input as an element in the list.
### get_input()
The `get_input(day=None, year=None)` method has two optional parameters, `day` and `year`. If you specified values for `day` and `year` at initialization, you can call this method without any arguments. If you also pass `day` and `year` arguments to `get_input()`, the method will use those arguments for this specific call only; it will not update the values stored on the class.
```python
# Using get_input() with year and day specified at initialization
reader = AdventOfCodeInputReader("your_session_id",2021,1)
input = reader.get_input()
# Using get_input() with only year specified at initialization
reader = AdventOfCodeInputReader("your_session_id",2021)
input = reader.get_input(day=1)
# Using get_input() without year and day specified at initialization
reader = AdventOfCodeInputReader("your_session_id")
input = reader.get_input(day=1,year=2021)
```
This method returns a list of strings, with each line of the input as an element in the list.
If the input from Advent of Code looks like:
```
line 1
line 2
line 3
```
The method will return:
```
["line 1", "line 2", "line 3"]
```
### get_input_by_day()
The `get_input_by_day(day)` method requires `year` to have been set at initialization. If you also set `day` at initialization, the `day` argument passed to `get_input_by_day()` will be used for this specific call only; it will not update the class value.
```python
reader = AdventOfCodeInputReader("your_session_id",2021)
input = reader.get_input_by_day(1)
```
This method returns a list of strings, with each line of the input as an element in the list.
If the input from Advent of Code looks like:
```
line 1
line 2
line 3
```
The method will return:
```
["line 1", "line 2", "line 3"]
```
### get_input_by_year_and_day()
The `get_input_by_year_and_day(year, day)` method requires both `year` and `day` arguments to be specified when calling it. If you set `year` or `day` at initialization, the arguments provided here will be used instead of the initialization values for this specific call only; they will not update the class values.
```python
reader = AdventOfCodeInputReader("your_session_id")
input = reader.get_input_by_year_and_day(year=2021,day=1)
```
This method returns a list of strings, with each line of the input as an element in the list.
If the input from Advent of Code looks like:
```
line 1
line 2
line 3
```
The method will return:
```
["line 1", "line 2", "line 3"]
``` | AdventOfCodeInputReader | /AdventOfCodeInputReader-0.0.3.tar.gz/AdventOfCodeInputReader-0.0.3/README.md | README.md |
from FGAme import *
from sfx import SFX
from enemy import *
import os
import pygame
from pygame.locals import *
_ROOT = os.path.abspath(os.path.dirname(__file__))
PLAYERCHARGES = USEREVENT + 2
class Player(AABB):
def __init__(self, armor_health, shot_charges, special_charges, turbo_charges, shield_charges, *args, **kwargs):
self.armor_health = armor_health
self.shot_charges = shot_charges
self.special_charges = special_charges
self.turbo_charges = turbo_charges
self.shield_charges = shield_charges
super(Player, self).__init__(*args, **kwargs)
        self.create_charges()
# on('pre-collision').do(sound_hit)
def move_player(self, dx, dy):
self.vel += (dx, dy)
def shield(self, player_mass, player_color, shield_charges):
if self.shield_charges > 0:
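            # Temporarily anchor the player: zero velocity, 10x mass, and a
            # colour change; ``shield_cooldown`` restores the mass and colour
            # after one second via ``schedule``.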
# shield_sfx = os.path.join(_ROOT, 'sfx/shield.wav')
# Music.play_sound(shield_sfx)
self.vel = vec(0, 0)
self.mass *= 10
self.color = 'darkblue'
schedule(1, self.shield_cooldown, player_mass=player_mass, player_color=player_color, shield_charges=shield_charges)
def shield_cooldown(self, player_mass, player_color, shield_charges):
self.mass = player_mass
self.color = player_color
self.shield_charges -= 1
def special_move(self, world, special_charges):
if special_charges > 0:
shot = RegularPoly(10,
length=15,
pos=(self.pos.x, self.pos.y + 40),
vel=(0,550),
omega=20,
color='blue',
mass='inf')
world.add(shot)
self.special_charges -= 1
def shot(self, world, shot_charges):
if shot_charges > 0:
shot_sound = os.path.join(_ROOT, 'sfx/shot.wav')
SFX.play_sound(shot_sound)
shot = AABB(
shape=(2, 3),
pos=(self.pos.x, self.pos.y+20),
vel=(0, 1000),
mass='inf',
color='blue')
world.add(shot)
self.shot_charges -= 1
def turbo(self, turbo_charges):
if turbo_charges > 0:
self.vel *= 3
self.turbo_charges -= 1
def create_charges(self, *args, **kwargs):
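        # Post the custom PLAYERCHARGES event to the pygame queue every
        # 3.5 seconds; ``charges_listener`` polls for it once per frame.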
pygame.time.set_timer(PLAYERCHARGES, 3500)
def charges_listener(self, shot_charges, turbo_charges, shield_charges, special_charges):
if pygame.event.get(PLAYERCHARGES):
self.refill_charges(shot_charges, turbo_charges, shield_charges, special_charges)
def refill_charges(self, shot_charges, turbo_charges, shield_charges, special_charges):
if special_charges == 0 and shot_charges == 0:
self.special_charges = 1
if shot_charges == 0:
self.shot_charges = 10
if turbo_charges < 3:
self.turbo_charges += 1
        if shield_charges == 0:
            self.shield_charges = 1
# def sound_hit(self, col, dx):
# sound_hit = os.path.join(_ROOT, 'sfx/hit.wav')
# SFX.play_sound(sound_hit)
def check_defeat(self):
if self.x < 0 or self.x > 800 or self.y < 0 or self.y > 600:
print("DERROTA. Você foi destruido.")
exit()
elif self.armor_health <= 0:
print("DERROTA. Você foi destruido.")
exit()
    @listen('post-collision')
    def detect_hit(arena, col):
        A, B = col
        if isinstance(A, Player) and isinstance(B, Enemy):
            for _ in range(3):
                A.deal_damage()
                B.deal_damage()
        elif isinstance(A, Enemy) and isinstance(B, Player):
            for _ in range(3):
                B.deal_damage()
                A.deal_damage()
        elif isinstance(A, Player) and isinstance(B, Circle):
            for _ in range(2):
                A.deal_damage()
        elif isinstance(A, Circle) and isinstance(B, Player):
            for _ in range(2):
                B.deal_damage()
        elif isinstance(A, AABB) and isinstance(B, Player):
            B.deal_damage()
        elif isinstance(A, Player) and isinstance(B, AABB):
            A.deal_damage()
def deal_damage(self):
self.armor_health -= 10 | Advento | /Advento-0.1.3.tar.gz/Advento-0.1.3/src/player.py | player.py |
from FGAme import *
from sfx import SFX
from player import *
import pygame
from pygame.locals import *
import os
_ROOT = os.path.abspath(os.path.dirname(__file__))
ENEMYSHOT = USEREVENT + 1
class Enemy(AABB):
def __init__(self, armor_health, *args, **kwargs):
self.armor_health = armor_health
super(Enemy, self).__init__(*args, **kwargs)
self.create_enemy_shots(self)
# on('pre-collision').do(sound_hit)
    def enemy_movement(self, vel):
        if self.x >= 750 or self.x <= 50:
            self.vel = (self.vel.x * (-1), self.vel.y)
def create_enemy_shots(self, *args, **kwargs):
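        # Post the custom ENEMYSHOT event every 500 ms; ``enemy_shot_listener``
        # polls for it each frame and fires a shot when it appears.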
pygame.time.set_timer(ENEMYSHOT, 500)
def enemy_shot_listener(self, world):
if pygame.event.get(ENEMYSHOT):
self.enemy_shot(world)
def enemy_shot(self, world):
shot = Circle(3,
pos=(self.pos.x, self.pos.y-40),
vel=(0, -500),
mass='inf',
color='red')
world.add(shot)
# def sound_hit(self, col, dx):
# sound_hit = os.path.join(_ROOT, 'sfx/hit.wav')
# SFX.play_sound(sound_hit)
def check_defeat(self):
if self.x < 0 or self.x > 800 or self.y < 0 or self.y > 600:
print("VITORIA! O inimigo foi destruido.")
exit()
elif self.armor_health <= 0:
print("VITORIA! O inimigo foi destruido.")
exit()
@listen('post-collision')
def detect_hit(arena, col):
A, B = col
        if isinstance(A, Enemy) and isinstance(B, AABB):
            A.deal_damage()
        elif isinstance(A, AABB) and isinstance(B, Enemy):
            B.deal_damage()
        elif isinstance(A, Enemy) and isinstance(B, RegularPoly):
            for _ in range(3):
                A.deal_damage()
        elif isinstance(A, RegularPoly) and isinstance(B, Enemy):
            for _ in range(3):
                B.deal_damage()
def deal_damage(self):
self.armor_health -= 10 | Advento | /Advento-0.1.3.tar.gz/Advento-0.1.3/src/enemy.py | enemy.py |
from FGAme import *
from player import Player
from enemy import Enemy
from sfx import SFX
import os
import pygame
from pygame.locals import *
_ROOT = os.path.abspath(os.path.dirname(__file__))
class Battlefield(World):
def init(self):
self.player = Player(shape=(15, 25),
pos=(400, 35),
color='blue',
mass=150,
armor_health=100,
shot_charges = 10,
special_charges = 1,
turbo_charges = 3,
shield_charges = 1)
self.enemy = Enemy(shape=(35, 35),
pos=(400, 530),
color='red',
mass='inf',
vel=(300,0),
armor_health=100)
self.add([self.player, self.enemy])
self.damping = 2
# main_theme = os.path.join(_ROOT, 'sfx/main_theme.mp3')
# SFX.play_music(main_theme)
self.draw_platforms()
self.draw_margin()
on('long-press', 'left').do(self.player.move_player, -25, 0)
on('long-press', 'right').do(self.player.move_player, 25, 0)
on('long-press', 'up').do(self.player.move_player, 0, 25)
on('long-press', 'down').do(self.player.move_player, 0, -25)
        on('key-down', 'q').do(self.player.shot, self, self.player.shot_charges)
on('key-down', 'w').do(self.player.turbo, self.player.turbo_charges)
on('key-down', 'e').do(self.player.shield, self.player.mass, self.player.color, self.player.shield_charges)
on('key-down', 'r').do(self.player.special_move, self, self.player.special_charges)
on('frame-enter').do(self.player.charges_listener,
self.player.shot_charges,
self.player.turbo_charges,
self.player.shield_charges,
self.player.special_charges)
on('frame-enter').do(self.enemy.enemy_shot_listener, self)
on('frame-enter').do(self.enemy.enemy_movement, self.enemy.vel)
on('frame-enter').do(self.remove_out_of_bounds_shot)
on('frame-enter').do(self.player.check_defeat)
on('frame-enter').do(self.enemy.check_defeat)
def draw_platforms(self):
self.platform1 = self.add.aabb(shape=(115, 10), pos=(67, 452), mass='inf')
self.platform2 = self.add.aabb(shape=(87, 15), pos=(235, 150), mass='inf')
self.platform3 = self.add.aabb(shape=(122, 10), pos=(400, 350), mass='inf')
self.platform4 = self.add.aabb(shape=(115, 20), pos=(650, 75), mass='inf')
self.platform5 = self.add.aabb(shape=(53, 15), pos=(625, 250), mass='inf')
self.platform6 = self.add.aabb(shape=(67, 10), pos=(750, 440), mass='inf')
    def remove_out_of_bounds_shot(self):
        # NOTE: ``shot`` is never defined in this scope, so this check raises
        # NameError and is effectively a no-op; the guard is kept so the game
        # keeps running until per-shot bookkeeping is implemented.
        try:
            if shot.pos.x < 0 or shot.pos.x > 800 or shot.pos.y < 0 or shot.pos.y > 600:
                self.remove(shot)
        except NameError:
            pass
def draw_margin(self):
self.add.margin(10,0,10,0) | Advento | /Advento-0.1.3.tar.gz/Advento-0.1.3/src/battlefield.py | battlefield.py |
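# NOTE: docopt parses the module docstring to build the CLI; a minimal usage
# string is reconstructed here from the arguments read in ``main()``.
"""AdventureDocs.

Usage:
    adocs <source> [<destination>]
"""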
import os
import glob
import docopt
import markdown
import pkgutil
from adventuredocs import plugins
from bs4 import BeautifulSoup
from jinja2 import Environment, FileSystemLoader
class Section(object):
""""
Attributes:
index (int): --
name (str): --
path (str): --
soup (BeautifulSoup): --
"""
def __init__(self, index, name, path, soup):
self.index = index
self.name = name
self.path = path
self.soup = soup
@property
def contents(self):
return self.soup.prettify()
@classmethod
def from_file(cls, section_index, path_to_markdown_file):
"""Create a section object by reading
in a markdown file from path!
        Arguments:
            section_index (int): position of the section in the ORDER file.
            path_to_markdown_file (str): path to the section's markdown file.
Returns:
Section
"""
        with open(path_to_markdown_file, encoding="utf-8") as f:
            # the markdown module strictly supports UTF-8 only
            file_contents = f.read()
        html = markdown.markdown(file_contents)
section_soup = BeautifulSoup(html, "html.parser")
# get the file name without the extension
__, section_file_name = os.path.split(path_to_markdown_file)
section_name, __ = os.path.splitext(section_file_name)
return cls(index=section_index,
path=path_to_markdown_file,
soup=section_soup,
name=section_name)
class AdventureDoc(object):
"""A directory of markdown files, with an ORDER file.
"""
SECTION_CHOICE_KEYWORD = "NEXT_SECTION:"
TEMPLATE = pkgutil.get_data("adventuredocs", "layout.html")
def __init__(self, sections):
self.sections = sections
def build(self):
        for section in self.sections:
            self.use_plugins(section)
# Use collected sections with jinja
return (Environment().from_string(self.TEMPLATE)
.render(title=u'AdventureDocs',
sections=self.sections)).encode('UTF-8')
@staticmethod
def get_sections(directory):
"""Collect the files specified in the
ORDER file, returning a list of
dictionary representations of each file.
Returns:
            list[Section]: sections in the order given by the ORDER file.
"""
with open(os.path.join(directory, "ORDER")) as f:
order_file_lines = f.readlines()
ordered_section_file_paths = []
for line_from_order_file in order_file_lines:
section_path = os.path.join(directory, line_from_order_file)
ordered_section_file_paths.append(section_path.strip())
sections = []
for i, section_file_path in enumerate(ordered_section_file_paths):
sections.append(Section.from_file(i, section_file_path))
return sections
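    # An ORDER file is a plain-text manifest: one markdown filename per line,
    # relative to the source directory. For example (filenames illustrative):
    #
    #     introduction.md
    #     chapter-one.md
    #     ending.md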
# NOTE: this currently actually changes the section's
# beautiful soup but should make copy instead!
def use_plugins(self, section):
        for _, module_name, _ in pkgutil.iter_modules(plugins.__path__):
            module_name = "adventuredocs.plugins." + module_name
            plugin = __import__(module_name, fromlist=["change_soup"])
            plugin.change_soup(self, section)
return section
@classmethod
def from_directory(cls, directory):
ordered_sections = cls.get_sections(directory)
return AdventureDoc(ordered_sections)
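# Plugin protocol sketch: any module placed in ``adventuredocs/plugins`` is
# auto-discovered by ``use_plugins`` above and only needs to expose a
# ``change_soup(adventuredoc, section)`` function that mutates the section's
# soup in place. The example below is illustrative, not a shipped plugin.
def _example_change_soup(adventuredoc, section):
    """Hypothetical plugin: tag every heading for CSS styling."""
    for heading in section.soup.find_all(["h1", "h2", "h3"]):
        heading["class"] = heading.get("class", []) + ["adoc-heading"]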
def main():
arguments = docopt.docopt(__doc__)
source_directory = arguments["<source>"]
adoc = AdventureDoc.from_directory(source_directory)
destination = arguments["<destination>"] or "adocs-output.html"
with open(destination, 'w') as f:
f.write(adoc.build()) | AdventureDocs | /AdventureDocs-0.7.0.tar.gz/AdventureDocs-0.7.0/adventuredocs/adocs.py | adocs.py |
# Simple GAN
This is my attempt at a wrapper class for a GAN in Keras that abstracts away the whole architecture-building process.
[](https://travis-ci.com/deven96/Simple_GAN)
- [Simple GAN](#simple-gan)
- [Overview](#overview)
- [Flow Chart](#flow-chart)
- [Installation](#installation)
- [Example](#example)
- [Credits](#credits)
- [Contribution](#contribution)
- [License (MIT)](#license-mit)
## Overview

## Flow Chart
Setting up a Generative Adversarial Network involves having a discriminator and a generator working in tandem, with the ultimate goal being that the generator can come up with samples that are indistinguishable from valid samples by the discriminator.

## Installation
```bash
pip install adversarials
```
## Example
```python
import numpy as np
from keras.datasets import mnist
from adversarials.core import Log
from adversarials import SimpleGAN
if __name__ == '__main__':
(X_train, _), (_, _) = mnist.load_data()
# Rescale -1 to 1
X_train = (X_train.astype(np.float32) - 127.5) / 127.5
X_train = np.expand_dims(X_train, axis=3)
Log.info('X_train.shape = {}'.format(X_train.shape))
gan = SimpleGAN(save_to_dir="./assets/images",
save_interval=20)
gan.train(X_train, epochs=40)
```
## Credits
- [Understanding Generative Adversarial Networks](https://towardsdatascience.com/understanding-generative-adversarial-networks-4dafc963f2ef) - Naoki Shibuya
- [Github Keras Gan](https://github.com/osh/KerasGAN)
- [Simple gan](https://github.com/daymos/simple_keras_GAN/blob/master/gan.py)
## Contribution
You are very welcome to modify this library and use it in your own projects.
Please keep a link to the [original repository](https://github.com/deven96/Simple_GAN). If you have made a fork with substantial modifications that you feel may be useful, then please [open a new issue on GitHub](https://github.com/deven96/Simple_GAN/issues) with a link and short description.
## License (MIT)
This project is licensed under the [MIT License](https://github.com/deven96/Simple_GAN/blob/master/LICENSE), which allows very broad use for both academic and commercial purposes.
A few of the images used for demonstration purposes may be under copyright. These images are included under the "fair usage" laws.
| Adversarials | /Adversarials-1.0.1.tar.gz/Adversarials-1.0.1/README.md | README.md |
import os
from typing import Tuple, Union
import matplotlib.pyplot as plt
import numpy as np
from adversarials.core.base import ModelBase
from adversarials.core.utils import FS, File, Log
from keras.datasets import mnist
from keras.layers import (BatchNormalization, Dense, Dropout, Flatten, Input,
Reshape)
from keras.layers.advanced_activations import LeakyReLU
from keras.models import Sequential
from keras.optimizers import Adam
plt.switch_backend('agg') # allows code to run without a system DISPLAY
class SimpleGAN(ModelBase):
"""Simple Generative Adversarial Network.
Methods:
def __init__(self, size: Union[Tuple[int], int]=28, channels: int=1, batch_size: int=32, **kwargs)
def train(self, X_train, epochs: int=10):
def plot_images(self, samples: int=16, step:int=0):
# Plot generated images
Attributes:
G (keras.model.Model): Generator model.
D (keras.model.Model): Discriminator model.
model (keras.model.Model): Combined G & D model.
shape (Tuple[int]): Input image shape.
"""
def __init__(self, size: Union[Tuple[int], int]=28, channels: int=1, batch_size: int=32, **kwargs):
"""def __init__(size: Union[Tuple[int], int]=28, channels: int=1, batch_size: int=32, **kwargs)
Args:
size (int, optional): Defaults to 28. Image size. int tuple
consisting of image width and height or int in which case
width and height are uniformly distributed.
channels (int, optional): Defaults to 1. Image channel.
1 - grayscale, 3 - colored.
batch_size (int, optional): Defaults to 32. Mini-batch size.
Keyword Args:
            optimizer (keras.optimizers.Optimizer, optional): Defaults to Adam
                with learning rate of 2e-4, beta_1=0.5, decay=8e-8.
save_interval (int, optional): Defaults to 100. Interval of training on
which to save generated images.
save_to_dir (Union[bool, str], optional): Defaults to False. Save generated images
to directory.
            save_model (str, optional): Defaults to
                ``FS.MODEL_DIR + "/SimpleGAN/model.h5"``. Path where the
                combined model is saved.
Raises:
TypeError: Expected one of int, tuple - Got `type(size)`.
"""
super(SimpleGAN, self).__init__(**kwargs)
if isinstance(size, tuple):
self.width, self.height = size
elif isinstance(size, int):
self.width, self.height = size, size
else:
raise TypeError('Expected one of int, tuple. Got {}'
.format(type(size)))
self.channels = channels
self.batch_size = batch_size
# Extract keyword arguments.
self.optimizer = kwargs.get('optimizer',
Adam(lr=0.0002, beta_1=0.5, decay=8e-8))
        self.save_to_dir = kwargs.get('save_to_dir', False)
self.save_interval = kwargs.get('save_interval', 100)
self.save_model = kwargs.get('save_model', FS.MODEL_DIR+"/SimpleGAN/model.h5")
self._log('Compiling generator.', level='info')
# Generator Network.
self.G = self.__generator()
self.G.compile(loss='binary_crossentropy', optimizer=self.optimizer)
self._log('Compiling discriminator.', level='info')
# Discriminator Network.
self.D = self.__discriminator()
self.D.compile(loss='binary_crossentropy',
optimizer=self.optimizer,
metrics=['accuracy'])
# Stacked model.
self._log('Combining generator & Discriminator', level='info')
self._model = self.__stacked_generator_discriminator()
self._model.compile(loss='binary_crossentropy',
optimizer=self.optimizer)
def train(self, X_train, epochs: int=10):
"""
Train function to be used after GAN initialization
X_train[np.array]: full set of images to be used
"""
half_batch = self.batch_size // 2
if self.save_to_dir:
File.make_dirs(self.save_to_dir, verbose=1)
for cnt in range(epochs):
            # get legit and synthetic images to be used in training the discriminator
random_index = np.random.randint(0,
len(X_train) - half_batch)
self._log('Random Index: {}'.format(random_index))
legit_images = X_train[random_index: random_index + half_batch]\
.reshape(half_batch, self.width, self.height, self.channels)
gen_noise = np.random.normal(0, 1, (half_batch, 100))
            synthetic_images = self.G.predict(gen_noise)
            # combine synthetic and legit images and assign labels
            x_combined_batch = np.concatenate((legit_images, synthetic_images))
y_combined_batch = np.concatenate((np.ones((half_batch, 1)),
np.zeros((half_batch, 1))))
d_loss = self.D.train_on_batch(x_combined_batch,
y_combined_batch)
# train generator (discriminator training is false by default)
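            # The noise batch is labeled as "real" (ones) so the stacked model's
            # loss rewards the generator for fooling the frozen discriminator.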
            noise = np.random.normal(0, 1, (self.batch_size, 100))
            y_mislabeled = np.ones((self.batch_size, 1))
            g_loss = self._model.train_on_batch(noise, y_mislabeled)
self._log(('Epoch: {:,}, [Discriminator :: d_loss: {:.4f}],'
'[Generator :: loss: {:.4f}]')
.format(cnt, d_loss[0], g_loss))
if cnt % self.save_interval == 0:
self.plot_images(step=cnt)
if self.save_model:
self._model.save(self.save_model)
def call(self, n: int=1, dim: int=100):
"""Inference method. Given a random latent sample. Generate an image.
Args:
            n (int, optional): Defaults to 1. Number of images to be
                generated.
            dim (int, optional): Defaults to 100. Noise dimension.
Returns:
np.ndarray: Array-like generated images.
"""
noise = np.random.normal(0, 1, size=(n, dim))
return self.G.predict(noise)
def plot_images(self, samples=16, step=0):
""" Plot and generate images
samples (int, optional): Defaults to 16. Noise samples to generate.
step (int, optional): Defaults to 0. Number of training step currently.
"""
filename = r"{0}/generated_{1}.png".format(self.save_to_dir,step)
# Generate images.
images = self.call(samples)
plt.figure(figsize=(10, 10))
for i in range(images.shape[0]):
plt.subplot(4, 4, i+1)
image = images[i, :, :, :]
image = np.reshape(image, [self.height, self.width])
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.tight_layout()
if self.save_to_dir:
plt.savefig(filename)
plt.close('all')
else:
plt.show()
def __generator(self):
"""Generator sequential model architecture.
Summary:
Dense -> LeakyReLU -> BatchNormalization -> Dense ->
LeakyReLU -> BatchNormalization -> Dense -> LeakyReLU ->
BatchNormalization -> Dense -> Reshape
Returns:
keras.model.Model: Generator model.
"""
model = Sequential()
model.add(Dense(256, input_shape=(100,)))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(512))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(1024))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(self.width * self.height *
self.channels, activation='tanh'))
model.add(Reshape((self.width, self.height, self.channels)))
return model
def __discriminator(self):
"""Discriminator sequential model architecture.
Summary:
Flatten -> Dense -> LeakyReLU ->
Dense -> LeakyReLU -> Dense
Returns:
keras.model.Model: Generator model.
"""
model = Sequential()
        model.add(Flatten(input_shape=self.shape))
        model.add(Dense(self.width * self.height * self.channels))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(256))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(128))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(1, activation='sigmoid'))
model.summary()
return model
def __stacked_generator_discriminator(self):
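        # Freeze the discriminator so that, inside the combined model, only
        # the generator's weights are updated during generator training.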
self.D.trainable = False
model = Sequential()
model.add(self.G)
model.add(self.D)
return model
@property
def shape(self):
"""Input image shape.
Returns:
Tuple[int]: image shape.
"""
return self.width, self.height, self.channels
@property
def model(self):
"""Stacked generator-discriminator model.
Returns:
keras.model.Model: Combined G & D model.
"""
return self._model
| Adversarials | /Adversarials-1.0.1.tar.gz/Adversarials-1.0.1/adversarials/core/arch.py | arch.py |
import os
import json
import pickle
import configparser
from abc import ABCMeta
from typing import Callable, Any
# Third party libraries.
import yaml
from easydict import EasyDict
# In order to use LibYAML bindings, which is much faster than pure Python.
# Download and install [LibYAML](https://pyyaml.org/wiki/LibYAML).
try:
from yaml import CLoader as Loader, CDumper as Dumper
except ImportError:
from yaml import Loader, Dumper
# Exported classes & functions.
__all__ = [
'Config',
]
################################################################################################
# +--------------------------------------------------------------------------------------------+
# | Config: Configuration avatar class to convert save & load config files.
# +--------------------------------------------------------------------------------------------+
################################################################################################
class Config(metaclass=ABCMeta):
@staticmethod
def from_yaml(file: str):
"""Load configuration from a YAML file.
Args:
file (str): A `.yml` or `.yaml` filename.
Raises:
AssertionError: File is not a YAML file.
FileNotFoundError: `file` was not found.
Returns:
easydict.EasyDict: config dictionary object.
"""
assert (file.endswith('yaml') or
file.endswith('yml')), 'File is not a YAML file.'
if not os.path.isfile(file):
raise FileNotFoundError('{} was not found'.format(file))
with open(file, mode="r") as f:
cfg = EasyDict(yaml.load(f, Loader=Loader))
return cfg
@staticmethod
def from_cfg(file: str, ext: str = 'cfg'):
"""Load configuration from an cfg file.
Args:
file (str): An cfg filename.
ext (str, optional): Defaults to 'cfg'. Config file extension.
Raises:
AssertionError: File is not an `${ext}` file.
FileNotFoundError: `file` was not found.
Returns:
easydict.EasyDict: config dictionary object.
"""
assert file.endswith(ext), f'File is not a/an `{ext}` file.'
if not os.path.isfile(file):
raise FileNotFoundError('{} was not found'.format(file))
cfg = configparser.ConfigParser(dict_type=EasyDict)
cfg.read(file)
return cfg
@staticmethod
def from_json(file: str):
"""Load configuration from a json file.
Args:
file (str): A JSON filename.
Raises:
AssertionError: File is not a JSON file.
FileNotFoundError: `file` was not found.
Returns:
easydict.EasyDict: config dictionary object.
"""
assert file.endswith('json'), 'File is not a `JSON` file.'
if not os.path.isfile(file):
raise FileNotFoundError('{} was not found'.format(file))
with open(file, mode='r') as f:
cfg = EasyDict(json.load(f))
return cfg
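    # Usage sketch (hypothetical paths and keys): configs load as EasyDict
    # objects, so values also support attribute access, e.g.:
    #
    #     cfg = Config.from_json("config.json")
    #     print(cfg.MODE)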
@staticmethod
def to_yaml(cfg: EasyDict, file: str, **kwargs):
"""Save configuration object into a YAML file.
Args:
cfg (EasyDict): Configuration: as a dictionary instance.
file (str): Path to write the configuration to.
Keyword Args:
Passed into `dumper`.
Raises:
AssertionError: `dumper` must be callable.
"""
# Use LibYAML (which is much faster than pure Python) dumper.
kwargs.setdefault('Dumper', Dumper)
# Write to a YAML file.
Config._to_file(cfg=cfg, file=file, dumper=yaml.dump, **kwargs)
@staticmethod
def to_json(cfg: EasyDict, file: str, **kwargs):
"""Save configuration object into a JSON file.
Args:
cfg (EasyDict): Configuration: as dictionary instance.
file (str): Path to write the configuration to.
Keyword Args:
Passed into `dumper`.
Raises:
AssertionError: `dumper` must be callable.
"""
# Write to a JSON file.
Config._to_file(cfg=cfg, file=file, dumper=json.dump, **kwargs)
@staticmethod
def to_cfg(cfg: EasyDict, file: str, **kwargs):
"""Save configuration object into a cfg or ini file.
Args:
cfg (Any): Configuration: as dictionary instance.
file (str): Path to write the configuration to.
Keyword Args:
Passed into `dumper`.
"""
        # TODO: serialization to cfg/ini files is not implemented yet.
        return NotImplemented
@staticmethod
def to_pickle(cfg: Any, file: str, **kwargs):
"""Save configuration object into a pickle file.
Args:
cfg (Any): Configuration: as dictionary instance.
file (str): Path to write the configuration to.
Keyword Args:
Passed into `dumper`.
Raises:
AssertionError: `dumper` must be callable.
"""
Config._to_file(cfg=cfg, file=file, dumper=pickle.dump, **kwargs)
@staticmethod
def _to_file(cfg: Any, file: str, dumper: Callable, **kwargs):
"""Save configuration object into a file as allowed by `dumper`.
Args:
cfg (Any): Configuration: as dictionary instance.
file (str): Path to write the configuration to.
dumper (Callable): Function/callable handler to save object to disk.
Keyword Args:
Passed into `dumper`.
Raises:
AssertionError: `dumper` must be callable.
"""
assert callable(dumper), "`dumper` must be callable."
        # Create the parent directory if it doesn't already exist.
        directory = os.path.dirname(file)
        if directory and not os.path.isdir(directory):
            os.makedirs(directory)
        # Write configuration to file. Pickle requires a binary stream,
        # while the text-based dumpers (yaml, json) require a text stream.
        mode = "wb" if dumper is pickle.dump else "w"
        with open(file, mode=mode) as f:
            dumper(cfg, f, **kwargs) | Adversarials | /Adversarials-1.0.1.tar.gz/Adversarials-1.0.1/adversarials/core/config.py | config.py |
import os.path
from adversarials.core import Config
__all__ = [
'FS', 'SETUP', 'LOGGER',
]
################################################################################################
# +--------------------------------------------------------------------------------------------+
# | FS: File System.
# +--------------------------------------------------------------------------------------------+
################################################################################################
class FS:
# Project name & absolute directory.
PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
APP_NAME = os.path.basename(PROJECT_DIR)
ASSET_DIR = os.path.join(PROJECT_DIR, "assets")
CACHE_DIR = os.path.join(ASSET_DIR, "cache")
MODEL_DIR = os.path.join(CACHE_DIR, "models")
################################################################################################
# +--------------------------------------------------------------------------------------------+
# | Setup configuration constants.
# +--------------------------------------------------------------------------------------------+
################################################################################################
class SETUP:
# Global setup configuration.
__global = Config.from_cfg(os.path.join(FS.PROJECT_DIR, "adversarials/config/",
"setup/global.cfg"))
# Build mode/type.
MODE = __global['config']['MODE']
################################################################################################
# +--------------------------------------------------------------------------------------------+
# | Logger: Logging configuration paths.
# +--------------------------------------------------------------------------------------------+
################################################################################################
class LOGGER:
# Root Logger:
ROOT = os.path.join(FS.PROJECT_DIR, 'adversarials/config/logger',
f'{SETUP.MODE}.cfg') | Adversarials | /Adversarials-1.0.1.tar.gz/Adversarials-1.0.1/adversarials/core/consts.py | consts.py |
import os.path
from abc import ABCMeta, abstractmethod
from adversarials.core.utils import File, Log
from adversarials.core.consts import FS
class _Base(object):
def __init__(self, *args, **kwargs):
# Verbosity level: 0 or 1.
self._verbose = kwargs.setdefault('verbose', 1)
self._name = kwargs.get('name', self.__class__.__name__)
def __repr__(self):
"""Object representation of Sub-classes."""
# args = self._get_args()
kwargs = self._get_kwargs()
# Format arguments.
# fmt = ", ".join(map(repr, args))
fmt = ""
# Format keyword arguments.
for k, v in kwargs:
# Don't include these in print-out.
if k in ('captions', 'filename', 'ids'):
continue
fmt += ", {}={!r}".format(k, v)
# Object representation of Class.
return '{}({})'.format(self.__class__.__name__, fmt.lstrip(', '))
def __str__(self):
"""String representation of Sub-classes."""
return "{}()".format(self.__class__.__name__,
", ".join(map(str, self._get_args())))
def __format__(self, format_spec):
if format_spec == "!r":
return self.__repr__()
return self.__str__()
def _log(self, *args, level: str = 'log', **kwargs):
"""Logging method helper based on verbosity."""
# No logging if verbose is not 'on'.
if not kwargs.pop('verbose', self._verbose):
return
# Validate log levels.
_levels = ('log', 'debug', 'info', 'warn', 'error', 'critical')
if level.lower() not in _levels:
raise ValueError("`level` must be one of {}".format(_levels))
        # Call the appropriate log level, e.g. Log.info(*args, **kwargs)
        getattr(Log, level.lower())(*args, **kwargs)
# noinspection PyMethodMayBeStatic
def _get_args(self):
# names = ('data_dir', 'sub_dirs', 'results_dir')
# return [getattr(self, f'_{name}') for name in names]
return []
def _get_kwargs(self):
return sorted([(k.lstrip('_'), getattr(self, f'{k}'))
for k in self.__dict__.keys()])
@property
def name(self):
return self._name
@property
def verbose(self):
return self._verbose
class ModelBase(_Base, metaclass=ABCMeta):
def __init__(self, *args, **kwargs):
super(ModelBase, self).__init__(*args, **kwargs)
# Extract Keyword arguments.
self._cache_dir = kwargs.get('cache_dir',
os.path.join(FS.MODEL_DIR, self._name))
# Create cache directory if it doesn't exist.
File.make_dirs(self._cache_dir, verbose=self._verbose)
def __call__(self, *args, **kwargs):
return self.call()
# noinspection PyUnusedLocal
@abstractmethod
def call(self, *args, **kwargs):
return NotImplemented
@staticmethod
def int_shape(x):
"""Returns the shape of tensor or variable as tuple of int or None entries.
Args:
x (Union[tf.Tensor, tf.Variable]): Tensor or variable. hasattr(x, 'shape')
Returns:
tuple: A tuple of integers (or None entries).
"""
try:
shape = x.shape
if not isinstance(shape, tuple):
shape = tuple(shape.as_list())
return shape
except ValueError:
return None
@property
def cache_dir(self):
return self._cache_dir | Adversarials | /Adversarials-1.0.1.tar.gz/Adversarials-1.0.1/adversarials/core/base.py | base.py |
import os
import sys
import logging
from abc import ABCMeta
from typing import Iterable
from logging.config import fileConfig
from adversarials.core.consts import FS, LOGGER
__all__ = [
'File', 'Log',
]
################################################################################################
# +--------------------------------------------------------------------------------------------+
# | File: File utility class for working with directories & files.
# +--------------------------------------------------------------------------------------------+
################################################################################################
class File(metaclass=ABCMeta):
@staticmethod
def make_dirs(path: str, verbose: int = 0):
"""Create Directory if it doesn't exist.
Args:
path (str): Directory/directories to be created.
verbose (bool, optional): Defaults to 0. 0 turns of logging,
while 1 gives feedback on creation of director(y|ies).
Example:
```python
>>> path = os.path.join("path/to", "be/created/")
>>> File.make_dirs(path, verbose=1)
INFO | "path/to/be/created/" has been created.
```
"""
# if director(y|ies) doesn't already exist.
if not os.path.isdir(path):
# Create director(y|ies).
os.makedirs(path)
if verbose:
# Feedback based on verbosity.
Log.info('"{}" has been created.'.format(os.path.relpath(path)))
@staticmethod
def get_dirs(path: str, exclude: Iterable[str] = None, optimize: bool = False):
"""Retrieve all directories in a given path.
Args:
path (str): Base directory of directories to retrieve.
exclude (Iterable[str], optional): Defaults to None. List of paths to
remove from results.
optimize (bool, optional): Defaults to False. Return an generator object,
to prevent loading all directories in memory, otherwise: return results
as a normal list.
Raises:
FileNotFoundError: `path` was not found.
Returns:
Union[Generator[str], List[str]]: Generator expression if optimization is turned on,
otherwise list of directories in given path.
"""
# Return only list of directories.
return File.listdir(path, exclude=exclude, dirs_only=True, optimize=optimize)
@staticmethod
def get_files(path: str, exclude: Iterable[str] = None, optimize: bool = False):
"""Retrieve all files in a given path.
Args:
path (str): Base directory of files to retrieve.
exclude (Iterable[str], optional): Defaults to None. List of paths to
remove from results.
optimize (bool, optional): Defaults to False. Return an generator object,
to prevent loading all directories in memory, otherwise: return results
as a normal list.
Raises:
FileNotFoundError: `path` was not found.
Returns:
Union[Generator[str], List[str]]: Generator expression if optimization is turned on,
otherwise list of files in given path.
"""
# Return only list of directories.
return File.listdir(path, exclude=exclude, files_only=True, optimize=optimize)
@staticmethod
def listdir(path: str, exclude: Iterable[str] = None,
dirs_only: bool = False, files_only: bool = False,
optimize: bool = False):
"""Retrieve files/directories in a given path.
Args:
path (str): Base directory of path to retrieve.
exclude (Iterable[str], optional): Defaults to None. List of paths to
remove from results.
dirs_only (bool, optional): Defaults to False. Return only directories in `path`.
files_only (bool, optional): Defaults to False. Return only files in `path`.
optimize (bool, optional): Defaults to False. Return an generator object,
to prevent loading all directories in memory, otherwise: return results
as a normal list.
Raises:
FileNotFoundError: `path` was not found.
Returns:
Union[Generator[str], List[str]]: Generator expression if optimization is turned on,
otherwise list of directories in given path.
"""
if not os.path.isdir(path):
raise FileNotFoundError('"{}" was not found!'.format(path))
# Get all files in `path`.
if files_only:
paths = (os.path.join(path, p) for p in os.listdir(path)
if os.path.isfile(os.path.join(path, p)))
else:
# Get all directories in `path`.
if dirs_only:
paths = (os.path.join(path, p) for p in os.listdir(path)
if os.path.isdir(os.path.join(path, p)))
else:
# Get both files and directories.
paths = (os.path.join(path, p) for p in os.listdir(path))
# Exclude paths from results.
if exclude is not None:
# Remove excluded paths.
paths = filter(lambda p: os.path.basename(p) not in exclude, paths)
# Convert generator expression to list.
if not optimize:
paths = list(paths)
return paths
################################################################################################
# +--------------------------------------------------------------------------------------------+
# | Log: For logging and printing download progress, etc...
# +--------------------------------------------------------------------------------------------+
################################################################################################
class Log(metaclass=ABCMeta):
# File logger configuration.
fileConfig(LOGGER.ROOT)
_logger = logging.getLogger()
# Log Level.
level = _logger.level
@staticmethod
def setLevel(level: int):
Log._logger.setLevel(level=level)
@staticmethod
def debug(*args, **kwargs):
sep = kwargs.pop('sep', " ")
Log._logger.debug(sep.join(map(repr, args)), **kwargs)
@staticmethod
def info(*args, **kwargs):
sep = kwargs.pop('sep', " ")
Log._logger.info(sep.join(map(repr, args)), **kwargs)
@staticmethod
def warn(*args, **kwargs):
sep = kwargs.pop('sep', " ")
Log._logger.warning(sep.join(map(repr, args)), **kwargs)
@staticmethod
def error(*args, **kwargs):
sep = kwargs.pop('sep', " ")
Log._logger.error(sep.join(map(repr, args)), **kwargs)
@staticmethod
def critical(*args, **kwargs):
sep = kwargs.pop('sep', " ")
Log._logger.critical(sep.join(map(repr, args)), **kwargs)
@staticmethod
def log(*args, **kwargs):
"""Logging method avatar based on verbosity.
Args:
*args
Keyword Args:
verbose (int, optional): Defaults to 1.
level (int, optional): Defaults to ``Log.level``.
sep (str, optional): Defaults to " ".
Returns:
None
"""
# No logging if verbose is not 'on'.
if not kwargs.pop('verbose', 1):
return
# Handle for callbacks & log level.
sep = kwargs.pop('sep', " ")
Log._logger.log(
Log.level, sep.join(map(repr, args)), **kwargs)
@staticmethod
def progress(count: int, max_count: int):
"""Prints task progress *(in %)*.
Args:
count {int}: Current progress so far.
max_count {int}: Total progress length.
"""
# Percentage completion.
pct_complete = count / max_count
# Status-message. Note the \r which means the line should
# overwrite itself.
msg = "\r- Progress: {0:.02%}".format(pct_complete)
# Print it.
# Log.log(msg)
sys.stdout.write(msg)
sys.stdout.flush()
@staticmethod
def report_hook(block_no: int, read_size: bytes, file_size: bytes):
"""Calculates download progress given the block number, read size,
and the total file size of the URL target.
Args:
block_no {int}: Current download state.
read_size {bytes}: Current downloaded size.
file_size {bytes}: Total file size.
Returns:
None.
"""
# Calculates download progress given the block number, a read size,
# and the total file size of the URL target.
pct_complete = float(block_no * read_size) / float(file_size)
msg = "\r\t -Download progress {:.02%}".format(pct_complete)
# Log.log(msg)
        sys.stdout.write(msg)
sys.stdout.flush() | Adversarials | /Adversarials-1.0.1.tar.gz/Adversarials-1.0.1/adversarials/core/utils.py | utils.py |
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
[INSERT CONTACT METHOD].
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
| Adversary-Armor | /Adversary-Armor-0.1.1.tar.gz/Adversary-Armor-0.1.1/CODE_OF_CONDACT.md | CODE_OF_CONDACT.md |
Contributing
============
.. note::
Here, I explain how to contribute to a project that adopted this template.
Actually, you can use this same scheme when contributing to this template. If
you are completely new to ``git`` this might not be the best beginner tutorial,
but will be very good still ;-)
You will notice that the text that appears is a mirror of the
``CONTRIBUTING.rst`` file. You can also point your community to that file (or
the docs) to guide them in the steps required to interact with you project.
.. include:: ../CONTRIBUTING.rst
:start-after: .. start-here
| Adversary-Armor | /Adversary-Armor-0.1.1.tar.gz/Adversary-Armor-0.1.1/docs/contributing.rst | contributing.rst |
The rationale behind the project
================================
In the following sections I explain the rationale I used to configure this
template repository. Also, these are the same strategies I adopt in my current
projects. In summary, the following sections represent my view on software
development practices, either it is open or closed source, done in teams or by
single individuals. Yes, I also follow these *rules* when I am the sole
developer.
Branch organization
-------------------
This repository complies with the ideas of agile development. Why? Because I
found the agile strategy to fit best the kind of projects I develop. Is this
the best strategy for your project? Well, you have to evaluate that.
In the "python-project-skeleton" layout, there is only one production branch,
the `main` branch, representing the latest state of the software. This layout
contrasts with the opposite strategy where the `stable` and the `latest`
branches are kept separate. For the kind of projects I develop, keeping the
`stable` and the `latest` separated offers no benefit. Honestly, I believe
such separation defeats the purpose of agile development with Semantic
Versioning 2, which is the versioning scheme I adopted here, and in most of my
projects.
Versioning
----------
The versioning scheme adopted here is `Semantic Versioning 2`_: version numbers
read MAJOR.MINOR.PATCH (for example, ``1.4.2``), where breaking changes bump
MAJOR, backwards-compatible features bump MINOR, and fixes bump PATCH.
The development process
-----------------------
Any code addition must come in the form of pull requests from forked
repositories. Any merged pull request is followed by the respective increment
in the software version and consecutive deployment of the new version on PyPI.
If there are 5 pull request merges in one day, there will be 5 new versions
releases that same day. Each new version should be tagged accordingly in git.
Additions to the documentation *also* count to the version number.
Additions to the CI platform *may* skip an increase in version number, using the
``[SKIP]`` tag in the merge commit message, as explained in :ref:`version release <Version release>`.
.. _Semantic Versioning 2: https://semver.org/#semantic-versioning-200
| Adversary-Armor | /Adversary-Armor-0.1.1.tar.gz/Adversary-Armor-0.1.1/docs/the_rationale_behind.rst | the_rationale_behind.rst |
How to use this template
========================
You can use the ``python-project-skeleton`` as a template for your own
repositories thanks to the `template feature of GitHub
<https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-from-a-template>`_,
*(Yes, you could use a cookiecutter, but this is not a coockiecutter).*
Changing names
--------------
Once you have created a new repository from this template, these are the names
you need to change in the files to adapt the template to you and your project:
.. note::
If you are using bash, you can use:
:code:`grep -rliI "text to search" .`
to identify which files have the string you want to change.
.. note::
In bash, use the following to replace a string in multiple files.
:code:`find ./ -type f -exec sed -i -e 's/apple/orange/g' {} \;`
Kudos to https://stackoverflow.com/questions/6758963.
1. Where it says ``joaomcteixeira``, rename it to your GitHub user name. In
``setup.py`` and in ``AUTHORS.rst`` there is also my full name; rename it to
yours.
2. Wherever it says ``sampleproject``, change it to the name of your project.
That is, what you would like your users to import.
.. code::
import sampleproject
3. In ``setup.py``, rename :code:`jmct-sampleproject` to the name your project
will have on PyPI. Usually this is the same as the name you decided above.
4. Replace ``python-project-skeleton`` with the name of your project's
repository. It might be the same name as the Python package and the PyPI
package; that is entirely up to you.
5. This template is deployed at ``test.pypi.org``. You need to change the last
line of the `version-bump-and-package.yml
<https://github.com/joaomcteixeira/python-project-skeleton/blob/master/.github/workflows/version-bump-and-package.yml>`_
file and remove the ``--repository testpypi`` tag so that your project is
deployed at the main PyPI repository.
6. Remove or edit the unnecessary files in the ``docs`` folder, mainly those
related explicitly to this template repository and referenced herein.
Although you can use the find/replace commands, I suggest you navigate every file
and understand its functionality and which parameters you need to change to fit
the template to your project. There are not that many files.
Please read the :ref:`Template Configuration` section for all the details on the
different workflows implemented.
Finally, you can remove those files or lines that refer to integrations that you
do not want. For example, if you don't want to have your project tracked by
CodeClimate, remove the files and lines related to it.
Enjoy, and if you like this template, please let me know.
| Adversary-Armor | /Adversary-Armor-0.1.1.tar.gz/Adversary-Armor-0.1.1/docs/how_to_use_the_template.rst | how_to_use_the_template.rst |
Continuous Integration
----------------------
Here, I overview the implementation strategies for the different continuous
integration and quality report platforms. In the previous versions of this
skeleton template, I used `Travis-CI`_ and `AppVeyor-CI`_ to run the testing
workflows. In the current version, I have migrated all these operations to GitHub
Actions.
**Does this mean you should not use Travis or AppVeyor?** *Of course not.*
Simply, the needs of my projects and the time I have available to dedicate to
CI configuration do not require a dedicated segregation of strategies into
multiple servers and services and I can perfectly accommodate my needs in
GitHub actions.
**Are GitHub actions better than Travis or AppVeyor?** I am not qualified to
answer that question.
When using this repository, keep in mind that you don't need to use all
services adopted here. You can drop or add any other at your will by removing
the files or lines related to them or adding new ones following the patterns
presented here.
The following list summarizes the platforms adopted:
#. Building and testing
* GitHub actions
#. Quality Control
* Codacy
* Code Climate
#. Test Coverage
* Codecov
#. Documentation
* Read the Docs
I acknowledge the existence of many other platforms for the same purposes of
those listed. I chose these because they fit the size and scope of my projects
and are also the most used within my field of work.
Configuring GitHub Actions
~~~~~~~~~~~~~~~~~~~~~~~~~~
You may wish to read about `GitHub actions
<https://github.com/features/actions>`_. Here, I have one Action workflow per
environment defined in ``tox``. Each of these workflows runs on each of the
supported Python versions and OSes. These workflows cover unit tests,
documentation build, lint, integrity checks, and, finally, version bump and
package deployment on PyPI.
The CI workflows trigger every time a new pull request is created and at each
commit to that PR. However, the ``lint`` tests do not trigger when the PR is
merged to the ``main`` branch. On the other hand, the ``version bump`` workflow
triggers only when the PR is accepted.
In this repository you can find two demonstration PRs: one where `tests pass
<https://github.com/joaomcteixeira/python-project-skeleton/pull/10>`_ and
another where `tests fail
<https://github.com/joaomcteixeira/python-project-skeleton/pull/11>`_.
Version release
```````````````
Every time a Pull Request is merged to the `main` branch, the `deployment workflow
<https://github.com/joaomcteixeira/python-project-skeleton/blob/master/.github/workflows/version-bump-and-package.yml>`_
triggers. This action bumps the new version number according to the
merge commit message, creates a new GitHub tag for that commit, and
publishes in PyPI the new software version.
As discussed in another section, here I follow the rules of `Semantic
Versioning 2 <https://semver.org/>`_.
If the Pull Request merge commit starts with ``[MAJOR]``, a major version
increase takes place (attention to the rules of SV2!); if a PR merge commit
message starts with ``[FEATURE]``, it triggers a *minor* update. Finally, if the
commit message has no special tag, a *patch* update is triggered. Whether to
trigger a *major*, *minor*, or *patch* update concerns mainly the main
repository maintainer.
This whole workflow can be deactivated if the commit to the ``main`` branch
starts with ``[SKIP]``.
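For instance, these hypothetical merge commit messages would trigger,
respectively, a *major* release, a *minor* release, a *patch* release, and no
release at all::

    [MAJOR] drop support for end-of-life Python versions
    [FEATURE] add a new command-line option
    fix typos in the usage documentation
    [SKIP] tweak the CI cache settings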
In conclusion, every commit to ``main`` without the ``[SKIP]`` tag will be
followed by a version upgrade, new tag, new commit to ``main`` and consequent
release to PyPI. You have a visual representation of the commit workflow in the
`Network plot
<https://github.com/joaomcteixeira/python-project-skeleton/network>`_.
**How are version numbers managed?**
There are two main version string handlers out there:
`bump2version`_ and `versioneer`_. I chose *bump2version* for this repository
template. Why? I have no argument against *versioneer* or others; I simply
found ``bumpversion`` to be so simple, effective, and configurable that I could
only adopt it. Congratulations to both projects nonetheless.
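As a minimal illustration of how ``bump2version`` operates (assuming a
``.bumpversion.cfg`` like the one shipped in this repository), each invocation
bumps one part of the version string::

    bump2version patch   # e.g. 1.2.3 -> 1.2.4
    bump2version minor   # e.g. 1.2.4 -> 1.3.0
    bump2version major   # e.g. 1.3.0 -> 2.0.0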
Code coverage
~~~~~~~~~~~~~
``Codecov`` is used very frequently to report test coverage rates. Activate
your repository under ``https://about.codecov.io/``, and follow their
instructions.
`Coverage`_ reports are sent to Codecov servers when the ``test.yml`` workflow
takes place. This happens for each PR and each merge commit to ``main``.
The `.coveragerc`_ file, mirrored below, configures ``Coverage`` reports.
.. literalinclude:: ../.coveragerc
The :code:`.coveragerc` can be expanded to further restrain coverage analysis,
for example adding these lines to the :code:`exclude` tag:
::
[report]
exclude_lines =
if self.debug:
pragma: no cover
raise NotImplementedError
if __name__ == .__main__.:
Remember that if you don't want to use these services, you can simply remove
the respective files from your project.
Code Quality
~~~~~~~~~~~~
Here, we have both ``Codacy`` and ``Code Climate`` as code quality inspectors.
There are also others out there; feel free to suggest different ones in the
`Discussion tab
<https://github.com/joaomcteixeira/python-project-skeleton/discussions>`_.
Codacy
``````
There is not much to configure for `Codacy`_ as far as this template is
concerned. The only setup provided is to exclude the analysis of test scripts;
this configuration is provided by the :code:`.codacy.yaml` file at the root
directory of the repository. If you wish Codacy to perform quality analysis on
your test scripts, just remove the file or comment out the line. Here we mirror
the `.codacy.yaml`_ file:
.. literalinclude:: ../.codacy.yaml
:language: yaml
Code Climate
````````````
There is not much to configure for `Code Climate`_, either. The only setup
provided here is to exclude the analysis of test scripts and other *dev* files
that Code Climate checks by default; the :code:`.codeclimate.yml` file at the
root directory of the repository configures this behavior (look at the bottom
lines). If you wish Code Climate to perform quality analysis on your test
scripts, just remove the file or comment out the lines.
Code Climate provides a **technical debt** percentage that can be retrieved
nicely with :ref:`Badges`.
.. _Travis-CI: https://travis-ci.org
.. _.travis.yml: https://github.com/joaomcteixeira/python-project-skeleton/blob/latest/.travis.yml
.. _tox: https://tox.readthedocs.io/en/latest/
.. _Appveyor-CI: https://www.appveyor.com/
.. _tox.ini: https://github.com/joaomcteixeira/python-project-skeleton/blob/latest/tox.ini
.. _.appveyor.yml: https://github.com/joaomcteixeira/python-project-skeleton/blob/latest/.appveyor.yml
.. _.codacy.yaml: https://github.com/joaomcteixeira/python-project-skeleton/blob/latest/.codacy.yaml
.. _.codeclimate.yml: https://github.com/joaomcteixeira/python-project-skeleton/blob/latest/.codeclimate.yml
.. _Codacy: https://app.codacy.com/
.. _Code Climate: https://codeclimate.com/
.. _coverage: https://pypi.org/project/coverage/
.. _.coveragerc: https://github.com/joaomcteixeira/python-project-skeleton/blob/latest/.coveragerc
.. _bump2version: https://pypi.org/project/bumpversion/
.. _versioneer: https://github.com/warner/python-versioneer
| Adversary-Armor | /Adversary-Armor-0.1.1.tar.gz/Adversary-Armor-0.1.1/docs/nonlisted/ci_platforms.rst | ci_platforms.rst |
Python Project Layout
---------------------
The file structure for this Python project follows the ``src``, ``tests``,
``docs`` and ``devtools`` folders layout.
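In a simplified view, the top of the repository looks like this::

    python-project-skeleton/
    ├── src/
    │   └── sampleproject/
    ├── tests/
    ├── docs/
    └── devtools/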
The src layout
~~~~~~~~~~~~~~
I discovered that whether to store the project's source code underneath a
``src`` directory layer, instead of directly in the project's root folder, is
one of the most controversial topics in the organization of Python projects.
Here, I adopted the ``src``-based layout discussed by `ionel`_ in his `blog post`_.
After more than one year of using ``src``, I have nothing to complain about; on
the contrary, it saved me from many issues related to ``import`` statements.
Are you importing from the repository or from the installed version? The ``src``
layout guarantees stable imports. More on ionel's blog (see also the `src-nosrc example`_).
The ``src/sampleproject`` folder hosts the actual source of the project. In the
current version of this template, I don't discuss how to organize the source
code of a project. I look forward to doing that in future versions.
Testing
~~~~~~~
Here, tests are encapsulated in a separate ``tests`` folder, outside the main
library folder. With this encapsulation, it is easier to ensure that tests do
not import from relative paths and can only access the library code after
library installation (regardless of the installation mode). Also, having
``tests`` in a separate folder makes it easier to configure the exclusion of
tests from deployment (``MANIFEST.in``), code quality analysis
(``.codacy.yaml``), and coverage (``.coveragerc``).
Documentation
~~~~~~~~~~~~~
All documentation related files are stored in a ``docs`` folder. Files in
``docs`` will be compiled using Sphinx to generate the HTML documentation web
pages. You can follow the file and folder structure in this repository as an
example of how to assemble the documentation. You will see files that contain
text and others that import text from relative files. The latter strategy avoids
repeating information.
devtools
~~~~~~~~
The ``devtools`` folder hosts the files related to development. In this case, I
used the idea explained by `Chodera Lab`_ in their `structuring your project`_
guidelines.
.. _ionel: https://github.com/ionelmc
.. _blog post: https://blog.ionelmc.ro/2014/05/25/python-packaging/
.. _src-nosrc example: https://github.com/ionelmc/python-packaging-blunders
.. _Chodera lab: https://github.com/choderalab
.. _structuring your project: https://github.com/choderalab/software-development/blob/master/STRUCTURING_YOUR_PROJECT.md
| Adversary-Armor | /Adversary-Armor-0.1.1.tar.gz/Adversary-Armor-0.1.1/docs/nonlisted/project_layout.rst | project_layout.rst |
Read the Docs
-------------
Activate your project at the `Read the Docs platform`_ (RtD); their web
interface is easy enough to follow without further explanation. If your
documentation builds under the :ref:`tox workflow <Uniformed Tests with tox>`,
it will also build at Read the Docs.
Docs Requirements
~~~~~~~~~~~~~~~~~
Requirements to build the documentation page are listed in
``devtools/docs_requirements.txt``:
.. literalinclude:: ../devtools/docs_requirements.txt
Here we use `Sphinx`_ as the documentation builder and
`sphinx-py3doc-enhanced-theme`_ as the theme, though you can choose from many
different theme flavors; see `Sphinx Themes`_.
Build version
~~~~~~~~~~~~~
By default, RtD has two main documentation versions (also called builds): the
*latest* and the *stable*. The *latest* points to the ``master`` branch while
the *stable* points to the `latest GitHub tag`_. However, as discussed in the
:ref:`The Rationale behind the project` section, here we keep only the
*latest* version (that of the ``master`` branch) and other versions for the
different releases of interest.
Google Analytics
~~~~~~~~~~~~~~~~
Read the Docs allows a straightforward implementation of Google Analytics
tracking in the project documentation; just follow their instructions_.
Local Build
~~~~~~~~~~~
The ``[testenv:docs]`` environment in the ``tox.ini`` file simulates the RtD
execution. If that test passes, RtD should pass as well.
To build a local version of the documentation, go to the main repository folder
and run:
::
tox -e docs
The documentation is at ``dist/docs/index.html``. The ``tox`` run also reports
on inconsistencies and errors. If there are inconsistencies or errors in the
documentation build, the PR won't pass the CI tests.
.. _Read the Docs platform: https://readthedocs.org/
.. _Sphinx: http://www.sphinx-doc.org/en/master/
.. _Sphinx Themes: https://sphinx-themes.org/
.. _sphinx-py3doc-enhanced-theme: https://github.com/ionelmc/sphinx-py3doc-enhanced-theme
.. _latest Github tag: https://github.com/joaomcteixeira/python-project-skeleton/tags
.. _instructions: https://docs.readthedocs.io/en/stable/guides/google-analytics.html
| Adversary-Armor | /Adversary-Armor-0.1.1.tar.gz/Adversary-Armor-0.1.1/docs/nonlisted/readthedocs.rst | readthedocs.rst |
Configuration Files
-------------------
MANIFEST.in
~~~~~~~~~~~
The ``MANIFEST.in`` file configures which files in the repository are
included or ignored when assembling the distributed package. You can see that I
package the ``src``, the ``docs``, other ``*.rst`` files, the ``LICENSE``, and
nothing more. All configuration files, tests, and other Python-related or
IDE-related files are excluded.
There is a debate on whether tests should be deployed along with the library
source. Should they? Tox and the CI integration guarantee that tests run on
*installed* versions of the software. So, I am the kind of person who
considers there is no need to deploy tests alongside the source. Users
aren't going to run tests; developers will.
Let me know if you think differently.
It is actually easy to work with the ``MANIFEST.in`` file. Feel free to add or remove
files as your project needs.
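A minimal sketch of the kind of directives involved (the repository's actual
``MANIFEST.in`` is the authoritative reference)::

    graft src
    graft docs
    include *.rst
    include LICENSE
    prune tests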
tox.ini
~~~~~~~
The tox configuration file might be the trickiest one to learn and operate
until you are familiar with tox's workflows. `Read all about tox in their
documentation pages <https://tox.readthedocs.io/en/latest/>`_. The ``tox.ini``
file contains several comments explaining the implementations I adopted.
bumpversion.cfg
~~~~~~~~~~~~~~~
The ``.bumpversion.cfg`` configuration is heavily related to the GitHub
Actions workflows for automated packaging and deployment and to the
``CONTRIBUTING.rst`` instructions, especially the use of commit and tag
messages and the trick with ``CHANGELOG.rst``.
I have also used bumpversion in other projects to update the version on some
badges.
| Adversary-Armor | /Adversary-Armor-0.1.1.tar.gz/Adversary-Armor-0.1.1/docs/nonlisted/configuration_files.rst | configuration_files.rst |
Badges
------
Badges point to the current status of the different Continuous Integration tools
in the ``master`` branch. You can also configure badges to report on other
branches, though. Are tests passing? Is the package building properly? Is
documentation building properly? What is the quality of the code? Red lights
mean something is wrong and you should attend to it shortly!
Each platform provides its own badges, and you can modify and emulate the badge
strategy in the ``README.rst`` file. Yet, you can further tune the badge style
by creating your own personalized badges with `Shields.io`_.
You may have noticed already that there is no badge for PyPI. That is because
"python-project-skeleton" is deployed at ``test.pypi.org``. You will also find
at Shields.io how to add a PyPI badge.
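For example, a Shields.io PyPI version badge would look roughly like this in
``README.rst`` (with ``sampleproject`` standing in for your package name)::

    .. image:: https://img.shields.io/pypi/v/sampleproject.svg
        :target: https://pypi.org/project/sampleproject/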
.. _Shields.io: https://shields.io/
| Adversary-Armor | /Adversary-Armor-0.1.1.tar.gz/Adversary-Armor-0.1.1/docs/nonlisted/badges.rst | badges.rst |
<div align="center">
<img src="adversary_name.png"></img>
</div>
<br>
[](https://travis-ci.com/airbnb/artificial-adversary)
[](https://github.com/100/Cranium/blob/master/LICENSE)
This repo is primarily maintained by [Devin Soni](https://github.com/100/) and [Philbert Lin](https://github.com/philin).
Note that this project is under active development. If you encounter bugs, please report them in the `issues` tab.
## Introduction
When classifying user-generated text, there are many ways that users can modify their content to avoid detection. These methods are typically cosmetic modifications to the texts that change the raw characters or words used, but leave the original meaning visible enough for human readers to understand. Such methods include replacing characters with similar looking ones, removing or adding punctuation and spacing, and swapping letters in words. For example `please wire me 10,000 US DOLLARS to bank of scamland` is probably an obvious scam message, but `pl3@se w1re me 10000 US DoLars to,BANK of ScamIand` would fool many classifiers.
This library allows you to generate texts using these methods, and simulate these kinds of attacks on your machine learning models. By exposing your model to these texts offline, you will be able to better prepare for them when you encounter them in an online setting. Compared to other libraries, this one differs in that it treats the model as a black box and uses only generic attacks that do not depend on knowledge of the model itself.
## Installation
```
pip install Adversary
python -m textblob.download_corpora
```
## Usage
See [`Example.ipynb`](https://github.com/airbnb/artificial-adversary/blob/master/Example.ipynb) for a quick illustrative example.
```python
from Adversary import Adversary
gen = Adversary(verbose=True, output='Output/')
texts_original = ['tell me awful things']
texts_generated = gen.generate(texts_original)
metrics_single, metrics_group = gen.attack(texts_original, texts_generated, lambda x: 1)
```
### Use cases:
**1) For data-set augmentation:** In order to prepare for these attacks in the wild, an obvious method is to train on examples that are close in nature to the expected attacks. Training on adversarial examples has become a standard technique, and it has been shown to produce more robust classifiers. Using the texts generated by this library will allow you to build resilient models that can handle obfuscation of input text.
**2) For performance bounds:** If you do not want to alter an existing model, this library will allow you to obtain performance expectations under each possible type of attack.
### Included attacks:
- Text-level:
- Adding generic words to mask the suspicious parts (`good_word_attack`)
- Swapping words (`swap_words`)
- Removing spacing between words (`remove_spacing`)
- Word-level:
- Replacing words with synonyms (`synonym`)
- Replacing letters with similar-looking symbols (`letter_to_symbol`)
- Swapping letters (`swap_letters`)
- Inserting punctuation (`insert_punctuation`)
- Inserting duplicate characters (`insert_duplicate_characters`)
- Deleting characters (`delete_characters`)
- Changing case (`change_case`)
- Replacing digits with words (`num_to_word`)
### Interface:
**Constructor**
```
Adversary(
verbose=False,
output=None
)
```
- **verbose:** If verbose, prints output while generating texts and while conducting the attack
- **output:** If output, pickles the generated texts and metrics DataFrames to the folder at the `output` path
**Returns:** None
---
#### Note: only provide instances of the positive class in the below functions.
**Generate attacked texts**
```
Adversary.generate(
texts,
text_sample_rate=1.0,
word_sample_rate=0.3,
attacks='all',
max_attacks=2,
random_seed=None,
save=False
)
```
- **texts:** List of original strings
- **text_sample_rate:** P(individual text is attacked) if in [0, 1]; otherwise, the number of copies of each text to use
- **word_sample_rate:** P(word_i is sampled in a given word attack | word's text is sampled)
- **attacks:** Description of attack configuration - either 'all', `list` of `str` corresponding to attack names, or `dict` of attack name to probability
- **max_attacks:** Maximum number of attacks that can be applied to a single text
- **random_seed:** Seed for calls to random module functions
- **save:** Whether the generated texts should be pickled as output
**Returns:** List of tuples of generated strings in format (attacked text, list of attacks, index of original text).
Due to the probabilistic sampling and length heuristics used in certain attacks, some of the generated texts may not differ from the original.
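For instance, using the parameters documented above, a configuration that
applies only a couple of word-level attacks might look like this (a sketch;
`gen` and `texts_original` as defined in the Usage section):

```python
texts_generated = gen.generate(
    texts_original,
    word_sample_rate=0.5,
    attacks={'synonym': 0.5, 'swap_letters': 0.5},
    max_attacks=1,
)
```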
---
**Simulate attack on texts**
```
Adversary.attack(
texts_original,
texts_generated,
predict_function,
save=False
)
```
- **texts_original:** List of original texts
- **texts_generated:** List of generated texts (output of generate function)
- **predict_function:** Function that maps `str` input text to `int` classification label (0 or 1) - this probably wraps a machine learning model's `predict` function
- **save:** Whether the generated metrics `DataFrame`s should be pickled as output
**Returns:** Tuple of two DataFrames containing performance metrics (single attacks, and grouped attacks, respectively)
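In practice, `predict_function` usually wraps a trained model. A hypothetical
sketch with a scikit-learn-style classifier (`model` is assumed to expose a
standard `predict` method over a list of strings):

```python
def predict_function(text):
    # map a single string to an int label (0 or 1), as required above
    return int(model.predict([text])[0])

metrics_single, metrics_group = gen.attack(
    texts_original, texts_generated, predict_function
)
```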
## Contributing
Check the `issues` tab on GitHub for outstanding issues.
Otherwise, feel free to add new attacks in `attacks.py` or other features in a pull request and the maintainers will look through them.
Please make sure you pass the CI checks and add tests if applicable.
#### Acknowledgments
Credits to [Airbnb](https://airbnb.io/) for giving me the freedom to create this tool during my internship, and [Jack Dai](https://github.com/jdai8) for the (obvious in hindsight) name for the project.
| Adversary | /Adversary-1.1.1.tar.gz/Adversary-1.1.1/README.md | README.md |
# AdvertsAPI
An unofficial API for the community based Irish marketplace, **adverts.ie**.
## Getting Started
### Prerequisites
* Python 3.6 >=
### Installing
``` shell
pip install AdvertsAPI
```
### Dependecies
``` shell
pip install requests
pip install beautifulsoup4
pip install mechanize
```
### Imports
``` python
import AdvertsAPI
from AdvertsAPI.category import Category
from AdvertsAPI.utils import pretty_json
```
## Available Methods
* Constructor
* login
* get_ad_panel
* place_offer
* withdraw_offer
* full_ad_info
* search_query
* logout
## Constructor
**Default:** category=None, county=None, min_price=0, max_price=0, view='grid_view', search=None
``` python
advert = AdvertsAPI.AdvertsAPI()
```
**Bike in Dublin:** category=Category.SPORTS_FITNESS__BIKES, county='Dublin', min_price=0, max_price=0, view='grid_view', search=None
``` python
advert = AdvertsAPI.AdvertsAPI(category=Category.SPORTS_FITNESS__BIKES, county='Dublin')
```
**Mountain bike in Dublin:** category=Category.SPORTS_FITNESS__BIKES, county='Dublin', min_price=0, max_price=0, view='grid_view', search='mountain'
``` python
advert = AdvertsAPI.AdvertsAPI(category=Category.SPORTS_FITNESS__BIKES, county='Dublin', search='mountain')
```
**ps4 between 100-300 euro:** category=Category.CONSOLES_GAMES, county=None, min_price=100, max_price=300, view='grid_view', search='ps4'
``` python
advert = AdvertsAPI.AdvertsAPI(category=Category.CONSOLES_GAMES, min_price=100, max_price=300, search='ps4')
```
**free items in Dublin:** category=None, county='Dublin', min_price=0, max_price=0, view='grid_view', search=None
``` python
advert = AdvertsAPI.AdvertsAPI(county='Dublin')
```
## login
``` python
adverts = AdvertsAPI.AdvertsAPI()
adverts.login('username', 'password')
```
## get_ad_panel
**NOTE:** given the constructor details, it will produce the URL for the search.
``` python
ads = advert.get_ad_panel()
```
However, you can also pass in the URL of a search and it will use it directly.
``` python
ads = advert.get_ad_panel('https://www.adverts.ie/for-sale/price_0-0/')
```
It will return a list of the top 30 ads for that search. The attributes that can be retrieved are: `price`, `title`, `area`, `county`, `category` and `url`
``` python
for ad in ads:
print(ad.price)
print(ad.title)
print(ad.area)
print(ad.county)
print(ad.category)
print(ad.url)
```
## place_offer
This will place an offer directly, given the URL of the ad.
Below is an example with a static URL.
``` python
advert.place_offer('ad_url', 100)
```
The example below places an offer on the first of the ads retrieved [above](#get_ad_panel).
``` python
advert.place_offer(ads[0].url, 0)
```
## withdraw_offer
Withdraws an offer on a particular ad, given its URL. You must have an active offer on the ad to withdraw it.
``` python
advert.withdraw_offer('ad_url')
```
## full_ad_info
This will return all the information about the ad, given its URL. The attributes that can be retrieved are: `title`, `price`, `seller`, `positive`, `negative`, `location`, `date_entered`, `ad_views` and `description`.
``` python
advert.full_ad_info('ad_url')
```
``` python
ads = advert.full_ad_info(ads[0].url)
```
You can retrieve the data as follows:
``` python
for ad in ads:
print(ad.title)
print(ad.price)
print(ad.seller)
print(ad.positive)
print(ad.negative)
print(ad.location)
print(ad.date_entered)
print(ad.ad_views)
print(ad.description)
```
## search_query
By default, `search_query` uses the constructor-generated URL to search for ads.
``` python
advert.search_query()
```
However, you can pass a search query to the method and it will return the results for that query.
``` python
# search_query (given query)
ads = advert.search_query('mountain bikes')
```
Similar to the [get_ad_panel](#get_ad_panel) method, you can retrieve the data as below:
``` python
for ad in ads:
print(ad.title)
# ...
print(ad.url)
```
## logout
Not strictly necessary, as the cookies are discarded when the program stops.
``` python
adverts.logout()
```
## Usage
For documented examples see [examples.py](https://github.com/ahmedhamedaly/Adverts-API/blob/master/examples/examples.py)
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## Roadmap
See the [open issues](https://github.com/ahmedhamedaly/Adverts-API/issues) for a list of proposed features (and known issues).
## License
Distributed under the [MIT](https://github.com/ahmedhamedaly/Adverts-API/blob/master/LICENSE) License. See `LICENSE` for more information.
| AdvertsAPI | /AdvertsAPI-1.0.1.tar.gz/AdvertsAPI-1.0.1/README.md | README.md |
# Rocksdb Tuning Advisor
## Motivation
The performance of Rocksdb is contingent on its tuning. However,
because of the complexity of its underlying technology and the large number of
configurable parameters, a good configuration is sometimes hard to obtain. The aim of
the Python command-line tool, Rocksdb Advisor, is to automate the process of
suggesting improvements in the configuration based on advice from Rocksdb
experts.
## Overview
Experts share their wisdom as rules comprising conditions and suggestions in the INI format (refer
[rules.ini](https://github.com/facebook/rocksdb/blob/master/tools/advisor/advisor/rules.ini)).
Users provide the Rocksdb configuration that they want to improve upon (as the
familiar Rocksdb OPTIONS file —
[example](https://github.com/facebook/rocksdb/blob/master/examples/rocksdb_option_file_example.ini))
and the path of the file which contains Rocksdb logs and statistics.
The [Advisor](https://github.com/facebook/rocksdb/blob/master/tools/advisor/advisor/rule_parser_example.py)
creates appropriate DataSource objects (for Rocksdb
[logs](https://github.com/facebook/rocksdb/blob/master/tools/advisor/advisor/db_log_parser.py),
[options](https://github.com/facebook/rocksdb/blob/master/tools/advisor/advisor/db_options_parser.py),
[statistics](https://github.com/facebook/rocksdb/blob/master/tools/advisor/advisor/db_stats_fetcher.py) etc.)
and provides them to the [Rules Engine](https://github.com/facebook/rocksdb/blob/master/tools/advisor/advisor/rule_parser.py).
The Rules Engine uses the expert-specified rules to parse the data sources and trigger the appropriate rules.
The Advisor's output gives information about which rules were triggered,
why they were triggered and what each of them suggests. Each suggestion
provided by a triggered rule advises some action on a Rocksdb
configuration option, for example, increase CFOptions.write_buffer_size,
set bloom_bits to 2 etc.
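As an illustrative sketch (field names follow the sample output below; see the real
[rules.ini](https://github.com/facebook/rocksdb/blob/master/tools/advisor/advisor/rules.ini)
for the authoritative syntax), a rule, its condition, and one suggestion could be laid
out like this:

```ini
[Rule "stall-too-many-memtables"]
conditions=stall-too-many-memtables
suggestions=inc-bg-flush

[Condition "stall-too-many-memtables"]
source=LOG
regex=Stopping writes because we have \d+ immutable memtables

[Suggestion "inc-bg-flush"]
option=DBOptions.max_background_flushes
action=increase
suggested_values=2
```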
## Usage
### Prerequisites
The tool needs the following to run:
* python3
### Running the tool
An example command to run the tool:
```shell
cd rocksdb/tools/advisor
python3 -m advisor.rule_parser_example --rules_spec=advisor/rules.ini --rocksdb_options=test/input_files/OPTIONS-000005 --log_files_path_prefix=test/input_files/LOG-0 --stats_dump_period_sec=20
```
### Command-line arguments
Most important among all the inputs that the Advisor needs are the rules
spec and the starting Rocksdb configuration. The configuration is provided as the
familiar Rocksdb Options file (refer [example](https://github.com/facebook/rocksdb/blob/master/examples/rocksdb_option_file_example.ini)).
The Rules spec is written in the INI format (more details in
[rules.ini](https://github.com/facebook/rocksdb/blob/master/tools/advisor/advisor/rules.ini)).
In brief, a Rule is made of conditions and is triggered when all its
constituent conditions are triggered. When triggered, a Rule suggests changes
(increase/decrease/set to a suggested value) to certain Rocksdb options that
aim to improve Rocksdb performance. Every Condition has a 'source', i.e.
the data source that is checked to decide whether that condition triggers.
For example, a log Condition (with 'source=LOG') is triggered if a particular
'regex' is found in the Rocksdb LOG files. As of now the Rules Engine
supports 3 types of Conditions (and consequently data-sources):
LOG, OPTIONS, TIME_SERIES. The TIME_SERIES data can be sourced from the
Rocksdb [statistics](https://github.com/facebook/rocksdb/blob/master/include/rocksdb/statistics.h)
or [perf context](https://github.com/facebook/rocksdb/blob/master/include/rocksdb/perf_context.h).
For more information about the remaining command-line arguments, run:
```shell
cd rocksdb/tools/advisor
python3 -m advisor.rule_parser_example --help
```
### Sample output
Here, a Rocksdb log-based rule has been triggered:
```shell
Rule: stall-too-many-memtables
LogCondition: stall-too-many-memtables regex: Stopping writes because we have \d+ immutable memtables \(waiting for flush\), max_write_buffer_number is set to \d+
Suggestion: inc-bg-flush option : DBOptions.max_background_flushes action : increase suggested_values : ['2']
Suggestion: inc-write-buffer option : CFOptions.max_write_buffer_number action : increase
scope: col_fam:
{'default'}
```
## Running the tests
Tests for the code have been added to the
[test/](https://github.com/facebook/rocksdb/tree/master/tools/advisor/test)
directory. For example, to run the unit tests for db_log_parser.py:
```shell
cd rocksdb/tools/advisor
python3 -m unittest -v test.test_db_log_parser
```
| Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/README.md | README.md |
from advisor.db_log_parser import NO_COL_FAMILY
from advisor.db_options_parser import DatabaseOptions
from advisor.rule_parser import Suggestion
import copy
import random
class ConfigOptimizer:
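    """Greedy, backtracking configuration optimizer.

    Each iteration picks ONE triggered rule, applies all of its suggestions
    to the database options, re-runs the benchmark, and reverts the change
    when the new metric is not better (see run() below).
    """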
SCOPE = 'scope'
SUGG_VAL = 'suggested values'
@staticmethod
def apply_action_on_value(old_value, action, suggested_values):
chosen_sugg_val = None
if suggested_values:
chosen_sugg_val = random.choice(list(suggested_values))
new_value = None
if action is Suggestion.Action.set or not old_value:
assert(chosen_sugg_val)
new_value = chosen_sugg_val
else:
# For increase/decrease actions, currently the code tries to make
# a 30% change in the option's value per iteration. An addend is
            # also present (+2 or -2) to handle the cases when the option's
# old value was 0 or the final int() conversion suppressed the 30%
# change made to the option
old_value = float(old_value)
mul = 0
add = 0
if action is Suggestion.Action.increase:
if old_value < 0:
mul = 0.7
add = 2
else:
mul = 1.3
add = 2
elif action is Suggestion.Action.decrease:
if old_value < 0:
mul = 1.3
add = -2
else:
mul = 0.7
add = -2
new_value = int(old_value * mul + add)
return new_value
@staticmethod
def improve_db_config(options, rule, suggestions_dict):
# this method takes ONE 'rule' and applies all its suggestions on the
# appropriate options
required_options = []
rule_suggestions = []
for sugg_name in rule.get_suggestions():
option = suggestions_dict[sugg_name].option
action = suggestions_dict[sugg_name].action
# A Suggestion in the rules spec must have the 'option' and
# 'action' fields defined, always call perform_checks() method
# after parsing the rules file using RulesSpec
assert(option)
assert(action)
required_options.append(option)
rule_suggestions.append(suggestions_dict[sugg_name])
current_config = options.get_options(required_options)
# Create the updated configuration from the rule's suggestions
updated_config = {}
for sugg in rule_suggestions:
# case: when the option is not present in the current configuration
if sugg.option not in current_config:
try:
new_value = ConfigOptimizer.apply_action_on_value(
None, sugg.action, sugg.suggested_values
)
if sugg.option not in updated_config:
updated_config[sugg.option] = {}
if DatabaseOptions.is_misc_option(sugg.option):
# this suggestion is on an option that is not yet
# supported by the Rocksdb OPTIONS file and so it is
# not prefixed by a section type.
updated_config[sugg.option][NO_COL_FAMILY] = new_value
else:
for col_fam in rule.get_trigger_column_families():
updated_config[sugg.option][col_fam] = new_value
except AssertionError:
print(
'WARNING(ConfigOptimizer): provide suggested_values ' +
'for ' + sugg.option
)
continue
# case: when the option is present in the current configuration
if NO_COL_FAMILY in current_config[sugg.option]:
old_value = current_config[sugg.option][NO_COL_FAMILY]
try:
new_value = ConfigOptimizer.apply_action_on_value(
old_value, sugg.action, sugg.suggested_values
)
if sugg.option not in updated_config:
updated_config[sugg.option] = {}
updated_config[sugg.option][NO_COL_FAMILY] = new_value
except AssertionError:
print(
'WARNING(ConfigOptimizer): provide suggested_values ' +
'for ' + sugg.option
)
else:
for col_fam in rule.get_trigger_column_families():
old_value = None
if col_fam in current_config[sugg.option]:
old_value = current_config[sugg.option][col_fam]
try:
new_value = ConfigOptimizer.apply_action_on_value(
old_value, sugg.action, sugg.suggested_values
)
if sugg.option not in updated_config:
updated_config[sugg.option] = {}
updated_config[sugg.option][col_fam] = new_value
except AssertionError:
print(
'WARNING(ConfigOptimizer): provide ' +
'suggested_values for ' + sugg.option
)
return current_config, updated_config
@staticmethod
def pick_rule_to_apply(rules, last_rule_name, rules_tried, backtrack):
if not rules:
print('\nNo more rules triggered!')
return None
# if the last rule provided an improvement in the database performance,
# and it was triggered again (i.e. it is present in 'rules'), then pick
# the same rule for this iteration too.
if last_rule_name and not backtrack:
for rule in rules:
if rule.name == last_rule_name:
return rule
# there was no previous rule OR the previous rule did not improve db
# performance OR it was not triggered for this iteration,
# then pick another rule that has not been tried yet
for rule in rules:
if rule.name not in rules_tried:
return rule
print('\nAll rules have been exhausted')
return None
@staticmethod
def apply_suggestions(
triggered_rules,
current_rule_name,
rules_tried,
backtrack,
curr_options,
suggestions_dict
):
curr_rule = ConfigOptimizer.pick_rule_to_apply(
triggered_rules, current_rule_name, rules_tried, backtrack
)
if not curr_rule:
return tuple([None]*4)
# if a rule has been picked for improving db_config, update rules_tried
rules_tried.add(curr_rule.name)
# get updated config based on the picked rule
curr_conf, updated_conf = ConfigOptimizer.improve_db_config(
curr_options, curr_rule, suggestions_dict
)
conf_diff = DatabaseOptions.get_options_diff(curr_conf, updated_conf)
if not conf_diff: # the current and updated configs are the same
curr_rule, rules_tried, curr_conf, updated_conf = (
ConfigOptimizer.apply_suggestions(
triggered_rules,
None,
rules_tried,
backtrack,
curr_options,
suggestions_dict
)
)
print('returning from apply_suggestions')
return (curr_rule, rules_tried, curr_conf, updated_conf)
# TODO(poojam23): check if this method is required or can we directly set
# the config equal to the curr_config
@staticmethod
def get_backtrack_config(curr_config, updated_config):
diff = DatabaseOptions.get_options_diff(curr_config, updated_config)
bt_config = {}
for option in diff:
bt_config[option] = {}
for col_fam in diff[option]:
bt_config[option][col_fam] = diff[option][col_fam][0]
print(bt_config)
return bt_config
def __init__(self, bench_runner, db_options, rule_parser, base_db):
self.bench_runner = bench_runner
self.db_options = db_options
self.rule_parser = rule_parser
self.base_db_path = base_db
def run(self):
# In every iteration of this method's optimization loop we pick ONE
# RULE from all the triggered rules and apply all its suggestions to
# the appropriate options.
# bootstrapping the optimizer
print('Bootstrapping optimizer:')
options = copy.deepcopy(self.db_options)
old_data_sources, old_metric = (
self.bench_runner.run_experiment(options, self.base_db_path)
)
print('Initial metric: ' + str(old_metric))
self.rule_parser.load_rules_from_spec()
self.rule_parser.perform_section_checks()
triggered_rules = self.rule_parser.get_triggered_rules(
old_data_sources, options.get_column_families()
)
print('\nTriggered:')
self.rule_parser.print_rules(triggered_rules)
backtrack = False
rules_tried = set()
curr_rule, rules_tried, curr_conf, updated_conf = (
ConfigOptimizer.apply_suggestions(
triggered_rules,
None,
rules_tried,
backtrack,
options,
self.rule_parser.get_suggestions_dict()
)
)
# the optimizer loop
while curr_rule:
print('\nRule picked for next iteration:')
print(curr_rule.name)
print('\ncurrent config:')
print(curr_conf)
print('updated config:')
print(updated_conf)
options.update_options(updated_conf)
# run bench_runner with updated config
new_data_sources, new_metric = (
self.bench_runner.run_experiment(options, self.base_db_path)
)
print('\nnew metric: ' + str(new_metric))
backtrack = not self.bench_runner.is_metric_better(
new_metric, old_metric
)
# update triggered_rules, metric, data_sources, if required
if backtrack:
# revert changes to options config
print('\nBacktracking to previous configuration')
backtrack_conf = ConfigOptimizer.get_backtrack_config(
curr_conf, updated_conf
)
options.update_options(backtrack_conf)
else:
# run advisor on new data sources
self.rule_parser.load_rules_from_spec() # reboot the advisor
self.rule_parser.perform_section_checks()
triggered_rules = self.rule_parser.get_triggered_rules(
new_data_sources, options.get_column_families()
)
print('\nTriggered:')
self.rule_parser.print_rules(triggered_rules)
old_metric = new_metric
old_data_sources = new_data_sources
rules_tried = set()
# pick rule to work on and set curr_rule to that
curr_rule, rules_tried, curr_conf, updated_conf = (
ConfigOptimizer.apply_suggestions(
triggered_rules,
curr_rule.name,
rules_tried,
backtrack,
options,
self.rule_parser.get_suggestions_dict()
)
)
# return the final database options configuration
return options | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/db_config_optimizer.py | db_config_optimizer.py |
import argparse
from advisor.db_config_optimizer import ConfigOptimizer
from advisor.db_log_parser import NO_COL_FAMILY
from advisor.db_options_parser import DatabaseOptions
from advisor.rule_parser import RulesSpec
CONFIG_OPT_NUM_ITER = 10
def main(args):
# initialise the RulesSpec parser
rule_spec_parser = RulesSpec(args.rules_spec)
# initialise the benchmark runner
bench_runner_module = __import__(
args.benchrunner_module, fromlist=[args.benchrunner_class]
)
bench_runner_class = getattr(bench_runner_module, args.benchrunner_class)
ods_args = {}
if args.ods_client and args.ods_entity:
ods_args['client_script'] = args.ods_client
ods_args['entity'] = args.ods_entity
if args.ods_key_prefix:
ods_args['key_prefix'] = args.ods_key_prefix
db_bench_runner = bench_runner_class(args.benchrunner_pos_args, ods_args)
# initialise the database configuration
db_options = DatabaseOptions(args.rocksdb_options, args.misc_options)
# set the frequency at which stats are dumped in the LOG file and the
# location of the LOG file.
db_log_dump_settings = {
"DBOptions.stats_dump_period_sec": {
NO_COL_FAMILY: args.stats_dump_period_sec
}
}
db_options.update_options(db_log_dump_settings)
# initialise the configuration optimizer
config_optimizer = ConfigOptimizer(
db_bench_runner,
db_options,
rule_spec_parser,
args.base_db_path
)
# run the optimiser to improve the database configuration for given
# benchmarks, with the help of expert-specified rules
final_db_options = config_optimizer.run()
# generate the final rocksdb options file
print(
'Final configuration in: ' +
final_db_options.generate_options_config('final')
)
print(
'Final miscellaneous options: ' +
repr(final_db_options.get_misc_options())
)
if __name__ == '__main__':
'''
An example run of this tool from the command-line would look like:
python3 -m advisor.config_optimizer_example
--base_db_path=/tmp/rocksdbtest-155919/dbbench
--rocksdb_options=temp/OPTIONS_boot.tmp --misc_options bloom_bits=2
--rules_spec=advisor/rules.ini --stats_dump_period_sec=20
--benchrunner_module=advisor.db_bench_runner
--benchrunner_class=DBBenchRunner --benchrunner_pos_args ./../../db_bench
readwhilewriting use_existing_db=true duration=90
'''
parser = argparse.ArgumentParser(description='This script is used for\
searching for a better database configuration')
parser.add_argument(
'--rocksdb_options', required=True, type=str,
help='path of the starting Rocksdb OPTIONS file'
)
# these are options that are column-family agnostic and are not yet
# supported by the Rocksdb Options file: eg. bloom_bits=2
parser.add_argument(
'--misc_options', nargs='*',
help='whitespace-separated list of options that are not supported ' +
'by the Rocksdb OPTIONS file, given in the ' +
'<option_name>=<option_value> format eg. "bloom_bits=2 ' +
'rate_limiter_bytes_per_sec=128000000"')
parser.add_argument(
'--base_db_path', required=True, type=str,
help='path for the Rocksdb database'
)
parser.add_argument(
'--rules_spec', required=True, type=str,
help='path of the file containing the expert-specified Rules'
)
parser.add_argument(
'--stats_dump_period_sec', required=True, type=int,
help='the frequency (in seconds) at which STATISTICS are printed to ' +
'the Rocksdb LOG file'
)
# ODS arguments
parser.add_argument(
'--ods_client', type=str, help='the ODS client binary'
)
parser.add_argument(
'--ods_entity', type=str,
help='the servers for which the ODS stats need to be fetched'
)
parser.add_argument(
'--ods_key_prefix', type=str,
help='the prefix that needs to be attached to the keys of time ' +
'series to be fetched from ODS'
)
# benchrunner_module example: advisor.db_benchmark_client
parser.add_argument(
'--benchrunner_module', required=True, type=str,
help='the module containing the BenchmarkRunner class to be used by ' +
'the Optimizer, example: advisor.db_bench_runner'
)
# benchrunner_class example: DBBenchRunner
parser.add_argument(
'--benchrunner_class', required=True, type=str,
help='the name of the BenchmarkRunner class to be used by the ' +
'Optimizer, should be present in the module provided in the ' +
'benchrunner_module argument, example: DBBenchRunner'
)
parser.add_argument(
'--benchrunner_pos_args', nargs='*',
help='whitespace-separated positional arguments that are passed on ' +
'to the constructor of the BenchmarkRunner class provided in the ' +
'benchrunner_class argument, example: "use_existing_db=true ' +
'duration=900"'
)
args = parser.parse_args()
main(args) | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/config_optimizer_example.py | config_optimizer_example.py |
import copy
from advisor.db_log_parser import DataSource, NO_COL_FAMILY
from advisor.ini_parser import IniParser
import os
class OptionsSpecParser(IniParser):
@staticmethod
def is_new_option(line):
return '=' in line
@staticmethod
def get_section_type(line):
'''
Example section header: [TableOptions/BlockBasedTable "default"]
        Here the section_type returned would be
'TableOptions.BlockBasedTable'
'''
section_path = line.strip()[1:-1].split()[0]
section_type = '.'.join(section_path.split('/'))
return section_type
@staticmethod
def get_section_name(line):
# example: get_section_name('[CFOptions "default"]')
token_list = line.strip()[1:-1].split('"')
# token_list = ['CFOptions', 'default', '']
if len(token_list) < 3:
return None
return token_list[1] # return 'default'
@staticmethod
def get_section_str(section_type, section_name):
# Example:
# Case 1: get_section_str('DBOptions', NO_COL_FAMILY)
# Case 2: get_section_str('TableOptions.BlockBasedTable', 'default')
section_type = '/'.join(section_type.strip().split('.'))
# Case 1: section_type = 'DBOptions'
# Case 2: section_type = 'TableOptions/BlockBasedTable'
section_str = '[' + section_type
if section_name == NO_COL_FAMILY:
# Case 1: '[DBOptions]'
return (section_str + ']')
else:
# Case 2: '[TableOptions/BlockBasedTable "default"]'
return section_str + ' "' + section_name + '"]'
@staticmethod
def get_option_str(key, values):
option_str = key + '='
# get_option_str('db_log_dir', None), returns 'db_log_dir='
if values:
# example:
# get_option_str('max_bytes_for_level_multiplier_additional',
# [1,1,1,1,1,1,1]), returned string:
# 'max_bytes_for_level_multiplier_additional=1:1:1:1:1:1:1'
if isinstance(values, list):
for value in values:
option_str += (str(value) + ':')
option_str = option_str[:-1]
else:
# example: get_option_str('write_buffer_size', 1048576)
# returned string: 'write_buffer_size=1048576'
option_str += str(values)
return option_str
class DatabaseOptions(DataSource):
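    """A DataSource over a Rocksdb OPTIONS file.

    Options are stored as
    Dict[section_type, Dict[section_name, Dict[option_name, value]]], plus a
    separate dictionary for the miscellaneous options not yet supported by
    the OPTIONS file.
    """
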
@staticmethod
def is_misc_option(option_name):
# these are miscellaneous options that are not yet supported by the
# Rocksdb options file, hence they are not prefixed with any section
# name
return '.' not in option_name
@staticmethod
def get_options_diff(opt_old, opt_new):
# type: Dict[option, Dict[col_fam, value]] X 2 ->
# Dict[option, Dict[col_fam, Tuple(old_value, new_value)]]
# note: diff should contain a tuple of values only if they are
# different from each other
options_union = set(opt_old.keys()).union(set(opt_new.keys()))
diff = {}
for opt in options_union:
diff[opt] = {}
# if option in options_union, then it must be in one of the configs
if opt not in opt_old:
for col_fam in opt_new[opt]:
diff[opt][col_fam] = (None, opt_new[opt][col_fam])
elif opt not in opt_new:
for col_fam in opt_old[opt]:
diff[opt][col_fam] = (opt_old[opt][col_fam], None)
else:
for col_fam in opt_old[opt]:
if col_fam in opt_new[opt]:
if opt_old[opt][col_fam] != opt_new[opt][col_fam]:
diff[opt][col_fam] = (
opt_old[opt][col_fam],
opt_new[opt][col_fam]
)
else:
diff[opt][col_fam] = (opt_old[opt][col_fam], None)
for col_fam in opt_new[opt]:
if col_fam in opt_old[opt]:
if opt_old[opt][col_fam] != opt_new[opt][col_fam]:
diff[opt][col_fam] = (
opt_old[opt][col_fam],
opt_new[opt][col_fam]
)
else:
diff[opt][col_fam] = (None, opt_new[opt][col_fam])
if not diff[opt]:
diff.pop(opt)
return diff
def __init__(self, rocksdb_options, misc_options=None):
super().__init__(DataSource.Type.DB_OPTIONS)
# The options are stored in the following data structure:
# Dict[section_type, Dict[section_name, Dict[option_name, value]]]
self.options_dict = None
self.column_families = None
# Load the options from the given file to a dictionary.
self.load_from_source(rocksdb_options)
# Setup the miscellaneous options expected to be List[str], where each
# element in the List has the format "<option_name>=<option_value>"
# These options are the ones that are not yet supported by the Rocksdb
# OPTIONS file, so they are provided separately
self.setup_misc_options(misc_options)
def setup_misc_options(self, misc_options):
self.misc_options = {}
if misc_options:
for option_pair_str in misc_options:
option_name = option_pair_str.split('=')[0].strip()
option_value = option_pair_str.split('=')[1].strip()
self.misc_options[option_name] = option_value
def load_from_source(self, options_path):
self.options_dict = {}
with open(options_path, 'r') as db_options:
for line in db_options:
line = OptionsSpecParser.remove_trailing_comment(line)
if not line:
continue
if OptionsSpecParser.is_section_header(line):
curr_sec_type = (
OptionsSpecParser.get_section_type(line)
)
curr_sec_name = OptionsSpecParser.get_section_name(line)
if curr_sec_type not in self.options_dict:
self.options_dict[curr_sec_type] = {}
if not curr_sec_name:
curr_sec_name = NO_COL_FAMILY
self.options_dict[curr_sec_type][curr_sec_name] = {}
# example: if the line read from the Rocksdb OPTIONS file
# is [CFOptions "default"], then the section type is
# CFOptions and 'default' is the name of a column family
# that for this database, so it's added to the list of
# column families stored in this object
if curr_sec_type == 'CFOptions':
if not self.column_families:
self.column_families = []
self.column_families.append(curr_sec_name)
elif OptionsSpecParser.is_new_option(line):
key, value = OptionsSpecParser.get_key_value_pair(line)
self.options_dict[curr_sec_type][curr_sec_name][key] = (
value
)
else:
error = 'Not able to parse line in Options file.'
OptionsSpecParser.exit_with_parse_error(line, error)
def get_misc_options(self):
# these are options that are not yet supported by the Rocksdb OPTIONS
# file, hence they are provided and stored separately
return self.misc_options
def get_column_families(self):
return self.column_families
def get_all_options(self):
# This method returns all the options that are stored in this object as
# a: Dict[<sec_type>.<option_name>: Dict[col_fam, option_value]]
all_options = []
# Example: in the section header '[CFOptions "default"]' read from the
# OPTIONS file, sec_type='CFOptions'
for sec_type in self.options_dict:
for col_fam in self.options_dict[sec_type]:
for opt_name in self.options_dict[sec_type][col_fam]:
option = sec_type + '.' + opt_name
all_options.append(option)
all_options.extend(list(self.misc_options.keys()))
return self.get_options(all_options)
def get_options(self, reqd_options):
# type: List[str] -> Dict[str, Dict[str, Any]]
# List[option] -> Dict[option, Dict[col_fam, value]]
reqd_options_dict = {}
for option in reqd_options:
if DatabaseOptions.is_misc_option(option):
# the option is not prefixed by '<section_type>.' because it is
# not yet supported by the Rocksdb OPTIONS file; so it has to
# be fetched from the misc_options dictionary
if option not in self.misc_options:
continue
if option not in reqd_options_dict:
reqd_options_dict[option] = {}
reqd_options_dict[option][NO_COL_FAMILY] = (
self.misc_options[option]
)
else:
# Example: option = 'TableOptions.BlockBasedTable.block_align'
# then, sec_type = 'TableOptions.BlockBasedTable'
sec_type = '.'.join(option.split('.')[:-1])
# opt_name = 'block_align'
opt_name = option.split('.')[-1]
if sec_type not in self.options_dict:
continue
for col_fam in self.options_dict[sec_type]:
if opt_name in self.options_dict[sec_type][col_fam]:
if option not in reqd_options_dict:
reqd_options_dict[option] = {}
reqd_options_dict[option][col_fam] = (
self.options_dict[sec_type][col_fam][opt_name]
)
return reqd_options_dict
def update_options(self, options):
# An example 'options' object looks like:
# {'DBOptions.max_background_jobs': {NO_COL_FAMILY: 2},
# 'CFOptions.write_buffer_size': {'default': 1048576, 'cf_A': 128000},
# 'bloom_bits': {NO_COL_FAMILY: 4}}
for option in options:
if DatabaseOptions.is_misc_option(option):
# this is a misc_option i.e. an option that is not yet
# supported by the Rocksdb OPTIONS file, so it is not prefixed
# by '<section_type>.' and must be stored in the separate
# misc_options dictionary
if NO_COL_FAMILY not in options[option]:
print(
'WARNING(DatabaseOptions.update_options): not ' +
'updating option ' + option + ' because it is in ' +
'misc_option format but its scope is not ' +
NO_COL_FAMILY + '. Check format of option.'
)
continue
self.misc_options[option] = options[option][NO_COL_FAMILY]
else:
sec_name = '.'.join(option.split('.')[:-1])
opt_name = option.split('.')[-1]
if sec_name not in self.options_dict:
self.options_dict[sec_name] = {}
for col_fam in options[option]:
# if the option is not already present in the dictionary,
# it will be inserted, else it will be updated to the new
# value
if col_fam not in self.options_dict[sec_name]:
self.options_dict[sec_name][col_fam] = {}
self.options_dict[sec_name][col_fam][opt_name] = (
copy.deepcopy(options[option][col_fam])
)
def generate_options_config(self, nonce):
# this method generates a Rocksdb OPTIONS file in the INI format from
# the options stored in self.options_dict
this_path = os.path.abspath(os.path.dirname(__file__))
file_name = '../temp/OPTIONS_' + str(nonce) + '.tmp'
file_path = os.path.join(this_path, file_name)
with open(file_path, 'w') as fp:
for section in self.options_dict:
for col_fam in self.options_dict[section]:
fp.write(
OptionsSpecParser.get_section_str(section, col_fam) +
'\n'
)
for option in self.options_dict[section][col_fam]:
values = self.options_dict[section][col_fam][option]
fp.write(
OptionsSpecParser.get_option_str(option, values) +
'\n'
)
fp.write('\n')
return file_path
def check_and_trigger_conditions(self, conditions):
for cond in conditions:
reqd_options_dict = self.get_options(cond.options)
# This contains the indices of options that are specific to some
# column family and are not database-wide options.
incomplete_option_ix = []
options = []
missing_reqd_option = False
for ix, option in enumerate(cond.options):
if option not in reqd_options_dict:
print(
'WARNING(DatabaseOptions.check_and_trigger): ' +
'skipping condition ' + cond.name + ' because it '
'requires option ' + option + ' but this option is' +
' not available'
)
missing_reqd_option = True
break # required option is absent
if NO_COL_FAMILY in reqd_options_dict[option]:
options.append(reqd_options_dict[option][NO_COL_FAMILY])
else:
options.append(None)
incomplete_option_ix.append(ix)
if missing_reqd_option:
continue
# if all the options are database-wide options
if not incomplete_option_ix:
try:
if eval(cond.eval_expr):
cond.set_trigger({NO_COL_FAMILY: options})
except Exception as e:
print(
'WARNING(DatabaseOptions) check_and_trigger:' + str(e)
)
continue
# for all the options that are not database-wide, we look for their
# values specific to column families
col_fam_options_dict = {}
for col_fam in self.column_families:
present = True
for ix in incomplete_option_ix:
option = cond.options[ix]
if col_fam not in reqd_options_dict[option]:
present = False
break
options[ix] = reqd_options_dict[option][col_fam]
if present:
try:
if eval(cond.eval_expr):
col_fam_options_dict[col_fam] = (
copy.deepcopy(options)
)
except Exception as e:
print(
'WARNING(DatabaseOptions) check_and_trigger: ' +
str(e)
)
# Trigger for an OptionCondition object is of the form:
# Dict[col_fam_name: List[option_value]]
# where col_fam_name is the name of a column family for which
# 'eval_expr' evaluated to True and List[option_value] is the list
# of values of the options specified in the condition's 'options'
# field
if col_fam_options_dict:
cond.set_trigger(col_fam_options_dict) | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/db_options_parser.py | db_options_parser.py |
from abc import ABC, abstractmethod
from calendar import timegm
from enum import Enum
import glob
import re
import time
NO_COL_FAMILY = 'DB_WIDE'
class DataSource(ABC):
class Type(Enum):
LOG = 1
DB_OPTIONS = 2
TIME_SERIES = 3
def __init__(self, type):
self.type = type
@abstractmethod
def check_and_trigger_conditions(self, conditions):
pass
class Log:
@staticmethod
def is_new_log(log_line):
# The assumption is that a new log will start with a date printed in
# the below regex format.
        date_regex = r'\d{4}/\d{2}/\d{2}-\d{2}:\d{2}:\d{2}\.\d{6}'
return re.match(date_regex, log_line)
def __init__(self, log_line, column_families):
token_list = log_line.strip().split()
self.time = token_list[0]
self.context = token_list[1]
self.message = " ".join(token_list[2:])
self.column_family = None
# example log for 'default' column family:
# "2018/07/25-17:29:05.176080 7f969de68700 [db/compaction_job.cc:1634]
# [default] [JOB 3] Compacting 24@0 + 16@1 files to L1, score 6.00\n"
for col_fam in column_families:
            search_for_str = r'\[' + col_fam + r'\]'
if re.search(search_for_str, self.message):
self.column_family = col_fam
break
if not self.column_family:
self.column_family = NO_COL_FAMILY
def get_human_readable_time(self):
# example from a log line: '2018/07/25-11:25:45.782710'
return self.time
def get_column_family(self):
return self.column_family
def get_context(self):
return self.context
def get_message(self):
return self.message
def append_message(self, remaining_log):
self.message = self.message + '\n' + remaining_log.strip()
def get_timestamp(self):
# example: '2018/07/25-11:25:45.782710' will be converted to the GMT
# Unix timestamp 1532517945 (note: this method assumes that self.time
# is in GMT)
hr_time = self.time + 'GMT'
timestamp = timegm(time.strptime(hr_time, "%Y/%m/%d-%H:%M:%S.%f%Z"))
return timestamp
def __repr__(self):
return (
'time: ' + self.time + '; context: ' + self.context +
'; col_fam: ' + self.column_family +
'; message: ' + self.message
)
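# A small usage sketch for the Log class, reusing the sample line quoted in
# the comments above ('default' is an assumed column-family name):
#
#   line = ('2018/07/25-17:29:05.176080 7f969de68700 '
#           '[db/compaction_job.cc:1634] [default] [JOB 3] Compacting '
#           '24@0 + 16@1 files to L1, score 6.00')
#   log = Log(line, column_families=['default'])
#   assert Log.is_new_log(line)
#   assert log.get_column_family() == 'default'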
class DatabaseLogs(DataSource):
def __init__(self, logs_path_prefix, column_families):
super().__init__(DataSource.Type.LOG)
self.logs_path_prefix = logs_path_prefix
self.column_families = column_families
def trigger_conditions_for_log(self, conditions, log):
# For a LogCondition object, trigger is:
# Dict[column_family_name, List[Log]]. This explains why the condition
# was triggered and for which column families.
for cond in conditions:
if re.search(cond.regex, log.get_message(), re.IGNORECASE):
trigger = cond.get_trigger()
if not trigger:
trigger = {}
if log.get_column_family() not in trigger:
trigger[log.get_column_family()] = []
trigger[log.get_column_family()].append(log)
cond.set_trigger(trigger)
def check_and_trigger_conditions(self, conditions):
for file_name in glob.glob(self.logs_path_prefix + '*'):
# TODO(poojam23): find a way to distinguish between log files
# - generated in the current experiment but are labeled 'old'
            # because the LOG exceeded the file size limit AND
# - generated in some previous experiment that are also labeled
# 'old' and were not deleted for some reason
if re.search('old', file_name, re.IGNORECASE):
continue
with open(file_name, 'r') as db_logs:
new_log = None
for line in db_logs:
if Log.is_new_log(line):
if new_log:
self.trigger_conditions_for_log(
conditions, new_log
)
new_log = Log(line, self.column_families)
else:
# To account for logs split into multiple lines
new_log.append_message(line)
# Check for the last log in the file.
if new_log:
self.trigger_conditions_for_log(conditions, new_log) | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/db_log_parser.py | db_log_parser.py |
from abc import ABC, abstractmethod
from advisor.db_log_parser import DataSource, NO_COL_FAMILY
from advisor.db_timeseries_parser import TimeSeriesData
from enum import Enum
from advisor.ini_parser import IniParser
import re
class Section(ABC):
def __init__(self, name):
self.name = name
@abstractmethod
def set_parameter(self, key, value):
pass
@abstractmethod
def perform_checks(self):
pass
class Rule(Section):
def __init__(self, name):
super().__init__(name)
self.conditions = None
self.suggestions = None
self.overlap_time_seconds = None
self.trigger_entities = None
self.trigger_column_families = None
def set_parameter(self, key, value):
# If the Rule is associated with a single suggestion/condition, then
# value will be a string and not a list. Hence, convert it to a single
# element list before storing it in self.suggestions or
# self.conditions.
if key == 'conditions':
if isinstance(value, str):
self.conditions = [value]
else:
self.conditions = value
elif key == 'suggestions':
if isinstance(value, str):
self.suggestions = [value]
else:
self.suggestions = value
elif key == 'overlap_time_period':
self.overlap_time_seconds = value
def get_suggestions(self):
return self.suggestions
def perform_checks(self):
if not self.conditions or len(self.conditions) < 1:
raise ValueError(
self.name + ': rule must have at least one condition'
)
if not self.suggestions or len(self.suggestions) < 1:
raise ValueError(
self.name + ': rule must have at least one suggestion'
)
if self.overlap_time_seconds:
if len(self.conditions) != 2:
                raise ValueError(
                    self.name + ': rule must be associated with 2 '
                    'conditions in order to check for a time dependency '
                    'between them'
                )
            time_format = r'^\d+[smhd]$'
if (
not
re.match(time_format, self.overlap_time_seconds, re.IGNORECASE)
):
                raise ValueError(
                    self.name + r": overlap_time_seconds format: \d+[smhd]"
                )
else: # convert to seconds
in_seconds = int(self.overlap_time_seconds[:-1])
if self.overlap_time_seconds[-1] == 'm':
in_seconds *= 60
elif self.overlap_time_seconds[-1] == 'h':
in_seconds *= (60 * 60)
elif self.overlap_time_seconds[-1] == 'd':
in_seconds *= (24 * 60 * 60)
self.overlap_time_seconds = in_seconds
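    # Worked example of the conversion above (value illustrative):
    # overlap_time_period = '10m' passes the format check and is stored as
    # self.overlap_time_seconds = 10 * 60 = 600.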
def get_overlap_timestamps(self, key1_trigger_epochs, key2_trigger_epochs):
# this method takes in 2 timeseries i.e. timestamps at which the
# rule's 2 TIME_SERIES conditions were triggered and it finds
# (if present) the first pair of timestamps at which the 2 conditions
# were triggered within 'overlap_time_seconds' of each other
key1_lower_bounds = [
epoch - self.overlap_time_seconds
for epoch in key1_trigger_epochs
]
key1_lower_bounds.sort()
key2_trigger_epochs.sort()
trigger_ix = 0
overlap_pair = None
for key1_lb in key1_lower_bounds:
            while (
                trigger_ix < len(key2_trigger_epochs) and
                key2_trigger_epochs[trigger_ix] < key1_lb
            ):
trigger_ix += 1
if trigger_ix >= len(key2_trigger_epochs):
break
if (
key2_trigger_epochs[trigger_ix] <=
key1_lb + (2 * self.overlap_time_seconds)
):
overlap_pair = (
key2_trigger_epochs[trigger_ix],
key1_lb + self.overlap_time_seconds
)
break
return overlap_pair
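    # Tiny worked example for the method above (epochs are illustrative):
    # with overlap_time_seconds = 10, key1_trigger_epochs = [100] and
    # key2_trigger_epochs = [95], key1_lower_bounds becomes [90]; epoch 95
    # falls in [90, 90 + 2 * 10], so the method returns
    # (95, 90 + 10) = (95, 100).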
def get_trigger_entities(self):
return self.trigger_entities
def get_trigger_column_families(self):
return self.trigger_column_families
def is_triggered(self, conditions_dict, column_families):
if self.overlap_time_seconds:
condition1 = conditions_dict[self.conditions[0]]
condition2 = conditions_dict[self.conditions[1]]
if not (
condition1.get_data_source() is DataSource.Type.TIME_SERIES and
condition2.get_data_source() is DataSource.Type.TIME_SERIES
):
raise ValueError(self.name + ': need 2 timeseries conditions')
map1 = condition1.get_trigger()
map2 = condition2.get_trigger()
if not (map1 and map2):
return False
self.trigger_entities = {}
is_triggered = False
entity_intersection = (
set(map1.keys()).intersection(set(map2.keys()))
)
for entity in entity_intersection:
overlap_timestamps_pair = (
self.get_overlap_timestamps(
list(map1[entity].keys()), list(map2[entity].keys())
)
)
if overlap_timestamps_pair:
self.trigger_entities[entity] = overlap_timestamps_pair
is_triggered = True
if is_triggered:
self.trigger_column_families = set(column_families)
return is_triggered
else:
all_conditions_triggered = True
self.trigger_column_families = set(column_families)
for cond_name in self.conditions:
cond = conditions_dict[cond_name]
if not cond.get_trigger():
all_conditions_triggered = False
break
if (
cond.get_data_source() is DataSource.Type.LOG or
cond.get_data_source() is DataSource.Type.DB_OPTIONS
):
cond_col_fam = set(cond.get_trigger().keys())
if NO_COL_FAMILY in cond_col_fam:
cond_col_fam = set(column_families)
self.trigger_column_families = (
self.trigger_column_families.intersection(cond_col_fam)
)
elif cond.get_data_source() is DataSource.Type.TIME_SERIES:
cond_entities = set(cond.get_trigger().keys())
if self.trigger_entities is None:
self.trigger_entities = cond_entities
else:
self.trigger_entities = (
self.trigger_entities.intersection(cond_entities)
)
if not (self.trigger_entities or self.trigger_column_families):
all_conditions_triggered = False
break
if not all_conditions_triggered: # clean up if rule not triggered
self.trigger_column_families = None
self.trigger_entities = None
return all_conditions_triggered
def __repr__(self):
# Append conditions
rule_string = "Rule: " + self.name + " has conditions:: "
is_first = True
for cond in self.conditions:
if is_first:
rule_string += cond
is_first = False
else:
rule_string += (" AND " + cond)
# Append suggestions
rule_string += "\nsuggestions:: "
is_first = True
for sugg in self.suggestions:
if is_first:
rule_string += sugg
is_first = False
else:
rule_string += (", " + sugg)
if self.trigger_entities:
rule_string += (', entities:: ' + str(self.trigger_entities))
if self.trigger_column_families:
rule_string += (', col_fam:: ' + str(self.trigger_column_families))
# Return constructed string
return rule_string
class Suggestion(Section):
class Action(Enum):
set = 1
increase = 2
decrease = 3
def __init__(self, name):
super().__init__(name)
self.option = None
self.action = None
self.suggested_values = None
self.description = None
def set_parameter(self, key, value):
if key == 'option':
# Note:
# case 1: 'option' is supported by Rocksdb OPTIONS file; in this
# case the option belongs to one of the sections in the config
            # file and its name is prefixed by "<section_type>."
# case 2: 'option' is not supported by Rocksdb OPTIONS file; the
# option is not expected to have the character '.' in its name
self.option = value
elif key == 'action':
if self.option and not value:
raise ValueError(self.name + ': provide action for option')
self.action = self.Action[value]
elif key == 'suggested_values':
if isinstance(value, str):
self.suggested_values = [value]
else:
self.suggested_values = value
elif key == 'description':
self.description = value
def perform_checks(self):
if not self.description:
if not self.option:
raise ValueError(self.name + ': provide option or description')
if not self.action:
raise ValueError(self.name + ': provide action for option')
if self.action is self.Action.set and not self.suggested_values:
raise ValueError(
self.name + ': provide suggested value for option'
)
def __repr__(self):
sugg_string = "Suggestion: " + self.name
if self.description:
sugg_string += (' description : ' + self.description)
else:
sugg_string += (
' option : ' + self.option + ' action : ' + self.action.name
)
if self.suggested_values:
sugg_string += (
' suggested_values : ' + str(self.suggested_values)
)
return sugg_string
class Condition(Section):
def __init__(self, name):
super().__init__(name)
self.data_source = None
self.trigger = None
def perform_checks(self):
if not self.data_source:
raise ValueError(self.name + ': condition not tied to data source')
def set_data_source(self, data_source):
self.data_source = data_source
def get_data_source(self):
return self.data_source
def reset_trigger(self):
self.trigger = None
def set_trigger(self, condition_trigger):
self.trigger = condition_trigger
def get_trigger(self):
return self.trigger
def is_triggered(self):
if self.trigger:
return True
return False
def set_parameter(self, key, value):
# must be defined by the subclass
raise NotImplementedError(self.name + ': provide source for condition')
class LogCondition(Condition):
@classmethod
def create(cls, base_condition):
base_condition.set_data_source(DataSource.Type['LOG'])
base_condition.__class__ = cls
return base_condition
def set_parameter(self, key, value):
if key == 'regex':
self.regex = value
def perform_checks(self):
super().perform_checks()
if not self.regex:
raise ValueError(self.name + ': provide regex for log condition')
def __repr__(self):
log_cond_str = "LogCondition: " + self.name
log_cond_str += (" regex: " + self.regex)
# if self.trigger:
# log_cond_str += (" trigger: " + str(self.trigger))
return log_cond_str
class OptionCondition(Condition):
@classmethod
def create(cls, base_condition):
base_condition.set_data_source(DataSource.Type['DB_OPTIONS'])
base_condition.__class__ = cls
return base_condition
def set_parameter(self, key, value):
if key == 'options':
if isinstance(value, str):
self.options = [value]
else:
self.options = value
elif key == 'evaluate':
self.eval_expr = value
def perform_checks(self):
super().perform_checks()
if not self.options:
raise ValueError(self.name + ': options missing in condition')
if not self.eval_expr:
raise ValueError(self.name + ': expression missing in condition')
def __repr__(self):
opt_cond_str = "OptionCondition: " + self.name
opt_cond_str += (" options: " + str(self.options))
opt_cond_str += (" expression: " + self.eval_expr)
if self.trigger:
opt_cond_str += (" trigger: " + str(self.trigger))
return opt_cond_str
class TimeSeriesCondition(Condition):
@classmethod
def create(cls, base_condition):
base_condition.set_data_source(DataSource.Type['TIME_SERIES'])
base_condition.__class__ = cls
return base_condition
def set_parameter(self, key, value):
if key == 'keys':
if isinstance(value, str):
self.keys = [value]
else:
self.keys = value
elif key == 'behavior':
self.behavior = TimeSeriesData.Behavior[value]
elif key == 'rate_threshold':
self.rate_threshold = float(value)
elif key == 'window_sec':
self.window_sec = int(value)
elif key == 'evaluate':
self.expression = value
elif key == 'aggregation_op':
self.aggregation_op = TimeSeriesData.AggregationOperator[value]
def perform_checks(self):
if not self.keys:
raise ValueError(self.name + ': specify timeseries key')
if not self.behavior:
raise ValueError(self.name + ': specify triggering behavior')
if self.behavior is TimeSeriesData.Behavior.bursty:
if not self.rate_threshold:
raise ValueError(self.name + ': specify rate burst threshold')
if not self.window_sec:
self.window_sec = 300 # default window length is 5 minutes
if len(self.keys) > 1:
raise ValueError(self.name + ': specify only one key')
elif self.behavior is TimeSeriesData.Behavior.evaluate_expression:
if not (self.expression):
raise ValueError(self.name + ': specify evaluation expression')
else:
raise ValueError(self.name + ': trigger behavior not supported')
def __repr__(self):
ts_cond_str = "TimeSeriesCondition: " + self.name
ts_cond_str += (" statistics: " + str(self.keys))
ts_cond_str += (" behavior: " + self.behavior.name)
if self.behavior is TimeSeriesData.Behavior.bursty:
ts_cond_str += (" rate_threshold: " + str(self.rate_threshold))
ts_cond_str += (" window_sec: " + str(self.window_sec))
if self.behavior is TimeSeriesData.Behavior.evaluate_expression:
ts_cond_str += (" expression: " + self.expression)
if hasattr(self, 'aggregation_op'):
ts_cond_str += (" aggregation_op: " + self.aggregation_op.name)
if self.trigger:
ts_cond_str += (" trigger: " + str(self.trigger))
return ts_cond_str
class RulesSpec:
def __init__(self, rules_path):
self.file_path = rules_path
def initialise_fields(self):
self.rules_dict = {}
self.conditions_dict = {}
self.suggestions_dict = {}
def perform_section_checks(self):
for rule in self.rules_dict.values():
rule.perform_checks()
for cond in self.conditions_dict.values():
cond.perform_checks()
for sugg in self.suggestions_dict.values():
sugg.perform_checks()
def load_rules_from_spec(self):
self.initialise_fields()
with open(self.file_path, 'r') as db_rules:
curr_section = None
for line in db_rules:
line = IniParser.remove_trailing_comment(line)
if not line:
continue
element = IniParser.get_element(line)
if element is IniParser.Element.comment:
continue
elif element is not IniParser.Element.key_val:
curr_section = element # it's a new IniParser header
section_name = IniParser.get_section_name(line)
if element is IniParser.Element.rule:
new_rule = Rule(section_name)
self.rules_dict[section_name] = new_rule
elif element is IniParser.Element.cond:
new_cond = Condition(section_name)
self.conditions_dict[section_name] = new_cond
elif element is IniParser.Element.sugg:
new_suggestion = Suggestion(section_name)
self.suggestions_dict[section_name] = new_suggestion
elif element is IniParser.Element.key_val:
key, value = IniParser.get_key_value_pair(line)
if curr_section is IniParser.Element.rule:
new_rule.set_parameter(key, value)
elif curr_section is IniParser.Element.cond:
if key == 'source':
if value == 'LOG':
new_cond = LogCondition.create(new_cond)
elif value == 'OPTIONS':
new_cond = OptionCondition.create(new_cond)
elif value == 'TIME_SERIES':
new_cond = TimeSeriesCondition.create(new_cond)
else:
new_cond.set_parameter(key, value)
elif curr_section is IniParser.Element.sugg:
new_suggestion.set_parameter(key, value)
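    # A minimal sketch of the spec format this method consumes (the rule,
    # condition and suggestion names below are illustrative; multi-valued
    # fields use ':'-separated lists, as handled by IniParser):
    #
    #   [Rule "example-rule"]
    #   conditions=example-cond
    #   suggestions=example-sugg
    #
    #   [Condition "example-cond"]
    #   source=LOG
    #   regex=Stopping writes
    #
    #   [Suggestion "example-sugg"]
    #   option=DBOptions.max_background_flushes
    #   action=increase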
def get_rules_dict(self):
return self.rules_dict
def get_conditions_dict(self):
return self.conditions_dict
def get_suggestions_dict(self):
return self.suggestions_dict
def get_triggered_rules(self, data_sources, column_families):
self.trigger_conditions(data_sources)
triggered_rules = []
for rule in self.rules_dict.values():
if rule.is_triggered(self.conditions_dict, column_families):
triggered_rules.append(rule)
return triggered_rules
def trigger_conditions(self, data_sources):
for source_type in data_sources:
cond_subset = [
cond
for cond in self.conditions_dict.values()
if cond.get_data_source() is source_type
]
if not cond_subset:
continue
for source in data_sources[source_type]:
source.check_and_trigger_conditions(cond_subset)
def print_rules(self, rules):
for rule in rules:
print('\nRule: ' + rule.name)
for cond_name in rule.conditions:
print(repr(self.conditions_dict[cond_name]))
for sugg_name in rule.suggestions:
print(repr(self.suggestions_dict[sugg_name]))
if rule.trigger_entities:
print('scope: entities:')
print(rule.trigger_entities)
if rule.trigger_column_families:
print('scope: col_fam:')
print(rule.trigger_column_families) | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/rule_parser.py | rule_parser.py |
from abc import abstractmethod
from advisor.db_log_parser import DataSource
from enum import Enum
import math
NO_ENTITY = 'ENTITY_PLACEHOLDER'
class TimeSeriesData(DataSource):
class Behavior(Enum):
bursty = 1
evaluate_expression = 2
class AggregationOperator(Enum):
avg = 1
max = 2
min = 3
latest = 4
oldest = 5
def __init__(self):
super().__init__(DataSource.Type.TIME_SERIES)
self.keys_ts = None # Dict[entity, Dict[key, Dict[timestamp, value]]]
self.stats_freq_sec = None
@abstractmethod
def get_keys_from_conditions(self, conditions):
# This method takes in a list of time-series conditions; for each
# condition it manipulates the 'keys' in the way that is supported by
# the subclass implementing this method
pass
@abstractmethod
def fetch_timeseries(self, required_statistics):
# this method takes in a list of statistics and fetches the timeseries
# for each of them and populates the 'keys_ts' dictionary
pass
def fetch_burst_epochs(
self, entities, statistic, window_sec, threshold, percent
):
        # type: (List[str], str, int, float, bool) -> Dict[str, Dict[int, float]]
# this method calculates the (percent) rate change in the 'statistic'
# for each entity (over 'window_sec' seconds) and returns the epochs
# where this rate change is greater than or equal to the 'threshold'
# value
if self.stats_freq_sec == 0:
# not time series data, cannot check for bursty behavior
return
if window_sec < self.stats_freq_sec:
window_sec = self.stats_freq_sec
# 'window_samples' is the number of windows to go back to
# compare the current window with, while calculating rate change.
window_samples = math.ceil(window_sec / self.stats_freq_sec)
burst_epochs = {}
# if percent = False:
# curr_val = value at window for which rate change is being calculated
# prev_val = value at window that is window_samples behind curr_window
# Then rate_without_percent =
# ((curr_val-prev_val)*duration_sec)/(curr_timestamp-prev_timestamp)
# if percent = True:
# rate_with_percent = (rate_without_percent * 100) / prev_val
# These calculations are in line with the rate() transform supported
# by ODS
for entity in entities:
if statistic not in self.keys_ts[entity]:
continue
timestamps = sorted(list(self.keys_ts[entity][statistic].keys()))
for ix in range(window_samples, len(timestamps), 1):
first_ts = timestamps[ix - window_samples]
last_ts = timestamps[ix]
first_val = self.keys_ts[entity][statistic][first_ts]
last_val = self.keys_ts[entity][statistic][last_ts]
diff = last_val - first_val
if percent:
diff = diff * 100 / first_val
rate = (diff * self.duration_sec) / (last_ts - first_ts)
# if the rate change is greater than the provided threshold,
# then the condition is triggered for entity at time 'last_ts'
if rate >= threshold:
if entity not in burst_epochs:
burst_epochs[entity] = {}
burst_epochs[entity][last_ts] = rate
return burst_epochs
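    # Worked example of the rate computation above (numbers illustrative):
    # with stats_freq_sec = 60 and window_sec = 300, window_samples =
    # ceil(300 / 60) = 5. If the statistic rises from 100 to 400 across two
    # samples 300 seconds apart and percent=True, then
    # diff = (400 - 100) * 100 / 100 = 300, and with self.duration_sec = 60
    # (as set by LogStatsParser) the rate is (300 * 60) / 300 = 60, which is
    # then compared against 'threshold'.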
def fetch_aggregated_values(self, entity, statistics, aggregation_op):
        # type: (str, List[str], AggregationOperator) -> Dict[str, float]
# this method performs the aggregation specified by 'aggregation_op'
# on the timeseries of 'statistics' for 'entity' and returns:
# Dict[statistic, aggregated_value]
result = {}
for stat in statistics:
if stat not in self.keys_ts[entity]:
continue
agg_val = None
if aggregation_op is self.AggregationOperator.latest:
latest_timestamp = max(list(self.keys_ts[entity][stat].keys()))
agg_val = self.keys_ts[entity][stat][latest_timestamp]
elif aggregation_op is self.AggregationOperator.oldest:
oldest_timestamp = min(list(self.keys_ts[entity][stat].keys()))
agg_val = self.keys_ts[entity][stat][oldest_timestamp]
elif aggregation_op is self.AggregationOperator.max:
agg_val = max(list(self.keys_ts[entity][stat].values()))
elif aggregation_op is self.AggregationOperator.min:
agg_val = min(list(self.keys_ts[entity][stat].values()))
elif aggregation_op is self.AggregationOperator.avg:
values = list(self.keys_ts[entity][stat].values())
agg_val = sum(values) / len(values)
result[stat] = agg_val
return result
def check_and_trigger_conditions(self, conditions):
# get the list of statistics that need to be fetched
reqd_keys = self.get_keys_from_conditions(conditions)
# fetch the required statistics and populate the map 'keys_ts'
self.fetch_timeseries(reqd_keys)
# Trigger the appropriate conditions
for cond in conditions:
complete_keys = self.get_keys_from_conditions([cond])
# Get the entities that have all statistics required by 'cond':
# an entity is checked for a given condition only if we possess all
# of the condition's 'keys' for that entity
entities_with_stats = []
for entity in self.keys_ts:
stat_missing = False
for stat in complete_keys:
if stat not in self.keys_ts[entity]:
stat_missing = True
break
if not stat_missing:
entities_with_stats.append(entity)
if not entities_with_stats:
continue
if cond.behavior is self.Behavior.bursty:
# for a condition that checks for bursty behavior, only one key
# should be present in the condition's 'keys' field
result = self.fetch_burst_epochs(
entities_with_stats,
complete_keys[0], # there should be only one key
cond.window_sec,
cond.rate_threshold,
True
)
# Trigger in this case is:
# Dict[entity_name, Dict[timestamp, rate_change]]
# where the inner dictionary contains rate_change values when
# the rate_change >= threshold provided, with the
# corresponding timestamps
if result:
cond.set_trigger(result)
elif cond.behavior is self.Behavior.evaluate_expression:
self.handle_evaluate_expression(
cond,
complete_keys,
entities_with_stats
)
def handle_evaluate_expression(self, condition, statistics, entities):
trigger = {}
# check 'condition' for each of these entities
for entity in entities:
if hasattr(condition, 'aggregation_op'):
# in this case, the aggregation operation is performed on each
# of the condition's 'keys' and then with aggregated values
# condition's 'expression' is evaluated; if it evaluates to
# True, then list of the keys values is added to the
# condition's trigger: Dict[entity_name, List[stats]]
result = self.fetch_aggregated_values(
entity, statistics, condition.aggregation_op
)
keys = [result[key] for key in statistics]
try:
if eval(condition.expression):
trigger[entity] = keys
except Exception as e:
print(
'WARNING(TimeSeriesData) check_and_trigger: ' + str(e)
)
else:
# assumption: all stats have same series of timestamps
# this is similar to the above but 'expression' is evaluated at
# each timestamp, since there is no aggregation, and all the
# epochs are added to the trigger when the condition's
# 'expression' evaluated to true; so trigger is:
# Dict[entity, Dict[timestamp, List[stats]]]
for epoch in self.keys_ts[entity][statistics[0]].keys():
keys = [
self.keys_ts[entity][key][epoch]
for key in statistics
]
try:
if eval(condition.expression):
if entity not in trigger:
trigger[entity] = {}
trigger[entity][epoch] = keys
except Exception as e:
print(
'WARNING(TimeSeriesData) check_and_trigger: ' +
str(e)
)
if trigger:
condition.set_trigger(trigger) | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/db_timeseries_parser.py | db_timeseries_parser.py |
from advisor.rule_parser import RulesSpec
from advisor.db_log_parser import DatabaseLogs, DataSource
from advisor.db_options_parser import DatabaseOptions
from advisor.db_stats_fetcher import LogStatsParser, OdsStatsFetcher
import argparse
def main(args):
# initialise the RulesSpec parser
rule_spec_parser = RulesSpec(args.rules_spec)
rule_spec_parser.load_rules_from_spec()
rule_spec_parser.perform_section_checks()
# initialize the DatabaseOptions object
db_options = DatabaseOptions(args.rocksdb_options)
# Create DatabaseLogs object
db_logs = DatabaseLogs(
args.log_files_path_prefix, db_options.get_column_families()
)
# Create the Log STATS object
db_log_stats = LogStatsParser(
args.log_files_path_prefix, args.stats_dump_period_sec
)
data_sources = {
DataSource.Type.DB_OPTIONS: [db_options],
DataSource.Type.LOG: [db_logs],
DataSource.Type.TIME_SERIES: [db_log_stats]
}
if args.ods_client:
data_sources[DataSource.Type.TIME_SERIES].append(OdsStatsFetcher(
args.ods_client,
args.ods_entity,
args.ods_tstart,
args.ods_tend,
args.ods_key_prefix
))
triggered_rules = rule_spec_parser.get_triggered_rules(
data_sources, db_options.get_column_families()
)
rule_spec_parser.print_rules(triggered_rules)
if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Use this script to get suggestions for improving '
        'Rocksdb performance.'
    )
parser.add_argument(
'--rules_spec', required=True, type=str,
help='path of the file containing the expert-specified Rules'
)
parser.add_argument(
'--rocksdb_options', required=True, type=str,
help='path of the starting Rocksdb OPTIONS file'
)
parser.add_argument(
'--log_files_path_prefix', required=True, type=str,
help='path prefix of the Rocksdb LOG files'
)
parser.add_argument(
'--stats_dump_period_sec', required=True, type=int,
help='the frequency (in seconds) at which STATISTICS are printed to ' +
'the Rocksdb LOG file'
)
# ODS arguments
parser.add_argument(
'--ods_client', type=str, help='the ODS client binary'
)
parser.add_argument(
'--ods_entity', type=str,
help='the servers for which the ODS stats need to be fetched'
)
parser.add_argument(
'--ods_key_prefix', type=str,
help='the prefix that needs to be attached to the keys of time ' +
'series to be fetched from ODS'
)
parser.add_argument(
'--ods_tstart', type=int,
help='start time of timeseries to be fetched from ODS'
)
parser.add_argument(
'--ods_tend', type=int,
help='end time of timeseries to be fetched from ODS'
)
args = parser.parse_args()
main(args) | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/rule_parser_example.py | rule_parser_example.py |
from enum import Enum
class IniParser:
class Element(Enum):
rule = 1
cond = 2
sugg = 3
key_val = 4
comment = 5
@staticmethod
def remove_trailing_comment(line):
line = line.strip()
comment_start = line.find('#')
if comment_start > -1:
return line[:comment_start]
return line
@staticmethod
def is_section_header(line):
# A section header looks like: [Rule "my-new-rule"]. Essentially,
# a line that is in square-brackets.
line = line.strip()
if line.startswith('[') and line.endswith(']'):
return True
return False
@staticmethod
def get_section_name(line):
# For a section header: [Rule "my-new-rule"], this method will return
# "my-new-rule".
token_list = line.strip()[1:-1].split('"')
if len(token_list) < 3:
error = 'needed section header: [<section_type> "<section_name>"]'
raise ValueError('Parsing error: ' + error + '\n' + line)
return token_list[1]
@staticmethod
def get_element(line):
line = IniParser.remove_trailing_comment(line)
if not line:
return IniParser.Element.comment
if IniParser.is_section_header(line):
if line.strip()[1:-1].startswith('Suggestion'):
return IniParser.Element.sugg
if line.strip()[1:-1].startswith('Rule'):
return IniParser.Element.rule
if line.strip()[1:-1].startswith('Condition'):
return IniParser.Element.cond
if '=' in line:
return IniParser.Element.key_val
error = 'not a recognizable RulesSpec element'
raise ValueError('Parsing error: ' + error + '\n' + line)
@staticmethod
def get_key_value_pair(line):
line = line.strip()
key = line.split('=')[0].strip()
value = "=".join(line.split('=')[1:])
if value == "": # if the option has no value
return (key, None)
values = IniParser.get_list_from_value(value)
if len(values) == 1:
return (key, value)
return (key, values)
@staticmethod
def get_list_from_value(value):
values = value.strip().split(':')
return values | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/ini_parser.py | ini_parser.py |
from advisor.bench_runner import BenchmarkRunner
from advisor.db_log_parser import DataSource, DatabaseLogs, NO_COL_FAMILY
from advisor.db_stats_fetcher import (
LogStatsParser, OdsStatsFetcher, DatabasePerfContext
)
import shutil
import subprocess
import time
'''
NOTE: This is not thread-safe, because the output file is simply overwritten.
'''
class DBBenchRunner(BenchmarkRunner):
OUTPUT_FILE = "temp/dbbench_out.tmp"
ERROR_FILE = "temp/dbbench_err.tmp"
DB_PATH = "DB path"
THROUGHPUT = "ops/sec"
PERF_CON = " PERF_CONTEXT:"
@staticmethod
def is_metric_better(new_metric, old_metric):
# for db_bench 'throughput' is the metric returned by run_experiment
return new_metric >= old_metric
@staticmethod
def get_opt_args_str(misc_options_dict):
# given a dictionary of options and their values, return a string
# that can be appended as command-line arguments
optional_args_str = ""
for option_name, option_value in misc_options_dict.items():
if option_value:
optional_args_str += (
" --" + option_name + "=" + str(option_value)
)
return optional_args_str
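    # Tiny worked example for the helper above (option names illustrative):
    # passing {'bloom_bits': 2, 'rate_limiter_bytes_per_sec': None} yields
    # " --bloom_bits=2"; options with falsy values are skipped.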
def __init__(self, positional_args, ods_args=None):
# parse positional_args list appropriately
self.db_bench_binary = positional_args[0]
self.benchmark = positional_args[1]
self.db_bench_args = None
if len(positional_args) > 2:
# options list with each option given as "<option>=<value>"
self.db_bench_args = positional_args[2:]
# save ods_args, if provided
self.ods_args = ods_args
def _parse_output(self, get_perf_context=False):
'''
Sample db_bench output after running 'readwhilewriting' benchmark:
DB path: [/tmp/rocksdbtest-155919/dbbench]\n
readwhilewriting : 16.582 micros/op 60305 ops/sec; 4.2 MB/s (3433828\
of 5427999 found)\n
PERF_CONTEXT:\n
user_key_comparison_count = 500466712, block_cache_hit_count = ...\n
'''
output = {
self.THROUGHPUT: None, self.DB_PATH: None, self.PERF_CON: None
}
perf_context_begins = False
with open(self.OUTPUT_FILE, 'r') as fp:
for line in fp:
if line.startswith(self.benchmark):
# line from sample output:
# readwhilewriting : 16.582 micros/op 60305 ops/sec; \
# 4.2 MB/s (3433828 of 5427999 found)\n
print(line) # print output of the benchmark run
token_list = line.strip().split()
for ix, token in enumerate(token_list):
if token.startswith(self.THROUGHPUT):
# in above example, throughput = 60305 ops/sec
output[self.THROUGHPUT] = (
float(token_list[ix - 1])
)
break
elif get_perf_context and line.startswith(self.PERF_CON):
# the following lines in the output contain perf context
# statistics (refer example above)
perf_context_begins = True
elif get_perf_context and perf_context_begins:
# Sample perf_context output:
# user_key_comparison_count = 500, block_cache_hit_count =\
# 468, block_read_count = 580, block_read_byte = 445, ...
token_list = line.strip().split(',')
# token_list = ['user_key_comparison_count = 500',
# 'block_cache_hit_count = 468','block_read_count = 580'...
perf_context = {
tk.split('=')[0].strip(): tk.split('=')[1].strip()
for tk in token_list
if tk
}
# TODO(poojam23): this is a hack and should be replaced
# with the timestamp that db_bench will provide per printed
# perf_context
timestamp = int(time.time())
perf_context_ts = {}
for stat in perf_context.keys():
perf_context_ts[stat] = {
timestamp: int(perf_context[stat])
}
output[self.PERF_CON] = perf_context_ts
perf_context_begins = False
elif line.startswith(self.DB_PATH):
# line from sample output:
# DB path: [/tmp/rocksdbtest-155919/dbbench]\n
output[self.DB_PATH] = (
line.split('[')[1].split(']')[0]
)
return output
def get_log_options(self, db_options, db_path):
# get the location of the LOG file and the frequency at which stats are
# dumped in the LOG file
log_dir_path = None
stats_freq_sec = None
logs_file_prefix = None
# fetch frequency at which the stats are dumped in the Rocksdb logs
dump_period = 'DBOptions.stats_dump_period_sec'
# fetch the directory, if specified, in which the Rocksdb logs are
# dumped, by default logs are dumped in same location as database
log_dir = 'DBOptions.db_log_dir'
log_options = db_options.get_options([dump_period, log_dir])
if dump_period in log_options:
stats_freq_sec = int(log_options[dump_period][NO_COL_FAMILY])
if log_dir in log_options:
log_dir_path = log_options[log_dir][NO_COL_FAMILY]
log_file_name = DBBenchRunner.get_info_log_file_name(
log_dir_path, db_path
)
if not log_dir_path:
log_dir_path = db_path
if not log_dir_path.endswith('/'):
log_dir_path += '/'
logs_file_prefix = log_dir_path + log_file_name
return (logs_file_prefix, stats_freq_sec)
def _get_options_command_line_args_str(self, curr_options):
'''
This method uses the provided Rocksdb OPTIONS to create a string of
command-line arguments for db_bench.
The --options_file argument is always given and the options that are
not supported by the OPTIONS file are given as separate arguments.
'''
optional_args_str = DBBenchRunner.get_opt_args_str(
curr_options.get_misc_options()
)
# generate an options configuration file
options_file = curr_options.generate_options_config(nonce='12345')
optional_args_str += " --options_file=" + options_file
return optional_args_str
def _setup_db_before_experiment(self, curr_options, db_path):
# remove destination directory if it already exists
try:
shutil.rmtree(db_path, ignore_errors=True)
except OSError as e:
print('Error: rmdir ' + e.filename + ' ' + e.strerror)
# setup database with a million keys using the fillrandom benchmark
command = "%s --benchmarks=fillrandom --db=%s --num=1000000" % (
self.db_bench_binary, db_path
)
args_str = self._get_options_command_line_args_str(curr_options)
command += args_str
self._run_command(command)
def _build_experiment_command(self, curr_options, db_path):
command = "%s --benchmarks=%s --statistics --perf_level=3 --db=%s" % (
self.db_bench_binary, self.benchmark, db_path
)
# fetch the command-line arguments string for providing Rocksdb options
args_str = self._get_options_command_line_args_str(curr_options)
        # handle the command-line args passed in the constructor; these
        # arguments are specific to db_bench and may be absent
        if self.db_bench_args:
            for cmd_line_arg in self.db_bench_args:
                args_str += (" --" + cmd_line_arg)
command += args_str
return command
def _run_command(self, command):
out_file = open(self.OUTPUT_FILE, "w+")
err_file = open(self.ERROR_FILE, "w+")
print('executing... - ' + command)
subprocess.call(command, shell=True, stdout=out_file, stderr=err_file)
out_file.close()
err_file.close()
def run_experiment(self, db_options, db_path):
# setup the Rocksdb database before running experiment
self._setup_db_before_experiment(db_options, db_path)
# get the command to run the experiment
command = self._build_experiment_command(db_options, db_path)
experiment_start_time = int(time.time())
# run experiment
self._run_command(command)
experiment_end_time = int(time.time())
# parse the db_bench experiment output
parsed_output = self._parse_output(get_perf_context=True)
# get the log files path prefix and frequency at which Rocksdb stats
# are dumped in the logs
logs_file_prefix, stats_freq_sec = self.get_log_options(
db_options, parsed_output[self.DB_PATH]
)
        # create the Rocksdb LOGS object
db_logs = DatabaseLogs(
logs_file_prefix, db_options.get_column_families()
)
# Create the Log STATS object
db_log_stats = LogStatsParser(logs_file_prefix, stats_freq_sec)
# Create the PerfContext STATS object
db_perf_context = DatabasePerfContext(
parsed_output[self.PERF_CON], 0, False
)
# create the data-sources dictionary
data_sources = {
DataSource.Type.DB_OPTIONS: [db_options],
DataSource.Type.LOG: [db_logs],
DataSource.Type.TIME_SERIES: [db_log_stats, db_perf_context]
}
# Create the ODS STATS object
if self.ods_args:
key_prefix = ''
if 'key_prefix' in self.ods_args:
key_prefix = self.ods_args['key_prefix']
data_sources[DataSource.Type.TIME_SERIES].append(OdsStatsFetcher(
self.ods_args['client_script'],
self.ods_args['entity'],
experiment_start_time,
experiment_end_time,
key_prefix
))
# return the experiment's data-sources and throughput
return data_sources, parsed_output[self.THROUGHPUT] | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/db_bench_runner.py | db_bench_runner.py |
from advisor.db_log_parser import Log
from advisor.db_timeseries_parser import TimeSeriesData, NO_ENTITY
import copy
import glob
import re
import subprocess
import time
class LogStatsParser(TimeSeriesData):
STATS = 'STATISTICS:'
@staticmethod
def parse_log_line_for_stats(log_line):
# Example stat line (from LOG file):
# "rocksdb.db.get.micros P50 : 8.4 P95 : 21.8 P99 : 33.9 P100 : 92.0\n"
token_list = log_line.strip().split()
# token_list = ['rocksdb.db.get.micros', 'P50', ':', '8.4', 'P95', ':',
# '21.8', 'P99', ':', '33.9', 'P100', ':', '92.0']
stat_prefix = token_list[0] + '.' # 'rocksdb.db.get.micros.'
stat_values = [
token
for token in token_list[1:]
if token != ':'
]
# stat_values = ['P50', '8.4', 'P95', '21.8', 'P99', '33.9', 'P100',
# '92.0']
stat_dict = {}
for ix, metric in enumerate(stat_values):
if ix % 2 == 0:
stat_name = stat_prefix + metric
stat_name = stat_name.lower() # Note: case insensitive names
else:
stat_dict[stat_name] = float(metric)
# stat_dict = {'rocksdb.db.get.micros.p50': 8.4,
# 'rocksdb.db.get.micros.p95': 21.8, 'rocksdb.db.get.micros.p99': 33.9,
# 'rocksdb.db.get.micros.p100': 92.0}
return stat_dict
def __init__(self, logs_path_prefix, stats_freq_sec):
super().__init__()
self.logs_file_prefix = logs_path_prefix
self.stats_freq_sec = stats_freq_sec
self.duration_sec = 60
def get_keys_from_conditions(self, conditions):
# Note: case insensitive stat names
reqd_stats = []
for cond in conditions:
for key in cond.keys:
key = key.lower()
# some keys are prepended with '[]' for OdsStatsFetcher to
# replace this with the appropriate key_prefix, remove these
# characters here since the LogStatsParser does not need
# a prefix
if key.startswith('[]'):
reqd_stats.append(key[2:])
else:
reqd_stats.append(key)
return reqd_stats
def add_to_timeseries(self, log, reqd_stats):
# this method takes in the Log object that contains the Rocksdb stats
# and a list of required stats, then it parses the stats line by line
# to fetch required stats and add them to the keys_ts object
# Example: reqd_stats = ['rocksdb.block.cache.hit.count',
# 'rocksdb.db.get.micros.p99']
# Let log.get_message() returns following string:
# "[WARN] [db/db_impl.cc:485] STATISTICS:\n
# rocksdb.block.cache.miss COUNT : 1459\n
# rocksdb.block.cache.hit COUNT : 37\n
# ...
# rocksdb.db.get.micros P50 : 15.6 P95 : 39.7 P99 : 62.6 P100 : 148.0\n
# ..."
new_lines = log.get_message().split('\n')
# let log_ts = 1532518219
log_ts = log.get_timestamp()
# example updates to keys_ts:
# keys_ts[NO_ENTITY]['rocksdb.db.get.micros.p99'][1532518219] = 62.6
# keys_ts[NO_ENTITY]['rocksdb.block.cache.hit.count'][1532518219] = 37
for line in new_lines[1:]: # new_lines[0] does not contain any stats
stats_on_line = self.parse_log_line_for_stats(line)
for stat in stats_on_line:
if stat in reqd_stats:
if stat not in self.keys_ts[NO_ENTITY]:
self.keys_ts[NO_ENTITY][stat] = {}
self.keys_ts[NO_ENTITY][stat][log_ts] = stats_on_line[stat]
def fetch_timeseries(self, reqd_stats):
# this method parses the Rocksdb LOG file and generates timeseries for
# each of the statistic in the list reqd_stats
self.keys_ts = {NO_ENTITY: {}}
for file_name in glob.glob(self.logs_file_prefix + '*'):
# TODO(poojam23): find a way to distinguish between 'old' log files
# from current and previous experiments, present in the same
# directory
if re.search('old', file_name, re.IGNORECASE):
continue
with open(file_name, 'r') as db_logs:
new_log = None
for line in db_logs:
if Log.is_new_log(line):
if (
new_log and
re.search(self.STATS, new_log.get_message())
):
self.add_to_timeseries(new_log, reqd_stats)
new_log = Log(line, column_families=[])
else:
# To account for logs split into multiple lines
new_log.append_message(line)
# Check for the last log in the file.
if new_log and re.search(self.STATS, new_log.get_message()):
self.add_to_timeseries(new_log, reqd_stats)
class DatabasePerfContext(TimeSeriesData):
# TODO(poojam23): check if any benchrunner provides PerfContext sampled at
# regular intervals
def __init__(self, perf_context_ts, stats_freq_sec, cumulative):
'''
perf_context_ts is expected to be in the following format:
Dict[metric, Dict[timestamp, value]], where for
each (metric, timestamp) pair, the value is database-wide (i.e.
summed over all the threads involved)
if stats_freq_sec == 0, per-metric only one value is reported
'''
super().__init__()
self.stats_freq_sec = stats_freq_sec
self.keys_ts = {NO_ENTITY: perf_context_ts}
if cumulative:
self.unaccumulate_metrics()
def unaccumulate_metrics(self):
# if the perf context metrics provided are cumulative in nature, this
# method can be used to convert them to a disjoint format
epoch_ts = copy.deepcopy(self.keys_ts)
for stat in self.keys_ts[NO_ENTITY]:
timeseries = sorted(
list(self.keys_ts[NO_ENTITY][stat].keys()), reverse=True
)
if len(timeseries) < 2:
continue
for ix, ts in enumerate(timeseries[:-1]):
epoch_ts[NO_ENTITY][stat][ts] = (
epoch_ts[NO_ENTITY][stat][ts] -
epoch_ts[NO_ENTITY][stat][timeseries[ix+1]]
)
if epoch_ts[NO_ENTITY][stat][ts] < 0:
raise ValueError('DBPerfContext: really cumulative?')
# drop the smallest timestamp in the timeseries for this metric
epoch_ts[NO_ENTITY][stat].pop(timeseries[-1])
self.keys_ts = epoch_ts
def get_keys_from_conditions(self, conditions):
reqd_stats = []
for cond in conditions:
reqd_stats.extend([key.lower() for key in cond.keys])
return reqd_stats
def fetch_timeseries(self, statistics):
# this method is redundant for DatabasePerfContext because the __init__
# does the job of populating 'keys_ts'
pass
class OdsStatsFetcher(TimeSeriesData):
# class constants
OUTPUT_FILE = 'temp/stats_out.tmp'
ERROR_FILE = 'temp/stats_err.tmp'
RAPIDO_COMMAND = "%s --entity=%s --key=%s --tstart=%s --tend=%s --showtime"
# static methods
@staticmethod
def _get_string_in_quotes(value):
return '"' + str(value) + '"'
@staticmethod
def _get_time_value_pair(pair_string):
# example pair_string: '[1532544591, 97.3653601828]'
pair_string = pair_string.replace('[', '')
pair_string = pair_string.replace(']', '')
pair = pair_string.split(',')
first = int(pair[0].strip())
second = float(pair[1].strip())
return [first, second]
@staticmethod
def _get_ods_cli_stime(start_time):
diff = int(time.time() - int(start_time))
stime = str(diff) + '_s'
return stime
def __init__(
self, client, entities, start_time, end_time, key_prefix=None
):
super().__init__()
self.client = client
self.entities = entities
self.start_time = start_time
self.end_time = end_time
self.key_prefix = key_prefix
self.stats_freq_sec = 60
self.duration_sec = 60
def execute_script(self, command):
print('executing...')
print(command)
out_file = open(self.OUTPUT_FILE, "w+")
err_file = open(self.ERROR_FILE, "w+")
subprocess.call(command, shell=True, stdout=out_file, stderr=err_file)
out_file.close()
err_file.close()
def parse_rapido_output(self):
# Output looks like the following:
# <entity_name>\t<key_name>\t[[ts, value], [ts, value], ...]
# ts = timestamp; value = value of key_name in entity_name at time ts
self.keys_ts = {}
with open(self.OUTPUT_FILE, 'r') as fp:
for line in fp:
token_list = line.strip().split('\t')
entity = token_list[0]
key = token_list[1]
if entity not in self.keys_ts:
self.keys_ts[entity] = {}
if key not in self.keys_ts[entity]:
self.keys_ts[entity][key] = {}
list_of_lists = [
self._get_time_value_pair(pair_string)
for pair_string in token_list[2].split('],')
]
value = {pair[0]: pair[1] for pair in list_of_lists}
self.keys_ts[entity][key] = value
def parse_ods_output(self):
# Output looks like the following:
# <entity_name>\t<key_name>\t<timestamp>\t<value>
# there is one line per (entity_name, key_name, timestamp)
self.keys_ts = {}
with open(self.OUTPUT_FILE, 'r') as fp:
for line in fp:
token_list = line.split()
entity = token_list[0]
if entity not in self.keys_ts:
self.keys_ts[entity] = {}
key = token_list[1]
if key not in self.keys_ts[entity]:
self.keys_ts[entity][key] = {}
self.keys_ts[entity][key][token_list[2]] = token_list[3]
def fetch_timeseries(self, statistics):
# this method fetches the timeseries of required stats from the ODS
# service and populates the 'keys_ts' object appropriately
print('OdsStatsFetcher: fetching ' + str(statistics))
if re.search('rapido', self.client, re.IGNORECASE):
command = self.RAPIDO_COMMAND % (
self.client,
self._get_string_in_quotes(self.entities),
self._get_string_in_quotes(','.join(statistics)),
self._get_string_in_quotes(self.start_time),
self._get_string_in_quotes(self.end_time)
)
# Run the tool and fetch the time-series data
self.execute_script(command)
# Parse output and populate the 'keys_ts' map
self.parse_rapido_output()
elif re.search('ods', self.client, re.IGNORECASE):
command = (
self.client + ' ' +
'--stime=' + self._get_ods_cli_stime(self.start_time) + ' ' +
self._get_string_in_quotes(self.entities) + ' ' +
self._get_string_in_quotes(','.join(statistics))
)
# Run the tool and fetch the time-series data
self.execute_script(command)
# Parse output and populate the 'keys_ts' map
self.parse_ods_output()
def get_keys_from_conditions(self, conditions):
reqd_stats = []
for cond in conditions:
for key in cond.keys:
use_prefix = False
if key.startswith('[]'):
use_prefix = True
key = key[2:]
# TODO(poojam23): this is very hacky and needs to be improved
if key.startswith("rocksdb"):
key += ".60"
if use_prefix:
if not self.key_prefix:
print('Warning: OdsStatsFetcher might need key prefix')
print('for the key: ' + key)
else:
key = self.key_prefix + "." + key
reqd_stats.append(key)
return reqd_stats
def fetch_rate_url(self, entities, keys, window_len, percent, display):
        # type: (List[str], List[str], str, bool, bool) -> str
transform_desc = (
"rate(" + str(window_len) + ",duration=" + str(self.duration_sec)
)
if percent:
transform_desc = transform_desc + ",%)"
else:
transform_desc = transform_desc + ")"
if re.search('rapido', self.client, re.IGNORECASE):
command = self.RAPIDO_COMMAND + " --transform=%s --url=%s"
command = command % (
self.client,
self._get_string_in_quotes(','.join(entities)),
self._get_string_in_quotes(','.join(keys)),
self._get_string_in_quotes(self.start_time),
self._get_string_in_quotes(self.end_time),
self._get_string_in_quotes(transform_desc),
self._get_string_in_quotes(display)
)
elif re.search('ods', self.client, re.IGNORECASE):
command = (
self.client + ' ' +
'--stime=' + self._get_ods_cli_stime(self.start_time) + ' ' +
'--fburlonly ' +
self._get_string_in_quotes(entities) + ' ' +
self._get_string_in_quotes(','.join(keys)) + ' ' +
self._get_string_in_quotes(transform_desc)
)
self.execute_script(command)
url = ""
with open(self.OUTPUT_FILE, 'r') as fp:
url = fp.readline()
return url | Adviser-Rocksdb | /Adviser-Rocksdb-0.0.1.tar.gz/Adviser-Rocksdb-0.0.1/advisor/db_stats_fetcher.py | db_stats_fetcher.py |
import json
import time
import traceback
import logging
from pika.exceptions import ConnectionClosedByBroker, AMQPChannelError, AMQPConnectionError
class MonitorRabbit:
def __init__(
self, rabbit_conn, redis_coon,
redis_key=None, callback=None
):
"""
        :param rabbit_conn: RabbitMQ connection
        :param redis_coon: Redis connection
        :param redis_key: Redis key used for storage
        :param callback: optional callback object invoked per message
"""
self.rabbit_conn = rabbit_conn
self.redis_coon = redis_coon
self.redis_key = redis_key
self._callback = callback
def start_run(self):
"""
        Listen on the queue; reconnect and retry on broker errors.
:return:
"""
while True:
try:
self.rabbit_conn.get_rabbit(self.callback)
except ConnectionClosedByBroker:
logging.info(f'error [{ConnectionClosedByBroker}]')
time.sleep(10)
continue
except AMQPChannelError:
logging.info(f'error [{AMQPChannelError}]')
time.sleep(10)
continue
except AMQPConnectionError:
# traceback.print_exc()
logging.info(f'error [{AMQPConnectionError}]')
time.sleep(10)
continue
except:
traceback.print_exc()
                logging.info('error [unknown error]')
time.sleep(10)
continue
def callback(self, channel, method, properties, body):
"""
        Callback invoked for each delivered message.
"""
try:
req_body = body.decode('utf-8')
logging.info(req_body)
mes = {'result': json.loads(req_body)}
if self._callback:
self._callback.shop_start(json.dumps(mes))
else:
self.redis_coon.lpush(f'{self.redis_key}:start_urls', json.dumps(mes, ensure_ascii=False))
except Exception as e:
print(e)
finally:
channel.basic_ack(delivery_tag=method.delivery_tag) | Adyan-test | /Adyan_test-0.2.9-py3-none-any.whl/Adyan/Utils/monitor_rabbit.py | monitor_rabbit.py |
import asyncio
import time
from pyppeteer.launcher import launch
from faker import Faker
from pyppeteer import launcher
# launcher.DEFAULT_ARGS.remove("--enable-automation")
fake = Faker()
class GetTK:
def __init__(self, mongo_conn):
self.mongo_conn = mongo_conn
async def get_content(self, url):
browser = await launch(
{
'headless': True,
"args": [
"--disable-infobars",
],
"dumpio": True,
"userDataDir": "",
}
)
page = await browser.newPage()
await page.setViewport({'width': 1200, 'height': 700})
# await page.setUserAgent(
# 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36')
# try:
await page.goto(url, {'timeout': 0})
# except:
# return url
await page.evaluate(
'''() =>{ Object.defineProperties(navigator, { webdriver:{ get: () => false } }) }''')
await page.evaluate('''() =>{ window.navigator.chrome = { runtime: {}, }; }''')
await page.evaluate(
'''() =>{ Object.defineProperty(navigator, 'languages', { get: () => ['en-US', 'en'] }); }''')
await page.evaluate(
'''() =>{ Object.defineProperty(navigator, 'plugins', { get: () => [1, 2, 3, 4, 5,6], }); }''')
await asyncio.sleep(2)
cookie_list = await page.cookies()
print(cookie_list)
cookies = {}
for cookie in cookie_list:
print(cookie.get("name"))
if cookie.get("name") == '_m_h5_tk' or cookie.get("name") == '_m_h5_tk_enc':
cookies[cookie.get("name")] = cookie.get("value")
cookies['time'] = int(time.time() + 3600)
self.mongo_conn.update_one({'id': '1'}, cookies)
await browser.close()
def headers(self):
while True:
user_agent = fake.chrome(
version_from=63, version_to=80, build_from=999, build_to=3500
)
if "Android" in user_agent or "CriOS" in user_agent:
continue
else:
break
return user_agent
def start(self):
url = 'https://detail.1688.com/offer/627056024629.html?clickid=7ff082f5c2214f04a00580fcad6c8d52&sessionid=6947210c01f988d9ad53f73e6ec90f24'
loop = asyncio.get_event_loop()
return loop.run_until_complete(self.get_content(url)) | Adyan-test | /Adyan_test-0.2.9-py3-none-any.whl/Adyan/Utils/H5_TK.py | H5_TK.py |
import json
import logging
import random
import re
import time
import requests
from datetime import datetime
cache = {}
limit_times = [100]
def stat_called_time(func):
def _called_time(*args, **kwargs):
key = func.__name__
if key in cache.keys():
cache[key][0] += 1
else:
call_times = 1
cache[key] = [call_times, time.time()]
if cache[key][0] <= limit_times[0]:
res = func(*args, **kwargs)
cache[key][1] = time.time()
return res
else:
            print('request count limit reached')
return None
return _called_time
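# A small usage sketch for the decorator above (the function name is
# illustrative; the call budget comes from the module-level 'limit_times'):
# once the budget is exceeded the wrapper prints a warning and returns None
# instead of calling through.
#
#   @stat_called_time
#   def fetch_page():
#       return 'ok'
#
#   fetch_page()  # -> 'ok' while under the limit, None afterwards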
class ProxyMeta(type):
def __new__(mcs, name, bases, attrs):
attrs['__Crawl_Func__'] = []
for k, v in attrs.items():
if 'crawl_' in k:
attrs['__Crawl_Func__'].append(k)
return type.__new__(mcs, name, bases, attrs)
class ProxyGetter(metaclass=ProxyMeta):
"""
:sample code:
ProxyGetter(
url='Obtain the proxy IP address',
name='The name of the method called',
add_whitelist='Add a whitelist address'
del_whitelist='The whitelist address is deleted'
).get_proxies()
"""
def __init__(
self, url, name,
add_whitelist=None,
del_whitelist=None
):
self.url = url
self.name = name
self.add_whitelist = add_whitelist
self.del_whitelist = del_whitelist
def get_proxies(self):
"""
        Call the crawl method named by self.name and collect proxies.
        :return:
"""
proxies = []
        res = getattr(self, self.name)()
try:
if isinstance(res, dict):
return res
for proxy in res:
proxies.append(proxy)
except Exception as e:
print(f"{self.name}:{e}{res}")
if proxies:
return proxies
@stat_called_time
def jingling_ip(self):
ip_list = []
res = json.loads(requests.get(self.url).text)
if not res.get("data"):
msg = res.get('msg')
if "限制" in msg:
time.sleep(random.randint(1, 3))
if '登' in msg:
limit_times[0] += 1
_ip = re.findall('(.*?)登', res.get("msg"))
requests.get(self.add_whitelist % _ip[0], timeout=10)
res = json.loads(requests.get(self.url).text)
logging.info(f"{str(datetime.now())[:-7]}-{res}")
print(f"{str(datetime.now())[:-7]}-{res}")
if res.get("data"):
for item in res.get("data"):
ip_list.append("https://" + item['IP'])
if ip_list:
return ip_list
else:
return res
@stat_called_time
def taiyang_ip(self):
ip_list = []
res = json.loads(requests.get(self.url).text)
if not res.get("data"):
msg = res.get("msg")
if '请2秒后再试' in msg:
time.sleep(random.randint(2, 4))
limit_times[0] += 1
if '请将' in msg:
_ip = msg.split("设")[0][2:]
print(requests.get(self.add_whitelist % _ip).text)
if '当前IP' in msg:
_ip = msg.split(":")[1]
print(requests.get(self.add_whitelist % _ip).text)
res = json.loads(requests.get(self.url).text)
logging.info(f"{str(datetime.now())[:-7]}-{res}")
print(f"{str(datetime.now())[:-7]}-{res}")
if res.get("data"):
for item in res.get("data"):
ip_list.append(f"https://{item['ip']}:{item['port']}")
if ip_list:
return ip_list
else:
return res
@stat_called_time
def zhima_ip(self):
pass
@stat_called_time
def kuaidaili_ip(self):
ip_list = []
res = json.loads(requests.get(self.url).text)
limit_times[0] += 1
print(res)
for item in res.get("data").get("proxy_list"):
ip_list.append(f"https://{item}")
if ip_list:
return ip_list
else:
return res
@stat_called_time
def source_proxy(self):
res = json.loads(requests.get(self.url).text)
if not res.get("data"):
msg = res.get("msg")
if '请2秒后再试' in msg:
limit_times[0] += 1
time.sleep(random.randint(2, 4))
res = json.loads(requests.get(self.url).text)
logging.info(f"{str(datetime.now())[:-7]}-{res}")
print(f"{str(datetime.now())[:-7]}-{res}")
if res.get("data"):
return res.get("data")
else:
return res
# if __name__ == '__main__':
# print(ProxyGetter(
# 'https://tps.kdlapi.com/api/gettps/?orderid=903884027633888&num=1&pt=1&format=json&sep=1',
# 'kuaidaili_ip',
# add_whitelist='https://dev.kdlapi.com/api/setipwhitelist?orderid=903884027633888&iplist=%s&signature=thl1n7lqhisfikvznxwl631bds413640',
# # del_whitelist=proxy.get('del_whitelist')
# ).get_proxies()) | Adyan-test | /Adyan_test-0.2.9-py3-none-any.whl/Adyan/Utils/proxy.py | proxy.py |
import json
import re
import time
import pytz
import random
from datetime import datetime
from faker import Faker
from .Mongo_conn import MongoPerson
fake = Faker()
cntz = pytz.timezone("Asia/Shanghai")
class ReDict:
@classmethod
def string(
cls,
re_pattern: dict,
string_: str,
):
if string_:
return {
key: cls.compute_res(
re_pattern=re.compile(scale),
string_=string_.translate(
{
ord('\t'): '', ord('\f'): '',
ord('\r'): '', ord('\n'): '',
ord(' '): '',
})
)
for key, scale in re_pattern.items()
}
@classmethod
def compute_res(
cls,
re_pattern: re,
string_=None
):
data = [
result.groups()[0]
for result in re_pattern.finditer(string_)
]
if data:
try:
return json.loads(data[0])
except:
return data[0]
else:
return None
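# A minimal usage sketch for ReDict.string (pattern and input text are
# illustrative; each pattern must contain exactly one capture group):
#
#   ReDict.string({'title': 'title:"(.*?)"'}, 'title:"demo" price:"9.9"')
#   # -> {'title': 'demo'}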
class GoodsCopyUtils:
@classmethod
def time_cycle(
cls,
times,
int_time=None
):
"""
        Normalize a time value for storage.
        :param times: str or int - "%Y-%m-%d" when int_time is truthy,
            otherwise "%Y-%m-%d %H:%M:%S" or a Unix timestamp
        :param int_time: bool - if truthy, return a Unix timestamp
:return:
"""
if int_time:
return int(time.mktime(time.strptime(times, "%Y-%m-%d")))
if type(times) is str:
times = int(time.mktime(time.strptime(times, "%Y-%m-%d %H:%M:%S")))
return str(datetime.fromtimestamp(times, tz=cntz))
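    # Worked examples for the method above (dates illustrative):
    #   GoodsCopyUtils.time_cycle('2021-01-01', int_time=True)
    #       -> Unix timestamp for that date (int)
    #   GoodsCopyUtils.time_cycle('2021-01-01 08:00:00')
    #       -> 'Asia/Shanghai'-localized datetime string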
@classmethod
def merge_dic(
cls,
dic: dict,
lst: list
):
"""
        Merge several dicts into one.
        :param dic: dict - target dict receiving the values
        :param lst: list - dicts to merge in, passed as a list
:return:
"""
for d in lst:
for k, v in d.items():
if v:
dic[k] = v
return dic
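    # Tiny worked example for merge_dic (values illustrative): merging
    # {'a': 1} with [{'b': 2}, {'c': None}] returns {'a': 1, 'b': 2};
    # falsy values such as None are skipped.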
@classmethod
def is_None(
cls,
dic: dict,
) -> dict:
"""
:param dic: dict
        :return: the key-value pairs of the dict whose values are falsy
"""
return {
k: v
for k, v in dic.items()
if not v
}
@classmethod
def find(
cls, target: str,
dictData: dict,
) -> list:
queue = [dictData]
result = []
while len(queue) > 0:
data = queue.pop()
for key, value in data.items():
if key == target:
result.append(value)
elif isinstance(value, dict):
queue.append(value)
if result:
return result[0]
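# Illustrative usage sketch (assumes the host clock runs in Asia/Shanghai):
#     GoodsCopyUtils.time_cycle("2023-01-02 03:04:05")
#     -> "2023-01-02 03:04:05+08:00"
#     GoodsCopyUtils.time_cycle("2023-01-02", int_time=True)
#     -> epoch seconds for local midnight of that date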
class Headers:
def headers(self, referer=None, mobile_headers=None):
while True:
user_agent = fake.chrome(
version_from=63, version_to=80, build_from=999, build_to=3500
)
if "Android" in user_agent or "CriOS" in user_agent:
if mobile_headers:
break
continue
else:
break
if referer:
return {
"user-agent": user_agent,
"referer": referer,
}
return {
"user-agent": user_agent,
}
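# Illustrative usage sketch (placeholder referer):
#     Headers().headers(referer="https://example.com")
#     -> {"user-agent": "<random desktop Chrome UA>", "referer": "https://example.com"}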
class Cookies(object):
def __init__(self, db_name, mongo_config):
self.mongo_conn = MongoPerson(db_name, 'cookie', mongo_config).test()
def cookie(self):
return random.choice(list(self.mongo_conn.find()))
class SKU:
def skuMap(self, param):
data = [
self.style(i.get("data_value"), i.get("skuName"), i.get("style"), i.get("cls"))
for i in param
if i.get("data_value")
]
return self.sku(data), [s[1] for s in data]
def style(self, data_value, skuName, style, cls):
try:
pid = data_value[0].split(":")[0]
except:
pid = None
if style:
styles = []
for k, v in zip(
[
{'data_value': k, 'skuName': f'{cls[0]}--{v}'}
for k, v in zip(data_value, skuName)
], style
):
k.update({"image": v.replace("background:url(", '').replace(") center no-repeat;", '')})
styles.append(k)
props = {
"values": [
{
'vid': k.split(":")[1],
'name': v,
'image': self.props_image(styles, k)
}
for k, v in zip(data_value, skuName)
],
"name": cls[0],
"pid": pid,
}
else:
styles = [
{'data_value': k, 'skuName': f'{cls[0]}--{v}'}
for k, v in zip(data_value, skuName)
]
props = {
"values": [
{
'vid': k.split(":")[1],
'name': v,
'image': self.props_image(styles, k)
}
for k, v in zip(data_value, skuName)
],
"name": cls[0],
"pid": pid,
}
if pid:
return styles, props
else:
return styles, None
def sku(self, lis):
results = []
for data in lis:
p1 = data[1]["values"]
p2 = []
for i, s in enumerate(data[0]):
s.update({"image": p1[i]["image"]})
p2.append(s)
results.append(p2)
shape = [len(v) for v in results]
offsets = [0] * len(shape)
has_next = True
values = []
while has_next:
record = [results[i][off] for i, off in enumerate(offsets)]
if not record:
break
res = self.format_data(record)
values.append(res)
for i in range(len(shape) - 1, -1, -1):
if offsets[i] + 1 >= shape[i]:
                    offsets[i] = 0  # reset this position and carry over
                    if i == 0:
                        has_next = False  # every position exhausted, stop
else:
offsets[i] += 1
break
return values
    def format_data(self, ls):
p1 = []
p2 = []
p3 = []
for li in ls:
p1.append(list(li.values())[0])
p2.append(list(li.values())[1])
p3.append(list(li.values())[2])
date_value = f";{';'.join(p1)};"
skuName = f";{';'.join(p2)};"
image = [i for i in p3 if i]
return {"data_value": date_value, "skuName": skuName, "image": image[0] if image else None}
def props_image(self, style, k):
image = [
i.get('image')
for i in style
if i.get('data_value') == k
]
if image:
return image[0] | Adyan-test | /Adyan_test-0.2.9-py3-none-any.whl/Adyan/Utils/MyUtils.py | MyUtils.py |
from datetime import datetime
from pymongo import MongoClient
class MongoConn(object):
def __init__(self, db_name, config):
"""
:param db_name:
:param config: {
"host": "192.168.20.211",
# "host": "47.107.86.234",
"port": 27017
}
"""
self.db = MongoClient(**config, connect=True)[db_name]
class DBBase(object):
def __init__(self, collection, db_name, config):
self.mg = MongoConn(db_name, config)
self.collection = self.mg.db[collection]
def exist_list(self, data, key, get_id: callable):
lst = [get_id(obj) for obj in data]
print('lst', len(lst))
set_list = set([
i.get(key)
for i in list(
self.collection.find({key: {"$in": lst}})
)
])
set_li = set(lst) - set_list
with open("./ignore/null_field.txt", "rt", encoding="utf-8") as f:
_ignore = [int(line.split(",")[0]) for line in f.readlines()]
exist = list(set_li - set(_ignore))
print(len(exist))
for obj in data:
if get_id(obj) in exist:
yield obj
def exist(self, dic):
"""
        Single-document lookup.
        :param dic:
        :return: match count (1 or 0)
"""
return self.collection.find(dic).count()
def update_one(self, dic, item=None):
result = self.exist(dic)
if item and result == 1:
item['updateTime'] = datetime.strftime(datetime.now(), "%Y-%m-%d %H:%M:%S")
self.collection.update(dic, {"$set": item})
elif item:
self.collection.update(dic, {"$set": item}, upsert=True)
def insert_one(self, param):
"""
        :param param: a list of documents or a single dict
:return:
"""
self.collection.insert(param)
def find_len(self, dic):
return self.collection.find(dic).count()
def find_one(self):
return self.collection.find_one()
def find_list(self, count, dic=None, page=None, ):
"""
        Query documents.
        :param count: number of documents to fetch
        :param dic: filter, e.g. {'city': ''}
        :param page: page number for paginated queries
:return:
"""
if dic:
return list(self.collection.find(dic).limit(count))
if page:
return list(self.collection.find().skip(page * count - count).limit(count))
def daochu(self):
return list(self.collection.find({'$and': [
{'$or': [{"transaction_medal": "A"}, {"transaction_medal": "AA"}]},
{"tpServiceYear": {'$lte': 2}},
{"overdue": {'$ne': "店铺已过期"}},
{"province": "广东"}
]}))
# return self.collection.find().skip(count).next()
def test(self):
return self.collection
class MongoPerson(DBBase):
def __init__(self, table, db_name, config):
super(MongoPerson, self).__init__(table, db_name, config) | Adyan-test | /Adyan_test-0.2.9-py3-none-any.whl/Adyan/Utils/Mongo_conn.py | Mongo_conn.py |
import random
import time
import requests
import logging
# from twisted.internet import defer, reactor
from twisted.internet import defer
from twisted.internet.error import ConnectionRefusedError
from w3lib.http import basic_auth_header
from scrapy import signals
from scrapy.http import TextResponse
from scrapy.core.downloader.handlers.http11 import TunnelError, TimeoutError
from gerapy_pyppeteer.downloadermiddlewares import reactor
from .proxy import ProxyGetter
class Proxy(object):
def __init__(self, settings, spider):
self.settings = settings
self.ip_list = []
self.ip_data = 0
try:
proxy = spider.proxy
# self.user = proxy.get("user")
# self.pwd = proxy.get("pwd")
# self.proxy_name = proxy.get('name')
self.proxies = ProxyGetter(
proxy.get('url'),
proxy.get('name'),
add_whitelist=proxy.get('add_whitelist'),
del_whitelist=proxy.get('del_whitelist')
)
except:
pass
@classmethod
def from_crawler(cls, crawler):
return cls(crawler.settings, crawler.spider)
def process_response(self, request, response, spider):
"""
        Handle the response: log proxy timing and drop the proxy from meta.
:param request:
:param response:
:param spider:
:return:
"""
try:
if spider.proxy:
start_time = request.meta.get('_start_time', time.time())
                logging.info(
                    f'[proxy {request.meta["proxy"][8:]} elapsed] {request.url} {time.time() - start_time}'
                )
del request.meta["proxy"]
except:
pass
return response
def process_request(self, request, spider):
"""
        Handle the request: attach a rotating proxy when enabled.
:param request:
:param spider:
:return:
"""
request.meta.update(
{
'_start_time': time.time()
}
)
try:
proxy_switch = spider.proxy
except:
proxy_switch = False
if proxy_switch:
if isinstance(self.ip_list, list):
if len(self.ip_list) < 2:
while True:
proxies = self.proxies.get_proxies()
if proxies:
break
self.ip_list = proxies
request.meta['download_timeout'] = 5
ip_raw = random.choice(self.ip_list)
self.ip_list.remove(ip_raw)
request.meta["proxy"] = ip_raw
else:
self.ip_list = self.proxies.get_proxies()
                logging.info('proxy list is empty')
def process_exception(self, request, exception, spider):
"""
        Retry requests that failed with proxy-level errors.
:param request:
:param exception:
:param spider:
:return:
"""
if isinstance(exception, (TunnelError, TimeoutError, ConnectionRefusedError)):
return request
class Request(object):
# Not all methods need to be defined. If a method is not defined,
# scrapy acts as if the downloader middleware does not modify the
# passed objects.
@classmethod
def from_crawler(cls, crawler):
# This method is used by Scrapy to create your spiders.
s = cls()
crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
return s
@defer.inlineCallbacks
def process_request(self, request, spider):
container = []
out = defer.Deferred()
reactor.callInThread(self._get_res, request, container, out)
yield out
if len(container) > 0:
defer.returnValue(container[0])
def _get_res(self, request, container, out):
url = request.url
r = requests.get(url, headers=request.meta.get("headers"))
r.encoding = request.encoding
text = r.content
# response = TextResponse(url=r.url, status=r.status_code, body=r.text, request=request)
response = TextResponse(url=r.url, encoding="gbk", body=text, request=request)
container.append(response)
reactor.callFromThread(out.callback, response)
# except Exception as e:
# err = str(type(e)) + ' ' + str(e)
# reactor.callFromThread(out.errback, ValueError(err))
def process_response(self, request, response, spider):
# Called with the response returned from the downloader.
# Must either;
# - return a Response object
# - return a Request object
# - or raise IgnoreRequest
return response
def process_exception(self, request, exception, spider):
# Called when a download handler or a process_request()
# (from other downloader middleware) raises an exception.
# Must either:
# - return None: continue processing this exception
# - return a Response object: stops process_exception() chain
# - return a Request object: stops process_exception() chain
pass
def spider_opened(self, spider):
spider.logger.info('Spider opened: %s' % spider.name) | Adyan-test | /Adyan_test-0.2.9-py3-none-any.whl/Adyan/Utils/my_middleware.py | my_middleware.py |
import hashlib
import logging
import time
import threading
from datetime import datetime
from multiprocessing import Process
import aiohttp
import asyncio
from ..Utils import ProxyGetter, ReidsClient, Headers, MongoPerson
class Settings:
def __init__(self, host, port):
self.config = MongoPerson(
'config', 'proxy',
config={"host": host, "port": port}
).find_one()
class VaildityTester(object):
"""
    Proxy validator.
"""
def __init__(self, config):
self.__raw_proxies = []
self.md5 = hashlib.md5()
self.config = config
def set_raw_proxies(self, proxiies):
self.__raw_proxies = proxiies
        # DB connection: creating it eagerly would hurt performance, so create it on use
self.__conn = ReidsClient(
name=self.config.get("PROXY_NAME"),
config={
"HOST": self.config.get("HOST"),
"PORT": self.config.get("PORT"),
"DB": self.config.get("DB"),
"PAW": self.config.get("PASSWORD")
})
async def test_single_proxy(self, proxy):
"""
        Validate a single proxy.
:param proxy:
:return:
"""
url = self.config.get("TSET_API")
out_time = self.config.get("TEST_TIME_OUT")
try:
async with aiohttp.ClientSession() as session:
if isinstance(proxy, bytes):
proxy = proxy.decode('utf-8')
real_proxy = proxy.replace("s", "")
try:
async with session.get(
url,
headers=Headers().headers(url),
proxy=real_proxy,
timeout=out_time
) as response:
if response.status == 200:
try:
self.__conn.sput(proxy)
except:
pass
                except Exception as e:
                    print('proxy unavailable!', proxy, e)
        except Exception:
            print('unknown error!')
def tester(self):
"""
        How to use the validator:
        1. feed raw proxies into the basket:
            self.set_raw_proxies(proxies)
        2. start the validator:
            self.tester()
        This is the validator's on/off switch.
        :return:
        """
        # 1. create the task event loop
        loop = asyncio.get_event_loop()
        # 2. build one validation task per proxy
        tasks = [self.test_single_proxy(proxy) for proxy in self.__raw_proxies]
        # 3. run until every task finishes
loop.run_until_complete(asyncio.wait(tasks))
class PoolAdder(object):
# s = Settings('H', 'H')
def __init__(self, threshold, config):
self.__threshold = threshold
# self.s = Settings('H','H')
proxy = config.get("PROXY_API").get(config.get('PROXY_OFF'))
        # validator
self.__tester = VaildityTester(config)
# db
self.__conn = ReidsClient(
name=config.get("PROXY_NAME"),
config={"HOST": config.get("HOST"), "PORT": config.get("PORT"), "DB": config.get("DB"),
"PAW": config.get("PASSWORD")}
)
# getter
self.__getter = ProxyGetter(
proxy.get('url'),
proxy.get('name'),
add_whitelist=proxy.get('add_whitelist'),
del_whitelist=proxy.get('del_whitelist')
)
def is_over_threshold(self):
"""
        Check whether the pool has reached its maximum size.
        :return: True when over the threshold
"""
if self.__conn.queue_len >= self.__threshold:
return True
return False
def add_to_pool(self):
        while True:
            # stop adding once the pool exceeds the maximum size
            if self.is_over_threshold():
                break
            proxy_count = 0
            try:
                proxies = self.__getter.get_proxies()
                if isinstance(proxies, list):
                    proxy_count += len(proxies)
                else:
                    logging.info(f'{proxies} proxy site returned an unexpected payload, please check it!')
                    print(f'{proxies} proxy site returned an unexpected payload, please check it!')
            except Exception as e:
                logging.info(f'proxy site raised an exception, please check it! {e}')
                continue
            # print(proxies)
            # 2. validate the fresh proxies with the tester
            # ("试" appears in provider rate-limit messages, which are not proxies)
            if proxies and "试" not in proxies:
                self.__tester.set_raw_proxies(proxies)
                self.__tester.tester()
            if proxy_count == 0:
                raise RuntimeError('all proxy sites are unavailable, please switch providers!')
class Scheduler(object):
def __init__(self, host=None, port=None):
if host:
s = Settings(host, port).config
s.pop("_id")
self.config = s
    # 1. cyclic validation: keep pulling a slice off the pool for periodic re-checks
@staticmethod
def vaild_proxy(config):
conn = ReidsClient(
name=config.get("PROXY_NAME"),
config={"HOST": config.get("HOST"), "PORT": config.get("PORT"), "DB": config.get("DB"),
"PAW": config.get("PASSWORD")}
)
tester = VaildityTester(config)
        # validate in a loop
        retry = 0
        while True:
            count = int(conn.queue_len * 0.5)
            if retry == 120:
                logging.info(f'redis ip stock: {count}, retries: {retry}, please check the ip feed')
                print(f'redis ip stock: {count}, retries: {retry}, please check the ip feed')
                break
            if count == 0:
                retry += 1
                time.sleep(config.get("CYCLE_VAILD_TIME"))
                count = int(conn.queue_len * 0.5)
            proxies = conn.redis_conn.spop(config.get("PROXY_NAME"), count)
            if proxies:
                retry -= 1
                # validate this batch
                tester.set_raw_proxies(proxies)
                tester.tester()
time.sleep(config.get("CYCLE_VAILD_TIME"))
@staticmethod
def check_pool_add(config):
lower_threshold = config.get("LOWER_THRESHOLD")
upper_threshold = config.get("UPPER_THRESHOLD")
cycle = config.get("ADD_CYCLE_TIME")
adder = PoolAdder(upper_threshold, config=config)
conn = ReidsClient(
name=config.get("PROXY_NAME"),
config={"HOST": config.get("HOST"), "PORT": config.get("PORT"), "DB": config.get("DB"),
"PAW": config.get("PASSWORD")}
)
while True:
count = conn.queue_len
if count <= lower_threshold:
                logging.info(f'fetching proxies, current pool size {count}')
                print(f'fetching proxies, current pool size {count}')
adder.add_to_pool()
time.sleep(cycle)
def run(self, config):
S = Scheduler()
        p1 = Process(target=S.vaild_proxy, args=(config,))  # validator
        p2 = Process(target=S.check_pool_add, args=(config,))  # adder
p1.start()
p2.start() | Adyan-test | /Adyan_test-0.2.9-py3-none-any.whl/Adyan/proxy/scheduler.py | scheduler.py |
import asyncio
import json
import random
from pyppeteer import launcher
# launcher.DEFAULT_ARGS.remove("--enable-automation")
from pyppeteer.launcher import launch
from retrying import retry
class Cookies:
    async def slide(self, page):
        slider = await page.Jeval('#nc_1_n1z', 'node => node.style')
        if slider:
            print('slider captcha detected on the page')
            flag, page = await self.mouse_slide(page=page)
            if flag:
                await page.keyboard.press('Enter')
                print("pressed enter", flag)
        else:
            print("no slider, continuing")
            await page.keyboard.press('Enter')
            await asyncio.sleep(2)
        try:
            error = await page.Jeval('.error', 'node => node.textContent')
        except Exception:
            error = None
        if error:
            print('account-safety check triggered, credentials must be re-entered')
        else:
            print(page.url)
            await asyncio.sleep(5)
            return 'cg'
async def get_content(self, us, url):
# browser = await launch(options={'args': ['--no-sandbox']})
browser = await launch({
'headless': False,
'handleSIGINT': False,
'handleSIGTERM': False,
'handleSIGHUP': False,
})
page = await browser.newPage()
await page.setViewport({'width': 1200, 'height': 700})
await page.setUserAgent(
'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36')
await page.goto(url)
await asyncio.sleep(2)
await page.evaluate(
'''() =>{ Object.defineProperties(navigator,{ webdriver:{ get: () => false } }) }''')
await page.evaluate('''() =>{ window.navigator.chrome = { runtime: {}, }; }''')
await page.evaluate(
'''() =>{ Object.defineProperty(navigator, 'languages', { get: () => ['en-US', 'en'] }); }''')
await page.evaluate(
'''() =>{ Object.defineProperty(navigator, 'plugins', { get: () => [1, 2, 3, 4, 5,6], }); }''')
if 'login' in page.url:
print(us.get('name'), us.get('pwd'))
await page.type("#fm-login-id", us.get('name'), {'delay': self.input_time_random() - 50})
await page.type("#fm-login-password", us.get('pwd'), {'delay': self.input_time_random()})
await self.slide(page)
await asyncio.sleep(2)
        print('saving cookies')
cookie_list = await page.cookies()
cookies = ""
for cookie in cookie_list:
if cookie.get("name") not in ["l", "tfstk", "isg"]:
coo = "{}={}; ".format(cookie.get("name"), cookie.get("value"))
cookies += coo
print(cookies)
await asyncio.sleep(1)
await browser.close()
return cookies[:-2]
def retry_if_result_none(self, result):
return result is None
@retry(retry_on_result=retry_if_result_none, )
async def mouse_slide(self, page=None):
await asyncio.sleep(2)
try:
await page.hover('#nc_1_n1z')
await page.mouse.down()
await page.mouse.move(2000, 0, {'delay': random.randint(1000, 2000)})
await page.mouse.up()
        except Exception as e:
            print(e, ': slider verification failed')
            return None, page
        else:
            await asyncio.sleep(2)
            slider_again = await page.Jeval('.nc-lang-cnt', 'node => node.textContent')
            if slider_again != '验证通过':
                return None, page
            else:
                print('verification passed')
                return 1, page
def input_time_random(self):
return random.randint(100, 151)
def taobao_cookies(self, us, url):
# url = 'https://login.taobao.com/member/login.jhtml?'
loop1 = asyncio.new_event_loop()
asyncio.set_event_loop(loop1)
loop = asyncio.get_event_loop()
return loop.run_until_complete(self.get_content(us, url)) | Adyan-test | /Adyan_test-0.2.9-py3-none-any.whl/Adyan/login/cookies.py | cookies.py |
# AdyanUtils
#### Introduction
{**The text below is the default Gitee template description - replace it with your own.**
Gitee is OSCHINA's Git-based code hosting platform (SVN is also supported). It offers individuals, teams and companies a stable, efficient and secure cloud platform for code hosting, project management and collaborative development. For enterprise plans see [https://gitee.com/enterprises](https://gitee.com/enterprises)}
#### Software architecture
Software architecture description
#### Installation
1. xxxx
2. xxxx
3. xxxx
#### Usage
1. xxxx
2. xxxx
3. xxxx
#### Contributing
1. Fork this repository
2. Create a Feat_xxx branch
3. Commit your code
4. Open a Pull Request
#### Gitee features
1. Use Readme\_XXX.md to provide different languages, e.g. Readme\_en.md, Readme\_zh.md
2. Official Gitee blog [blog.gitee.com](https://blog.gitee.com)
3. Explore outstanding open-source projects on Gitee at [https://gitee.com/explore](https://gitee.com/explore)
4. [GVP](https://gitee.com/gvp) stands for Gitee's Most Valuable Projects, a curated list of excellent open-source projects
5. The official Gitee manual [https://gitee.com/help](https://gitee.com/help)
6. Gitee Cover People is a column showcasing Gitee members [https://gitee.com/gitee-stars/](https://gitee.com/gitee-stars/)
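#### Quick usage sketch
A minimal, illustrative example (it assumes the `Utils.connect` layout shipped in this package; host, port and DB values are placeholders):
```python
from Utils.connect.redis_conn import RedisClient

client = RedisClient(
    name="proxy",
    config={"HOST": "127.0.0.1", "PORT": 6379, "DB": 11},
)
print(client.queue_len())  # number of entries stored under "proxy"
```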
| AdyanUtils | /AdyanUtils-0.7.2.tar.gz/AdyanUtils-0.7.2/README.md | README.md |
import json
import random
import time
import redis
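# Maps each Redis value type to the client method names used for
# length/push/pull/delete; RedisClient dispatches through these names below.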
map = {
    'string': {"len": "strlen", "push": "mset", "pull": "keys", "del": "delete"},  # string
    'hash': {"len": "hlen", "push": "hmset", "pull": "hscan", "del": "hdel"},  # hash
    'list': {"len": "llen", "push": "lpush", "pull": "lrange", "del": "lrem"},  # list
    'set': {"len": "scard", "push": "sadd", "pull": "sscan", "del": "srem"},  # set
    'zset': {"len": "zcard", "push": "zadd", "pull": "zrange", "del": "zrem"},  # sorted set
}
def redis_conn(host, port, db, password):
    # Retry until a Redis client can be constructed; the old recursive retry
    # never returned its result, so a plain loop is used instead.
    while True:
        try:
            return redis.Redis(host=host, port=port, db=db, password=password)
        except redis.RedisError:
            time.sleep(60)
class RedisClient(object):
"""
RedisClient(
name='proxy',
config={ "HOST": "ip", "PORT": 6379, "DB": 11 }
)
ree.get(2)
"""
def __init__(self, config, db_type=None, name=None):
"""
:param config: { "HOST": "ip", "PORT": 6379, "DB": 11 }
:param name: "name"
"""
host = config.get('HOST', 'localhost')
port = config.get('PORT', 6379)
db = config.get('DB', 0)
password = config.get('PAW', None)
self.redis_conn = redis_conn(host, port, db, password)
self.name = name
self.db_type = db_type
if not db_type:
try:
self.db_type = self.redis_conn.type(name).decode('utf-8')
except:
self.db_type = 'string'
self.map = map.get(self.db_type)
def redis_del(self, value=None, count=0):
if value:
try:
eval(f'self.redis_conn.{self.map.get("del")}(self.name,value)')
except:
eval(f'self.redis_conn.{self.map.get("del")}(self.name,count,value)')
else:
self.redis_conn.zrem(self.name, count)
def get(self, count, key=None):
"""
        Fetch random entries from the stored structure.
        :param count: how many to sample
        :param key: key prefix, required for the string type
:return:
"""
deco = lambda x, y: [i.decode('utf-8') for i in random.sample(x, y)][0]
data = self.redis_pull(key)
if isinstance(data, list) or isinstance(data[1], list):
try:
data = deco(data, count)
except:
data = deco(data[1], count)
else:
data = deco(list(data[1].values()), count)
if self.db_type == 'string' or self.db_type == 'none':
return self.redis_conn.get(data).decode('utf-8')
if self.db_type == 'zset':
return self.redis_conn.zscore(self.name, data)
return data
def redis_pull(self, value=None):
"""
        Fetch everything stored under the key.
:param value:
tuple
(0, {b'1': b'111111', b'2': b'111111'})
(0, {b'1': b'111111', b'2': b'111111'})
list
[b'1', b'2', b'3']
:return:
"""
for param in [
f"(self.name, {self.queue_len()})",
f"(self.name, 0, {self.queue_len()})",
f'("{value}*")'
]:
try:
return eval(f'self.redis_conn.{self.map.get("pull")}{param}')
except:
pass
def redis_push(self, value):
"""
        Note!
        Sorted-set (zset) values must be numeric.
        :param value:
            storing multiple values:
            dict: string, hash
            {'1': '111111', '2': '111111'}
            list: list, set, zset
['111111', '111211', '1111131']
:return:
"""
if isinstance(value, str):
value = [value]
if isinstance(value, list):
value = str(value)[1:-1]
if isinstance(value, dict):
            value = f"'{json.dumps(value)}'"
eval(f'self.redis_conn.{self.map.get("push")}(self.name,{value})')
# @property
def queue_len(self):
"""
        Length of the structure stored under self.name.
:return:
"""
return eval(f'self.redis_conn.{self.map.get("len")}(self.name)') | AdyanUtils | /AdyanUtils-0.7.2.tar.gz/AdyanUtils-0.7.2/Utils/connect/redis_conn.py | redis_conn.py |
import logging
from datetime import datetime
from pymongo import MongoClient
class MongoConn(object):
def __init__(self, db_name, config):
"""
:param db_name:
:param config: {
"host": "192.168.20.211",
# "host": "47.107.86.234",
"port": 27017
}
"""
self.client = MongoClient(**config, connect=True)
self.db = self.client[db_name]
class DBBase(object):
def __init__(self, collection, db_name, config):
self.mg = MongoConn(db_name, config)
if isinstance(collection, list):
self.conn_map = {
name: self.mg.db[name]
for name in collection
}
else:
self.collection = self.mg.db[collection]
def change_collection(self, name):
if "conn_map" in dir(self):
self.collection = self.conn_map.get(name)
def exist_list(self, data, key, get_id: callable, _ignore):
lst = [get_id(obj) for obj in data]
set_list = set([
i.get(key)
for i in list(self.collection.find({key: {"$in": lst}}))
])
set_li = set(lst) - set_list
exist = list(set_li - set(_ignore))
if exist:
            logging.info(f'{len(exist)} records on the current page are not stored yet')
for obj in data:
if get_id(obj) in exist:
yield obj
def exist(self, dic):
"""
        Single-document lookup.
        :param dic:
        :return: match count (1 or 0)
"""
return self.collection.find(dic).count()
def update_one(self, dic, item=None):
result = self.exist(dic)
if item and result == 1:
item['updateTime'] = datetime.strftime(datetime.now(), "%Y-%m-%d %H:%M:%S")
self.collection.update(dic, {"$set": item})
elif item:
self.collection.update(dic, {"$set": item}, upsert=True)
def insert_one(self, param):
"""
        :param param: a list of documents or a single dict
:return:
"""
try:
self.collection.insert(param)
except Exception as e:
if isinstance(param,list):
for obj in param:
try:
self.collection.insert(obj)
except:
pass
else:
raise e
def find_len(self, dic=None):
if dic:
return self.collection.find(dic).count()
return self.collection.find().count()
def find_one(self):
return self.collection.find_one()
    def find_list(self, count=1000, dic=None, page=None):
        """
        Query documents.
        :param count: number of documents to fetch
        :param dic: filter, e.g. {'city': ''}
        :param page: page number for paginated queries
        :return:
        """
        # Compute the offset only when a page is given; the old unconditional
        # `page * count` raised TypeError whenever page was None.
        index = 0
        if page:
            index = (page - 1) * count
        if dic:
            return list(self.collection.find(dic).skip(index).limit(count))
        if page:
            return list(self.collection.find().skip(index).limit(count))
def daochu(self):
return list(self.collection.find({'$and': [
{'$or': [{"transaction_medal": "A"}, {"transaction_medal": "AA"}]},
{"tpServiceYear": {'$lte': 2}},
{"overdue": {'$ne': "店铺已过期"}},
{"province": "广东"}
]}))
def counts(self, dic=None):
if dic:
return self.collection.find(dic).count()
return self.collection.count()
class MongoPerson(DBBase):
def __init__(self, table, db_name, config):
super(MongoPerson, self).__init__(table, db_name, config)
# mo = MongoPerson(['Category2', 'Category5', 'Category4', ], 'AlibabaClue', config={
# "host": "192.168.20.211",
# # "host": "119.29.9.92",
# # "host": "47.107.86.234",
# "port": 27017
# })
# mo.change_collection('Category5')
# print(mo.find_len())
# mo.change_collection('Category4')
# print(mo.find_len()) | AdyanUtils | /AdyanUtils-0.7.2.tar.gz/AdyanUtils-0.7.2/Utils/connect/mongo_conn.py | mongo_conn.py |
import json
import logging
import time
import traceback
import pika
from pika.exceptions import ConnectionClosedByBroker, AMQPChannelError, AMQPConnectionError
logging.getLogger("pika").setLevel(logging.WARNING)
class RabbitClient:
def __init__(self, queue_name, config):
"""
:param queue_name:
:param config: {
"ip": "ip",
"port": 30002,
"virtual_host": "my_vhost",
"username": "dev",
"pwd": "zl123456",
"prefix": ""
}
"""
self.queue_name = queue_name
self.config = config
def rabbit_conn(self):
"""
        Open the connection and declare the queue.
:return:
"""
user_pwd = pika.PlainCredentials(
self.config.get("username"),
self.config.get("pwd")
)
params = pika.ConnectionParameters(
host=self.config.get("ip"),
port=self.config.get('port'),
virtual_host=self.config.get("virtual_host"),
credentials=user_pwd
)
self.conn = pika.BlockingConnection(parameters=params)
self.col = self.conn.channel()
self.col.queue_declare(
queue=self.queue_name,
durable=True
)
def push_rabbit(self, item):
self.rabbit_conn()
self.col.basic_publish(
exchange='',
routing_key=self.queue_name,
body=json.dumps(item, ensure_ascii=False)
)
def get_rabbit(self, fun):
self.rabbit_conn()
self.col.queue_declare(self.queue_name, durable=True, passive=True)
self.col.basic_consume(self.queue_name, fun)
self.col.start_consuming()
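# Illustrative usage sketch (placeholder credentials):
#     client = RabbitClient("orders", {"ip": "127.0.0.1", "port": 5672,
#                                      "virtual_host": "my_vhost",
#                                      "username": "dev", "pwd": "secret"})
#     client.push_rabbit({"order_id": 1})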
class MonitorRabbit:
def __init__(
self, rabbit_conn, redis_conn, callback=None
):
"""
        :param rabbit_conn: rabbit connection
        :param redis_conn: redis connection
        :param callback: optional handler; messages fall back to redis when absent
"""
self.rabbit_conn = rabbit_conn
self.redis_conn = redis_conn
self._callback = callback
def start_run(self):
"""
        Listen on the queue, reconnecting on broker errors.
:return:
"""
while True:
try:
self.rabbit_conn.get_rabbit(self.callback)
except ConnectionClosedByBroker:
logging.info(f'error [{ConnectionClosedByBroker}]')
time.sleep(10)
continue
except AMQPChannelError:
logging.info(f'error [{AMQPChannelError}]')
time.sleep(10)
continue
except AMQPConnectionError:
# traceback.print_exc()
logging.info(f'error [{AMQPConnectionError}]')
time.sleep(10)
continue
except:
traceback.print_exc()
logging.info(f'error [{"unknow error"}]')
time.sleep(10)
continue
def callback(self, channel, method, properties, body):
"""
        Consumer callback: forward the message, then ack it.
"""
try:
req_body = body.decode('utf-8')
logging.info(req_body)
mes = {'result': json.loads(req_body)}
if self._callback:
self._callback.shop_start(json.dumps(mes))
else:
self.redis_conn.redis_push(json.dumps(mes, ensure_ascii=False))
except Exception as e:
print(e)
finally:
channel.basic_ack(delivery_tag=method.delivery_tag) | AdyanUtils | /AdyanUtils-0.7.2.tar.gz/AdyanUtils-0.7.2/Utils/connect/rabbit_conn.py | rabbit_conn.py |
import logging
import random
import time
from requests import sessions
from scrapy import signals
from scrapy.core.downloader.handlers.http11 import TunnelError, TimeoutError
from scrapy.http import TextResponse
from twisted.internet import defer
from twisted.internet import reactor
from twisted.internet.error import ConnectionRefusedError
from w3lib.http import basic_auth_header
from Utils.crawler.proxy import GetProxy
class Proxy(object):
def __init__(self, settings, spider):
self.settings = settings
self.ip_list = []
try:
self.proxy = spider.proxy
if self.proxy.get("name"):
self.proxies = GetProxy(self.proxy)
except:
self.proxy = {}
@classmethod
def from_crawler(cls, crawler):
return cls(crawler.settings, crawler.spider)
def process_response(self, request, response, spider):
if self.settings.getbool('PROXY', False):
start_time = request.meta.get('_start_time', time.time())
if self.settings.getbool('LOGGER_PROXY', False):
                logging.info(
                    f'[proxy {request.meta["proxy"][8:]} took {time.time() - start_time}s] {request.url}'
                )
del request.meta["proxy"]
return response
    def process_request(self, request, spider):
        # Use self.proxy (normalised to {} in __init__) so spiders without a
        # proxy attribute do not raise AttributeError here.
        if self.proxy.get("name") and self.settings.getbool('PROXY', False):
            request.meta.update({'_start_time': time.time()})
            if isinstance(self.ip_list, list):
                if len(self.ip_list) < 5:
                    while True:
                        proxies = self.proxies.get_proxies()
                        if proxies:
                            break
                    self.ip_list = proxies
                request.meta['download_timeout'] = 5
                ip_raw = random.choice(self.ip_list)
                self.ip_list.remove(ip_raw)
                request.meta["proxy"] = ip_raw
            else:
                logging.info('proxy list is empty')
        if self.proxy.get("username") and self.settings.getbool('PROXY', False):
request.meta['proxy'] = f"http://{self.proxy.get('proxies')}"
request.headers['Proxy-Authorization'] = basic_auth_header(
self.proxy.get("username"),
self.proxy.get("password")
)
def process_exception(self, request, exception, spider):
if isinstance(exception, (TunnelError, TimeoutError, ConnectionRefusedError)):
return request
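# Illustrative sketch of the spider attribute this middleware expects
# (values are placeholders; PROXY=True must also be set in the project settings):
#     proxy = {"name": "taiyang_ip", "url": "https://api.provider.example/get",
#              "add_whitelist": "...", "del_whitelist": "..."}   # rotating pool
# or
#     proxy = {"username": "user", "password": "pass",
#              "proxies": "host:port"}                           # static authenticated proxy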
class RequestsDownloader(object):
@classmethod
def from_crawler(cls, crawler):
s = cls()
crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
return s
@defer.inlineCallbacks
def process_request(self, request, spider):
kwargs = request.meta.get("params")
if kwargs:
container = []
out = defer.Deferred()
reactor.callInThread(self._get_res, request, container, out, kwargs)
yield out
if len(container) > 0:
defer.returnValue(container[0])
def _get_res(self, request, container, out, kwargs):
try:
url = request.url
method = kwargs.pop('method')
r = sessions.Session().request(method=method, url=url, **kwargs)
r.encoding = request.encoding
text = r.content
encoding = None
response = TextResponse(url=r.url, encoding=encoding, body=text, request=request)
container.append(response)
reactor.callFromThread(out.callback, response)
except Exception as e:
err = str(type(e)) + ' ' + str(e)
reactor.callFromThread(out.errback, ValueError(err))
def process_response(self, request, response, spider):
return response
def process_exception(self, request, exception, spider):
pass
def spider_opened(self, spider):
spider.logger.info('Spider opened: %s' % spider.name) | AdyanUtils | /AdyanUtils-0.7.2.tar.gz/AdyanUtils-0.7.2/Utils/crawler/middleware.py | middleware.py |
import json
import logging
import random
import re
import time
from datetime import datetime
import requests
cache = {}
limit_times = [100]
def stat_called_time(func):
def _called_time(*args, **kwargs):
key = func.__name__
if key in cache.keys():
cache[key][0] += 1
else:
call_times = 1
cache[key] = [call_times, time.time()]
if cache[key][0] <= limit_times[0]:
res = func(*args, **kwargs)
cache[key][1] = time.time()
return res
else:
            print('request count limit reached')
return None
return _called_time
class ProxyMeta(type):
def __new__(mcs, name, bases, attrs):
attrs['__Crawl_Func__'] = []
for k, v in attrs.items():
if 'crawl_' in k:
attrs['__Crawl_Func__'].append(k)
return type.__new__(mcs, name, bases, attrs)
class GetProxy(metaclass=ProxyMeta):
"""
:sample code:
GetProxy(spider.proxy).get_proxies()
"""
def __init__(self, proxy: dict):
self.url = proxy.get('url')
self.name = proxy.get('name')
self.add_whitelist = proxy.get('add_whitelist')
self.del_whitelist = proxy.get('del_whitelist')
def get_proxies(self):
"""
        Dispatch to the crawler method named self.name and collect proxies.
:return:
"""
proxies = []
res = eval(f'self.{self.name}()')
try:
if isinstance(res, dict):
return res
for proxy in res:
proxies.append(proxy)
except Exception as e:
print(f"{self.name}:{e}{res}")
if proxies:
return proxies
@stat_called_time
def jingling_ip(self):
ip_list = []
res = json.loads(requests.get(self.url).text)
if not res.get("data"):
msg = res.get('msg')
if "限制" in msg:
time.sleep(random.randint(1, 3))
if '登' in msg:
limit_times[0] += 1
_ip = re.findall('(.*?)登', res.get("msg"))
requests.get(self.add_whitelist % _ip[0], timeout=10)
res = json.loads(requests.get(self.url).text)
logging.info(f"{str(datetime.now())[:-7]}-{res}")
print(f"{str(datetime.now())[:-7]}-{res}")
if res.get("data"):
for item in res.get("data"):
ip_list.append("https://" + item['IP'])
if ip_list:
return ip_list
else:
return res
@stat_called_time
def taiyang_ip(self):
ip_list = []
res = json.loads(requests.get(self.url).text)
if not res.get("data"):
msg = res.get("msg")
if '请2秒后再试' in msg:
time.sleep(random.randint(2, 4))
limit_times[0] += 1
if '请将' in msg:
_ip = msg.split("设")[0][2:]
print(requests.get(self.add_whitelist % _ip).text)
if '当前IP' in msg:
_ip = msg.split(":")[1]
print(requests.get(self.add_whitelist % _ip).text)
res = json.loads(requests.get(self.url).text)
logging.info(f"{str(datetime.now())[:-7]}-{res}")
print(f"{str(datetime.now())[:-7]}-{res}")
if res.get("data"):
for item in res.get("data"):
ip_list.append(f"https://{item['ip']}:{item['port']}")
if ip_list:
return ip_list
else:
return res
@stat_called_time
def zhima_ip(self):
pass
@stat_called_time
def kuaidaili_ip(self):
ip_list = []
res = json.loads(requests.get(self.url).text)
limit_times[0] += 1
print(res)
for item in res.get("data").get("proxy_list"):
ip_list.append(f"https://{item}")
if ip_list:
return ip_list
else:
return res
@stat_called_time
def source_proxy(self):
res = json.loads(requests.get(self.url).text)
if not res.get("data"):
msg = res.get("msg")
if '请2秒后再试' in msg:
limit_times[0] += 1
time.sleep(random.randint(2, 4))
res = json.loads(requests.get(self.url).text)
logging.info(f"{str(datetime.now())[:-7]}-{res}")
print(f"{str(datetime.now())[:-7]}-{res}")
if res.get("data"):
return res.get("data")
else:
return res
# if __name__ == '__main__':
# print(ProxyGetter(
# 'https://tps.kdlapi.com/api/gettps/?orderid=903884027633888&num=1&pt=1&format=json&sep=1',
# 'kuaidaili_ip',
# add_whitelist='https://dev.kdlapi.com/api/setipwhitelist?orderid=903884027633888&iplist=%s&signature=thl1n7lqhisfikvznxwl631bds413640',
# # del_whitelist=proxy.get('del_whitelist')
# ).get_proxies()) | AdyanUtils | /AdyanUtils-0.7.2.tar.gz/AdyanUtils-0.7.2/Utils/crawler/proxy.py | proxy.py |
from abc import ABC
from copy import deepcopy
import scrapy
from Utils.crawler.Forge import Headers
from scrapy import Request, FormRequest
from scrapy_redis.spiders import RedisSpider
class Retry:
def __init__(self, response, retry_name, retry_count=3, **kwargs):
self.response = response
self.retry_name = retry_name
self.retry_count = retry_count
self.headers = kwargs.get("headers", Headers().header())
self.formdata = kwargs.get("formdata", response.meta.get('formdata'))
self.url = kwargs.get("url", response.url)
def retry_get(self):
"""
        Build a GET retry request (or None once retries are exhausted).
"""
meta = deepcopy(self.response.meta)
retry = meta.get(self.retry_name, 0)
if retry < self.retry_count:
meta[self.retry_name] = retry + 1
return Request(
url=self.url,
callback=self.response.request.callback,
headers=self.headers,
meta=meta,
dont_filter=True
)
def retry_post(self):
"""
post重试请求
"""
meta = deepcopy(self.response.meta)
retry = meta.get(self.retry_name, 0)
if retry < 3:
meta[self.retry_name] = retry + 1
return FormRequest(
url=self.url,
callback=self.response.request.callback,
formdata=self.formdata,
headers=self.headers,
meta=meta,
dont_filter=True
)
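# Illustrative usage sketch (inside a spider callback; Crawl/RedisCrawl expose
# this class as self.retry):
#     retry_request = self.retry(response, "detail_retry").retry_get()
#     if retry_request:
#         yield retry_request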
class BaseManager:
@classmethod
def handle_result(cls, item, spider):
"""
        Print the result without further processing.
"""
print(f"======================={item}=========================")
class RedisCrawl(RedisSpider, ABC):
def __init__(self, *args, **kwargs):
super(RedisCrawl, self).__init__(*args, **kwargs)
self.retry = Retry
if 'manager' not in dir(self):
self.manager = BaseManager()
class Crawl(scrapy.Spider, ABC):
def __init__(self, *args, **kwargs):
super(Crawl, self).__init__(*args, **kwargs)
self.retry = Retry
if 'manager' not in dir(self):
self.manager = BaseManager()
class Pipeline:
def __init__(self, manager):
self.manager = manager
@classmethod
def from_crawler(cls, crawler):
return cls(crawler.spider.manager)
def process_item(self, item, spider):
self.manager.handle_result(item, spider) | AdyanUtils | /AdyanUtils-0.7.2.tar.gz/AdyanUtils-0.7.2/Utils/crawler/init.py | init.py |
import json
import os
import re
import time
from datetime import datetime
from typing import Union
import fitz
import pytesseract
from PIL import Image
import pytz
cntz = pytz.timezone("Asia/Shanghai")
remap = {
ord('\t'): '', ord('\f'): '',
ord('\r'): '', ord('\n'): '',
}
class Fun:
@classmethod
def merge_dic(cls, dic: dict, lst: list):
"""
        Merge several dicts into one.
        :param dic: dict - target dict
        :param lst: list - dicts to merge in
:return:
"""
for d in lst:
for k, v in d.items():
if v:
dic[k] = v
return dic
@classmethod
def del_html(cls, html: str):
for i in re.findall(r"<(.*?)>", html):
html = html.replace(f'<{i}>', '')
return html.translate({**remap, ord(' '): ''})
@classmethod
def re_dict(cls, re_pattern: dict and list, string_, ) -> dict:
if isinstance(re_pattern, dict):
fun = lambda x, y: y if ' ' in x else y.replace(' ', '')
return {
key: cls.compute_res(
re_pattern=re.compile(scale),
string_=fun(scale, string_.translate(remap))
)
for key, scale in re_pattern.items()
}
if isinstance(re_pattern, list):
dic = {}
for index in range(len(re_pattern)):
string = string_
if isinstance(string_, list):
string = string_[index]
if isinstance(string, int):
string = string_[string]
dict2 = cls.re_dict(re_pattern[index], string)
for k, v in dict2.items():
if k in dic.keys():
values = dic.get(k)
if values and v:
dict2[k] = [values, v]
if values and v is None:
dict2[k] = values
dic = {**dic, **dict2}
return dic
@classmethod
def compute_res(cls, re_pattern: re.Pattern, string_=None):
data = re_pattern.findall(string_)
if data:
try:
return json.loads(data[0])
except:
return data[0]
else:
return None
@classmethod
def find(cls, target: str, dict_data: dict, index=None, ):
result = []
result_add = lambda x: [result.insert(0, d) for d in cls.find(target, x)]
        if isinstance(dict_data, dict):
            for key, value in dict_data.items():
                if key == target and value not in result:
                    result.insert(0, value)
                result_add(value)  # recurse into every value, matched or not
if isinstance(dict_data, (list, tuple)):
for data in dict_data:
result_add(data)
if isinstance(index, int):
try:
return result[index]
except:
return None
return result
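    # Illustrative usage sketch: collect every value stored under "id" in a
    # nested structure.
    #     Fun.find("id", {"a": {"id": 1}})          -> [1]
    #     Fun.find("id", {"a": {"id": 1}}, index=0) -> 1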
@classmethod
def timeconvert(cls, times, timestamp=None, int_time=None) -> Union[int, str]:
remap = {
ord('年'): '-', ord('月'): '-',
ord('日'): ' ', ord('/'): '-',
ord('.'): '-',
}
if isinstance(times, str):
times = times.translate(remap)
if int_time:
return int(time.mktime(time.strptime(times, int_time)))
if isinstance(times, str):
times = int(time.mktime(time.strptime(times, "%Y-%m-%d %H:%M:%S")))
if timestamp:
times = times + timestamp
return str(datetime.fromtimestamp(times, tz=cntz))
@classmethod
def is_None(cls, dic: dict) -> dict:
"""
:param dic: dict
        :return: the key/value pairs whose value is falsy
"""
return {
k: v
for k, v in dic.items()
if not v
}
@classmethod
def del_key(cls, dic, del_keys=None, is_None=None):
if isinstance(dic, list):
return [cls.del_key(item, del_keys, is_None) for item in dic]
if isinstance(dic, dict):
if is_None:
del_keys = Fun.is_None(dic).keys()
new_dict = {}
for key in dic.keys():
if key not in del_keys:
new_dict[key] = cls.del_key(dic[key], del_keys, is_None)
return new_dict
else:
return dic
    @classmethod
    def extract_text_from_image_and_pdf(cls, path, pdf_file):
text = ""
with fitz.open(f"{path}{pdf_file}") as pdf_document:
for page_num in range(pdf_document.page_count):
page = pdf_document.load_page(page_num)
text += page.get_text()
page = pdf_document[page_num]
images = page.get_images(full=True)
for img_index, img in enumerate(images):
xref = img[0]
base_image = pdf_document.extract_image(xref)
image_data = base_image["image"]
image_filename = f"page_{page_num + 1}_image_{img_index + 1}.png"
image_path = os.path.join(path, image_filename)
text += cls.extract_text_from_image(image_path, image_data)
return text
    @classmethod
    def extract_text_from_image(cls, image_path, image_data):
with open(image_path, "wb") as img_file:
img_file.write(image_data)
image = Image.open(image_path)
return pytesseract.image_to_string(image, lang='chi_sim') | AdyanUtils | /AdyanUtils-0.7.2.tar.gz/AdyanUtils-0.7.2/Utils/crawler/function.py | function.py |
import hashlib
import logging
import queue
import random
import re
import threading
import urllib.parse
import cpca
from faker import Faker
from requests import sessions
fake = Faker()
def ranstr(num):
    # Alphabet for the random string (upper/lower letters plus digits).
H = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
salt = ''
for i in range(num):
salt += random.choice(H)
return salt
def hex_md5(cookie, ti, formdata):
try:
string = f'{re.findall("_m_h5_tk=(.*?)_", cookie)[0]}&{ti}&12574478&{formdata.get("data")}'
m = hashlib.md5()
m.update(string.encode('UTF-8'))
return m.hexdigest()
except:
logging.warning(f'参数错误:{[cookie, formdata]}')
def url_code(string, code='utf-8'):
    # Default to the input so an unsupported code never leaves bianma unbound.
    bianma = string
    if code == "utf-8" or code == "gbk":
        quma = str(string).encode(code)
        bianma = urllib.parse.quote(quma)
    if code == "ascii":
        bianma = string.encode('unicode_escape').decode(code)
    return bianma
def gen_headers(string):
lsl = []
headers = {}
for l in string.split('\n')[1:-1]:
l = l.split(': ')
lsl.append(l)
for x in lsl:
headers[str(x[0]).strip(' ')] = x[1]
return headers
def address(address: list):
area_list = cpca.transform(address).values.tolist()[0]
data = {"province": area_list[0], "city": area_list[1], "district": area_list[2]}
while None in area_list:
area_list.remove(None)
data["address"] = "".join(area_list[:-1])
return data
def arabic_to_chinese(number):
chinese_numerals = {
0: '零',
1: '一',
2: '二',
3: '三',
4: '四',
5: '五',
6: '六',
7: '七',
8: '八',
9: '九'
}
units = ['', '十', '百', '千']
large_units = ['', '万', '亿', '兆']
    # Convert the integer to a string so each digit can be handled in turn.
    num_str = str(number)
    num_len = len(num_str)
    # Accumulate the result here.
    result = ''
    # Walk the digits from the most significant position down.
    for i in range(num_len):
        digit = int(num_str[i])
        unit_index = num_len - i - 1
        # Handle zeros.
        if digit == 0:
            if unit_index % 4 == 0:
                # A new large unit (wan, yi, zhao ...) starts here.
                result += large_units[unit_index // 4]
            elif result and result[-1] != chinese_numerals[0]:
                # Avoid emitting several zeros in a row.
                result += chinese_numerals[digit]
        else:
            result += chinese_numerals[digit] + units[unit_index % 4]
            if unit_index % 4 == 0:
                # A new large unit (wan, yi, zhao ...) starts here.
                result += large_units[unit_index // 4]
    return result
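# Illustrative usage sketch: arabic_to_chinese(305) -> '三百零五'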
class Headers:
@classmethod
def user_agent(cls, mobile_headers):
while True:
user_agent = fake.chrome(
version_from=63, version_to=80,
build_from=999, build_to=3500
)
if "Android" in user_agent or "CriOS" in user_agent:
if mobile_headers:
break
continue
else:
break
return user_agent
@classmethod
def header(
cls, string=None,
mobile_headers=None,
headers={}
) -> dict:
if string:
headers = gen_headers(string)
if "\n" not in string:
headers['Referer'] = string
headers['user-agent'] = cls.user_agent(mobile_headers)
return headers
class Decode:
def __init__(self, string):
pass
def discern(self):
pass
def get(args):
# time.sleep(random.randint(1,3))
method = args.pop("method")
url = args.pop("url")
with sessions.Session() as session:
return session.request(method=method, url=url, **args)
class ThreadManager(object):
def __init__(self, work_num: list, func=None, **kwargs):
        self.work_queue = queue.Queue()  # task queue
        self.threads = []  # thread pool
        self.func = func
        self.__work_queue(work_num, kwargs)  # fill the task queue
        self.__thread_pool(len(work_num))  # spawn one worker per job
def __thread_pool(self, thread_num):
"""
        Initialise the thread pool.
        :param thread_num:
        :return:
        """
        for i in range(thread_num):
            # Create one worker thread (a pool member).
self.threads.append(Work(self.work_queue))
def __work_queue(self, jobs_num, kwargs):
"""
        Initialise the work queue.
        :param jobs_num:
        :return:
        """
        for i in jobs_num:
            # Enqueue one job.
if self.func:
self.work_queue.put((self.func, {'data': i, **kwargs}))
else:
self.work_queue.put((get, i))
def wait_allcomplete(self):
"""
        Wait for every worker thread to finish and collect results.
:return:
"""
respone = []
for item in self.threads:
item.join()
respone.append(item.get_result())
return respone
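# Illustrative usage sketch (placeholder URL): one worker thread per job; each
# job dict is consumed via pop(), so build a fresh dict per job.
#     jobs = [{"method": "GET", "url": "https://example.com"} for _ in range(3)]
#     responses = ThreadManager(jobs).wait_allcomplete()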
class Work(threading.Thread):
def __init__(self, work_queue):
threading.Thread.__init__(self)
self.result = None
self.work_queue = work_queue
self.start()
    def run(self) -> None:
        # Loop until the queue is drained, then let the thread exit.
        while True:
            try:
                do, args = self.work_queue.get(block=False)  # pop a job; Queue handles the locking
                self.result = do(args)
                # print(self.result.text)
                self.work_queue.task_done()  # tell the queue this job is done
            except:
                break
def get_result(self):
try:
return self.result
except Exception:
return None | AdyanUtils | /AdyanUtils-0.7.2.tar.gz/AdyanUtils-0.7.2/Utils/crawler/Forge.py | Forge.py |
MIT License
Copyright (c) 2017 Adyen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/LICENSE.md | LICENSE.md |

# Adyen APIs Library for Python
[](https://pypi.org/project/Adyen/)
This is the officially supported Python library for using Adyen's APIs.
## Supported API versions
| API | Description | Service Name | Supported version |
| --- | ----------- | ------------ | ----------------- |
|[BIN lookup API](https://docs.adyen.com/api-explorer/BinLookup/52/overview) | The BIN Lookup API provides endpoints for retrieving information based on a given BIN. | binLookup | **v52** |
| [Balance Platform API](https://docs.adyen.com/api-explorer/balanceplatform/2/overview) | The Balance Platform API enables you to create a platform where you can onboard your users as account holders and create balance accounts, cards, and business accounts. | balancePlatform | **v2** |
| [Checkout API](https://docs.adyen.com/api-explorer/Checkout/70/overview)| Our latest integration for accepting online payments. | checkout | **v70** |
| [Data Protection API](https://docs.adyen.com/development-resources/data-protection-api) | Endpoint for requesting data erasure. | dataProtection | **v1** |
| [Legal Entity Management API](https://docs.adyen.com/api-explorer/legalentity/3/overview) | Endpoint to manage legal entities | legalEntityManagement | **v3** |
| [Management API](https://docs.adyen.com/api-explorer/Management/1/overview)| Configure and manage your Adyen company and merchant accounts, stores, and payment terminals. | management | **v1** |
| [Payments API](https://docs.adyen.com/api-explorer/Payment/68/overview)| Our classic integration for online payments. | payments | **v68** |
| [Payouts API](https://docs.adyen.com/api-explorer/Payout/68/overview)| Endpoints for sending funds to your customers. | payouts | **v68** |
| [POS Terminal Management API](https://docs.adyen.com/api-explorer/postfmapi/1/overview)| Endpoints for managing your point-of-sale payment terminals. | terminal | **v1** |
| [Recurring API](https://docs.adyen.com/api-explorer/Recurring/68/overview)| Endpoints for managing saved payment details. | recurring | **v68** |
| [Stored Value API](https://docs.adyen.com/payment-methods/gift-cards/stored-value-api) | Endpoints for managing gift cards. | storedValue | **v46** |
| [Transfers API](https://docs.adyen.com/api-explorer/transfers/3/overview) | Endpoints for managing transfers, getting information about transactions or moving fund | transfers | **v3** |
For more information, refer to our [documentation](https://docs.adyen.com/) or the [API Explorer](https://docs.adyen.com/api-explorer/).
## Prerequisites
- [Adyen test account](https://docs.adyen.com/get-started-with-adyen)
- [API key](https://docs.adyen.com/development-resources/api-credentials#generate-api-key). For testing, your API credential needs to have the [API PCI Payments role](https://docs.adyen.com/development-resources/api-credentials#roles).
- Python 3.6 or higher
- Packages (optional): requests or pycurl
## Installation
### For development purposes
Clone this repository and run
~~~~ bash
make install
~~~~
### For usage purposes
Use pip command:
~~~~ bash
pip install Adyen
~~~~
## Using the library
### General use with API key
~~~~ python
import Adyen
adyen = Adyen.Adyen()
adyen.payment.client.xapikey = "YourXapikey"
adyen.payment.client.hmac = "YourHMACkey"
adyen.payment.client.platform = "test" # Environment to use the library in.
~~~~
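If you also configure an HMAC key as above, webhook payloads can be verified with the helper in `Adyen.util`. A minimal sketch — it assumes the standard webhook envelope, and `payload` is the parsed JSON body received by your webhook endpoint:
~~~~ python
from Adyen.util import is_valid_hmac_notification

def is_authentic(payload: dict, hmac_key: str) -> bool:
    # Verify the first NotificationRequestItem against your HMAC key
    item = payload["notificationItems"][0]["NotificationRequestItem"]
    return is_valid_hmac_notification(item, hmac_key)
~~~~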
### Consuming Services
Every API the library supports is represented by a service object. The name of the service matching the corresponding API is listed in the [Integrations](#supported-api-versions) section of this document.
#### Using all services
~~~~python
import Adyen
adyen = Adyen.Adyen()
adyen.payment.client.xapikey = "YourXapikey"
adyen.payment.client.platform = "test" # change to live for production
request = {
"amount": {
"currency": "USD",
"value": 1000 # value in minor units
},
"reference": "Your order number",
"paymentMethod": {
"type": "visa",
"encryptedCardNumber": "test_4111111111111111",
"encryptedExpiryMonth": "test_03",
"encryptedExpiryYear": "test_2030",
"encryptedSecurityCode": "test_737"
},
"shopperReference": "YOUR_UNIQUE_SHOPPER_ID_IOfW3k9G2PvXFu2j",
"returnUrl": "https://your-company.com/...",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT"
}
result = adyen.checkout.payments_api.payments(request)
~~~~
#### Using one of the services
~~~~python
from Adyen import checkout
checkout.client.xapikey = "YourXapikey"
checkout.client.platform = "test" # change to live for production
request = {
"amount": {
"currency": "USD",
"value": 1000 # value in minor units
},
"reference": "Your order number",
"paymentMethod": {
"type": "visa",
"encryptedCardNumber": "test_4111111111111111",
"encryptedExpiryMonth": "test_03",
"encryptedExpiryYear": "test_2030",
"encryptedSecurityCode": "test_737"
},
"shopperReference": "YOUR_UNIQUE_SHOPPER_ID_IOfW3k9G2PvXFu2j",
"returnUrl": "https://your-company.com/...",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT"
}
result = checkout.payments_api.payments(request)
~~~~
#### Force HTTP library
~~~~python
import Adyen
adyen = Adyen.Adyen()
adyen.client.http_force = 'requests' # or 'pycurl'
~~~~
### Using query parameters (management API only)
Define a dictionary with query parameters that you want to use.
~~~~ python
query_parameters = {
'pageSize':10,
'pageNumber':3
}
~~~~
pass the dictionary to the method as an additional argument.
~~~~ python
adyen.management.account_company_level_api.get_companies(query_parameters=query_parameters)
~~~~
### Handling exceptions
Adyen service exceptions extend the `AdyenError` class. After you catch such an exception, you can
inspect its arguments for the specifics of the error, or call the `debug` method, which prints all the arguments.
~~~~python
try:
    adyen.checkout.payments_api.payments(request)
except Adyen.exceptions.AdyenError as error:
    error.debug()
~~~~
<details><summary>List of exceptions</summary>
<p>AdyenInvalidRequestError</p>
<p>AdyenAPIResponseError</p>
<p>AdyenAPIAuthenticationError</p>
<p>AdyenAPIInvalidPermission</p>
<p>AdyenAPICommunicationError</p>
<p>AdyenAPIValidationError</p>
<p>AdyenAPIUnprocessableEntity</p>
<p>AdyenAPIInvalidFormat</p>
<p>AdyenEndpointInvalidFormat</p>
</details>
### Example integration
For a closer look at how our Python library works, clone our [example integration](https://github.com/adyen-examples/adyen-python-online-payments). This includes commented code, highlighting key features and concepts, and examples of API calls that can be made using the library.
## Feedback
We value your input! Help us enhance our API Libraries and improve the integration experience by providing your feedback. Please take a moment to fill out [our feedback form](https://forms.gle/A4EERrR6CWgKWe5r9) to share your thoughts, suggestions or ideas.
## Contributing
We encourage you to contribute to this repository, so everyone can benefit from new features, bug fixes, and any other improvements.
Have a look at our [contributing guidelines](https://github.com/Adyen/adyen-python-api-library/blob/develop/CONTRIBUTING.md) to find out how to raise a pull request.
## Support
If you have a feature request, or spotted a bug or a technical problem, [create an issue here](https://github.com/Adyen/adyen-web/issues/new/choose).
For other questions, [contact our Support Team](https://www.adyen.help/hc/en-us/requests/new?ticket_form_id=360000705420).
## Licence
This repository is available under the [MIT license](https://github.com/Adyen/adyen-python-api-library/blob/main/LICENSE.md).
## See also
* [Example integration](https://github.com/adyen-examples/adyen-python-online-payments)
* [Adyen docs](https://docs.adyen.com/)
* [API Explorer](https://docs.adyen.com/api-explorer/)
| Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/README.md | README.md |
import Adyen
import unittest
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestManagement(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.xapikey = "YourXapikey"
client.platform = "test"
balance_platform_version = settings.API_BALANCE_PLATFORM_VERSION
def test_creating_balance_account(self):
request = {
"accountHolderId": "AH32272223222B59K6ZKBBFNQ",
"description": "S.Hopper - Main balance account"
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/configuration/"
"balance-account-created.json")
result = self.adyen.balancePlatform.balance_accounts_api.create_balance_account(request)
self.assertEqual('AH32272223222B59K6ZKBBFNQ', result.message['accountHolderId'])
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://balanceplatform-api-test.adyen.com/bcl/{self.balance_platform_version}/balanceAccounts',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_creating_account_holder(self):
request = {
"description": "Liable account holder used for international payments and payouts",
"reference": "S.Eller-001",
"legalEntityId": "LE322JV223222D5GG42KN6869"
}
self.adyen.client = self.test.create_client_from_file(200, request, "test/mocks/configuration/"
"account-holder-created.json")
result = self.adyen.balancePlatform.account_holders_api.create_account_holder(request)
self.assertEqual("LE322JV223222D5GG42KN6869", result.message['legalEntityId'])
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://balanceplatform-api-test.adyen.com/bcl/{self.balance_platform_version}/accountHolders',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_get_balance_platform(self):
platform_id = "YOUR_BALANCE_PLATFORM"
self.adyen.client = self.test.create_client_from_file(200, None, "test/mocks/configuration/"
"balance-platform-retrieved.json")
result = self.adyen.balancePlatform.platform_api.get_balance_platform(platform_id)
self.assertEqual(platform_id, result.message['id'])
self.adyen.client.http_client.request.assert_called_once_with(
'GET',
f'https://balanceplatform-api-test.adyen.com/bcl/{self.balance_platform_version}/balancePlatforms/{platform_id}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=None,
xapikey="YourXapikey"
)
def test_creating_payment_instrument(self):
request = {
"type": "bankAccount",
"description": "YOUR_DESCRIPTION",
"balanceAccountId": "BA3227C223222B5CTBLR8BWJB",
"issuingCountryCode": "NL"
}
self.adyen.client = self.test.create_client_from_file(200, request, "test/mocks/configuration/"
"business-account-created.json")
result = self.adyen.balancePlatform.payment_instruments_api.create_payment_instrument(request)
self.assertEqual("BA3227C223222B5CTBLR8BWJB", result.message["balanceAccountId"])
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://balanceplatform-api-test.adyen.com/bcl/{self.balance_platform_version}/paymentInstruments',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_creating_payment_instrument_group(self):
request = {
"balancePlatform": "YOUR_BALANCE_PLATFORM",
"txVariant": "mc"
}
self.adyen.client = self.test.create_client_from_file(200, request, "test/mocks/configuration/"
"payment-instrument-group-created.json")
result = self.adyen.balancePlatform.payment_instrument_groups_api.create_payment_instrument_group(request)
self.assertEqual("YOUR_BALANCE_PLATFORM", result.message['balancePlatform'])
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://balanceplatform-api-test.adyen.com/bcl/{self.balance_platform_version}/paymentInstrumentGroups',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_get_transaction_rule(self):
transactionRuleId = "TR32272223222B5CMD3V73HXG"
self.adyen.client = self.test.create_client_from_file(200, {}, "test/mocks/configuration/"
"transaction-rule-retrieved.json")
result = self.adyen.balancePlatform.transaction_rules_api.get_transaction_rule(transactionRuleId)
self.assertEqual(transactionRuleId, result.message['transactionRule']['id'])
self.adyen.client.http_client.request.assert_called_once_with(
'GET',
f'https://balanceplatform-api-test.adyen.com/bcl/{self.balance_platform_version}/'
f'transactionRules/{transactionRuleId}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=None,
xapikey="YourXapikey"
) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/ConfigurationTest.py | ConfigurationTest.py |
import Adyen
from Adyen import settings
import unittest
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestManagement(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.xapikey = "YourXapikey"
client.platform = "test"
management_version = settings.API_MANAGEMENT_VERSION
def test_get_company_account(self):
request = None
id = "YOUR_COMPANY_ACCOUNT"
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"management/"
"get_company_account"
".json")
        result = self.adyen.management.account_company_level_api.get_company_account(companyId=companyId)
        self.assertEqual(companyId, result.message['id'])
self.adyen.client.http_client.request.assert_called_once_with(
'GET',
            f'https://management-test.adyen.com/{self.management_version}/companies/{companyId}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=None,
xapikey="YourXapikey"
)
def test_my_api_credential_api(self):
request = {"domain": "YOUR_DOMAIN"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"management/"
"post_me_allowed"
"_origins.json")
result = self.adyen.management.my_api_credential_api.add_allowed_origin(request)
originId = result.message['id']
self.assertEqual("YOUR_DOMAIN", result.message['domain'])
self.adyen.client = self.test.create_client_from_file(204, {},
"test/mocks/"
"management/"
"no_content.json")
result = self.adyen.management.my_api_credential_api.remove_allowed_origin(originId)
self.adyen.client.http_client.request.assert_called_once_with(
'DELETE',
f'https://management-test.adyen.com/{self.management_version}/me/allowedOrigins/{originId}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=None,
xapikey="YourXapikey"
)
def test_update_a_store(self):
request = {
"address": {
"line1": "1776 West Pinewood Avenue",
"line2": "Heartland Building",
"line3": "",
"postalCode": "20251"
}
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"management/"
"update_a_store"
".json")
storeId = "YOUR_STORE_ID"
merchantId = "YOUR_MERCHANT_ACCOUNT_ID"
result = self.adyen.management.account_store_level_api.update_store(request, merchantId, storeId)
self.adyen.client.http_client.request.assert_called_once_with(
'PATCH',
f'https://management-test.adyen.com/{self.management_version}/merchants/{merchantId}/stores/{storeId}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
self.assertEqual(storeId, result.message['id'])
self.assertEqual("1776 West Pinewood Avenue", result.message['address']['line1'])
def test_create_a_user(self):
request = {
"name": {
"firstName": "John",
"lastName": "Smith"
},
"username": "johnsmith",
"email": "[email protected]",
"timeZoneCode": "Europe/Amsterdam",
"roles": [
"Merchant standard role",
"Merchant admin"
],
"associatedMerchantAccounts": [
"YOUR_MERCHANT_ACCOUNT"
]
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"management/"
"create_a_user"
".json")
companyId = "YOUR_COMPANY_ACCOUNT"
result = self.adyen.management.users_company_level_api.create_new_user(request, companyId)
self.assertEqual(request['name']['firstName'], result.message['name']['firstName'])
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://management-test.adyen.com/{self.management_version}/companies/{companyId}/users',
json=request,
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
xapikey="YourXapikey"
)
def test_get_list_of_android_apps(self):
request = {}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"management/"
"get_list_of"
"_android_apps"
".json")
companyId = "YOUR_COMPANY_ACCOUNT"
result = self.adyen.management.terminal_actions_company_level_api.list_android_apps(companyId)
self.assertEqual("ANDA422LZ223223K5F694GCCF732K8", result.message['androidApps'][0]['id'])
def test_query_paramaters(self):
request = {}
companyId = "YOUR_COMPANY_ACCOUNT"
query_parameters = {
'pageNumber': 1,
'pageSize': 10
}
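        # The query parameters should end up serialised into the request URL, as asserted below.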
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"management/"
"get_list_of_merchant_accounts.json")
result = self.adyen.management.account_company_level_api. \
list_merchant_accounts(companyId, query_parameters=query_parameters)
self.adyen.client.http_client.request.assert_called_once_with(
'GET',
f'https://management-test.adyen.com/{self.management_version}/companies/{companyId}/merchants?pageNumber=1&pageSize=10',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=None,
xapikey="YourXapikey"
) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/ManagementTest.py | ManagementTest.py |
import Adyen
import unittest
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestModifications(unittest.TestCase):
ady = Adyen.Adyen()
client = ady.client
test = BaseTest(ady)
client.username = "YourWSUser"
client.password = "YourWSPassword"
client.platform = "test"
def test_capture_success(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['modificationAmount'] = {"value": "1234", "currency": "EUR"}
request['originalReference'] = "YourOriginalReference"
self.ady.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'capture-success'
'.json')
result = self.ady.payment.modifications_api.capture(request)
self.assertEqual("[capture-received]", result.message['response'])
def test_capture_error_167(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['modificationAmount'] = {"value": "1234", "currency": "EUR"}
request['originalReference'] = "YourOriginalReference"
self.ady.client = self.test.create_client_from_file(422, request,
'test/mocks/'
'capture-error-167'
'.json')
self.assertRaisesRegex(
Adyen.AdyenAPIUnprocessableEntity,
"AdyenAPIUnprocessableEntity:{'status': 422, 'errorCode': '167', 'message': 'Original pspReference required for this operation', 'errorType': 'validation'}",
self.ady.payment.modifications_api.capture,
request
)
def test_cancel_or_refund_received(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['originalReference'] = "YourOriginalReference"
self.ady.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'cancelOrRefund'
'-received.json')
result = self.ady.payment.modifications_api.cancel_or_refund(request)
self.assertEqual("[cancelOrRefund-received]",
result.message['response'])
def test_refund_received(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['originalReference'] = "YourOriginalReference"
request['modificationAmount'] = {"value": "1234", "currency": "EUR"}
self.ady.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'refund-received'
'.json')
result = self.ady.payment.modifications_api.refund(request)
self.assertEqual("[refund-received]", result.message['response'])
def test_cancel_received(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['originalReference'] = "YourOriginalReference"
self.ady.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'cancel-received'
'.json')
result = self.ady.payment.modifications_api.cancel(request)
self.assertEqual("[cancel-received]", result.message['response'])
def test_adjust_authorisation_received(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['modificationAmount'] = {"value": "1234", "currency": "EUR"}
request['originalReference'] = "YourOriginalReference"
self.ady.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'adjust-'
'authorisation-'
'received.json')
result = self.ady.payment.modifications_api.adjust_authorisation(request)
self.assertEqual("[adjustAuthorisation-received]",
result.message['response'])
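# Run the whole suite once per HTTP client backend the library supports,
# resetting http_init so the client re-initialises with the forced backend.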
TestModifications.client.http_force = "requests"
suite = unittest.TestLoader().loadTestsFromTestCase(TestModifications)
unittest.TextTestRunner(verbosity=2).run(suite)
TestModifications.client.http_force = "pycurl"
TestModifications.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestModifications)
unittest.TextTestRunner(verbosity=2).run(suite)
TestModifications.client.http_force = "other"
TestModifications.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestModifications)
unittest.TextTestRunner(verbosity=2).run(suite) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/ModificationTest.py | ModificationTest.py |
import Adyen
import unittest
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestRecurring(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.username = "YourWSUser"
client.password = "YourWSPassword"
client.platform = "test"
recurring_version = settings.API_RECURRING_VERSION
def test_list_recurring_details(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request["shopperEmail"] = "[email protected]"
request["shopperReference"] = "ref"
request['recurring'] = {}
request["recurring"]['contract'] = "RECURRING"
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'recurring/'
'listRecurring'
'Details-'
'success.json')
result = self.adyen.recurring.list_recurring_details(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/Recurring/{self.recurring_version}/listRecurringDetails',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
username='YourWSUser',
password='YourWSPassword'
)
self.assertEqual(1, len(result.message['details']))
self.assertEqual(1, len(result.message['details'][0]))
recurringDetail = result.message['details'][0]['RecurringDetail']
self.assertEqual("recurringReference",
recurringDetail['recurringDetailReference'])
self.assertEqual("cardAlias", recurringDetail['alias'])
self.assertEqual("1111", recurringDetail['card']['number'])
def test_disable(self):
request = {}
request["shopperEmail"] = "[email protected]"
request["shopperReference"] = "ref"
request["recurringDetailReference"] = "12345678889"
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'recurring/'
'disable-success'
'.json')
result = self.adyen.recurring.disable(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/Recurring/{self.recurring_version}/disable',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
username='YourWSUser',
password='YourWSPassword',
)
self.assertEqual(1, len(result.message['details']))
self.assertEqual("[detail-successfully-disabled]",
result.message['response'])
def test_disable_803(self):
request = {}
request["shopperEmail"] = "[email protected]"
request["shopperReference"] = "ref"
request["recurringDetailReference"] = "12345678889"
self.adyen.client = self.test.create_client_from_file(422, request,
'test/mocks/'
'recurring/'
'disable-error-803'
'.json')
self.assertRaisesRegex(
Adyen.AdyenAPIUnprocessableEntity,
"AdyenAPIUnprocessableEntity:{'status': 422, 'errorCode': '803', 'message': 'PaymentDetail not found', 'errorType': 'validation'}",
self.adyen.recurring.disable,
request
)
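# Repeat the suite for each HTTP client backend the library can be forced to use.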
TestRecurring.client.http_force = "requests"
suite = unittest.TestLoader().loadTestsFromTestCase(TestRecurring)
unittest.TextTestRunner(verbosity=2).run(suite)
TestRecurring.client.http_force = "pycurl"
TestRecurring.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestRecurring)
unittest.TextTestRunner(verbosity=2).run(suite)
TestRecurring.client.http_force = "other"
TestRecurring.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestRecurring)
unittest.TextTestRunner(verbosity=2).run(suite) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/RecurringTest.py | RecurringTest.py |
import Adyen
import unittest
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestLegalEntityManagement(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.xapikey = "YourXapikey"
client.platform = "test"
lem_version = settings.API_LEGAL_ENTITY_MANAGEMENT_VERSION
def test_creating_legal_entity(self):
request = {
"type": "individual",
"individual": {
"residentialAddress": {
"city": "Amsterdam",
"country": "NL",
"postalCode": "1011DJ",
"street": "Simon Carmiggeltstraat 6 - 50"
},
"name": {
"firstName": "Shelly",
"lastName": "Eller"
},
"birthData": {
"dateOfBirth": "1990-06-21"
},
"email": "[email protected]"
}
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/legalEntityManagement/"
"individual_legal_entity_created.json")
result = self.adyen.legalEntityManagement.legal_entities_api.create_legal_entity(request)
self.assertEqual('Shelly', result.message['individual']['name']['firstName'])
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://kyc-test.adyen.com/lem/{self.lem_version}/legalEntities',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_get_transfer_instrument(self):
instrumentId = "SE322JV223222F5GNXSR89TMW"
self.adyen.client = self.test.create_client_from_file(200, None, "test/mocks/legalEntityManagement/"
"details_of_trainsfer_instrument.json")
result = self.adyen.legalEntityManagement.transfer_instruments_api.get_transfer_instrument(instrumentId)
self.assertEqual(instrumentId, result.message['id'])
self.adyen.client.http_client.request.assert_called_once_with(
'GET',
f'https://kyc-test.adyen.com/lem/{self.lem_version}/transferInstruments/{instrumentId}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=None,
xapikey="YourXapikey"
)
def test_update_business_line(self):
businessLineId = "SE322JV223222F5GVGMLNB83F"
request = {
"industryCode": "55",
"webData": [
{
"webAddress": "https://www.example.com"
}
]
}
self.adyen.client = self.test.create_client_from_file(200, request, "test/mocks/legalEntityManagement/"
"business_line_updated.json")
result = self.adyen.legalEntityManagement.business_lines_api.update_business_line(request, businessLineId)
self.assertEqual(businessLineId, result.message['id'])
self.adyen.client.http_client.request.assert_called_once_with(
'PATCH',
f'https://kyc-test.adyen.com/lem/{self.lem_version}/businessLines/{businessLineId}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_accept_terms_of_service(self):
legalEntityId = "legalId"
documentId = "documentId"
request = {
'acceptedBy': "UserId",
'ipAddress': "UserIpAddress"
}
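        # The endpoint replies 204 No Content, so no mock response file is needed.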
self.adyen.client = self.test.create_client_from_file(204, request)
self.adyen.legalEntityManagement.terms_of_service_api.accept_terms_of_service(request, legalEntityId,
documentId)
self.adyen.client.http_client.request.assert_called_once_with(
'PATCH',
f'https://kyc-test.adyen.com/lem/{self.lem_version}/legalEntities/{legalEntityId}/termsOfService/{documentId}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/LegalEntityManagementTest.py | LegalEntityManagementTest.py |
import Adyen
import unittest
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestTransfers(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.xapikey = "YourXapikey"
client.platform = "test"
transfers_version = settings.API_TRANSFERS_VERSION
def test_transfer_fund(self):
request = {
"amount": {
"value": 110000,
"currency": "EUR"
},
"balanceAccountId": "BAB8B2C3D4E5F6G7H8D9J6GD4",
"category": "bank",
"counterparty": {
"bankAccount": {
"accountHolder": {
"fullName": "A. Klaassen",
"address": {
"city": "San Francisco",
"country": "US",
"postalCode": "94678",
"stateOrProvince": "CA",
"street": "Brannan Street",
"street2": "274"
}
}
},
"accountIdentification": {
"type": "numberAndBic",
"accountNumber": "123456789",
"bic": "BOFAUS3NXXX"
}
},
"priority": "wire",
"referenceForBeneficiary": "Your reference sent to the beneficiary",
"reference": "Your internal reference for the transfer",
"description": "Your description for the transfer"
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/transfers/"
"make-transfer-response.json")
result = self.adyen.transfers.transfers_api.transfer_funds(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://balanceplatform-api-test.adyen.com/btl/{self.transfers_version}/transfers',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_get_all_transactions(self):
self.adyen.client = self.test.create_client_from_file(200, None, "test/mocks/transfers/"
"get-all-transactions.json")
result = self.adyen.transfers.transactions_api.get_all_transactions()
self.adyen.client.http_client.request.assert_called_once_with(
'GET',
f'https://balanceplatform-api-test.adyen.com/btl/{self.transfers_version}/transactions',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=None,
xapikey="YourXapikey"
)
def test_get_transaction(self):
transacion_id="123"
self.adyen.client = self.test.create_client_from_file(200, None, "test/mocks/transfers/"
"get-transaction.json")
        result = self.adyen.transfers.transactions_api.get_transaction(transaction_id)
self.adyen.client.http_client.request.assert_called_once_with(
'GET',
            f'https://balanceplatform-api-test.adyen.com/btl/{self.transfers_version}/transactions/{transaction_id}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=None,
xapikey="YourXapikey"
) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/TransfersTest.py | TransfersTest.py |
import Adyen
from Adyen import settings
import unittest
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
from Adyen.exceptions import AdyenEndpointInvalidFormat
class TestDetermineUrl(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.xapikey = "YourXapikey"
checkout_version = settings.API_CHECKOUT_VERSION
payment_version = settings.API_PAYMENT_VERSION
binLookup_version = settings.API_BIN_LOOKUP_VERSION
management_version = settings.API_MANAGEMENT_VERSION
def test_checkout_api_url_custom(self):
self.client.live_endpoint_prefix = "1797a841fbb37ca7-AdyenDemo"
url = self.adyen.client._determine_api_url("live", "checkout", "/payments")
self.assertEqual("https://1797a841fbb37ca7-AdyenDemo-checkout-"
f"live.adyenpayments.com/checkout/{self.checkout_version}/payments", url)
def test_checkout_api_url(self):
self.client.live_endpoint_prefix = None
url = self.adyen.client._determine_api_url("test", "checkout",
"/payments/details")
self.assertEqual(url, "https://checkout-test.adyen.com"
f"/{self.checkout_version}/payments/details")
def test_payments_invalid_platform(self):
request = {
'amount': {"value": "100000", "currency": "EUR"},
"reference": "Your order number",
'paymentMethod': {
"type": "scheme",
"number": "4111111111111111",
"expiryMonth": "08",
"expiryYear": "2018",
"holderName": "John Smith",
"cvc": "737"
},
'merchantAccount': "YourMerchantAccount",
'returnUrl': "https://your-company.com/..."
}
self.client.platform = "live"
self.client.live_endpoint_prefix = None
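        # On the live platform, a missing live endpoint prefix makes a valid URL impossible.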
        with self.assertRaises(AdyenEndpointInvalidFormat):
            self.adyen.checkout.payments_api.sessions(request)
def test_pal_url_live_endpoint_prefix_live_platform(self):
self.client.live_endpoint_prefix = "1797a841fbb37ca7-AdyenDemo"
url = self.adyen.client._determine_api_url(
"live", "payments", "/payments"
)
self.assertEqual(
url,
("https://1797a841fbb37ca7-AdyenDemo-pal-"
f"live.adyenpayments.com/pal/servlet/Payment/{self.payment_version}/payments")
)
def test_pal_url_live_endpoint_prefix_test_platform(self):
self.client.live_endpoint_prefix = "1797a841fbb37ca7-AdyenDemo"
url = self.adyen.client._determine_api_url(
"test", "payments", "/payments"
)
self.assertEqual(
url,
f"https://pal-test.adyen.com/pal/servlet/Payment/{self.payment_version}/payments")
def test_pal_url_no_live_endpoint_prefix_test_platform(self):
self.client.live_endpoint_prefix = None
url = self.adyen.client._determine_api_url(
"test", "payments", "/payments"
)
self.assertEqual(
url,
f"https://pal-test.adyen.com/pal/servlet/Payment/{self.payment_version}/payments")
def test_binlookup_url_no_live_endpoint_prefix_test_platform(self):
self.client.live_endpoint_prefix = None
url = self.adyen.client._determine_api_url(
"test", "binlookup", "/get3dsAvailability"
)
self.assertEqual(
url,
("https://pal-test.adyen.com/pal/servlet/"
f"BinLookup/{self.binLookup_version}/get3dsAvailability")
)
def test_checkout_api_url_orders(self):
self.client.live_endpoint_prefix = None
url = self.adyen.client._determine_api_url("test", "checkout",
"/orders")
self.assertEqual(url, "https://checkout-test.adyen.com"
f"/{self.checkout_version}/orders")
def test_checkout_api_url_order_cancel(self):
self.client.live_endpoint_prefix = None
url = self.adyen.client._determine_api_url("test", "checkout",
"/orders/cancel")
self.assertEqual(url, "https://checkout-test.adyen.com"
f"/{self.checkout_version}/orders/cancel")
def test_checkout_api_url_order_payment_methods_balance(self):
self.client.live_endpoint_prefix = None
url = self.adyen.client._determine_api_url("test", "checkout",
"/paymentMethods/"
"balance")
self.assertEqual(url, f"https://checkout-test.adyen.com/{self.checkout_version}/"
"paymentMethods/balance")
def test_checkout_api_url_sessions(self):
self.client.live_endpoint_prefix = None
url = self.adyen.client._determine_api_url("test", "checkout",
"/sessions")
self.assertEqual(url, f"https://checkout-test.adyen.com/{self.checkout_version}/"
"sessions")
def test_management_api_url_companies(self):
companyId = "YOUR_COMPANY_ID"
url = self.adyen.client._determine_api_url("test", "management", f'/companies/{companyId}/users')
self.assertEqual(url, f"https://management-test.adyen.com/{self.management_version}/"
"companies/YOUR_COMPANY_ID/users") | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/DetermineEndpointTest.py | DetermineEndpointTest.py |
import Adyen
import unittest
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestThirdPartyPayout(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.username = "YourWSUser"
client.password = "YourWSPassword"
client.platform = "test"
client.review_payout_username = "YourReviewPayoutUser"
client.review_payout_password = "YourReviewPayoutPassword"
client.store_payout_username = "YourStorePayoutUser"
client.store_payout_password = "YourStorePayoutPassword"
payout_version = settings.API_PAYOUT_VERSION
def test_confirm_success(self):
request = {
"merchantAccount": "YourMerchantAccount",
"originalReference": "YourReference"
}
resp = 'test/mocks/payout/confirm-success.json'
self.adyen.client = self.test.create_client_from_file(200, request, resp)
result = self.adyen.payout.reviewing_api.confirm_third_party(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/Payout/{self.payout_version}/confirmThirdParty',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
username='YourWSUser',
password='YourWSPassword',
)
self.assertIsNotNone(result.message['pspReference'])
expected = "[payout-confirm-received]"
self.assertEqual(expected, result.message['resultCode'])
def test_confirm_missing_reference(self):
request = {
"merchantAccount": "YourMerchantAccount",
"originalReference": ""
}
resp = 'test/mocks/payout/confirm-missing-reference.json'
self.adyen.client = self.test.create_client_from_file(500, request, resp)
self.assertRaisesRegex(
Adyen.AdyenAPICommunicationError,
"AdyenAPICommunicationError:{'status': 500, 'errorCode': '702', 'message': \"Required field 'merchantAccount' is null\", 'errorType': 'validation'}",
self.adyen.payout.reviewing_api.confirm_third_party,
request
)
def test_decline_success(self):
request = {
"merchantAccount": "YourMerchantAccount",
"originalReference": "YourReference"
}
resp = 'test/mocks/payout/decline-success.json'
self.adyen.client = self.test.create_client_from_file(200, request, resp)
result = self.adyen.payout.reviewing_api.decline_third_party(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/Payout/{self.payout_version}/declineThirdParty',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
username='YourWSUser',
password='YourWSPassword',
)
self.assertIsNotNone(result.message['pspReference'])
expected = "[payout-decline-received]"
self.assertEqual(expected, result.message['resultCode'])
def test_decline_missing_reference(self):
request = {
"merchantAccount": "YourMerchantAccount",
"originalReference": ""
}
resp = 'test/mocks/payout/decline-missing-reference.json'
self.adyen.client = self.test.create_client_from_file(500, request, resp)
self.assertRaisesRegex(
Adyen.AdyenAPICommunicationError,
"AdyenAPICommunicationError:{'status': 500, 'errorCode': '702', 'message': \"Required field 'merchantAccount' is null\", 'errorType': 'validation'}",
self.adyen.payout.reviewing_api.confirm_third_party,
request
)
def test_store_detail_bank_success(self):
request = {
"bank": {
"bankName": "AbnAmro",
"bic": "ABNANL2A",
"countryCode": "NL",
"iban": "NL32ABNA0515071439",
"ownerName": "Adyen",
"bankCity": "Amsterdam",
"taxId": "bankTaxId"
},
"merchantAccount": "YourMerchantAccount",
"recurring": {
"contract": "PAYOUT"
},
"shopperEmail": "[email protected]",
"shopperReference": "ref"
}
resp = 'test/mocks/payout/storeDetail-success.json'
self.adyen.client = self.test.create_client_from_file(200, request, resp)
result = self.adyen.payout.initialization_api.store_detail(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/Payout/{self.payout_version}/storeDetail',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
username='YourWSUser',
password='YourWSPassword'
)
self.assertIsNotNone(result.message['pspReference'])
self.assertIsNotNone(result.message['recurringDetailReference'])
expected = "Success"
self.assertEqual(expected, result.message['resultCode'])
def test_submit_success(self):
request = {
"amount": {
"value": "100000",
"currency": "EUR"
},
"reference": "payout-test", "recurring": {
"contract": "PAYOUT"
},
"merchantAccount": "YourMerchantAccount",
"shopperEmail": "[email protected]",
"shopperReference": "ref",
"selectedRecurringDetailReference": "LATEST"
}
resp = 'test/mocks/payout/submit-success.json'
self.adyen.client = self.test.create_client_from_file(200, request, resp)
result = self.adyen.payout.initialization_api.submit_third_party(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/Payout/{self.payout_version}/submitThirdParty',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
username='YourWSUser',
password='YourWSPassword'
)
self.assertIsNotNone(result.message['pspReference'])
expected = "[payout-submit-received]"
self.assertEqual(expected, result.message['resultCode'])
def test_submit_invalid_recurring_reference(self):
request = {
"amount": {
"value": "100000",
"currency": "EUR"
},
"reference": "payout-test", "recurring": {
"contract": "PAYOUT"
},
"merchantAccount": "YourMerchantAccount",
"shopperEmail": "[email protected]",
"shopperReference": "ref",
"selectedRecurringDetailReference": "1234"
}
resp = 'test/mocks/payout/submit-invalid-reference.json'
self.adyen.client = self.test.create_client_from_file(422, request, resp)
self.assertRaisesRegex(
Adyen.AdyenAPIUnprocessableEntity,
"AdyenAPIUnprocessableEntity:{'status': 422, 'errorCode': '800',"
" 'message': 'Contract not found', 'errorType': 'validation'}",
self.adyen.payout.initialization_api.submit_third_party,
request
)
def test_store_detail_and_submit_missing_reference(self):
request = {
"amount": {
"value": "100000",
"currency": "EUR"
},
"merchantAccount": "YourMerchantAccount",
"recurring": {
"contract": "PAYOUT"
},
"shopperEmail": "[email protected]",
"shopperReference": "ref",
"bank": {
"iban": "NL32ABNA0515071439",
"ownerName": "Adyen",
"countryCode": "NL",
}
}
resp = 'test/mocks/payout/submit-missing-reference.json'
self.adyen.client = self.test.create_client_from_file(422, request, resp)
self.assertRaisesRegex(
Adyen.AdyenAPIUnprocessableEntity,
"AdyenAPIUnprocessableEntity:{'status': 422, 'message': 'Contract not found',"
" 'errorCode': '800', 'errorType': 'validation'}",
self.adyen.payout.initialization_api.store_detail_and_submit_third_party,
request
)
def test_store_detail_and_submit_missing_payment(self):
request = {
"amount": {
"value": "100000",
"currency": "EUR"
},
"merchantAccount": "YourMerchantAccount",
"reference": "payout-test",
"recurring": {
"contract": "PAYOUT"
},
"shopperEmail": "[email protected]",
"shopperReference": "ref"
}
resp = 'test/mocks/payout/storeDetailAndSubmit-missing-payment.json'
self.adyen.client = self.test.create_client_from_file(422, request, resp)
self.assertRaisesRegex(
Adyen.AdyenAPIUnprocessableEntity,
"AdyenAPIUnprocessableEntity:{'status': 422, 'errorCode': '000',"
" 'message': 'Please supply paymentDetails', 'errorType': 'validation'}",
self.adyen.payout.initialization_api.store_detail_and_submit_third_party,
request
)
def test_store_detail_and_submit_invalid_iban(self):
request = {
"amount": {
"value": "100000",
"currency": "EUR"
},
"merchantAccount": "YourMerchantAccount",
"reference": "payout-test",
"recurring": {
"contract": "PAYOUT"
},
"shopperEmail": "[email protected]",
"shopperReference": "ref",
"bank": {
"countryCode": "NL",
"iban": "4111111111111111",
"ownerName": "Adyen",
}
}
resp = 'test/mocks/payout/storeDetailAndSubmit-invalid-iban.json'
self.adyen.client = self.test.create_client_from_file(422, request, resp)
self.assertRaisesRegex(
Adyen.AdyenAPIUnprocessableEntity,
"AdyenAPIUnprocessableEntity:{'status': 422, 'errorCode': '161',"
" 'message': 'Invalid iban', 'errorType': 'validation'}",
self.adyen.payout.initialization_api.store_detail_and_submit_third_party,
request
)
def test_store_detail_and_submit_card_success(self):
request = {
"amount": {
"value": "100000",
"currency": "EUR"
},
"merchantAccount": "YourMerchantAccount",
"reference": "payout-test",
"recurring": {
"contract": "PAYOUT"
},
"shopperEmail": "[email protected]",
"shopperReference": "ref",
"card": {
"number": "4111111111111111",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "737",
"holderName": "John Smith"
}
}
resp = 'test/mocks/payout/storeDetailAndSubmit-card-success.json'
self.adyen.client = self.test.create_client_from_file(200, request, resp)
result = self.adyen.payout.initialization_api.store_detail_and_submit_third_party(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/Payout/{self.payout_version}/storeDetailAndSubmitThirdParty',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
username='YourWSUser',
password='YourWSPassword'
)
self.assertIsNotNone(result.message['pspReference'])
expected = "[payout-submit-received]"
self.assertEqual(expected, result.message['resultCode'])
def test_store_detail_and_submit_bank_success(self):
request = {
"amount": {
"value": "100000",
"currency": "EUR"
},
"bank": {
"countryCode": "NL",
"iban": "NL32ABNA0515071439",
"ownerName": "Adyen",
},
"merchantAccount": "YourMerchantAccount",
"recurring": {
"contract": "PAYOUT"
},
"reference": "YourReference",
"shopperEmail": "[email protected]",
"shopperReference": "ref"
}
resp = 'test/mocks/payout/storeDetailAndSubmit-bank-success.json'
self.adyen.client = self.test.create_client_from_file(200, request, resp)
result = self.adyen.payout.initialization_api.store_detail_and_submit_third_party(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/Payout/{self.payout_version}/storeDetailAndSubmitThirdParty',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
username='YourWSUser',
password='YourWSPassword'
)
self.assertIsNotNone(result.message['pspReference'])
expected = "[payout-submit-received]"
self.assertEqual(expected, result.message['resultCode'])
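# Exercise every payout test again with each HTTP client implementation forced in turn.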
TestThirdPartyPayout.client.http_force = "requests"
suite = unittest.TestLoader().loadTestsFromTestCase(TestThirdPartyPayout)
unittest.TextTestRunner(verbosity=2).run(suite)
TestThirdPartyPayout.client.http_force = "pycurl"
TestThirdPartyPayout.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestThirdPartyPayout)
unittest.TextTestRunner(verbosity=2).run(suite)
TestThirdPartyPayout.client.http_force = "other"
TestThirdPartyPayout.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestThirdPartyPayout)
unittest.TextTestRunner(verbosity=2).run(suite) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/ThirdPartyPayoutTest.py | ThirdPartyPayoutTest.py |
import unittest
import Adyen
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestCheckoutUtility(unittest.TestCase):
ady = Adyen.Adyen()
client = ady.client
test = BaseTest(ady)
client.xapikey = "YourXapikey"
client.platform = "test"
checkout_version = settings.API_CHECKOUT_VERSION
def test_origin_keys_success_mocked(self):
request = {
"originDomains": {
"https://www.your-domain1.com",
"https://www.your-domain2.com",
"https://www.your-domain3.com"
}
}
self.ady.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkoututility/"
"originkeys"
"-success.json")
result = self.ady.checkout.utility_api.origin_keys(request)
self.assertEqual("pub.v2.7814286629520534.aHR0cHM6Ly93d3cu"
"eW91ci1kb21haW4xLmNvbQ.UEwIBmW9-c_uXo5wS"
"Er2w8Hz8hVIpujXPHjpcEse3xI",
result.message['originKeys']
['https://www.your-domain1.com'])
self.assertEqual("pub.v2.7814286629520534.aHR0cHM6Ly93d3cu"
"eW91ci1kb21haW4zLmNvbQ.fUvflu-YIdZSsLEH8"
"Qqmr7ksE4ag_NYiiMXK0s6aq_4",
result.message['originKeys']
['https://www.your-domain3.com'])
self.assertEqual("pub.v2.7814286629520534.aHR0cHM6Ly93d3cue"
"W91ci1kb21haW4yLmNvbQ.EP6eXBJKk0t7-QIUl6e_"
"b1qMuMHGepxG_SlUqxAYrfY",
result.message['originKeys']
['https://www.your-domain2.com'])
def test_checkout_utility_api_url_custom(self):
url = self.ady.client._determine_api_url("test", "checkout", "/originKeys")
self.assertEqual(url, "https://checkout-test.adyen.com/{}/originKeys".format(self.checkout_version))
    def test_apple_pay_session(self):
request = {
"displayName": "YOUR_MERCHANT_NAME",
"domainName": "YOUR_DOMAIN_NAME",
"merchantIdentifier": "YOUR_MERCHANT_ID"
}
self.ady.client = self.test.create_client_from_file(200, request, "test/mocks/"
"checkoututility/"
"applepay-sessions"
"-success.json")
result = self.ady.checkout.utility_api.get_apple_pay_session(request)
self.assertEqual("BASE_64_ENCODED_DATA", result.message['data']) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/CheckoutUtilityTest.py | CheckoutUtilityTest.py |
import Adyen
import unittest
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestPayments(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.username = "YourWSUser"
client.password = "YourWSPassword"
client.platform = "test"
client.xapikey = ""
def test_authorise_success_mocked(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "737",
"holderName": "John Doe"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-success'
'.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("Authorised", result.message['resultCode'])
self.assertEqual("8/2018",
result.message['additionalData']['expiryDate'])
self.assertEqual("411111",
result.message['additionalData']['cardBin'])
self.assertEqual("1111",
result.message['additionalData']['cardSummary'])
self.assertEqual("Holder",
result.message['additionalData']['cardHolderName'])
self.assertEqual("true",
result.message['additionalData']['threeDOffered'])
self.assertEqual("false",
result.message['additionalData']
['threeDAuthenticated'])
self.assertEqual("69746", result.message['authCode'])
self.assertEqual(11, len(result.message['fraudResult']['results']))
fraud_checks = result.message['fraudResult']['results']
fraud_check_result = fraud_checks[0]['FraudCheckResult']
self.assertEqual("CardChunkUsage", fraud_check_result['name'])
self.assertEqual(8, fraud_check_result['accountScore'])
self.assertEqual(2, fraud_check_result['checkId'])
def test_authorise_error010_mocked(self):
request = {}
request['merchantAccount'] = "testaccount"
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "737",
"holderName": "John Doe"
}
self.adyen.client = self.test.create_client_from_file(403, request,
'test/mocks/'
'authorise-error'
'-010'
'.json')
self.assertRaises(Adyen.AdyenAPIInvalidPermission,
self.adyen.payment.general_api.authorise, request)
def test_authorise_error_cvc_declined_mocked(self):
request = {}
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "787",
"holderName": "John Doe"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-error-'
'cvc-declined'
'.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("Refused", result.message['resultCode'])
def test_authorise_success_3d_mocked(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "787",
"holderName": "John Doe"
}
request['browserInfo'] = {
"userAgent": "YourUserAgent",
"acceptHeader": "YourAcceptHeader"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-success'
'-3d.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("RedirectShopper", result.message['resultCode'])
self.assertIsNotNone(result.message['md'])
self.assertIsNotNone(result.message['issuerUrl'])
self.assertIsNotNone(result.message['paRequest'])
def test_authorise_3d_success_mocked(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['md'] = "testMD"
request['paResponse'] = "paresponsetest"
request['browserInfo'] = {
"userAgent": "YourUserAgent",
"acceptHeader": "YourAcceptHeader"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise3d-'
'success.json')
result = self.adyen.payment.general_api.authorise3d(request)
self.assertEqual("Authorised", result.message['resultCode'])
self.assertIsNotNone(result.message['pspReference'])
def test_authorise_cse_success_mocked(self):
request = {}
request['amount'] = {"value": "1234", "currency": "EUR"}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['additionalData'] = {
"card.encrypted.json": "YourCSEToken"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-success'
'-cse.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("Authorised", result.message['resultCode'])
def test_authorise_cse_error_expired_mocked(self):
request = {}
request['amount'] = {"value": "1234", "currency": "EUR"}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['additionalData'] = {
"card.encrypted.json": "YourCSEToken"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-error-'
'expired.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("Refused", result.message['resultCode'])
self.assertEqual("DECLINED Expiry Incorrect",
result.message['additionalData']['refusalReasonRaw'])
def test_error_401_mocked(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "787",
"holderName": "John Doe"
}
self.adyen.client = self.test.create_client_from_file(401, request,
'test/mocks/'
'authorise'
'-error-'
'010.json')
self.assertRaisesRegex(Adyen.AdyenAPIAuthenticationError,
"AdyenAPIAuthenticationError:{'status': 403, 'errorCode': '010',"
" 'message': 'Not allowed', 'errorType': 'security',"
" 'pspReference': '8514836072314693'}",
self.adyen.payment.general_api.authorise, request)
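# The same payment scenarios as above, this time with an X-API-key configured
# so the requests authenticate via API key rather than basic auth.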
class TestPaymentsWithXapiKey(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.platform = "test"
client.username = "YourWSUser"
client.password = "YourWSPassword"
client.xapikey = ""
def test_authorise_success_mocked(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "737",
"holderName": "John Doe"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-success'
'.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("Authorised", result.message['resultCode'])
self.assertEqual("8/2018",
result.message['additionalData']['expiryDate'])
self.assertEqual("411111",
result.message['additionalData']['cardBin'])
self.assertEqual("1111",
result.message['additionalData']['cardSummary'])
self.assertEqual("Holder",
result.message['additionalData']['cardHolderName'])
self.assertEqual("true",
result.message['additionalData']['threeDOffered'])
self.assertEqual("false",
result.message['additionalData']
['threeDAuthenticated'])
self.assertEqual("69746", result.message['authCode'])
self.assertEqual(11, len(result.message['fraudResult']['results']))
fraud_checks = result.message['fraudResult']['results']
fraud_check_result = fraud_checks[0]['FraudCheckResult']
self.assertEqual("CardChunkUsage", fraud_check_result['name'])
self.assertEqual(8, fraud_check_result['accountScore'])
self.assertEqual(2, fraud_check_result['checkId'])
def test_authorise_error010_mocked(self):
request = {}
request['merchantAccount'] = "testaccount"
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "737",
"holderName": "John Doe"
}
self.adyen.client = self.test.create_client_from_file(403, request,
'test/mocks/'
'authorise-error'
'-010'
'.json')
self.assertRaises(Adyen.AdyenAPIInvalidPermission,
self.adyen.payment.general_api.authorise, request)
def test_authorise_error_cvc_declined_mocked(self):
request = {}
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "787",
"holderName": "John Doe"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-error-'
'cvc-declined'
'.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("Refused", result.message['resultCode'])
def test_authorise_success_3d_mocked(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "787",
"holderName": "John Doe"
}
request['browserInfo'] = {
"userAgent": "YourUserAgent",
"acceptHeader": "YourAcceptHeader"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-success'
'-3d.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("RedirectShopper", result.message['resultCode'])
self.assertIsNotNone(result.message['md'])
self.assertIsNotNone(result.message['issuerUrl'])
self.assertIsNotNone(result.message['paRequest'])
def test_authorise_3d_success_mocked(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['md'] = "testMD"
request['paResponse'] = "paresponsetest"
request['browserInfo'] = {
"userAgent": "YourUserAgent",
"acceptHeader": "YourAcceptHeader"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise3d-'
'success.json')
result = self.adyen.payment.general_api.authorise3d(request)
self.assertEqual("Authorised", result.message['resultCode'])
self.assertIsNotNone(result.message['pspReference'])
def test_authorise_cse_success_mocked(self):
request = {}
request['amount'] = {"value": "1234", "currency": "EUR"}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['additionalData'] = {
"card.encrypted.json": "YourCSEToken"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-success'
'-cse.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("Authorised", result.message['resultCode'])
def test_authorise_cse_error_expired_mocked(self):
request = {}
request['amount'] = {"value": "1234", "currency": "EUR"}
request['merchantAccount'] = "YourMerchantAccount"
request['reference'] = "YourReference"
request['additionalData'] = {
"card.encrypted.json": "YourCSEToken"
}
self.adyen.client = self.test.create_client_from_file(200, request,
'test/mocks/'
'authorise'
'-error-'
'expired.json')
result = self.adyen.payment.general_api.authorise(request)
self.assertEqual("Refused", result.message['resultCode'])
self.assertEqual("DECLINED Expiry Incorrect",
result.message['additionalData']['refusalReasonRaw'])
def test_error_401_mocked(self):
request = {}
request['merchantAccount'] = "YourMerchantAccount"
request['amount'] = {"value": "100000", "currency": "EUR"}
request['reference'] = "123456"
request['card'] = {
"number": "5136333333333335",
"expiryMonth": "08",
"expiryYear": "2018",
"cvc": "787",
"holderName": "John Doe"
}
self.adyen.client = self.test.create_client_from_file(401, request,
'test/mocks/'
'authorise'
'-error-'
'010.json')
self.assertRaisesRegex(Adyen.AdyenAPIAuthenticationError,
"AdyenAPIAuthenticationError:{'status': 403, 'errorCode': '010',"
" 'message': 'Not allowed', 'errorType': 'security',"
" 'pspReference': '8514836072314693'}",
self.adyen.payment.general_api.authorise, request) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/PaymentTest.py | PaymentTest.py |
import unittest
import Adyen
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
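# Request payload shared by the cost-estimate tests below.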
REQUEST_KWARGS = {
'merchantAccount': 'YourMerchantAccount',
'amount': '1000'
}
class TestBinLookup(unittest.TestCase):
ady = Adyen.Adyen()
client = ady.client
test = BaseTest(ady)
client.username = "YourWSUser"
client.password = "YourWSPassword"
client.platform = "test"
binLookup_version = settings.API_BIN_LOOKUP_VERSION
def test_get_cost_estimate_success(self):
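        # Clear any requests recorded on the mocked client by earlier runs.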
self.ady.client.http_client.request.reset_mock()
expected = {
'cardBin': {
'bin': '458012',
'commercial': False,
'fundingSource': 'CHARGE',
'fundsAvailability': 'N',
'issuingBank': 'Bank Of America',
'issuingCountry': 'US', 'issuingCurrency': 'USD',
'paymentMethod': 'Y',
'summary': '6789'
},
'costEstimateAmount': {
'currency': 'USD', 'value': 1234
},
'resultCode': 'Success',
'surchargeType': 'PASSTHROUGH'
}
self.ady.client = self.test.create_client_from_file(
status=200,
request=REQUEST_KWARGS,
filename='test/mocks/binlookup/getcostestimate-success.json'
)
result = self.ady.binlookup.get_cost_estimate(REQUEST_KWARGS)
self.assertEqual(expected, result.message)
self.ady.client.http_client.request.assert_called_once_with(
'POST',
'https://pal-test.adyen.com/pal/servlet/'
f'BinLookup/{self.binLookup_version}/getCostEstimate',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json={
'merchantAccount': 'YourMerchantAccount',
'amount': '1000',
},
password='YourWSPassword',
username='YourWSUser'
)
def test_get_cost_estimate_error_mocked(self):
self.ady.client = self.test.create_client_from_file(
status=200,
request=REQUEST_KWARGS,
filename=(
"test/mocks/binlookup/"
"getcostestimate-error-invalid-data-422.json"
)
)
result = self.ady.binlookup.get_cost_estimate(REQUEST_KWARGS)
self.assertEqual(422, result.message['status'])
self.assertEqual("101", result.message['errorCode'])
self.assertEqual("Invalid card number", result.message['message'])
self.assertEqual("validation", result.message['errorType'])
TestBinLookup.client.http_force = "requests"
suite = unittest.TestLoader().loadTestsFromTestCase(TestBinLookup)
unittest.TextTestRunner(verbosity=2).run(suite)
TestBinLookup.client.http_force = "pycurl"
TestBinLookup.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestBinLookup)
unittest.TextTestRunner(verbosity=2).run(suite)
TestBinLookup.client.http_force = "other"
TestBinLookup.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestBinLookup)
unittest.TextTestRunner(verbosity=2).run(suite) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/BinLookupTest.py | BinLookupTest.py |
import unittest
import Adyen
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestTerminal(unittest.TestCase):
adyen = Adyen.Adyen(username="YourWSUser",
password="YourWSPassword",
platform="test",
xapikey="YourXapikey")
test = BaseTest(adyen)
client = adyen.client
terminal_version = settings.API_TERMINAL_VERSION
def test_assign_terminals(self):
request = {
"companyAccount": "YOUR_COMPANY_ACCOUNT",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE",
"terminals": [
"P400Plus-275479597"
]
}
self.test.create_client_from_file(
200, request, "test/mocks/terminal/assignTerminals.json"
)
result = self.adyen.terminal.assign_terminals(request=request)
self.assertIn("P400Plus-275479597", result.message["results"])
self.client.http_client.request.assert_called_once_with(
"POST",
f"https://postfmapi-test.adyen.com/postfmapi/terminal/{self.terminal_version}/assignTerminals",
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json={
"companyAccount": "YOUR_COMPANY_ACCOUNT",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE",
"terminals": [
"P400Plus-275479597"
]
},
xapikey="YourXapikey"
)
def test_assign_terminals_422(self):
request = {
"companyAccount": "YOUR_COMPANY_ACCOUNT",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE",
"terminals": [
"P400Plus-123456789"
]
}
self.test.create_client_from_file(
200, request, "test/mocks/terminal/assignTerminals-422.json"
)
result = self.adyen.terminal.assign_terminals(request=request)
self.assertEqual(422, result.message["status"])
self.assertEqual("000", result.message["errorCode"])
self.assertEqual("Terminals not found: P400Plus-123456789", result.message["message"])
self.assertEqual("validation", result.message["errorType"])
def test_find_terminal(self):
request = {
"terminal": "P400Plus-275479597",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT"
}
self.test.create_client_from_file(
200, request, "test/mocks/terminal/findTerminal.json"
)
result = self.adyen.terminal.find_terminal(request=request)
self.assertIn("P400Plus-275479597", result.message["terminal"])
self.client.http_client.request.assert_called_once_with(
"POST",
f"https://postfmapi-test.adyen.com/postfmapi/terminal/{self.terminal_version}/findTerminal",
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json={
"terminal": "P400Plus-275479597",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
},
xapikey="YourXapikey"
)
def test_find_terminal_422(self):
request = {
"terminal": "P400Plus-123456789"
}
self.test.create_client_from_file(
200, request, "test/mocks/terminal/findTerminal-422.json"
)
result = self.adyen.terminal.find_terminal(request=request)
self.assertEqual(422, result.message["status"])
self.assertEqual("000", result.message["errorCode"])
self.assertEqual("Terminal not found", result.message["message"])
self.assertEqual("validation", result.message["errorType"])
def test_get_stores_under_account(self):
request = {
"companyAccount": "YOUR_COMPANY_ACCOUNT",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT"
}
self.test.create_client_from_file(
200, request, "test/mocks/terminal/getStoresUnderAccount.json"
)
result = self.adyen.terminal.get_stores_under_account(request=request)
self.assertEqual(result.message["stores"], [
{
"store": "YOUR_STORE",
"description": "YOUR_STORE",
"address": {
"city": "The City",
"countryCode": "NL",
"postalCode": "1234",
"streetAddress": "The Street"
},
"status": "Active",
"merchantAccountCode": "YOUR_MERCHANT_ACCOUNT"
}
])
self.client.http_client.request.assert_called_once_with(
"POST",
f"https://postfmapi-test.adyen.com/postfmapi/terminal/{self.terminal_version}/getStoresUnderAccount",
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json={
"companyAccount": "YOUR_COMPANY_ACCOUNT",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
},
xapikey="YourXapikey"
)
def test_get_terminal_details(self):
request = {
"terminal": "P400Plus-275479597",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
}
self.test.create_client_from_file(
200, request, "test/mocks/terminal/getTerminalDetails.json"
)
result = self.adyen.terminal.get_terminal_details(request=request)
self.assertEqual(result.message["deviceModel"], "P400Plus")
self.assertEqual(result.message["terminal"], "P400Plus-275479597")
self.client.http_client.request.assert_called_once_with(
"POST",
f"https://postfmapi-test.adyen.com/postfmapi/terminal/{self.terminal_version}/getTerminalDetails",
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json={
"terminal": "P400Plus-275479597",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
},
xapikey="YourXapikey"
)
def test_get_terminal_details_422(self):
request = {
"terminal": "P400Plus-123456789"
}
self.test.create_client_from_file(
200, request, "test/mocks/terminal/getTerminalDetails-422.json"
)
result = self.adyen.terminal.get_terminal_details(request=request)
self.assertEqual(422, result.message["status"])
self.assertEqual("000", result.message["errorCode"])
self.assertEqual("Terminal not found", result.message["message"])
self.assertEqual("validation", result.message["errorType"])
def test_get_terminals_under_account(self):
request = {
"companyAccount": "YOUR_COMPANY_ACCOUNT",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT"
}
self.test.create_client_from_file(
200, request, "test/mocks/terminal/getTerminalsUnderAccount.json"
)
result = self.adyen.terminal.get_terminals_under_account(request=request)
self.assertEqual(result.message["merchantAccounts"], [
{
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"inStoreTerminals": [
"P400Plus-275479597"
],
"stores": [
{
"store": "YOUR_STORE",
"inStoreTerminals": [
"M400-401972715"
]
}
]
}
])
self.client.http_client.request.assert_called_once_with(
"POST",
f"https://postfmapi-test.adyen.com/postfmapi/terminal/{self.terminal_version}/getTerminalsUnderAccount",
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json={
"companyAccount": "YOUR_COMPANY_ACCOUNT",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
},
xapikey="YourXapikey"
)
def test_get_terminals_under_account_store(self):
request = {
"companyAccount": "YOUR_COMPANY_ACCOUNT",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE"
}
self.test.create_client_from_file(
200, request, "test/mocks/terminal/getTerminalsUnderAccount-store.json"
)
result = self.adyen.terminal.get_terminals_under_account(request=request)
self.assertEqual(result.message["merchantAccounts"], [
{
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"stores": [
{
"store": "YOUR_STORE",
"inStoreTerminals": [
"M400-401972715"
]
}
]
}
])
self.client.http_client.request.assert_called_once_with(
"POST",
f"https://postfmapi-test.adyen.com/postfmapi/terminal/{self.terminal_version}/getTerminalsUnderAccount",
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json={
"companyAccount": "YOUR_COMPANY_ACCOUNT",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE",
},
xapikey="YourXapikey"
)
TestTerminal.client.http_force = "requests"
suite = unittest.TestLoader().loadTestsFromTestCase(TestTerminal)
unittest.TextTestRunner(verbosity=2).run(suite)
TestTerminal.client.http_force = "pycurl"
TestTerminal.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestTerminal)
unittest.TextTestRunner(verbosity=2).run(suite)
TestTerminal.client.http_force = "other"
TestTerminal.client.http_init = False
suite = unittest.TestLoader().loadTestsFromTestCase(TestTerminal)
unittest.TextTestRunner(verbosity=2).run(suite) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/TerminalTest.py | TerminalTest.py |
import Adyen
import unittest
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestManagement(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.xapikey = "YourXapikey"
client.platform = "test"
stored_value_version = settings.API_STORED_VALUE_VERSION
def issue(self):
request = {
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE_ID",
"paymentMethod": {
"type": "valuelink"
},
"giftCardPromoCode": "1324",
"reference": "YOUR_REFERENCE"
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/storedValue/issue-giftcard.json")
result = self.adyen.storedValue.issue(request)
self.assertEqual(result.message['paymentMethod']['type'], 'givex')
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/StoredValue/{self.stored_value_version}/issue',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_activate_giftcard(self):
request = {
"status": "active",
"amount": {
"currency": "USD",
"value": 1000
},
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE_ID",
"paymentMethod": {
"type": "svs",
"number": "6006491286999921374",
"securityCode": "1111"
},
"reference": "YOUR_REFERENCE"
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/storedValue/activate-giftcards.json")
result = self.adyen.storedValue.change_status(request)
self.assertEqual(result.message['currentBalance']['value'], 1000)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/StoredValue/{self.stored_value_version}/changeStatus',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_load_funds(self):
request = {
"amount": {
"currency": "USD",
"value": 2000
},
"loadType": "merchandiseReturn",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE_ID",
"paymentMethod": {
"type": "svs",
"number": "6006491286999921374",
"securityCode": "1111"
},
"reference": "YOUR_REFERENCE"
}
self.adyen.client = self.test.create_client_from_file(200, request, "test/mocks/storedValue/load-funds.json")
result = self.adyen.storedValue.load(request)
self.assertEqual(result.message['resultCode'], 'Success')
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/StoredValue/{self.stored_value_version}/load',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_check_balance(self):
request = {
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE_ID",
"paymentMethod": {
"type": "svs",
"number": "603628672882001915092",
"securityCode": "5754"
},
"reference": "YOUR_REFERENCE"
}
self.adyen.client = self.test.create_client_from_file(200, request, "test/mocks/storedValue/check-balance.json")
result = self.adyen.storedValue.check_balance(request)
self.assertEqual(result.message['currentBalance']['value'], 5600)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/StoredValue/{self.stored_value_version}/checkBalance',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_merge_balance(self):
request = {
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"store": "YOUR_STORE_ID",
"sourcePaymentMethod": {
"number": "7777182708544835",
"securityCode": "2329"
},
"paymentMethod": {
"type": "valuelink",
"number": "8888182708544836",
"securityCode": "2330"
},
"reference": "YOUR_REFERENCE"
}
self.adyen.client = self.test.create_client_from_file(200, request, "test/mocks/storedValue/merge-balance.json")
result = self.adyen.storedValue.merge_balance(request)
self.assertEqual(result.message['pspReference'], "881564657480267D")
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/StoredValue/{self.stored_value_version}/mergeBalance',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
)
def test_void_transaction(self):
request = {
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"originalReference": "851564654294247B",
"reference": "YOUR_REFERENCE"
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/storedValue/undo-transaction.json")
result = self.adyen.storedValue.void_transaction(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://pal-test.adyen.com/pal/servlet/StoredValue/{self.stored_value_version}/voidTransaction',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json=request,
xapikey="YourXapikey"
) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/StoredValueTest.py | StoredValueTest.py |
import unittest
from json import load
import Adyen
from Adyen.util import (
generate_notification_sig,
is_valid_hmac_notification,
get_query
)
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class UtilTest(unittest.TestCase):
ady = Adyen.Adyen()
client = ady.client
def test_notification_request_item_hmac(self):
request = {
"pspReference": "7914073381342284",
"merchantReference": "TestPayment-1407325143704",
"merchantAccountCode": "TestMerchant",
"amount": {
"currency": "EUR",
"value": 1130
},
"eventCode": "AUTHORISATION",
"success": "true",
"eventDate": "2019-05-06T17:15:34.121+02:00",
"operations": [
"CANCEL",
"CAPTURE",
"REFUND"
],
"paymentMethod": "visa",
}
key = "44782DEF547AAA06C910C43932B1EB0C" \
"71FC68D9D0C057550C48EC2ACF6BA056"
hmac_calculation = generate_notification_sig(request, key)
hmac_calculation_str = hmac_calculation.decode("utf-8")
expected_hmac = "coqCmt/IZ4E3CzPvMY8zTjQVL5hYJUiBRg8UU+iCWo0="
self.assertTrue(hmac_calculation_str != "")
self.assertEqual(hmac_calculation_str, expected_hmac)
request['additionalData'] = {'hmacSignature': hmac_calculation_str}
hmac_validate = is_valid_hmac_notification(request, key)
self.assertIn('additionalData', request)
self.assertDictEqual(request['additionalData'],
{'hmacSignature': hmac_calculation_str})
self.assertTrue(hmac_validate)
def test_webhooks_with_slashes(self):
hmac_key = "74F490DD33F7327BAECC88B2947C011FC02D014A473AAA33A8EC93E4DC069174"
with open('test/mocks/util/backslash_notification.json') as file:
backslash_notification = load(file)
self.assertTrue(is_valid_hmac_notification(backslash_notification, hmac_key))
with open('test/mocks/util/colon_notification.json') as file:
colon_notification = load(file)
self.assertTrue(is_valid_hmac_notification(colon_notification, hmac_key))
with open('test/mocks/util/forwardslash_notification.json') as file:
forwardslash_notification = load(file)
self.assertTrue(is_valid_hmac_notification(forwardslash_notification, hmac_key))
with open('test/mocks/util/mixed_notification.json') as file:
mixed_notification = load(file)
self.assertTrue(is_valid_hmac_notification(mixed_notification, hmac_key))
def test_query_string_creation(self):
query_parameters = {
"pageSize":7,
"pageNumber":3
}
query_string = get_query(query_parameters)
self.assertEqual(query_string,'?pageSize=7&pageNumber=3')
def test_passing_xapikey_in_method(self):
request = {'merchantAccount': "YourMerchantAccount"}
self.test = BaseTest(self.ady)
self.client.platform = "test"
self.ady.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentmethods"
"-success.json")
result = self.ady.checkout.payments_api.payment_methods(request, xapikey="YourXapikey")
self.assertEqual("AliPay", result.message['paymentMethods'][0]['name'])
self.assertEqual("Credit Card",
result.message['paymentMethods'][2]['name'])
self.assertEqual("Credit Card via AsiaPay",
result.message['paymentMethods'][3]['name']) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/UtilTest.py | UtilTest.py |
import Adyen
import unittest
from Adyen import settings
try:
from BaseTest import BaseTest
except ImportError:
from .BaseTest import BaseTest
class TestCheckout(unittest.TestCase):
adyen = Adyen.Adyen()
client = adyen.client
test = BaseTest(adyen)
client.xapikey = "YourXapikey"
client.platform = "test"
checkout_version = settings.API_CHECKOUT_VERSION
lib_version = settings.LIB_VERSION
def test_payment_methods_success_mocked(self):
request = {'merchantAccount': "YourMerchantAccount"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentmethods"
"-success.json")
result = self.adyen.checkout.payments_api.payment_methods(request)
self.assertEqual("AliPay", result.message['paymentMethods'][0]['name'])
self.assertEqual("Credit Card",
result.message['paymentMethods'][2]['name'])
self.assertEqual("Credit Card via AsiaPay",
result.message['paymentMethods'][3]['name'])
def test_payment_methods_error_mocked(self):
request = {'merchantAccount': "YourMerchantAccount"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentmethods-"
"error-forbidden"
"-403.json")
result = self.adyen.checkout.payments_api.payment_methods(request)
self.assertEqual(403, result.message['status'])
self.assertEqual("901", result.message['errorCode'])
self.assertEqual("Invalid Merchant Account", result.message['message'])
self.assertEqual("security", result.message['errorType'])
def test_payments_success_mocked(self):
request = {'amount': {"value": "100000", "currency": "EUR"},
'reference': "123456", 'paymentMethod': {
"type": "scheme",
"number": "4111111111111111",
"expiryMonth": "08",
"expiryYear": "2018",
"holderName": "John Smith",
"cvc": "737"
}, 'merchantAccount': "YourMerchantAccount",
'returnUrl': "https://your-company.com/..."}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"payments"
"-success"
".json")
result = self.adyen.checkout.payments_api.payments(request)
self.assertEqual("8535296650153317", result.message['pspReference'])
self.assertEqual("Authorised", result.message['resultCode'])
self.assertEqual("8/2018",
result.message["additionalData"]['expiryDate'])
self.assertEqual("GREEN",
result.message["additionalData"]['fraudResultType'])
def test_payments_error_mocked(self):
request = {'amount': {"value": "100000", "currency": "EUR"},
'reference': "54431", 'paymentMethod': {
"type": "scheme",
"number": "4111111111111111",
"expiryMonth": "08",
"expiryYear": "2018",
"holderName": "John Smith",
"cvc": "737"
}, 'merchantAccount': "YourMerchantAccount",
'returnUrl': "https://your-company.com/..."}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"payments-error"
"-invalid"
"-data-422"
".json")
result = self.adyen.checkout.payments_api.payments(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
'https://checkout-test.adyen.com/{}/payments'.format(self.checkout_version),
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json={
'returnUrl': 'https://your-company.com/...',
'reference': '54431',
'merchantAccount': 'YourMerchantAccount',
'amount': {'currency': 'EUR', 'value': '100000'},
'paymentMethod': {
'expiryYear': '2018',
'holderName': 'John Smith',
'number': '4111111111111111',
'expiryMonth': '08',
'type': 'scheme',
'cvc': '737'
}
},
xapikey='YourXapikey'
)
self.assertEqual(422, result.message['status'])
self.assertEqual("130", result.message['errorCode'])
self.assertEqual("Reference Missing", result.message['message'])
self.assertEqual("validation", result.message['errorType'])
def test_payments_details_success_mocked(self):
request = {'paymentData': "Hee57361f99....", 'details': {
"MD": "sdfsdfsdf...",
"PaRes": "sdkfhskdjfsdf..."
}}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentsdetails"
"-success.json")
result = self.adyen.checkout.payments_api.payments_details(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
u'https://checkout-test.adyen.com/{}/payments/details'.format(self.checkout_version),
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
json={
'paymentData': 'Hee57361f99....',
'details': {'MD': 'sdfsdfsdf...', 'PaRes': 'sdkfhskdjfsdf...'}
},
xapikey='YourXapikey'
)
self.assertEqual("8515232733321252", result.message['pspReference'])
self.assertEqual("Authorised", result.message['resultCode'])
self.assertEqual("true",
result.message['additionalData']['liabilityShift'])
self.assertEqual("AUTHORISED",
result.message['additionalData']['refusalReasonRaw'])
def test_payments_details_error_mocked(self):
request = {'paymentData': "Hee57361f99....", 'details': {
"MD": "sdfsdfsdf...",
"PaRes": "sdkfhskdjfsdf..."
}}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentsdetails"
"-error-invalid-"
"data-422.json")
result = self.adyen.checkout.payments_api.payments_details(request)
self.assertEqual(422, result.message['status'])
self.assertEqual("101", result.message['errorCode'])
self.assertEqual("Invalid card number", result.message['message'])
self.assertEqual("validation", result.message['errorType'])
def test_payments_session_success_mocked(self):
request = {"reference": "Your order number",
"shopperReference": "yourShopperId_IOfW3k9G2PvXFu2j",
"channel": "iOS",
"token": "TOKEN_YOU_GET_FROM_CHECKOUT_SDK",
"returnUrl": "app://", "countryCode": "NL",
"shopperLocale": "nl_NL",
"sessionValidity": "2017-04-06T13:09:13Z",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
'amount': {"value": "17408", "currency": "EUR"}}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentsession"
"-success.json")
result = self.adyen.checkout.classic_checkout_sdk_api.payment_session(request)
self.assertIsNotNone(result.message['paymentSession'])
def test_payments_session_error_mocked(self):
request = {"reference": "Your wro order number",
"shopperReference": "yourShopperId_IOfW3k9G2PvXFu2j",
"channel": "iOS",
"token": "WRONG_TOKEN",
"returnUrl": "app://", "countryCode": "NL",
"shopperLocale": "nl_NL",
"sessionValidity": "2017-04-06T13:09:13Z",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
'amount': {"value": "17408", "currency": "EUR"}}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentsession"
"-error-invalid-"
"data-422.json")
result = self.adyen.checkout.classic_checkout_sdk_api.payment_session(request)
self.assertEqual(422, result.message['status'])
self.assertEqual("14_012", result.message['errorCode'])
self.assertEqual("The provided SDK token could not be parsed.",
result.message['message'])
self.assertEqual("validation", result.message['errorType'])
def test_payments_result_success_mocked(self):
request = {"payload": "VALUE_YOU_GET_FROM_CHECKOUT_SDK"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentsresult"
"-success.json")
result = self.adyen.checkout.classic_checkout_sdk_api.verify_payment_result(request)
self.assertEqual("8535253563623704", result.message['pspReference'])
self.assertEqual("Authorised", result.message['resultCode'])
def test_payments_result_error_mocked(self):
request = {"payload": "VALUE_YOU_GET_FROM_CHECKOUT_SDK"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentsresult"
"-error-invalid-"
"data-payload-"
"422.json")
result = self.adyen.checkout.classic_checkout_sdk_api.verify_payment_result(request)
self.assertEqual(422, result.message['status'])
self.assertEqual("14_018", result.message['errorCode'])
self.assertEqual("Invalid payload provided", result.message['message'])
self.assertEqual("validation", result.message['errorType'])
def test_payments_cancels_without_reference(self):
requests = {
"paymentReference": "Payment123",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"reference": "YourCancelReference",
}
self.adyen.client = self.test.create_client_from_file(200, requests,
"test/mocks/"
"checkout/"
"paymentscancel-"
"withoutreference-succes.json")
results = self.adyen.checkout.modifications_api.cancel_authorised_payment(requests)
self.assertIsNotNone(results.message['paymentReference'])
self.assertEqual("8412534564722331", results.message['pspReference'])
self.assertEqual("received", results.message['status'])
def test_payments_cancels_without_reference_error_mocked(self):
requests = {
"paymentReference": "Payment123",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"reference": "YourCancelReference",
}
self.adyen.client = self.test.create_client_from_file(200, requests,
"test/mocks/"
"checkout/"
"paymentsresult"
"-error-invalid-"
"data-payload-"
"422.json")
result = self.adyen.checkout.modifications_api.cancel_authorised_payment(requests)
self.assertEqual(422, result.message['status'])
self.assertEqual("14_018", result.message['errorCode'])
self.assertEqual("Invalid payload provided", result.message['message'])
self.assertEqual("validation", result.message['errorType'])
def test_payments_cancels_success_mocked(self):
requests = {"reference": "Your wro order number", "merchantAccount": "YOUR_MERCHANT_ACCOUNT"}
reference_id = "8836183819713023"
self.adyen.client = self.test.create_client_from_file(200, requests,
"test/mocks/"
"checkout/"
"paymentscancels"
"-success.json")
result = self.adyen.checkout.modifications_api.refund_or_cancel_payment(requests, reference_id)
self.assertEqual(reference_id, result.message["paymentPspReference"])
self.assertEqual("received", result.message['status'])
def test_payments_cancels_error_mocked(self):
request = {"reference": "Your wro order number"}
psp_reference = "8836183819713023"
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentsresult-error-invalid-"
"data-payload-422.json")
result = self.adyen.checkout.modifications_api.refund_or_cancel_payment(request, psp_reference)
self.assertEqual(422, result.message['status'])
self.assertEqual("14_018", result.message['errorCode'])
self.assertEqual("Invalid payload provided", result.message['message'])
self.assertEqual("validation", result.message['errorType'])
def test_payments_refunds_success_mocked(self):
requests = {
"paymentReference": "Payment123",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"reference": "YourCancelReference",
}
psp_reference = "Payment123"
self.adyen.client = self.test.create_client_from_file(200, requests,
"test/mocks/"
"checkout/"
"paymentscancel-"
"withoutreference-succes.json")
result = self.adyen.checkout.modifications_api.refund_captured_payment(requests,psp_reference)
self.assertEqual(psp_reference, result.message["paymentReference"])
self.assertIsNotNone(result.message["pspReference"])
self.assertEqual("received", result.message['status'])
def test_payments_refunds_error_mocked(self):
requests = {
"paymentReference": "Payment123",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"reference": "YourCancelReference",
}
reference_id = "Payment123"
self.adyen.client = self.test.create_client_from_file(200, requests,
"test/mocks/"
"checkout/"
"paymentsresult-error-invalid-"
"data-payload-422.json")
result = self.adyen.checkout.modifications_api.refund_captured_payment(requests, reference_id)
self.assertEqual(422, result.message['status'])
self.assertEqual("14_018", result.message['errorCode'])
self.assertEqual("Invalid payload provided", result.message['message'])
self.assertEqual("validation", result.message['errorType'])
def test_reversals_success_mocked(self):
requests = {
"reference": "YourReversalReference",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT"
}
psp_reference = "8836183819713023"
self.adyen.client = self.test.create_client_from_file(200, requests,
"test/mocks/"
"checkout/"
"paymentsreversals-"
"success.json")
result = self.adyen.checkout.modifications_api.refund_or_cancel_payment(requests, psp_reference)
self.assertEqual(psp_reference, result.message["paymentPspReference"])
self.assertIsNotNone(result.message["pspReference"])
self.assertEqual("received", result.message['status'])
def test_payments_reversals_failure_mocked(self):
requests = {
"reference": "YourReversalReference",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT"
}
psp_reference = "8836183819713023"
self.adyen.client = self.test.create_client_from_file(200, requests,
"test/mocks/"
"checkout/"
"paymentsresult-error-invalid-"
"data-payload-422.json")
result = self.adyen.checkout.modifications_api.refund_or_cancel_payment(requests,psp_reference)
self.assertEqual(422, result.message['status'])
self.assertEqual("14_018", result.message['errorCode'])
self.assertEqual("Invalid payload provided", result.message['message'])
self.assertEqual("validation", result.message['errorType'])
def test_payments_capture_success_mocked(self):
request = {
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"amount": {
"value": 2500,
"currency": "EUR"
},
"reference": "YOUR_UNIQUE_REFERENCE"
}
psp_reference = "8536214160615591"
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentcapture-"
"success.json")
result = self.adyen.checkout.modifications_api.capture_authorised_payment(request, psp_reference)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://checkout-test.adyen.com/{self.checkout_version}/payments/{psp_reference}/captures',
json=request,
xapikey='YourXapikey',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
)
self.assertEqual(psp_reference, result.message["paymentPspReference"])
self.assertIsNotNone(result.message["pspReference"])
self.assertEqual("received", result.message['status'])
self.assertEqual(2500, result.message['amount']['value'])
def test_payments_capture_error_mocked(self):
request = {
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"amount": {
"value": 2500,
"currency": "EUR"
},
"reference": "YOUR_UNIQUE_REFERENCE"
}
psp_reference = "8536214160615591"
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentsresult-error-invalid-"
"data-payload-422.json")
result = self.adyen.checkout.modifications_api.capture_authorised_payment(request, psp_reference)
self.assertEqual(422, result.message['status'])
self.assertEqual("14_018", result.message['errorCode'])
self.assertEqual("Invalid payload provided", result.message['message'])
self.assertEqual("validation", result.message['errorType'])
def test_orders_success(self):
request = {'merchantAccount': "YourMerchantAccount"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"orders"
"-success.json")
result = self.adyen.checkout.orders_api.orders(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://checkout-test.adyen.com/{self.checkout_version}/orders',
json=request,
xapikey='YourXapikey',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
)
self.assertEqual("8515930288670953", result.message['pspReference'])
self.assertEqual("Success", result.message['resultCode'])
self.assertEqual("order reference", result.message['reference'])
self.assertEqual("EUR", result.message['remainingAmount']["currency"])
self.assertEqual(2500, result.message['remainingAmount']['value'])
def test_orders_cancel_success(self):
request = {'merchantAccount': "YourMerchantAccount"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"orders-cancel"
"-success.json")
result = self.adyen.checkout.orders_api.cancel_order(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://checkout-test.adyen.com/{self.checkout_version}/orders/cancel',
json=request,
xapikey='YourXapikey',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
)
self.assertEqual("8515931182066678", result.message['pspReference'])
self.assertEqual("Received", result.message['resultCode'])
def test_paymentmethods_balance_success(self):
request = {'merchantAccount': "YourMerchantAccount"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentmethods"
"-balance"
"-success.json")
result = self.adyen.checkout.orders_api.get_balance_of_gift_card(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://checkout-test.adyen.com/{self.checkout_version}/paymentMethods/balance',
json=request,
xapikey='YourXapikey',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
)
self.assertEqual("851611111111713K", result.message['pspReference'])
self.assertEqual("Success", result.message['resultCode'])
self.assertEqual(100, result.message['balance']['value'])
self.assertEqual("EUR", result.message['balance']['currency'])
def test_sessions_success(self):
request = {'merchantAccount': "YourMerchantAccount"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"sessions"
"-success.json")
result = self.adyen.checkout.payments_api.sessions(request)
self.assertEqual("session-test-id", result.message['id'])
self.assertEqual("TestReference", result.message['reference'])
self.assertEqual("http://test-url.com", result.message['returnUrl'])
def test_sessions_error(self):
request = {'merchantAccount': "YourMerchantAccount"}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"sessions"
"-error"
"-invalid"
"-data-422"
".json")
result = self.adyen.checkout.payments_api.sessions(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://checkout-test.adyen.com/{self.checkout_version}/sessions',
json={'merchantAccount': 'YourMerchantAccount'},
xapikey='YourXapikey',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
)
self.assertEqual(422, result.message['status'])
self.assertEqual("130", result.message['errorCode'])
self.assertEqual("validation", result.message['errorType'])
def test_payment_link(self):
request = {
"reference": "YOUR_ORDER_NUMBER",
"amount": {
"value": 1250,
"currency": "BRL"
},
"countryCode": "BR",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT",
"shopperReference": "YOUR_UNIQUE_SHOPPER_ID",
"shopperEmail": "[email protected]",
"shopperLocale": "pt-BR",
"billingAddress": {
"street": "Roque Petroni Jr",
"postalCode": "59000060",
"city": "São Paulo",
"houseNumberOrName": "999",
"country": "BR",
"stateOrProvince": "SP"
},
"deliveryAddress": {
"street": "Roque Petroni Jr",
"postalCode": "59000060",
"city": "São Paulo",
"houseNumberOrName": "999",
"country": "BR",
"stateOrProvince": "SP"
}
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/"
"checkout/"
"paymentlinks"
"-success"
".json")
result = self.adyen.checkout.payment_links_api.payment_links(request)
self.adyen.client.http_client.request.assert_called_once_with(
'POST',
f'https://checkout-test.adyen.com/{self.checkout_version}/paymentLinks',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
xapikey='YourXapikey',
json=request
)
self.assertEqual("YOUR_ORDER_NUMBER", result.message["reference"])
def test_get_payment_link(self):
id = "PL61C53A8B97E6915A"
self.adyen.client = self.test.create_client_from_file(200, None,
"test/mocks/"
"checkout/"
"getpaymenlinks"
"-succes.json")
result = self.adyen.checkout.payment_links_api.get_payment_link(id)
self.adyen.client.http_client.request.assert_called_once_with(
'GET',
f'https://checkout-test.adyen.com/{self.checkout_version}/paymentLinks/{id}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
xapikey="YourXapikey",
json=None
)
self.assertEqual("TestMerchantCheckout", result.message["merchantAccount"])
def test_update_payment_link(self):
id = "PL61C53A8B97E6915A"
request = {
"status": "expired"
}
self.adyen.client = self.test.create_client_from_file(200, request,
"test/mocks/checkout"
"/updatepaymentlinks"
"-success.json")
result = self.adyen.checkout.payment_links_api.update_payment_link(request, id)
self.adyen.client.http_client.request.assert_called_once_with(
'PATCH',
f'https://checkout-test.adyen.com/{self.checkout_version}/paymentLinks/{id}',
headers={'adyen-library-name': 'adyen-python-api-library', 'adyen-library-version': settings.LIB_VERSION},
xapikey="YourXapikey",
json=request
)
self.assertEqual("expired",result.message["status"]) | Adyen | /Adyen-9.0.2.tar.gz/Adyen-9.0.2/test/CheckoutTest.py | CheckoutTest.py |
MIT License
Copyright (c) 2017 Adyen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| AdyenAntoni | /AdyenAntoni-1.0.0.tar.gz/AdyenAntoni-1.0.0/LICENSE.md | LICENSE.md |

# Adyen APIs Library for Python
[](https://docs.adyen.com/development-resources/libraries)
This is the officially supported Python library for using Adyen's APIs.
## Supported API versions
| API | Description | Service Name | Supported version |
| --- | ----------- | ------------ | ----------------- |
|[BIN lookup API](https://docs.adyen.com/api-explorer/#/BinLookup/v52/overview) | The BIN Lookup API provides endpoints for retrieving information based on a given BIN. | binLookup | **v52** |
| [Checkout API](https://docs.adyen.com/api-explorer/#/CheckoutService/v69/overview)| Our latest integration for accepting online payments. | checkout | **v69** |
| [Management API](https://docs.adyen.com/api-explorer/#/ManagementService/v1/overview)| Configure and manage your Adyen company and merchant accounts, stores, and payment terminals. | management | **v1** |
| [Payments API](https://docs.adyen.com/api-explorer/#/Payment/v68/overview)| Our classic integration for online payments. | payments | **v68** |
| [Payouts API](https://docs.adyen.com/api-explorer/#/Payout/v68/overview)| Endpoints for sending funds to your customers. | payouts | **v68** |
| [POS Terminal Management API](https://docs.adyen.com/api-explorer/#/postfmapi/v1/overview)| Endpoints for managing your point-of-sale payment terminals. | terminal | **v1** |
| [Recurring API](https://docs.adyen.com/api-explorer/#/Recurring/v68/overview)| Endpoints for managing saved payment details. | recurring | **v68** |
For more information, refer to our [documentation](https://docs.adyen.com/) or the [API Explorer](https://docs.adyen.com/api-explorer/).
## Prerequisites
- [Adyen test account](https://docs.adyen.com/get-started-with-adyen)
- [API key](https://docs.adyen.com/development-resources/api-credentials#generate-api-key). For testing, your API credential needs to have the [API PCI Payments role](https://docs.adyen.com/development-resources/api-credentials#roles).
- Python 3.6
- Packages (optional): requests or pycurl
## Installation
### For development purposes
Clone this repository and run
~~~~ bash
make install
~~~~
### For usage propose
Use pip command:
~~~~ bash
pip install Adyen
~~~~
## Using the library
### General use with API key
~~~~ python
import Adyen
adyen = Adyen.Adyen()
adyen.payment.client.xapikey = "YourXapikey"
adyen.payment.client.hmac = "YourHMACkey"
adyen.payment.client.platform = "test" # Environment to use the library in.
adyen.payment.client.merchant_account = "merchant account name from CA"
~~~~
### Consuming Services
Every API the library supports is represented by a service object. The name of the service matching the corresponding API is listed in the [Integrations](#supported-api-versions) section of this document.
#### Using all services
~~~~python
import Adyen
adyen = Adyen.Adyen()
adyen.payment.client.xapikey = "YourXapikey"
adyen.payment.client.platform = "test" # change to live for production
request = {
"amount": {
"currency": "USD",
"value": 1000 # value in minor units
},
"reference": "Your order number",
"paymentMethod": {
"type": "visa",
"encryptedCardNumber": "test_4111111111111111",
"encryptedExpiryMonth": "test_03",
"encryptedExpiryYear": "test_2030",
"encryptedSecurityCode": "test_737"
},
"shopperReference": "YOUR_UNIQUE_SHOPPER_ID_IOfW3k9G2PvXFu2j",
"returnUrl": "https://your-company.com/...",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT"
}
result = adyen.checkout.payments(request)
~~~~
#### Using one of the services
~~~~python
from Adyen import AdyenCheckoutApi
adyen = AdyenCheckoutApi()
adyen.client.xapikey = "YourXapikey"
adyen.client.platform = "test" # change to live for production
request = {
"amount": {
"currency": "USD",
"value": 1000 # value in minor units
},
"reference": "Your order number",
"paymentMethod": {
"type": "visa",
"encryptedCardNumber": "test_4111111111111111",
"encryptedExpiryMonth": "test_03",
"encryptedExpiryYear": "test_2030",
"encryptedSecurityCode": "test_737"
},
"shopperReference": "YOUR_UNIQUE_SHOPPER_ID_IOfW3k9G2PvXFu2j",
"returnUrl": "https://your-company.com/...",
"merchantAccount": "YOUR_MERCHANT_ACCOUNT"
}
result = adyen.payments(request)
~~~~
#### Force HTTP library
~~~~python
import Adyen
adyen = Adyen.Adyen()
adyen.client.http_force = 'requests' # or 'pycurl'
~~~~
### Using query parameters (management API only)
Define a dictionary with query parameters that you want to use.
~~~~ python
query_parameters = {
'pageSize':10,
'pageNumber':3
}
~~~~
pass the dictionary to the method as an additional argument.
~~~~ python
adyen.management.account_company_level_api.get_companies(query_parameters=query_parameters)
~~~~
### Handling exceptions
Adyen service exceptions extend the AdyenError class. After you catch this exception, you can access the
class arguments for the specifics around this error or use the debug method which prints all the arguments.
~~~~python
try:
adyen.checkout.payments(request)
except Adyen.exceptions.AdyenErorr as error:
error.debug()
~~~~
<details><summary>List of exceptions</summary>
<p>AdyenInvalidRequestError</p>
<p>AdyenAPIResponseError</p>
<p>AdyenAPIAuthenticationError</p>
<p>AdyenAPIInvalidPermission</p>
<p>AdyenAPICommunicationError</p>
<p>AdyenAPIValidationError</p>
<p>AdyenAPIUnprocessableEntity</p>
<p>AdyenAPIInvalidFormat</p>
<p>AdyenEndpointInvalidFormat</p>
</details>
### Example integration
For a closer look at how our Python library works, clone our [example integration](https://github.com/adyen-examples/adyen-python-online-payments). This includes commented code, highlighting key features and concepts, and examples of API calls that can be made using the library.
## Contributing
We encourage you to contribute to this repository, so everyone can benefit from new features, bug fixes, and any other improvements.
Have a look at our [contributing guidelines](https://github.com/Adyen/adyen-python-api-library/blob/develop/CONTRIBUTING.md) to find out how to raise a pull request.
## Support
If you have a feature request, or spotted a bug or a technical problem, [create an issue here](https://github.com/Adyen/adyen-web/issues/new/choose).
For other questions, [contact our Support Team](https://www.adyen.help/hc/en-us/requests/new?ticket_form_id=360000705420).
## Licence
This repository is available under the [MIT license](https://github.com/Adyen/adyen-python-api-library/blob/main/LICENSE.md).
## See also
* [Example integration](https://github.com/adyen-examples/adyen-python-online-payments)
* [Adyen docs](https://docs.adyen.com/)
* [API Explorer](https://docs.adyen.com/api-explorer/)
| AdyenAntoni | /AdyenAntoni-1.0.0.tar.gz/AdyenAntoni-1.0.0/README.md | README.md |
from __future__ import absolute_import, division, unicode_literals
try:
import requests
except ImportError:
requests = None
try:
import pycurl
except ImportError:
pycurl = None
from urllib.parse import urlencode
from urllib.request import Request, urlopen
from urllib.error import HTTPError
from io import BytesIO
import json as json_lib
import base64
class HTTPClient(object):
def __init__(
self,
user_agent_suffix,
lib_version,
force_request=None,
timeout=None,
):
# Check if requests already available, default to urllib
self.user_agent = user_agent_suffix + lib_version
if not force_request:
if requests:
self.request = self._requests_request
elif pycurl:
self.request = self._pycurl_request
else:
self.request = self._urllib_request
else:
if force_request == 'requests':
self.request = self._requests_request
elif force_request == 'pycurl':
self.request = self._pycurl_request
else:
self.request = self._urllib_request
self.timeout = timeout
def _pycurl_request(
self,
method,
url,
json=None,
data=None,
username="",
password="",
xapikey="",
headers=None
):
"""This function will send a request with a specified method to the url endpoint using pycurl.
Returning an AdyenResult object on 200 HTTP response.
Either json or data has to be provided for POST/PATCH.
If username and password are provided, basic auth will be used.
Args:
method (str): This is the method used to send the request to an endpoint.
url (str): url to send the request
json (dict, optional): Dict of the JSON to POST/PATCH
data (dict, optional): Dict, presumed flat structure of key/value
of request to place
username (str, optionl): Username for basic auth. Must be included
as part of password.
password (str, optional): Password for basic auth. Must be included
as part of username.
xapikey (str, optional): Adyen API key. Will be used for auth
if username and password are absent.
headers (dict, optional): Key/Value pairs of headers to include
Returns:
str: Raw response received
str: Raw request placed
int: HTTP status code, eg 200,404,401
dict: Key/Value pairs of the headers received.
"""
if headers is None:
headers = {}
response_headers = {}
stringbuffer = BytesIO()
curl = pycurl.Curl()
curl.setopt(curl.URL, url)
curl.setopt(curl.WRITEDATA, stringbuffer)
# Add User-Agent header to request so that the
# request can be identified as coming from the Adyen Python library.
headers['User-Agent'] = self.user_agent
if username and password:
curl.setopt(curl.USERPWD, '%s:%s' % (username, password))
elif xapikey:
headers["X-API-KEY"] = xapikey
# Convert the header dict to formatted array as pycurl needs.
header_list = ["%s:%s" % (k, v) for k, v in headers.items()]
# Ensure proper content-type when adding headers
if json:
header_list.append("Content-Type:application/json")
curl.setopt(pycurl.HTTPHEADER, header_list)
# Return regular dict instead of JSON encoded dict for request:
if method == "POST" or method == "PATCH":
raw_store = json
# Set the request body.
raw_request = json_lib.dumps(json) if json else urlencode(data)
curl.setopt(curl.POSTFIELDS, raw_request)
# Needed here as POSTFIELDS sets the method to POST
curl.setopt(curl.CUSTOMREQUEST, method)
elif method == "GET" or method == "DELETE":
curl.setopt(curl.CUSTOMREQUEST, method)
raw_store = None
curl.setopt(curl.TIMEOUT, self.timeout)
curl.perform()
# Grab the response content
result = stringbuffer.getvalue()
status_code = curl.getinfo(curl.RESPONSE_CODE)
curl.close()
# Return regular dict instead of JSON encoded dict for request:
raw_request = raw_store
return result, raw_request, status_code, response_headers
def _requests_request(
self,
method,
url,
json=None,
data=None,
username="",
password="",
xapikey="",
headers=None
):
"""This function will send a request with a specified method to the url endpoint using requests.
Returning an AdyenResult object on 200 HTTP response.
Either json or data has to be provided for POST/PATCH.
If username and password are provided, basic auth will be used.
Args:
method (str): This is the method used to send the request to an endpoint.
url (str): url to send the request
json (dict, optional): Dict of the JSON to POST/PATCH
data (dict, optional): Dict, presumed flat structure of key/value
of request to place
username (str, optionl): Username for basic auth. Must be included
as part of password.
password (str, optional): Password for basic auth. Must be included
as part of username.
xapikey (str, optional): Adyen API key. Will be used for auth
if username and password are absent.
headers (dict, optional): Key/Value pairs of headers to include
Returns:
str: Raw response received
str: Raw request placed
int: HTTP status code, eg 200,404,401
dict: Key/Value pairs of the headers received.
"""
if headers is None:
headers = {}
# Adding basic auth if username and password provided.
auth = None
if username and password:
auth = requests.auth.HTTPBasicAuth(username, password)
elif xapikey:
headers['x-api-key'] = xapikey
# Add User-Agent header to request so that the request
# can be identified as coming from the Adyen Python library.
headers['User-Agent'] = self.user_agent
request = requests.request(
method=method,
url=url,
auth=auth,
data=data,
json=json,
headers=headers,
timeout=self.timeout
)
# Ensure either json or data is returned for raw request
# Updated: Only return regular dict,
# don't switch out formats if this is not important.
message = json
return request.text, message, request.status_code, request.headers
def _urllib_request(
self,
method,
url,
json=None,
data=None,
username="",
password="",
xapikey="",
headers=None,
):
"""This function will send a request with a specified method to the url endpoint using urlib2.
Returning an AdyenResult object on 200 HTTP response.
Either json or data has to be provided for POST/PATCH.
If username and password are provided, basic auth will be used.
Args:
method (str): This is the method used to send the request to an endpoint.
url (str): url to send the request
json (dict, optional): Dict of the JSON to POST/PATCH
data (dict, optional): Dict, presumed flat structure of key/value
of request to place
username (str, optionl): Username for basic auth. Must be included
as part of password.
password (str, optional): Password for basic auth. Must be included
as part of username.
xapikey (str, optional): Adyen API key. Will be used for auth
if username and password are absent.
headers (dict, optional): Key/Value pairs of headers to include
Returns:
str: Raw response received
str: Raw request placed
int: HTTP status code, eg 200,404,401
dict: Key/Value pairs of the headers received.
"""
if headers is None:
headers = {}
if method == "POST" or method == "PATCH":
# Store regular dict to return later:
raw_store = json
raw_request = json_lib.dumps(json) if json else urlencode(data)
url_request = Request(url, data=raw_request.encode('utf8'), method=method)
raw_request = raw_store
if json:
url_request.add_header('Content-Type', 'application/json')
elif not data:
raise ValueError("Please provide either a json or a data field.")
elif method == "GET" or method == "DELETE":
url_request = Request(url, method=method)
raw_request = None
# Add User-Agent header to request so that the
# request can be identified as coming from the Adyen Python library.
headers['User-Agent'] = self.user_agent
# Set regular dict to return as raw_request:
# Adding basic auth is username and password provided.
if username and password:
basic_authstring = base64.encodebytes(('%s:%s' %
(username, password))
.encode()).decode(). \
replace('\n', '')
url_request.add_header("Authorization",
"Basic %s" % basic_authstring)
elif xapikey:
headers["X-API-KEY"] = xapikey
# Adding the headers to the request.
for key, value in headers.items():
url_request.add_header(key, str(value))
# URLlib raises all non 200 responses as en error.
try:
response = urlopen(url_request, timeout=self.timeout)
except HTTPError as e:
raw_response = e.read()
return raw_response, raw_request, e.getcode(), e.headers
else:
raw_response = response.read()
response.close()
# The dict(response.info()) is the headers of the response
# Raw response, raw request, status code and headers returned
return (raw_response, raw_request,
response.getcode(), dict(response.info()))
def request(
self,
method,
url,
json="",
data="",
username="",
password="",
headers=None,
):
"""This is overridden on module initialization. This function will make
an HTTP method call to a given url. Either json/data will be what is posted to
the end point. he HTTP request needs to be basicAuth when username and
password are provided. a headers dict maybe provided,
whatever the values are should be applied.
Args:
url (str): url to send the request
method (str): method to use for the endpoint
json (dict, optional): Dict of the JSON to POST/PATCH
data (dict, optional): Dict, presumed flat structure of
key/value of request to place as
www-form
username (str, optional): Username for basic auth. Must be
included as part of password.
password (str, optional): Password for basic auth. Must be
included as part of username.
xapikey (str, optional): Adyen API key. Will be used for auth
if username and password are absent.
headers (dict, optional): Key/Value pairs of headers to include
Returns:
str: Raw request placed
str: Raw response received
int: HTTP status code, eg 200,404,401
dict: Key/Value pairs of the headers received.
"""
raise NotImplementedError('request of HTTPClient should have been '
'overridden on initialization. '
'Otherwise, can be overridden to '
'supply your own post method') | AdyenAntoni | /AdyenAntoni-1.0.0.tar.gz/AdyenAntoni-1.0.0/Adyen/httpclient.py | httpclient.py |
from __future__ import absolute_import, division, unicode_literals
import json as json_lib
from . import util
from .httpclient import HTTPClient
from .exceptions import (
AdyenAPICommunicationError,
AdyenAPIAuthenticationError,
AdyenAPIInvalidPermission,
AdyenAPIValidationError,
AdyenInvalidRequestError,
AdyenAPIResponseError,
AdyenAPIUnprocessableEntity,
AdyenEndpointInvalidFormat)
from . import settings
from re import match
class AdyenResult(object):
"""
Args:
message (dict, optional): Parsed message returned from API client.
        status_code (int, optional): Default 200. HTTP response code, e.g.
            200, 404 or 500.
psp (str, optional): Psp reference returned by Adyen for a payment.
raw_request (str, optional): Raw request placed to Adyen.
raw_response (str, optional): Raw response returned by Adyen.
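
    Example (illustrative sketch, not a real API response; all values below
    are placeholders)::

        result = AdyenResult(message={"resultCode": "Authorised"},
                             status_code=200, psp="YOUR_PSP_REFERENCE")
        if result.status_code == 200:
            print(result.message["resultCode"])  # -> "Authorised"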
"""
def __init__(self, message=None, status_code=200,
psp="", raw_request="", raw_response=""):
self.message = message
self.status_code = status_code
self.psp = psp
self.raw_request = raw_request
self.raw_response = raw_response
self.details = {}
def __str__(self):
return repr(self.message)
class AdyenClient(object):
    """A requesting client that interacts with Adyen. This class holds the
    core logic of Adyen HTTP API communication. This is the object that can
    maintain its own username, password, xapikey, merchant_account and hmac.
    When these values aren't set on this object, the root adyen module
    variables will be used.

    The public methods and call_adyen_api only return AdyenResult objects;
    otherwise they raise validation and communication errors.

    Args:
        username (str, optional): Username of webservice user
        password (str, optional): Password of webservice user
        xapikey (str, optional): Adyen API key; used for authentication
            instead of basic auth when set
        merchant_account (str, optional): Merchant account for requests to be
            placed through
        platform (str, optional): Defaults "test". The Adyen platform to make
            requests against.
        hmac (str, optional): Hmac key that is used for signature calculation.
        http_timeout (int, optional): The timeout in seconds for HTTP calls,
            default 30.
    """
    IDEMPOTENCY_HEADER_NAME = 'Idempotency-Key'
def __init__(
self,
username=None,
password=None,
xapikey=None,
review_payout_username=None,
review_payout_password=None,
        store_payout_username=None,
        store_payout_password=None,
platform="test", merchant_account=None,
merchant_specific_url=None,
hmac=None,
http_force=None,
live_endpoint_prefix=None,
http_timeout=30,
api_bin_lookup_version=None,
api_checkout_utility_version=None,
api_checkout_version=None,
api_management_version=None,
api_payment_version=None,
api_payout_version=None,
api_recurring_version=None,
api_terminal_version=None,
api_legal_entity_management_version=None,
):
self.username = username
self.password = password
self.xapikey = xapikey
self.review_payout_username = review_payout_username
self.review_payout_password = review_payout_password
self.store_payout_username = store_payout_username
self.store_payout_password = store_payout_password
self.platform = platform
self.merchant_specific_url = merchant_specific_url
self.hmac = hmac
self.merchant_account = merchant_account
self.psp_list = []
self.LIB_VERSION = settings.LIB_VERSION
self.USER_AGENT_SUFFIX = settings.LIB_NAME + "/"
self.http_init = False
self.http_force = http_force
self.live_endpoint_prefix = live_endpoint_prefix
self.http_timeout = http_timeout
self.api_bin_lookup_version = api_bin_lookup_version or settings.API_BIN_LOOKUP_VERSION
self.api_checkout_utility_version = api_checkout_utility_version or settings.API_CHECKOUT_UTILITY_VERSION
self.api_checkout_version = api_checkout_version or settings.API_CHECKOUT_VERSION
self.api_management_version = api_management_version or settings.API_MANAGEMENT_VERSION
self.api_payment_version = api_payment_version or settings.API_PAYMENT_VERSION
self.api_payout_version = api_payout_version or settings.API_PAYOUT_VERSION
self.api_recurring_version = api_recurring_version or settings.API_RECURRING_VERSION
self.api_terminal_version = api_terminal_version or settings.API_TERMINAL_VERSION
self.api_legal_entity_management_version = api_legal_entity_management_version or settings.API_LEGAL_ENTITY_MANAGEMENT_VERSION
def _determine_base_url_and_version(self, platform, service):
live_pal_url = settings.PAL_LIVE_ENDPOINT_URL_TEMPLATE
live_checkout_url = settings.ENDPOINT_CHECKOUT_LIVE_SUFFIX
if platform == 'live' and self.live_endpoint_prefix:
live_pal_url = live_pal_url.format(live_prefix=self.live_endpoint_prefix)
live_checkout_url = live_checkout_url.format(live_prefix=self.live_endpoint_prefix)
versions_and_urls = {
'recurring': {
'version': self.api_recurring_version,
'base_url': {
'live': live_pal_url + '/Recurring',
'test': settings.PAL_TEST_URL + '/Recurring',
}
},
'payouts': {
'version': self.api_payout_version,
'base_url': {
'live': live_pal_url + '/Payout',
'test': settings.PAL_TEST_URL + '/Payout'
}
},
'binlookup': {
'version': self.api_bin_lookup_version,
'base_url': {
'live': live_pal_url + '/BinLookup',
'test': settings.PAL_TEST_URL + '/BinLookup'
}
},
'terminal': {
'version': self.api_terminal_version,
'base_url': {
'live': settings.BASE_TERMINAL_URL.format(platform),
'test': settings.BASE_TERMINAL_URL.format(platform)
}
},
'Payment': {
'version': self.api_payment_version,
'base_url': {
'live': live_pal_url + '/Payment',
'test': settings.PAL_TEST_URL + '/Payment'
}
},
'checkout': {
'version': self.api_checkout_version,
'base_url': {
'live': live_checkout_url,
'test': settings.ENDPOINT_CHECKOUT_TEST
}
},
'management': {
'version': self.api_management_version,
'base_url': {
'live': settings.BASE_MANAGEMENT_URL.format(platform),
'test': settings.BASE_MANAGEMENT_URL.format(platform)
}
},
'legalEntityManagement': {
'version': self.api_legal_entity_management_version,
'base_url': {
'live': settings.BASE_LEGAL_ENTITY_MANAGEMENT_URL.format(platform),
'test': settings.BASE_LEGAL_ENTITY_MANAGEMENT_URL.format(platform)
}
}
}
version = versions_and_urls[service]['version']
base_url = versions_and_urls[service]['base_url'][platform]
# Match urls that require a live prefix and do not have one
if platform == 'live' and '{live_prefix}' in base_url:
errorstring = "Please set your live suffix. You can set it by running " \
"adyen.client.live_endpoint_prefix = 'Your live suffix'"
raise AdyenEndpointInvalidFormat(errorstring)
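
        # Illustrative example, assuming the default versions in settings:
        # for platform "test" and service "checkout" this resolves to
        # (settings.API_CHECKOUT_VERSION, settings.ENDPOINT_CHECKOUT_TEST),
        # e.g. something like ("v69", "https://checkout-test.adyen.com").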
return version, base_url
def _determine_api_url(self, platform, service, endpoint):
api_version, base_url = self._determine_base_url_and_version(platform, service)
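        # Joining produces URLs of the form "<base_url>/<version>/<endpoint>",
        # e.g. "https://checkout-test.adyen.com/v69/payments" for the test
        # checkout payments endpoint (the version shown is illustrative; the
        # actual value comes from the configured API version).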
return '/'.join([base_url, api_version, endpoint])
def _review_payout_username(self, **kwargs):
if 'username' in kwargs:
return kwargs['username']
elif self.review_payout_username:
return self.review_payout_username
errorstring = "Please set your review payout " \
"webservice username. You can do this by running " \
"Adyen.review_payout_username = 'Your payout username'"
raise AdyenInvalidRequestError(errorstring)
def _review_payout_pass(self, **kwargs):
if 'password' in kwargs:
return kwargs["password"]
elif self.review_payout_password:
return self.review_payout_password
errorstring = "Please set your review payout " \
"webservice password. You can do this by running " \
"Adyen.review_payout_password = 'Your payout password"
raise AdyenInvalidRequestError(errorstring)
def _store_payout_username(self, **kwargs):
if 'username' in kwargs:
return kwargs['username']
elif self.store_payout_username:
return self.store_payout_username
errorstring = "Please set your review payout " \
"webservice username. You can do this by running " \
"Adyen.review_payout_username = 'Your payout username'"
raise AdyenInvalidRequestError(errorstring)
def _store_payout_pass(self, **kwargs):
if 'password' in kwargs:
return kwargs["password"]
elif self.store_payout_password:
return self.store_payout_password
errorstring = "Please set your review payout " \
"webservice password. You can do this by running " \
"Adyen.review_payout_password = 'Your payout password"
raise AdyenInvalidRequestError(errorstring)
def _set_credentials(self, service, endpoint, **kwargs):
xapikey = None
username = None
password = None
# username at self object has highest priority. fallback to root module
# and ensure that it is set.
if self.xapikey:
xapikey = self.xapikey
elif 'xapikey' in kwargs:
xapikey = kwargs.pop("xapikey")
if self.username:
username = self.username
elif 'username' in kwargs:
username = kwargs.pop("username")
if service == "Payout":
if any(substring in endpoint for substring in
["store", "submit"]):
username = self._store_payout_username(**kwargs)
else:
username = self._review_payout_username(**kwargs)
if not username and not xapikey:
errorstring = "Please set your webservice username.You can do this by running " \
"Adyen.username = 'Your username'"
raise AdyenInvalidRequestError(errorstring)
# password at self object has highest priority.
# fallback to root module
# and ensure that it is set.
if self.password and not xapikey:
password = self.password
elif 'password' in kwargs:
password = kwargs.pop("password")
if service == "Payout":
if any(substring in endpoint for substring in
["store", "submit"]):
password = self._store_payout_pass(**kwargs)
else:
password = self._review_payout_pass(**kwargs)
if not password and not xapikey:
errorstring = "Please set your webservice password.You can do this by running " \
"Adyen.password = 'Your password'"
raise AdyenInvalidRequestError(errorstring)
        # The xapikey, when set, takes precedence over the basic-auth
        # credentials gathered above.
return xapikey, username, password
def _set_platform(self, **kwargs):
# platform at self object has highest priority. fallback to root module
# and ensure that it is set to either 'live' or 'test'.
platform = None
if self.platform:
platform = self.platform
elif 'platform' in kwargs:
platform = kwargs.pop('platform')
if not isinstance(platform, str):
errorstring = "'platform' value must be type of string"
raise TypeError(errorstring)
elif platform.lower() not in ['live', 'test']:
errorstring = "'platform' must be the value of 'live' or 'test'"
raise ValueError(errorstring)
return platform
def call_adyen_api(
self,
request_data,
service,
method,
endpoint,
idempotency_key=None,
**kwargs
):
"""This will call the adyen api. username, password, merchant_account,
and platform are pulled from root module level and or self object.
AdyenResult will be returned on 200 response. Otherwise, an exception
is raised.
Args:
            idempotency_key: https://docs.adyen.com/development-resources/api-idempotency
request_data (dict): The dictionary of the request to place. This
should be in the structure of the Adyen API.
https://docs.adyen.com/api-explorer
service (str): This is the API service to be called.
method (str): This is the method used to send the request to an endpoint.
endpoint (str): The specific endpoint of the API service to be called
Returns:
AdyenResult: The AdyenResult is returned when a request was
successful.
"""
# Initialize http client
if not self.http_init:
self._init_http_client()
# Set credentials
xapikey, username, password = self._set_credentials(service, endpoint, **kwargs)
# Set platform
platform = self._set_platform(**kwargs)
message = request_data
with_app_info = [
"authorise",
"authorise3d",
"authorise3ds2",
"payments",
"paymentSession",
"paymentLinks",
"paymentMethods/balance",
"sessions"
]
if endpoint in with_app_info and (method == 'POST' or method == 'PATCH'):
if 'applicationInfo' in request_data:
request_data['applicationInfo'].update({
"adyenLibrary": {
"name": settings.LIB_NAME,
"version": settings.LIB_VERSION
}
})
else:
request_data['applicationInfo'] = {
"adyenLibrary": {
"name": settings.LIB_NAME,
"version": settings.LIB_VERSION
}
}
# Adyen requires this header to be set and uses the combination of
# merchant account and merchant reference to determine uniqueness.
headers = {}
if idempotency_key:
headers[self.IDEMPOTENCY_HEADER_NAME] = idempotency_key
url = self._determine_api_url(platform, service, endpoint)
if 'query_parameters' in kwargs:
url = url + util.get_query(kwargs['query_parameters'])
kwargs.pop('query_parameters')
if xapikey:
raw_response, raw_request, status_code, headers = \
self.http_client.request(method, url, json=request_data,
xapikey=xapikey, headers=headers,
**kwargs)
else:
raw_response, raw_request, status_code, headers = \
self.http_client.request(method, url, json=message, username=username,
password=password,
headers=headers,
**kwargs)
# Creates AdyenResponse if request was successful, raises error if not.
adyen_result = self._handle_response(url, raw_response, raw_request,
status_code, headers)
return adyen_result
def _init_http_client(self):
self.http_client = HTTPClient(
user_agent_suffix=self.USER_AGENT_SUFFIX,
lib_version=self.LIB_VERSION,
force_request=self.http_force,
timeout=self.http_timeout,
)
self.http_init = True
def _handle_response(self, url, raw_response, raw_request,
status_code, headers):
"""This parses the content from raw communication, raising an error if
anything other than 200 was returned.
Args:
url (str): URL where request was made
            raw_response (str): The raw response returned by Adyen
            raw_request (str): The raw request sent to Adyen
            status_code (int): The HTTP status code
            headers (dict): Key/Value of the headers.
Returns:
AdyenResult: Result object if successful.
"""
if status_code not in [200, 201, 204]:
response = {}
            # If the result can't be parsed as JSON, it is most likely raw
            # HTML. Some responses are neither JSON nor raw HTML; handle
            # them here:
if raw_response:
response = json_lib.loads(raw_response)
# Pass raised error to error handler.
self._handle_http_error(url, response, status_code,
headers.get('pspReference'),
raw_request, raw_response,
headers)
try:
if response['errorCode']:
raise AdyenAPICommunicationError(
"Unexpected error while communicating with Adyen."
" Received the response data:'{}', HTTP Code:'{}'. "
"Please reach out to [email protected] if the "
"problem persists with the psp:{}".format(
raw_response,
status_code,
headers.get('pspReference')),
status_code=status_code,
raw_request=raw_request,
raw_response=raw_response,
url=url,
psp=headers.get('pspReference'),
headers=headers,
error_code=response['errorCode'])
except KeyError:
erstr = 'KeyError: errorCode'
raise AdyenAPICommunicationError(erstr)
else:
if status_code != 204:
response = json_lib.loads(raw_response)
else:
response = {}
psp = self._get_psp(response, headers)
return AdyenResult(message=response, status_code=status_code,
psp=psp, raw_request=raw_request,
raw_response=raw_response)
def _handle_http_error(self, url, response_obj, status_code, psp_ref,
raw_request, raw_response, headers):
"""This function handles the non 200 responses from Adyen, raising an
error that should provide more information.
Args:
url (str): url of the request
response_obj (dict): Dict containing the parsed JSON response from
Adyen
status_code (int): HTTP status code of the request
psp_ref (str): Psp reference of the request attempt
raw_request (str): The raw request placed to Adyen
raw_response (str): The raw response(body) returned by Adyen
headers(dict): headers of the response
Returns:
None
"""
if response_obj == {}:
message = raw_response
else:
message = response_obj
error_code = response_obj.get("errorCode")
if status_code == 404:
raise AdyenAPICommunicationError(message, raw_request, raw_response, url, psp_ref, headers, status_code,
error_code)
elif status_code == 400:
raise AdyenAPIValidationError(message, raw_request, raw_response, url, psp_ref, headers, status_code,
error_code)
elif status_code == 401:
raise AdyenAPIAuthenticationError(message, raw_request, raw_response, url, psp_ref, headers, status_code,
error_code)
elif status_code == 403:
raise AdyenAPIInvalidPermission(message, raw_request, raw_response, url, psp_ref, headers, status_code,
error_code)
elif status_code == 422:
raise AdyenAPIUnprocessableEntity(message, raw_request, raw_response, url, psp_ref, headers, status_code,
error_code)
elif status_code == 500:
raise AdyenAPICommunicationError(message, raw_request, raw_response, url, psp_ref, headers, status_code,
error_code)
else:
raise AdyenAPIResponseError(message, raw_request, raw_response, url, psp_ref, headers, status_code,
error_code)
@staticmethod
def _get_psp(response, headers):
psp_ref = response.get('pspReference')
if psp_ref is None:
psp_ref = headers.get('pspReference')
        return psp_ref

# ---- end of file: Adyen/client.py (package: AdyenAntoni) ----
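# Editor's note: a hypothetical usage sketch for the client above (not part
# of the original file).  The enclosing class is defined earlier in this
# module and its name is not visible in this excerpt, so `AdyenClient`, the
# API key, and the request values below are all assumed placeholders:
#
#     client = AdyenClient()
#     client.xapikey = 'YOUR_API_KEY'
#     client.platform = 'test'        # 'live' also needs live_endpoint_prefix
#     result = client.call_adyen_api(
#         request_data={'merchantAccount': 'YourMerchantAccount',
#                       'reference': 'order-1234'},
#         service='checkout',
#         method='POST',
#         endpoint='payments',
#     )
#     print(result.status_code, result.message)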
import sys
DEFAULT_VERSION = "0.6a9"
DEFAULT_URL = "http://cheeseshop.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
'setuptools-0.5a13-py2.3.egg': '85edcf0ef39bab66e130d3f38f578c86',
'setuptools-0.5a13-py2.4.egg': 'ede4be600e3890e06d4ee5e0148e092a',
'setuptools-0.6a1-py2.3.egg': 'ee819a13b924d9696b0d6ca6d1c5833d',
'setuptools-0.6a1-py2.4.egg': '8256b5f1cd9e348ea6877b5ddd56257d',
'setuptools-0.6a2-py2.3.egg': 'b98da449da411267c37a738f0ab625ba',
'setuptools-0.6a2-py2.4.egg': 'be5b88bc30aed63fdefd2683be135c3b',
'setuptools-0.6a3-py2.3.egg': 'ee0e325de78f23aab79d33106dc2a8c8',
'setuptools-0.6a3-py2.4.egg': 'd95453d525a456d6c23e7a5eea89a063',
'setuptools-0.6a4-py2.3.egg': 'e958cbed4623bbf47dd1f268b99d7784',
'setuptools-0.6a4-py2.4.egg': '7f33c3ac2ef1296f0ab4fac1de4767d8',
'setuptools-0.6a5-py2.3.egg': '748408389c49bcd2d84f6ae0b01695b1',
'setuptools-0.6a5-py2.4.egg': '999bacde623f4284bfb3ea77941d2627',
'setuptools-0.6a6-py2.3.egg': '7858139f06ed0600b0d9383f36aca24c',
'setuptools-0.6a6-py2.4.egg': 'c10d20d29acebce0dc76219dc578d058',
'setuptools-0.6a7-py2.3.egg': 'cfc4125ddb95c07f9500adc5d6abef6f',
'setuptools-0.6a7-py2.4.egg': 'c6d62dab4461f71aed943caea89e6f20',
'setuptools-0.6a8-py2.3.egg': '2f18eaaa3f544f5543ead4a68f3b2e1a',
'setuptools-0.6a8-py2.4.egg': '799018f2894f14c9f8bcb2b34e69b391',
'setuptools-0.6a9-py2.3.egg': '8e438ad70438b07b0d8f82cae42b278f',
'setuptools-0.6a9-py2.4.egg': '8f6e01fc12fb1cd006dc0d6c04327ec1',
}
import sys, os
def _validate_md5(egg_name, data):
if egg_name in md5_data:
from md5 import md5
digest = md5(data).hexdigest()
if digest != md5_data[egg_name]:
print >>sys.stderr, (
"md5 validation of %s failed! (Possible download problem?)"
% egg_name
)
sys.exit(2)
return data
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
):
"""Automatically find/download setuptools and make it available on sys.path
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end with
a '/'). `to_dir` is the directory where setuptools will be downloaded, if
it is not already available. If `download_delay` is specified, it should
be the number of seconds that will be paused before initiating a download,
should one be required. If an older version of setuptools is installed,
this routine will print a message to ``sys.stderr`` and raise SystemExit in
an attempt to abort the calling script.
"""
try:
import setuptools
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
"You have an obsolete version of setuptools installed. Please\n"
"remove it from your system entirely before rerunning this script."
)
sys.exit(2)
except ImportError:
egg = download_setuptools(version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
import setuptools; setuptools.bootstrap_install_from = egg
import pkg_resources
try:
pkg_resources.require("setuptools>="+version)
except pkg_resources.VersionConflict:
# XXX could we install in a subprocess here?
print >>sys.stderr, (
"The required version of setuptools (>=%s) is not available, and\n"
"can't be installed while this script is running. Please install\n"
" a more recent version first."
) % version
sys.exit(2)
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
delay = 15
):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
import urllib2, shutil
egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
if not os.path.exists(saveto): # Avoid repeated downloads
try:
from distutils import log
if delay:
log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
---------------------------------------------------------------------------""",
version, download_base, delay
); from time import sleep; sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
dst = open(saveto,"wb"); dst.write(data)
finally:
if src: src.close()
if dst: dst.close()
return os.path.realpath(saveto)
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
import setuptools
except ImportError:
import tempfile, shutil
tmpdir = tempfile.mkdtemp(prefix="easy_install-")
try:
egg = download_setuptools(version, to_dir=tmpdir, delay=0)
sys.path.insert(0,egg)
from setuptools.command.easy_install import main
main(list(argv)+[egg])
finally:
shutil.rmtree(tmpdir)
else:
if setuptools.__version__ == '0.0.1':
# tell the user to uninstall obsolete version
use_setuptools(version)
req = "setuptools>="+version
import pkg_resources
try:
pkg_resources.require(req)
except pkg_resources.VersionConflict:
try:
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
main(list(argv)+[download_setuptools(delay=0)])
sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
print "Setuptools version",version,"or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
"""Update our built-in md5 registry"""
import re
from md5 import md5
for name in filenames:
base = os.path.basename(name)
f = open(name,'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
data = [" %r: %r,\n" % it for it in md5_data.items()]
data.sort()
repl = "".join(data)
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
f = open(srcfile, 'rb'); src = f.read(); f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
print >>sys.stderr, "Internal error!"
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
f = open(srcfile,'w')
f.write(src)
f.close()
if __name__=='__main__':
if len(sys.argv)>2 and sys.argv[1]=='--md5update':
update_md5(sys.argv[2:])
else:
        main(sys.argv[1:])

# ---- end of file: ez_setup.py (package: Adytum-NetCIDR) ----
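# Editor's note: a usage sketch of how this bootstrap module was typically
# consumed (not part of the original file).  A project's setup.py called
# use_setuptools() before importing setuptools, so a missing egg could be
# fetched on the fly; the project name below is illustrative:
#
#     from ez_setup import use_setuptools
#     use_setuptools()
#     from setuptools import setup
#     setup(name='myproject', version='0.1')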
def validateVLSM(vlsm):
'''
>>> validateVLSM(50)
Traceback (most recent call last):
ValueError: The variable length subnet mask, or "prefix length" must be between 0 and 32, inclusive.
>>> validateVLSM('45')
Traceback (most recent call last):
ValueError: The variable length subnet mask, or "prefix length" must be between 0 and 32, inclusive.
>>> validateVLSM(32)
>>> validateVLSM('27')
>>> validateVLSM(17)
>>> validateVLSM('0')
>>> validateVLSM(-1)
Traceback (most recent call last):
ValueError: The variable length subnet mask, or "prefix length" must be between 0 and 32, inclusive.
>>> validateVLSM('-10')
Traceback (most recent call last):
ValueError: The variable length subnet mask, or "prefix length" must be between 0 and 32, inclusive.
'''
if 0 > int(vlsm) or int(vlsm) > 32:
raise ValueError, "The variable length subnet mask, or " + \
'"prefix length" must be between 0 and 32, inclusive.'
class CIDR(object):
'''
VLSM stands for variable length subnet mask and is the data
after the slash. It is also called the "prefix length"
# Let's make sure our globbing works
>>> CIDR('10.4.1.2')
10.4.1.2
>>> CIDR('10.4.1.x')
10.4.1.0/24
>>> CIDR('10.4.x.2')
10.4.0.0/16
>>> CIDR('10.4.x.x')
10.4.0.0/16
>>> CIDR('10.*.*.*')
10.0.0.0/8
    # Now let's check out the zeros and get some host counts
# while we're at it
#
    # Since there may very well be many circumstances where one
# would have a valid single address ending in one or more
# zeros, I don't think it's a good idea to force this
# behavior. I will comment out for now and may completely
# remove sometime in the future.
#>>> CIDR('10.4.1.0')
#10.4.1.0/24
#>>> c = CIDR('10.4.0.0')
#>>> c
#10.4.0.0/16
#>>> c.getHostCount()
#65536
#>>> c = CIDR('10.0.0.0')
#>>> c
#10.0.0.0/8
#>>> c.getHostCount()
#16777216
#>>> CIDR('0.0.0.0')
#0.0.0.0/0
#>>> CIDR('10.0.0.2')
#10.0.0.2/32
# How about manual CIDR entries?
>>> c = CIDR('172.16.4.28/31')
>>> c
172.16.4.28/31
>>> c.getHostCount()
2
>>> c.getOctetTuple()
(172, 16, 4, 28)
>>> c = CIDR('172.16.4.28/27')
>>> c
172.16.4.28/27
>>> c.getHostCount()
32
>>> c = CIDR('172.16.4.28/15')
>>> c
172.16.4.28/15
>>> c.getHostCount()
131072
# What about some silly errors:
>>> c = CIDR('10.100.2.4/12/11')
Traceback (most recent call last):
ValueError: There appear to be too many '/' in your network notation.
'''
def __init__(self, cidr_string):
net = cidr_string.split('/')
vlsm = 32
if len(net) == 2:
net, vlsm = net
vlsm = int(vlsm)
validateVLSM(vlsm)
elif len(net) > 2:
raise ValueError, "There appear to be too many '/' in " + \
"your network notation."
else:
net = net[0]
self.vlsm = vlsm
self.net = net
self.octets = net.split('.')
#if vlsm != 32:
# if not self.zeroCheck():
self.globCheck()
self.raw = None
def __repr__(self):
if self.vlsm == 32:
return self.net
return "%s/%s" % (self.net, self.vlsm)
def getOctetTuple(self):
return tuple([ int(x) for x in self.octets ])
def globCheck(self):
        # check to see if any octet is '*', 'x'
net_globs = ['*', 'x', 'X']
check = False
for char in net_globs:
if char in self.net:
check = True
if not check:
return False
glob_index = None
# Now let's look at the octets in reverse order, starting
# at the "back", since the glob closest to the "front" will
# determine how big the network really is
for index in range(len(self.octets)-1,-1,-1):
if self.octets[index] in net_globs:
glob_index = index
# Let's zero out the netblock portion
for index in range(glob_index, 4):
self.octets[index] = '0'
# Now we need to get the CIDR for the glob
if not (glob_index is None and self.vlsm is None):
self.vlsm = (glob_index) * 8
# Let's rebuild our new net address:
self.net = '.'.join(self.octets)
def zeroCheck(self):
if not self.octets[3] == '0':
return False
zeros = [1,1,1,0]
if self.octets[2] == '0':
zeros[2] = 0
if self.octets[1] == '0':
zeros[1] = 0
if self.octets[0] == '0':
zeros[0] = 0
# Set the netmask by the first zero found
self.vlsm = zeros.index(0)*8
return True
def getHostCount(self):
return 2**(32 - self.vlsm)
def _getOctetRange(self, position, chunk_size):
divide_by = 256 / chunk_size
for i in xrange(divide_by):
start = i*chunk_size
end = i*chunk_size+chunk_size-1
if start <= position <= end:
return (start, end)
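    # Editor's example (added, not in the original): with chunk_size=32 the
    # ranges walked are (0, 31), (32, 63), ..., so _getOctetRange(27, 32)
    # returns (0, 31), the 32-address block that contains 27.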
def getHostRange(self):
'''
This is a lazy way of doing binary subnet math ;-)
The first thing we do is make two copies of the CIDR octets
stored in self.octets. We have one copy each for the address
representing the first host in the range and then the last
host in the range.
Next, we check to see what octet we will be dealing with
by looking at the netmask (self.vlsm). Then we get the list
index for that octet and calculate the octet number from
this.
The next bit is a little strange and really deserves a
description: chunk_size. This really means "how many times
is the current octect divided up?" We use that number and
the CIDR value for the octet in question to determine the
netblock range.
# Let's try the first octet
>>> CIDR('172.16.4.28/31').getHostRange()
(172.16.4.28, 172.16.4.29)
>>> CIDR('172.16.4.27/31').getHostRange()
(172.16.4.26, 172.16.4.27)
>>> CIDR('172.16.4.28/30').getHostRange()
(172.16.4.28, 172.16.4.31)
>>> CIDR('172.16.4.27/30').getHostRange()
(172.16.4.24, 172.16.4.27)
>>> CIDR('172.16.4.28/29').getHostRange()
(172.16.4.24, 172.16.4.31)
>>> CIDR('172.16.4.31/29').getHostRange()
(172.16.4.24, 172.16.4.31)
>>> CIDR('172.16.4.32/29').getHostRange()
(172.16.4.32, 172.16.4.39)
>>> CIDR('172.16.4.27/28').getHostRange()
(172.16.4.16, 172.16.4.31)
>>> CIDR('172.16.4.27/27').getHostRange()
(172.16.4.0, 172.16.4.31)
>>> CIDR('172.16.4.27/26').getHostRange()
(172.16.4.0, 172.16.4.63)
>>> CIDR('172.16.4.27/25').getHostRange()
(172.16.4.0, 172.16.4.127)
>>> CIDR('172.16.4.27/24').getHostRange()
(172.16.4.0, 172.16.4.255)
# Let's work on the next octet
>>> CIDR('172.16.4.27/23').getHostRange()
(172.16.4.0, 172.16.5.255)
>>> CIDR('172.16.4.27/22').getHostRange()
(172.16.4.0, 172.16.7.255)
>>> CIDR('172.16.4.27/21').getHostRange()
(172.16.0.0, 172.16.7.255)
>>> CIDR('172.16.4.27/20').getHostRange()
(172.16.0.0, 172.16.15.255)
>>> CIDR('172.16.4.27/19').getHostRange()
(172.16.0.0, 172.16.31.255)
>>> CIDR('172.16.4.27/18').getHostRange()
(172.16.0.0, 172.16.63.255)
>>> CIDR('172.16.4.27/17').getHostRange()
(172.16.0.0, 172.16.127.255)
>>> CIDR('172.16.4.27/16').getHostRange()
(172.16.0.0, 172.16.255.255)
# Now the next octet
>>> CIDR('172.16.4.27/15').getHostRange()
(172.16.0.0, 172.17.255.255)
>>> CIDR('172.16.4.27/14').getHostRange()
(172.16.0.0, 172.19.255.255)
>>> CIDR('172.16.4.27/13').getHostRange()
(172.16.0.0, 172.23.255.255)
>>> CIDR('172.16.4.27/12').getHostRange()
(172.16.0.0, 172.31.255.255)
>>> CIDR('172.16.4.27/11').getHostRange()
(172.0.0.0, 172.31.255.255)
>>> CIDR('172.16.4.27/10').getHostRange()
(172.0.0.0, 172.63.255.255)
>>> CIDR('172.16.4.27/9').getHostRange()
(172.0.0.0, 172.127.255.255)
>>> CIDR('172.16.4.27/8').getHostRange()
(172.0.0.0, 172.255.255.255)
# Now the final octet
>>> CIDR('172.16.4.27/7').getHostRange()
(172.0.0.0, 173.255.255.255)
>>> CIDR('172.16.4.27/6').getHostRange()
(172.0.0.0, 175.255.255.255)
>>> CIDR('172.16.4.27/5').getHostRange()
(168.0.0.0, 175.255.255.255)
>>> CIDR('172.16.4.27/4').getHostRange()
(160.0.0.0, 175.255.255.255)
>>> CIDR('172.16.4.27/3').getHostRange()
(160.0.0.0, 191.255.255.255)
>>> CIDR('172.16.4.27/2').getHostRange()
(128.0.0.0, 191.255.255.255)
>>> CIDR('172.16.4.27/1').getHostRange()
(128.0.0.0, 255.255.255.255)
>>> CIDR('172.16.4.27/0').getHostRange()
(0.0.0.0, 255.255.255.255)
'''
# Setup the starting and ending octets
so = [ int(x) for x in self.octets ]
eo = [ int(x) for x in self.octets ]
if self.vlsm >= 24:
sidx = 3
elif self.vlsm >= 16:
so[3] = 0
eo[3] = 255
sidx = 2
elif self.vlsm >= 8:
so[2] = so[3] = 0
eo[2] = eo[3] = 255
sidx = 1
elif self.vlsm < 8:
so[1] = so[2] = so[3] = 0
eo[1] = eo[2] = eo[3] = 255
sidx = 0
octet_number = 4 - (sidx + 1)
chunk_size = 2**(32-self.vlsm) / (256**octet_number)
start, end = self._getOctetRange(so[sidx], chunk_size)
so[sidx] = start
eo[sidx] = end
# Convert the octects back to strings
so = [ str(x) for x in so ]
eo = [ str(x) for x in eo ]
return (CIDR('.'.join(so)), CIDR('.'.join(eo)))
class Networks(list):
'''
>>> net_cidr = CIDR('192.168.4.0/24')
>>> corp_cidr = CIDR('10.5.0.0/16')
>>> vpn_cidr = CIDR('172.16.9.5/27')
>>> mynets = Networks([net_cidr, corp_cidr, vpn_cidr])
>>> home_router = CIDR('192.168.4.1')
>>> laptop1 = CIDR('192.168.4.100')
>>> webserver = CIDR('10.5.10.10')
>>> laptop2 = CIDR('172.16.9.17')
>>> google = CIDR('64.233.187.99')
>>> home_router in mynets
True
>>> laptop1 in mynets
True
>>> webserver in mynets
True
>>> laptop2 in mynets
True
>>> google in mynets
False
'''
def __contains__(self, cidr_obj):
for network in self:
if self.isInRange(cidr_obj, cidr_netblock=network):
return True
return False
def isInRange(self, cidr_obj, cidr_tuple=(), cidr_netblock=None):
'''
        This might normally be a private method, but since it will
        probably be generally useful, we'll make it "public."
>>> net_cidr = CIDR('192.168.4.0/24')
>>> mynet = Networks([net_cidr])
>>> router = CIDR('192.168.4.1')
>>> fileserver = CIDR('192.168.4.10')
>>> laptop = CIDR('192.168.4.100')
>>> mynet.isInRange(router, (fileserver, laptop))
False
>>> mynet.isInRange(fileserver, (router, laptop))
True
>>> mynet.isInRange(fileserver, cidr_tuple=(router, laptop))
True
>>> mynet.isInRange(router, cidr_netblock=net_cidr)
True
>>> mynet.isInRange(router)
Traceback (most recent call last):
ValueError: You must provide either a tuple of CIDR objects or a CIDR object that represents a netblock.
'''
if not (cidr_tuple or cidr_netblock):
raise ValueError, 'You must provide either a tuple of ' + \
'CIDR objects or a CIDR object that represents a ' + \
'netblock.'
if cidr_tuple:
start, end = cidr_tuple
else:
start, end = cidr_netblock.getHostRange()
if (start.getOctetTuple() < cidr_obj.getOctetTuple()
< end.getOctetTuple() ):
return True
return False
def _test():
import doctest, network
return doctest.testmod(network)
if __name__ == '__main__':
    _test()

# ---- end of file: lib/netcidr/network.py (package: Adytum-NetCIDR) ----
import sys
DEFAULT_VERSION = "0.6a8"
DEFAULT_URL = "http://cheeseshop.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
'setuptools-0.5a13-py2.3.egg': '85edcf0ef39bab66e130d3f38f578c86',
'setuptools-0.5a13-py2.4.egg': 'ede4be600e3890e06d4ee5e0148e092a',
'setuptools-0.6a1-py2.3.egg': 'ee819a13b924d9696b0d6ca6d1c5833d',
'setuptools-0.6a1-py2.4.egg': '8256b5f1cd9e348ea6877b5ddd56257d',
'setuptools-0.6a2-py2.3.egg': 'b98da449da411267c37a738f0ab625ba',
'setuptools-0.6a2-py2.4.egg': 'be5b88bc30aed63fdefd2683be135c3b',
'setuptools-0.6a3-py2.3.egg': 'ee0e325de78f23aab79d33106dc2a8c8',
'setuptools-0.6a3-py2.4.egg': 'd95453d525a456d6c23e7a5eea89a063',
'setuptools-0.6a4-py2.3.egg': 'e958cbed4623bbf47dd1f268b99d7784',
'setuptools-0.6a4-py2.4.egg': '7f33c3ac2ef1296f0ab4fac1de4767d8',
'setuptools-0.6a5-py2.3.egg': '748408389c49bcd2d84f6ae0b01695b1',
'setuptools-0.6a5-py2.4.egg': '999bacde623f4284bfb3ea77941d2627',
'setuptools-0.6a6-py2.3.egg': '7858139f06ed0600b0d9383f36aca24c',
'setuptools-0.6a6-py2.4.egg': 'c10d20d29acebce0dc76219dc578d058',
'setuptools-0.6a7-py2.3.egg': 'cfc4125ddb95c07f9500adc5d6abef6f',
'setuptools-0.6a7-py2.4.egg': 'c6d62dab4461f71aed943caea89e6f20',
'setuptools-0.6a8-py2.3.egg': '2f18eaaa3f544f5543ead4a68f3b2e1a',
'setuptools-0.6a8-py2.4.egg': '799018f2894f14c9f8bcb2b34e69b391',
}
import sys, os
def _validate_md5(egg_name, data):
if egg_name in md5_data:
from md5 import md5
digest = md5(data).hexdigest()
if digest != md5_data[egg_name]:
print >>sys.stderr, (
"md5 validation of %s failed! (Possible download problem?)"
% egg_name
)
sys.exit(2)
return data
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
):
"""Automatically find/download setuptools and make it available on sys.path
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end with
a '/'). `to_dir` is the directory where setuptools will be downloaded, if
it is not already available. If `download_delay` is specified, it should
be the number of seconds that will be paused before initiating a download,
should one be required. If an older version of setuptools is installed,
this routine will print a message to ``sys.stderr`` and raise SystemExit in
an attempt to abort the calling script.
"""
try:
import setuptools
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
"You have an obsolete version of setuptools installed. Please\n"
"remove it from your system entirely before rerunning this script."
)
sys.exit(2)
except ImportError:
egg = download_setuptools(version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
import setuptools; setuptools.bootstrap_install_from = egg
import pkg_resources
try:
pkg_resources.require("setuptools>="+version)
except pkg_resources.VersionConflict:
# XXX could we install in a subprocess here?
print >>sys.stderr, (
"The required version of setuptools (>=%s) is not available, and\n"
"can't be installed while this script is running. Please install\n"
" a more recent version first."
) % version
sys.exit(2)
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
delay = 15
):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
import urllib2, shutil
egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
if not os.path.exists(saveto): # Avoid repeated downloads
try:
from distutils import log
if delay:
log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
---------------------------------------------------------------------------""",
version, download_base, delay
); from time import sleep; sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
dst = open(saveto,"wb"); dst.write(data)
finally:
if src: src.close()
if dst: dst.close()
return os.path.realpath(saveto)
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
import setuptools
except ImportError:
import tempfile, shutil
tmpdir = tempfile.mkdtemp(prefix="easy_install-")
try:
egg = download_setuptools(version, to_dir=tmpdir, delay=0)
sys.path.insert(0,egg)
from setuptools.command.easy_install import main
main(list(argv)+[egg])
finally:
shutil.rmtree(tmpdir)
else:
if setuptools.__version__ == '0.0.1':
# tell the user to uninstall obsolete version
use_setuptools(version)
req = "setuptools>="+version
import pkg_resources
try:
pkg_resources.require(req)
except pkg_resources.VersionConflict:
try:
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
main(list(argv)+[download_setuptools(delay=0)])
sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
print "Setuptools version",version,"or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
"""Update our built-in md5 registry"""
import re
from md5 import md5
for name in filenames:
base = os.path.basename(name)
f = open(name,'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
data = [" %r: %r,\n" % it for it in md5_data.items()]
data.sort()
repl = "".join(data)
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
f = open(srcfile, 'rb'); src = f.read(); f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
print >>sys.stderr, "Internal error!"
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
f = open(srcfile,'w')
f.write(src)
f.close()
if __name__=='__main__':
if len(sys.argv)>2 and sys.argv[1]=='--md5update':
update_md5(sys.argv[2:])
else:
        main(sys.argv[1:])

# ---- end of file: ez_setup.py (package: Adytum-PyMonitor) ----
class FeatureBroker:
'''
'''
def __init__(self, allowReplace=False):
self.providers = {}
self.allowReplace = allowReplace
def provide(self, feature, provider, *args, **kwargs):
if not self.allowReplace:
assert not self.providers.has_key(feature), "Duplicate feature: %r" % feature
if callable(provider):
def call(): return provider(*args, **kwargs)
else:
def call(): return provider
self.providers[feature] = call
def __getitem__(self, feature):
try:
provider = self.providers[feature]
except KeyError:
raise KeyError, "Unknown feature named %r" % feature
return provider()
features = FeatureBroker()
'''
Representation of Required Features and Feature Assertions
The following are some basic assertions to test the suitability of
injected features:
'''
def noAssertion(obj):
return True
def isInstanceOf(*classes):
def test(obj): return isinstance(obj, classes)
return test
def hasAttributes(*attributes):
def test(obj):
for each in attributes:
if not hasattr(obj, each): return False
return True
return test
def hasMethods(*methods):
def test(obj):
for each in methods:
try:
attr = getattr(obj, each)
except AttributeError:
return False
return callable(attr)
return True
return test
class RequiredFeature(object):
'''
An attribute descriptor to "declare" required features
'''
def __init__(self, feature, assertion=noAssertion):
self.feature = feature
self.assertion = assertion
def __get__(self, obj, T):
return self.result # <-- will request the feature upon first call
def __getattr__(self, name):
        assert name == 'result', "Unexpected attribute request other than 'result'"
self.result = self.request()
return self.result
def request(self):
obj = features[self.feature]
assert self.assertion(obj), \
"The value %r of %r does not match the specified criteria" \
% (obj, self.feature)
return obj
class Component(object):
'''
Abstract/symbolic base class for components
# Some python module defines a Bar component and states the dependencies
# We will assume that:
# - Console denotes an object with a method writeLine(string)
# - AppTitle denotes a string that represents the current application name
# - CurrentUser denotes a string that represents the current user name
>>> class Bar(Component):
... con = RequiredFeature('Console', hasMethods('writeLine'))
... title = RequiredFeature('AppTitle', isInstanceOf(str))
... user = RequiredFeature('CurrentUser', isInstanceOf(str))
... def __init__(self):
... self.X = 0
... def printYourself(self):
... self.con.writeLine('-- Bar instance --')
... self.con.writeLine('Title: %s' % self.title)
... self.con.writeLine('User: %s' % self.user)
... self.con.writeLine('X: %d' % self.X)
# Some other python module defines a basic Console component
>>> class SimpleConsole:
... def writeLine(self, s):
... print s
# Yet another python module defines a better Console component
>>> class BetterConsole:
... def __init__(self, prefix=''):
... self.prefix = prefix
... def writeLine(self, s):
... # do it better
... print s
# Some third python module knows how to discover the current user's name
>>> def getCurrentUser():
... return 'Some User'
# Finally, the main python script specifies the application name,
# decides which components/values to use for what feature,
# and creates an instance of Bar to work with
>>> features.provide('AppTitle', 'Inversion of Control ... The Python Way')
>>> features.provide('CurrentUser', getCurrentUser)
>>> features.provide('Console', SimpleConsole)
# features.provide('Console', BetterConsole, prefix='-->') # <-- transient lifestyle
# features.provide('Console', BetterConsole(prefix='-->')) # <-- singleton lifestyle
>>> bar = Bar()
>>> bar.printYourself()
-- Bar instance --
Title: Inversion of Control ... The Python Way
User: Some User
X: 0
# Evidently, none of the used components needed to know about each other
    # => Loose coupling goal achieved
'''
pass
def _test():
import doctest, components
doctest.testmod(components)
if __name__ == '__main__':
    _test()

# ---- end of file: lib/components.py (package: Adytum-PyMonitor) ----
from random import random
from math import ceil
def decision(choice_count, reverse=False):
'''
    Simply put, this is a function that randomly decides whether something
    is True or False, given the number of choices in 'choice_count'.
'''
choice = bool(int(ceil(random()*choice_count)) % choice_count)
if reverse: return not choice
return choice
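# Editor's note (added, not in the original): decision(n) returns False only
# when ceil(random() * n) == n, i.e. with probability 1/n, so it answers
# True roughly (n - 1)/n of the time -- decision(2) is a fair coin, while
# decision(4) comes up True about 3 times in 4.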
class Walk(object):
'''
'''
def __init__(self, start=0, length=100, step=1, mpos=1, mneg=1, mres=1):
self.setStartElement(start)
self.setWalkLength(length)
self.setStepLength(step)
self.setMultipliers(mpos, mneg, mres)
def setStartElement(self, element):
self.start = element
def setWalkLength(self, length):
self.length = length
def setStepLength(self, step):
self.step = step
def setPositiveMultiplier(self, pos):
self.positive_multiplier = pos
def setNegativeMultiplier(self, neg):
self.negative_multiplier = neg
def setResultMultiplier(self, res):
self.result_multiplier = res
def setMultipliers(self, pos, neg, res):
self.setPositiveMultiplier(pos)
self.setNegativeMultiplier(neg)
self.setResultMultiplier(res)
def nextStep(self, last_point, last_choice):
new_point = last_point
        # let's do a 1 in 2 chance of choosing the same 'direction'
# of the walk as last time
if decision(2):
new_point += self.step
return (new_point, last_choice)
# let's do another 1 in 2 chance, but this time to choose
# which 'direction' to go in
choice = decision(2)
if choice:
# positive direction
new_point += self.step * random() * self.positive_multiplier
else:
# negative direction
new_point -= self.step * random() * self.negative_multiplier
return (new_point, choice)
def getWalk(self):
old = self.start
choice = decision(2)
for i in range(int(self.start), int(self.length)):
new, choice = self.nextStep(old, choice)
old = new
yield new * self.result_multiplier
def getWalkAsList(self):
        return list(self.getWalk())

# ---- end of file: lib/math/randomwalk.py (package: Adytum-PyMonitor) ----
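# Editor's note: a short usage sketch for Walk (not part of the original
# file).  The output is random, so the values shown are illustrative only:
#
#     walk = Walk(start=0, length=10, step=1, mpos=1, mneg=1, mres=1)
#     points = walk.getWalkAsList()       # ten floats, e.g. [1, 1.42, ...]
#     for point in Walk(length=5).getWalk():
#         print point                     # consume lazily (Python 2 print)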
NOTATION10 = '0123456789'
NOTATION36 = '0123456789abcdefghijklmnopqrstuvwxyz'
NOTATION65 = '-.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz'
NOTATION68 = '$,-.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~'
NOTATION69 = '%+-.0123456789=ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~'
NOTATION70 = "!'()*-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~"
NOTATION90 = "!'#$%&()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[]^_abcdefghijklmnopqrstuvwxyz{|}"
legal_bases = [10, 36, 65, 68, 69, 70, 90]
class BaseConvert(object):
'''
"Base" class for base conversions ;-)
'''
__metaclass__ = type
base = None
notation = None
notation_map = None
def _convert(self, n):
'''
Private function for doing conversions; returns a list
'''
        if True not in [ isinstance(n, x) for x in [long, int, float] ]:
            raise TypeError, 'parameters must be numbers'
converted = []
quotient, remainder = divmod(n, self.base)
converted.append(remainder)
if quotient != 0:
converted.extend(self._convert(quotient))
return converted
"""
def convert(self, n):
'''
General conversion function
'''
nums = self._convert(n)
nums.reverse()
return self.getNotation(nums)
"""
def convert(self, n, tobase=None, frombase=10):
try:
n = int(n)
except:
raise "The first parameter of 'convert' needs to be an integer!"
if not tobase:
tobase = self.base
'''
nums = [ self.notation_map[x] for x in str(n) ]
nums.reverse()
total = 0
for number in nums:
total += 1 * number
number *= frombase
if total == 0:
return '0'
converted = []
while total:
total, remainder = divmod(total, tobase)
converted.append(self.notation[remainder])
'''
converted = [ self.notation[x] for x in self._convert(n) ]
converted.reverse()
return ''.join(converted)
def getNotation(self, list_of_remainders):
'''
Get the notational representation of the converted number
'''
return ''.join([ self.notation[x] for x in list_of_remainders ])
doc_template = '''
>>> b = Base%s()
>>> zero = b.convert(0)
>>> ten = b.convert(10)
>>> hund = b.convert(100)
>>> thou = b.convert(1000)
>>> mil = b.convert(1000000)
>>> bil = b.convert(100000000)
>>> goog = b.convert(10**10)
>>> print (zero, ten, hund, thou, mil, bil, goog)
>>> zero = b.convert(zero, newbase=10, oldbase=b.base)
>>> ten = b.convert(ten, newbase=10, oldbase=b.base)
>>> hund = b.convert(hund, newbase=10, oldbase=b.base)
>>> thou = b.convert(thou, newbase=10, oldbase=b.base)
>>> mil = b.convert(mil, newbase=10, oldbase=b.base)
>>> bil = b.convert(bil, newbase=10, oldbase=b.base)
>>> goog = b.convert(goog, newbase=10, oldbase=b.base)
>>> print (zero, ten, hund, thou, mil, bil, goog)
'''
# build classes for whatever notations exist
import new
for base in [ str(x) for x in legal_bases ]:
base_klass = globals()['BaseConvert']
klass_name = 'Base'+base
notation = eval('NOTATION'+base)
notation_map = dict([ (y, x) for x, y in enumerate(notation) ])
#klass = type(klass_name, (base_klass,), {})
# {'__metaclass__':type(base_klass())})
klass = new.classobj(klass_name, (base_klass,), {'__doc__':doc_template%base})
klass.base = int(base)
klass.notation = notation
klass.notation_map = notation_map
globals()[klass_name] = klass
def _test():
import doctest, base
doctest.testmod(base)
if __name__ == '__main__':
    _test()

# ---- end of file: lib/math/base.py (package: Adytum-PyMonitor) ----
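# Editor's note: a usage sketch for the classes generated above (not part
# of the original file).  Base10, Base36, Base65, Base68, Base69, Base70,
# and Base90 are built by the class-construction loop; convert() renders a
# base-10 integer in the target notation:
#
#     >>> Base36().convert(255)
#     '73'                  # since 7 * 36 + 3 == 255
#     >>> Base10().convert(1000)
#     '1000'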
import itertools
class Sequence(object):
'''
    Actually, this is really what's called the 'Look and Say'
    sequence. Conway's Sequence has a starting digit of '3'. The
integer sequence beginning with a single digit in which the
next term is obtained by describing the previous term. Starting
with 1, the sequence would be defined by "1, one 1, two 1s, one
2 one 1," etc., and the result is 1, 11, 21, 1211, 111221, ....
Similarly, starting the sequence instead with the digit d for
gives d, 1d, 111d, 311d, 13211d, 111312211d, 31131122211d,
1321132132211d, ...,
>>> seq = Sequence()
>>> seq.next()
'1'
>>> seq.next()
'11'
>>> seq.next()
'21'
>>> seq.next()
'1211'
>>> seq.next()
'111221'
>>> seq.next()
'312211'
>>> seq.getNElements(10)
['1', '11', '21', '1211', '111221', '312211', '13112221', '1113213211', '31131211131221', '13211311123113112211']
>>> seq.slice(2,4)
['21', '1211']
>>> seq = Sequence(start='1')
>>> seq.getNthElement(2)
'11'
>>> seq.getNthElement(4)
'1211'
>>> seq.getNthElement(10)
'13211311123113112211'
>>> len(seq.getNthElement(31))
5808
>>> seq = Sequence(start='3')
>>> seq.getNthElement(2)
'13'
>>> seq.getNthElement(4)
'3113'
>>> seq.getNthElement(10)
'31131122211311123113322113'
>>> seq.getNElements(10)
['3', '13', '1113', '3113', '132113', '1113122113', '311311222113', '13211321322113', '1113122113121113222113', '31131122211311123113322113']
'''
def __init__(self, start='1'):
self.start = start
self.num = start
self._results = self._generator()
def _get_current(self, position=0):
count = 1
current_part = self.num[position]
try:
while(self.num[position + count] == current_part):
count += 1
except IndexError:
# we've reached the end of the number
pass
conway_part = str(count) + str(current_part)
next_part_start = position + count
return (next_part_start, conway_part)
def _next(self):
return_val = self.num
accum = ''
next = 0
while(next < len(self.num)):
next, this_part = self._get_current(position=next)
accum = accum + this_part
self.num = accum
return return_val
def _generator(self):
while(self.num):
yield self._next()
def getNElements(self, n_last, n_first=0):
'''
For getting element slices, we really want a
new instance each time. Otherwise we'll be
getting results we don't expect.
'''
s = Sequence(self.start)
return list(itertools.islice(s._results, n_first, n_last))
def getNthElement(self, n):
return self.getNElements(n, n-1)[0]
def next(self):
try:
return self._results.next()
except StopIteration:
pass
def slice(self, start, stop):
return self.getNElements(stop, start)
def _test():
import doctest, conway
doctest.testmod(conway)
if __name__ == '__main__':
    _test()

# ---- end of file: lib/math/conway.py (package: Adytum-PyMonitor) ----
def msum(iterable):
"Full precision summation using multiple floats for intermediate values"
partials = [] # sorted, non-overlapping partial sums
for x in iterable:
newpartials = []
for y in partials:
if abs(x) < abs(y):
x, y = y, x
hi = x + y
lo = y - (hi - x)
if lo:
newpartials.append(lo)
x = hi
partials = newpartials + [x]
return sum(partials)
from math import frexp, ldexp
from decimal import getcontext, Decimal, Inexact
getcontext().traps[Inexact] = True
def dsum(iterable):
"Full precision summation using Decimal objects for intermediate values"
total = Decimal(0)
for x in iterable:
# transform (exactly) a float to m * 2 ** e where m and e are integers
mant, exp = frexp(x)
mant, exp = int(mant * 2 ** 53), exp-53
# Convert (mant, exp) to a Decimal and add to the cumulative sum.
# If the precision is too small for exact conversion and addition,
# then retry with a larger precision.
while 1:
try:
newtotal = total + mant * Decimal(2) ** exp
except Inexact:
getcontext().prec += 1
else:
total = newtotal
break
return float(total)
def lsum(iterable):
"Full precision summation using long integers for intermediate values"
tmant, texp = 0, 0
for x in iterable:
# transform (exactly) a float to m * 2 ** e where m and e are integers
mant, exp = frexp(x)
mant, exp = int(mant * 2 ** 53), exp-53
# adjust (tmant,texp) and (mant,exp) to a common exponent
if texp < exp:
mant <<= exp - texp
exp = texp
elif texp > exp:
tmant <<= texp - exp
texp = exp
# now with a common exponent, the mantissas can be summed directly
tmant += mant
return ldexp(float(str(tmant)), texp)
from random import random, normalvariate, shuffle
def test(nvals):
for j in xrange(1000):
vals = [7, 1e100, -7, -1e100, -9e-20, 8e-20] * 10
vals.extend([random() - 0.49995 for i in xrange(nvals)])
vals.extend([normalvariate(0, 1)**7 for i in xrange(nvals)])
s = sum(vals)
for i in xrange(nvals):
v = normalvariate(-s, random())
s += v
vals.append(v)
shuffle(vals)
assert msum(vals) == dsum(vals) == lsum(vals)
print '.',
print 'Tests Passed'
if __name__ == '__main__':
    test(50)

# ---- end of file: lib/math/precision.py (package: Adytum-PyMonitor) ----
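# Editor's note: a demonstration sketch (not in the original source) of why
# full-precision summation matters.  Plain sum() loses the 1.0 when it is
# absorbed by 1e100, while the three routines above recover it:
#
#     >>> vals = [1e100, 1.0, -1e100]
#     >>> sum(vals)
#     0.0
#     >>> msum(vals), lsum(vals), dsum(vals)
#     (1.0, 1.0, 1.0)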
class MultiProjectIndex(object):
'''
This class should be used inside the trac mod_python
handler 'ModPythonHandler.py', specifically, in the
'send_project_index()' function.
For example, this file:
/usr/local/lib/python2.3/site-packages/trac/ModPythonHandler.py
It should be used in the following manner:
def send_project_index(req, mpr, dir):
from adytum.plugins.trac import MultiProjectIndex
text = MultiProjectIndex()
text.setTitle('Available Projects')
text.setBaseURL(mpr.idx_location)
text.setDirs(dir)
req.content_type = text.getContentType()
req.write(text.render())
'''
def __init__(self, content_type='text/html'):
self.content_type = content_type
self.location = ''
self.dirs = ''
self.title = ''
self.header_file = ''
self.footer_file = ''
self.base_url = ''
def setTitle(self, title):
self.title = title
def setLocation(self, location):
self.location = location
def setBaseURL(self, url):
self.base_url = url
def setDirs(self, dir):
import os
dirs = [ (x.lower(), x) for x in os.listdir(dir) ]
dirs.sort()
self.dirs = [ x[1] for x in dirs ]
self.base_dir = dir
default_header = 'header.tmpl'
default_footer = 'footer.tmpl'
self.header_file = '%s/%s' % (dir, default_header)
self.footer_file = '%s/%s' % (dir, default_footer)
def getContentType(self):
return self.content_type
def getHeader(self):
'''
'''
text = file(self.header_file, 'r').read()
return text.replace('%%TITLE%%', self.title)
def getProjectListing(self):
'''
We do some strange, custom stuff here in order to get the display
the way we want it.
* Internal projects are marked with zi__ProjectName
* Customer/Partner projects are marked with zp__ProjectName
'''
import os
from ConfigParser import ConfigParser
cfg = ConfigParser()
cat_heading = '<h3>%s</h3>' # takes the project grouping as a parameter
heading = '<ul>'
link = '<li><a href="%s">%s</a> - %s</li>' # takes the mpr.idx_location/location as the URL
# and project as the display link, then the desc
footing = '</ul>'
listing_data = []
for project in self.dirs:
proj_dir = os.path.join(self.base_dir, project)
#output += '(debug) proj_dir: %s<br/>' % proj_dir
proj_conf = os.path.join(proj_dir, 'conf/trac.ini')
#output += '(debug) proj_conf: %s<br/>' % proj_conf
if os.path.isfile(proj_conf):
cfg.read(proj_conf)
url = '%s/%s' % (self.base_url, project)
hidden_name = cfg.get('project', 'name')
if hidden_name.startswith('zi__'):
cat = 'Internal'
name = hidden_name.replace('zi__', '')
elif hidden_name.startswith('zp__'):
cat = 'Customers/Partners'
name = hidden_name.replace('zp__', '')
elif hidden_name.startswith('zd__'):
cat = 'Deprecated'
name = hidden_name.replace('zd__', '')
elif hidden_name.startswith('zz__'):
cat = 'Miscellany'
name = hidden_name.replace('zz__', '')
else:
cat = 'Open Source'
name = hidden_name
descr = cfg.get('project', 'descr')
listing_data.append((hidden_name, cat, name, url, descr))
listing_data.sort()
output = heading
cat_last = None
for hidden_name, cat, name, url, descr in listing_data:
if cat != cat_last:
output += cat_heading % cat
output += link % (url, name, descr)
cat_last = cat
output += footing
return output
def getFooter(self):
'''
'''
return file(self.footer_file, 'r').read()
def render(self):
'''
'''
output = self.getHeader()
output += self.getProjectListing()
output += self.getFooter()
        return output

# ---- end of file: lib/plugins/trac.py (package: Adytum-PyMonitor) ----
from ConfigParser import SafeConfigParser as ConfigParser
class SVNConfigParser(object):
'''
The set* methods are for writing config files.
The get* methods are for reading config files.
'''
def __init__(self):
self.main_section = 'main'
self.comments_section = 'status'
self.parser = ConfigParser()
self.repos_section = self.main_section
self.repos_label = 'repository'
self.rev_section = self.main_section
self.rev_label = 'revision number'
self.auth_section = self.main_section
self.auth_label = 'user'
self.log_section = self.comments_section
self.log_label = 'commit log'
self.changed_section = self.comments_section
self.changed_label = 'changed files'
def read(self, fromFile):
self.parser.readfp(open(fromFile))
def getRepository(self):
return self.parser.get(self.repos_section, self.repos_label)
def getRevision(self):
return self.parser.get(self.rev_section, self.rev_label)
def getAuthor(self):
return self.parser.get(self.auth_section, self.auth_label)
def getLog(self):
return self.parser.get(self.log_section, self.log_label)
def getChanged(self):
return self.parser.get(self.changed_section, self.changed_label)
def setRepository(self, repos):
self.repos = repos
def setRevision(self, rev):
self.rev = rev
def setAuthor(self, author):
self.auth = author
def setLog(self, log):
self.log = log
def setChanged(self, changed):
self.changed = changed
def write(self, toFile):
self.parser.add_section(self.main_section)
self.parser.add_section(self.comments_section)
if self.repos:
self.parser.set(self.repos_section, self.repos_label, self.repos)
if self.rev:
self.parser.set(self.rev_section, self.rev_label, self.rev)
if self.auth:
self.parser.set(self.auth_section, self.auth_label, self.auth)
if self.log:
self.parser.set(self.log_section, self.log_label, self.log)
if self.changed:
self.parser.set(self.changed_section, self.changed_label, self.changed)
self.parser.write(open(toFile, 'w+'))
class Commit2Text(object):
'''
'''
def __init__(self, parserObject):
pass
class Commit2IRCMessage(object):
'''
>>> import convert
>>> filename = '/tmp/svncommits/1111539570.67'
>>> c = convert.Commit2IRCMessage(filename)
>>> c.render()
'''
def __init__(self, fromFile):
commit = SVNConfigParser()
commit.read(fromFile)
self.commit = commit
def render(self):
template = 'Repository: %s ** Revision: %s ** Author: %s ** Commit Comments: %s'
return template % (self.commit.getRepository(),
self.commit.getRevision(), self.commit.getAuthor(),
            self.commit.getLog())

# ---- end of file: lib/plugins/svn/convert.py (package: Adytum-PyMonitor) ----
__revision__ = "$Id: Subversion.py"
import plugins
import os
import re
import time
import getopt
import urllib2
import urlparse
import conf
import utils
import ircmsgs
import webutils
import ircutils
import privmsgs
import registry
import callbacks
import copy
import sets
from adytum.os.nodecheck import CheckDir, NotificationServer
from adytum.os.nodecheck import NOTIFY_CHANGE_CONTENTS
def configure(advanced):
from questions import output, expect, anything, something, yn
conf.registerPlugin('Subversion', True)
if yn("""This plugin offers a monitor that will listen to a particular
directory for file additions and then parse new files for
information pertaining to subversion commits, posting the
parsed information to the channel.
Would you like this monitor to be enabled?""", default=False):
conf.supybot.plugins.Subversion.monitor.setValue(True)
conf.registerPlugin('Subversion')
conf.registerChannelValue(conf.supybot.plugins.Subversion, 'monitor',
registry.Boolean(False, """Determines whether the
svn monitor is enabled. This monitor will watch for new files in a
directory, and parse them for subversion commit information."""))
conf.registerChannelValue(conf.supybot.plugins.Subversion.monitor, 'directory',
registry.String('/tmp/svncommits', """The directory the monitor will listen to
for file additions."""))
conf.registerChannelValue(conf.supybot.plugins.Subversion.monitor, 'interval',
registry.PositiveInteger(30, """The number of seconds the monitor will wait
between checks for newly added files to the monitored directory."""))
class Subversion(callbacks.PrivmsgCommandAndRegexp):
def __init__(self):
self.channel = list(conf.supybot.channels())[0]
self.nextMsgs = {}
callbacks.PrivmsgCommandAndRegexp.__init__(self)
timeout = conf.supybot.plugins.Subversion.monitor.interval.value
#self.log.info("Got '%s' for timeout" % timeout)
directory = conf.supybot.plugins.Subversion.monitor.directory.value
#self.log.info("Got '%s' for directory" % directory)
#self.log.info('callbacks.world.ircs[0]: %s' % str(dir(callbacks.world.ircs[0])))
#self.log.info('conf.supybot.channels: %s' % str(dir(conf.supybot.channels)))
#self.log.info('conf.supybot.channels: %s' % str(conf.supybot.channels.__dict__))
if not os.path.exists(directory):
os.makedirs(directory)
flags = [NOTIFY_CHANGE_CONTENTS]
check = CheckDir(directory)
checks = [check]
self.ns = NotificationServer(checks, flags, self.nsCallback, timeout=timeout)
#while True:
# self.ns.run()
#self.nsCallback('oldstuff', 'newstuff')
def nsCallback(self, olddata, newdata):
old = sets.Set(olddata['contents'])
new = sets.Set(newdata['contents'])
dir_action = ''
# check which files have been deleted
if len(old) > len(new):
files = list(old.difference(new))
dir_action = 'removed'
# check which files have been added
if len(old) < len(new):
files = list(new.difference(old))
dir_action = 'added'
#msg = 'Test message from Subversion plugin: {olddata:%s}, {newdata:%s}' % (olddata, newdata)
if dir_action:
msg = 'The following files were %s: %s' % (dir_action, str(files))
self.log.info(msg)
msg = ircmsgs.notice(self.channel, msg)
callbacks.world.ircs[0].queueMsg(msg)
#while True:
# self.ns.run()
#self.nsCall
def die(self):
callbacks.PrivmsgCommandAndRegexp.die(self)
Class = Subversion
# vim:set shiftwidth=4 tabstop=8 expandtab textwidth=78:

# ---- end of file: lib/plugins/supybot/Subversion.py (package: Adytum-PyMonitor) ----
import os
import time
import binascii
try:
import cPickle as pickle
except:
import pickle
from datetime import datetime
import feedparser
from supybot.src import ircmsgs
feeds = [
'http://dirtsimple.org/rss.xml',
'http://www.livejournal.com/users/glyf/data/rss',
'http://www.advogato.org/person/oubiwann/rss.xml',
'http://plone.org/rss/RSSTopic',
'http://www.zope.org/news.rss',
'http://www.pythonware.com/daily/rss.xml',
'http://projects.adytum.us/tracs/PyMonitor/timeline?daysback=90&max=50&format=rss&ticket=on&wiki=on',
'http://projects.adytum.us/tracs/adytum/timeline?daysback=90&max=50&format=rss&ticket=on&wiki=on',
]
class RSSFeeds(object):
def __init__(self, conf, irc, log, channel, schedule):
self.conf = conf
self.directory = self.conf.monitor.directory()
self.channel = channel
self.irc = irc
self.log = log
self.schedule = schedule
def getFeeds(self):
self.log.info('Getting RSS feeds...')
feeds_data = []
# loop through RSS feeds
for url in feeds:
no_dates = False
self.log.info('Getting RSS feed for %s...' % url)
# get hex repr of url
filename = binascii.hexlify(url)
filename = os.path.join(self.directory, filename)
# get the feed
d = feedparser.parse(url)
try:
if 'pythonware' in d.feed.link:
# python daily has no dates, so use their entry id instead
no_dates = True
mod = d.entries[0].id.encode('ascii', 'ignore').split('#')[-1]
else:
# get timestamp of the feed
try:
date = list(d.feed.modified_parsed)[:-2]
except:
# hmmm... that didn't work.
date = list(d.entries[0].modified_parsed)[:-2]
# Must be live journal
#from mx import DateTime
#date = d.feed.lastbuilddate.encode('ascii', 'ignore')
#date = DateTime.DateTimeFrom(date)
#date = [int(x) for x in list(date.tuple())[:-2]]
mod = datetime(*date)
except Exception, e:
self.log.error('Problem during date check: %s' % e)
no_dates = True
mod = datetime(1900,1,1)
# load pickled date data
try:
date_file = open(filename)
last_mod = pickle.load(date_file)
date_file.close()
except:
last_mod = datetime(1900,1,1)
# if they are different, get the top entry from
# from the feed
if last_mod != mod:
try:
if not no_dates:
date = list(d.entries[0].modified_parsed)[:-2]
date = datetime(*date).strftime("%d %b, %I:%M%p")
else:
date = ''
data = {
'site': d.feed.title.encode('ascii', 'ignore'),
'title': d.entries[0].title.encode('ascii', 'ignore'),
'link': d.entries[0].link.encode('ascii', 'ignore'),
'date': date,
}
feeds_data.append(data)
except Exception, e:
self.log.error('Problem during data acquisition: %s' % e)
# save the new modified date
date_file = open(filename, 'w+')
pickle.dump(mod, date_file)
date_file.close()
self.log.info('Processed RSS feeds.')
#msg = 'Test message from RSSFeeds plugin: {olddata:%s}, {newdata:%s}' % (olddata, newdata)
for entry in feeds_data:
msg = "New Blog/News Feed: %(site)s - %(title)s" % entry
if entry['date']:
msg += " - %(date)s" % entry
self.log.info(msg)
msg = ircmsgs.notice(self.channel, msg)
self.irc.queueMsg(msg)
msg = "%(link)s" % entry
self.log.info(msg)
msg = ircmsgs.notice(self.channel, msg)
self.irc.queueMsg(msg)
if not feeds_data:
self.log.info('No new RSS feeds.')
nextrun = time.time() + self.conf.monitor.interval()
        self.schedule.addEvent(self.getFeeds, nextrun)
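
# A minimal sketch of the change-detection pattern getFeeds() relies on; the
# helper name is hypothetical and the function is illustrative only: remember
# the last-seen modification stamp per feed in a pickle file named after the
# hexlified URL, and report a feed only when the stamp changes.
def _stampChanged(directory, url, stamp):
    filename = os.path.join(directory, binascii.hexlify(url))
    try:
        old = pickle.load(open(filename))
    except (IOError, EOFError):
        old = None
    date_file = open(filename, 'w+')
    pickle.dump(stamp, date_file)
    date_file.close()
    return old != stamp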
# lib/config/xml.py
import os
import re
import cElementTree as ElementTree
from base import DictConfig
def getTagName(element):
try:
return re.split('[{}]', element.tag)[2]
except IndexError:
return element.tag
stripNameSpace = getTagName
def getNameSpace(element):
try:
return re.split('[{}]', element.tag)[1]
except IndexError:
return None
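
# A quick illustration of the tag format both helpers rely on: ElementTree
# spells a namespaced tag as '{uri}name', so splitting on the braces gives
#
#     >>> re.split('[{}]', '{http://ns.adytum.us}host')
#     ['', 'http://ns.adytum.us', 'host']
#
# index 1 is the namespace and index 2 is the bare tag name.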
class XmlBaseConfig(object):
'''
'''
def __init__(self, element, keep_namespace=True):
self.keep_namespace = keep_namespace
self._namespace = getNameSpace(element)
self.tag = ''
self.fulltag = ''
def getTagName(self, element):
if self.keep_namespace:
return element.tag
else:
return getTagName(element)
def getNameSpaceTag(self, tagname):
if self._namespace:
return '{%s}%s' % (self._namespace, tagname)
else:
return tagname
def makeSubElement(self, parent_element):
# create the subelement where the list will be stored
subtag_name = '%sList'%self.tag
if self.keep_namespace:
subtag = self.getNameSpaceTag(subtag_name)
else:
subtag = subtag_name
self.subtag = subtag
return parent_element.makeelement(self.subtag, {})
def treatListType(self, parent_element):
# create a new/empty subelement where the list will be stored
subelement = self.makeSubElement(parent_element)
        #print parent_element.tag
        #print parent_element.getchildren()
# iterate through all elements with the name that is
# currently stored/set in self.fulltag
for element in parent_element.findall(self.fulltag):
subelement.append(element)
if element:
#self.check(element)
pass
elif element.text:
text = element.text.strip()
                if text:
                    if isinstance(self, list):
                        self.append(text)
                    else:
                        self.update({self.tag: text})
#parent_element.append(subelement)
aList = XmlListConfig(subelement, keep_namespace=self.keep_namespace)
if isinstance(self, list):
self.append(aList)
else:
self.update({self.subtag:aList})
# the next is for backwards compatibility
self.update({self.tag:aList})
self.check(subelement)
def treatDictType(self, parent_element):
if parent_element.items():
attrs = dict(parent_element.items())
self.update(attrs)
element = parent_element.find(self.fulltag)
if element != None:
text = ''
data = None
if element.text:
#print "has text..."
text = element.text.strip()
if text:
if isinstance(self, list):
#self.append(
pass
else:
self.update({self.tag: text})
if not text:
aDict = XmlDictConfig(element, keep_namespace=self.keep_namespace)
if element.items():
#print "adding attributes..."
aDict.update(dict(element.items()))
data = aDict
if element:
self.check(element)
self.update({self.tag: data})
def check(self, parent_element):
# get counts of all the elements at this level
counts = {}
for element in parent_element:
tag = self.getTagName(element)
count = counts.setdefault(tag,0) + 1
counts.update({tag:count})
# anything with a count more than 1 is treated as a list,
# everything else is a treated as a dict
for tag, count in counts.items():
# set tag names for current iteration
self.tag = tag
self.fulltag = self.getNameSpaceTag(tag)
if count > 1:
# treat like list
print "treating like a list..."
print tag
self.treatListType(parent_element)
else:
# treat like dict
print "treating like a dict..."
print tag
self.treatDictType(parent_element)
class XmlListConfig(XmlBaseConfig, list):
'''
'''
def __init__(self, parent_element, keep_namespace=True):
list.__init__(self)
XmlBaseConfig.__init__(self, parent_element, keep_namespace=keep_namespace)
self.check(parent_element)
def __call__(self, **kw):
pass
class XmlDictConfig(XmlBaseConfig, dict):
'''
'''
def __init__(self, parent_element, keep_namespace=True):
dict.__init__(self)
XmlBaseConfig.__init__(self, parent_element, keep_namespace=keep_namespace)
if parent_element.items():
self.update(dict(parent_element.items()))
self.check(parent_element)
class XmlConfig(DictConfig):
'''
# simple test
>>> xmlstring01 = """<config>
... <sub>This is a test</sub>
... </config>"""
>>> cfg = XmlConfig(xmlstring01)
>>> cfg.sub
'This is a test'
# namespace test
>>> xmlstring01a = """<xml xmlns="http://ns.adytum.us">
... <text>Yeay!</text>
... <parent><child><grandchild><ggchild>More Yeay!!!</ggchild></grandchild></child></parent>
... </xml>"""
>>> cfg = XmlConfig(xmlstring01a, keep_namespace=False)
>>> cfg._namespace
'http://ns.adytum.us'
>>> stripNameSpace(cfg._root)
'xml'
>>> cfg.text
'Yeay!'
>>> cfg.parent.child.grandchild.ggchild
'More Yeay!!!'
# attributes test
>>> xmlstring02 = """<config>
... <sub1 type="title" />
...
... <sub2 date="now">
... <subsub />
... </sub2>
...
... <sub3 quality="high">
... <subsub />
... <subsub />
... </sub3>
...
... <sub4 type="title4">
... <subsub1 type="title41">
... <subsub2 type="title42">
... <subsub3 />
... <subsub3 />
... </subsub2>
... </subsub1>
... </sub4>
...
... <sub5 type="title5">
... <subsub1 type="title51">
... <subsub2 type="title52">
... <subsub3 />
... <subsub3 />
... </subsub2>
... </subsub1>
... <subsub4 type="title54">
... <subsub5 type="title55">
... <subsub6>something</subsub6>
... <subsub6>another</subsub6>
... </subsub5>
... </subsub4>
... </sub5>
... </config>"""
>>> cfg = XmlConfig(xmlstring02)
>>> cfg.sub1.type
'title'
>>> cfg.sub2.date
'now'
>>> cfg.sub3.quality
'high'
>>> cfg.sub4.type
'title4'
>>> cfg.sub4.subsub1.type
'title41'
>>> cfg.sub4.subsub1.subsub2.type
'title42'
>>> cfg.sub5.type
'title5'
>>> cfg.sub5.subsub1.type
'title51'
>>> cfg.sub5.subsub1.subsub2.type
'title52'
>>> cfg.sub5.subsub4.type
'title54'
>>> cfg.sub5.subsub4.subsub5.type
'title55'
>>> cfg.sub5.subsub4.subsub5.subsub6
['something', 'another']
>>> cfg.sub5.subsub4.subsub5.subsub6List
['something', 'another']
# list test
>>> xmlstring03 = """<config>
... <sub>
... <subsub>Test 1</subsub>
... <subsub>Test 2</subsub>
... <subsub>Test 3</subsub>
... </sub>
... </config>"""
>>> cfg = XmlConfig(xmlstring03)
>>> for i in cfg.sub.subsub:
... i
'Test 1'
'Test 2'
'Test 3'
# dict test
>>> xmlstring04 = """<config>
... <sub1>Test 1</sub1>
... <sub2>Test 2</sub2>
... <sub3>Test 3</sub3>
... </config>"""
>>> cfg = XmlConfig(xmlstring04)
>>> cfg.sub1
'Test 1'
>>> cfg.sub2
'Test 2'
>>> cfg.sub3
'Test 3'
# deeper structure
>>> xmlstring05 = """<config>
... <a>Test 1</a>
... <b>Test 2</b>
... <c>
... <ca>Test 3</ca>
... <cb>Test 4</cb>
... <cc>Test 5</cc>
... <cd>
... <cda>Test 6</cda>
... <cdb>Test 7</cdb>
... <cdc>
... <cdca>Test 8</cdca>
... </cdc>
... </cd>
... </c>
... </config>"""
>>> cfg = XmlConfig(xmlstring05)
>>> cfg.a
'Test 1'
>>> cfg.b
'Test 2'
>>> cfg.c.ca
'Test 3'
>>> cfg.c.cb
'Test 4'
>>> cfg.c.cc
'Test 5'
>>> cfg.c.cd.cda
'Test 6'
>>> cfg.c.cd.cdb
'Test 7'
>>> cfg.c.cd.cdc.cdca
'Test 8'
# a test that shows possible real-world usage
>>> xmlstring06 = """<config>
... <services>
... <service type="ping">
... <defaults>
... <ping_count>4</ping_count>
... <service_name>connectivity</service_name>
... <message_template>There was a %s%% ping return from host %s</message_template>
... <ping_binary>/bin/ping</ping_binary>
... <command>ping</command>
... <ok_threshold>67,100</ok_threshold>
... <warn_threshold>26,66</warn_threshold>
... <error_threshold>0,25</error_threshold>
... </defaults>
... <hosts>
... <host name="shell1.adytum.us">
... <escalation enabled="false">
... <group level="0">
... <maillist>
... <email>[email protected]</email>
... <email>[email protected]</email>
... </maillist>
... </group>
... <group level="1">
... <maillist></maillist>
... </group>
... </escalation>
... <scheduled_downtime></scheduled_downtime>
... </host>
... <host name="shell2.adytum.us">
... <escalation enabled="true">
... <group level="0">
... <maillist>
... <email>[email protected]</email>
... <email>[email protected]</email>
... </maillist>
... </group>
... <group level="1">
... <maillist></maillist>
... </group>
... </escalation>
... <scheduled_downtime></scheduled_downtime>
... </host>
... </hosts>
... </service>
... <service type="http">
... <defaults/>
... <hosts>
... <host/>
... </hosts>
... </service>
... <service type="dummy">
... </service>
... </services>
... <system>
... <database type="zodb">
... <directory>data/zodb</directory>
... <host>localhost</host>
... <port></port>
... <user></user>
... <password></password>
... </database>
... </system>
... <constants>
... </constants>
... </config>"""
>>> cfg = XmlConfig(xmlstring06)
>>> cfg.services.service
>>> for service in cfg.services.service:
... assert(isinstance(service, DictConfig))
... if service.type == 'ping':
... print service.defaults.ping_count
... for host in service.hosts.host:
... print host.name
... print host.escalation.enabled
... for group in host.escalation.group:
... print group.level
4
shell1.adytum.us
false
0
1
shell2.adytum.us
true
0
1
>>> cfg.system.database.type
'zodb'
>>> cfg.system.database.directory
'data/zodb'
>>> cfg.system.database.host
'localhost'
>>> cfg.system.database.port
None
'''
def __init__(self, xml_source, keep_namespace=True):
# check if it's a file
if os.path.isfile(xml_source):
tree = ElementTree.parse(xml_source)
root = tree.getroot()
# check to see if it's a file object
elif isinstance(xml_source, file):
root = ElementTree.XML(xml_source.read())
# if those don't work, treat it as an XML string
else:
root = ElementTree.XML(xml_source)
self._root = root
xmldict = XmlDictConfig(root, keep_namespace=keep_namespace)
self._namespace = xmldict._namespace
self.processDict(xmldict)
def _test():
import doctest, xml
return doctest.testmod(xml)
if __name__ == '__main__':
    _test()


# lib/config/zconfig.py
from datetime import datetime
from sets import Set
class ValidateCaseInsensitiveList(object):
'''
A class for defining legal list values.
'''
def __init__(self, legal_values):
self.legal_values = legal_values
format = "'"
format += "', '".join(self.legal_values[:-1])
self.formatted_list = "'%s or '%s'" % (format, self.legal_values[-1])
def validate(self, value):
if value.lower() in self.legal_values:
return value
raise ValueError, (
"Value must be one of: %s" % self.formatted_list)
def pymonCheckBy(value):
'''
Validator for the allowed values representing the manner
in which the monitoring checks are made.
'''
legal_values = ['services', 'groups']
validator = ValidateCaseInsensitiveList(legal_values)
return validator.validate(value)
def _int(value):
try:
return int(value)
except ValueError:
raise ValueError, "Value must be coercable to an integer."
def rangedValues(range_string):
'''
    Validator for a human-readable, commonly understood format
    representing ranges of values. At its most basic, the following
is a good example:
25-30
A more complicated range value would be something like:
20-25, 27, 29, 31-33
'''
# XXX need to write a regular expression that really checks for this
values = []
ranges = range_string.split(',')
for value in ranges:
if '-' in value:
start, end = value.split('-')
start = _int(start.strip())
end = _int(end.strip())
values.extend(range(start, end+1))
else:
value = _int(value)
values.append(value)
# get rid of duplicates
values = list(Set(values))
values.sort()
return values
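
# A worked example of the format described above:
#
#     >>> rangedValues('20-25, 27, 29, 31-33')
#     [20, 21, 22, 23, 24, 25, 27, 29, 31, 32, 33]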
def logLevel(log_level):
log_level = log_level.upper()
legal_values = ['FATAL', 'CRITICAL', 'ERROR',
'WARNING', 'INFO', 'DEBUG']
lvls = ', '.join(legal_values)
if log_level in legal_values:
return log_level
else:
        raise ValueError, ('Values for log level '
            'must be one of the following: %s' % lvls)
def _parseDate(yyyymmdd_hhmmss_string):
date, time = yyyymmdd_hhmmss_string.split()
y, m, d = date.strip().split('.')
h, min, s = time.strip().split(':')
date_tuple = [ int(x) for x in (y,m,d,h,min,s) ]
return datetime(*date_tuple)
def getDateRange(range_string):
'''
This string is converted to two datetime.datetime objects in a dict:
{'start': datetime,
'end': datetime}
The range_string itself must be of the form:
YYYY.MM.DD HH:MM:SS - YYYY.MM.DD HH:MM:SS
The range_string is split by "-" and stripped of trailing/leading
spaces.
'''
date_start, date_end = range_string.split('-')
date_start = _parseDate(date_start.strip())
date_end = _parseDate(date_end.strip())
return {
'start': date_start,
'end': date_end,
    }
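
# A worked example of the range_string format:
#
#     >>> rng = getDateRange('2005.01.01 00:00:00 - 2005.12.31 23:59:59')
#     >>> rng['start']
#     datetime.datetime(2005, 1, 1, 0, 0)
#     >>> rng['end']
#     datetime.datetime(2005, 12, 31, 23, 59, 59)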
# lib/config/base.py

class ListConfig(list):
'''
# list check
>>> l = [1, 2, 3, 'a', 'b', 'c']
>>> cfg = ListConfig(l)
>>> cfg
[1, 2, 3, 'a', 'b', 'c']
# list-with-dict check
>>> d = {'test': 'value'}
>>> l = [1, 2, 3, d]
>>> cfg = ListConfig(l)
>>> cfg[3].test
'value'
'''
def __init__(self, aList):
self.processList(aList)
def __call__(self, **kw):
# XXX just supporting one keyword call now
key = kw.keys()[0]
# XXX just supporting dicts now
for i in self:
if isinstance(i, dict) and i.has_key(key) and i.get(key) == kw.get(key):
return i
def processList(self, aList):
for element in aList:
if isinstance(element, dict):
self.append(DictConfig(element))
elif isinstance(element, list):
self.append(ListConfig(element))
else:
self.append(element)
class IterConfig(ListConfig):
'''
The reason for implementing IterConfig is not to provide the
memory-savings associated with iterators and generators, but rather
to benefit from the convenience of the next() method.
# list-with-dict check
>>> d = {'test': 'value'}
>>> l = [1, 2, 3, d]
>>> cfg = IterConfig(l)
>>> cfg.next()
1
>>> cfg.next()
2
>>> cfg.next()
3
>>> cfg.next().test
'value'
'''
def __init__(self, aList):
super(IterConfig, self).processList(aList)
def __iter__(self):
return self
def next(self):
try:
return self.pop(0)
except IndexError:
raise StopIteration
class DictConfig(dict):
'''
# dict check
>>> d = {'test': 'value'}
>>> cfg = DictConfig(d)
>>> cfg.test
'value'
# dict-with-list check
>>> l = [1, 2, 3, 'a', 'b', 'c']
>>> d = {'test': 'value', 'mylist': l}
>>> cfg = DictConfig(d)
>>> cfg.test
'value'
>>> cfg.mylist
[1, 2, 3, 'a', 'b', 'c']
# one more level of recursion
>>> l = [1, 2, 3, d]
>>> d = {'test': 'value', 'list': l}
>>> cfg = DictConfig(d)
>>> cfg.list[3].test
'value'
# and even deeper
>>> d = {'higher': {'lower': {'level': [1, 2, 3, [4, 5, 6, {'key': 'val'}]]}}}
>>> cfg = DictConfig(d)
>>> cfg.higher.lower.level[3][3].key
'val'
# simple demonstration
>>> test_dict = { 'some': 'val', 'more': ['val1', 'val2', 'val3'], 'jeepers': [ {'creep': 'goosebumps', 'ers': 'b-movies'}, ['deeper', ['and deeper', 'and even deeper']]]}
>>> cfg = DictConfig(test_dict)
>>> cfg.some
'val'
>>> for i in cfg.more:
... i
'val1'
'val2'
'val3'
>>> cfg.jeepers[0].creep
'goosebumps'
>>> cfg.jeepers[0].ers
'b-movies'
>>> cfg.jeepers[1][0]
'deeper'
>>> cfg.jeepers[1][1][1]
'and even deeper'
# and now let's explore some recursion
>>> test_dict2 = {'val1': 'a', 'val2': 'b', 'val3': '4c'}
>>> test_list = [1, 2, 3, 'a', 'b', 'c', test_dict2]
>>> test_dict3 = {'level1': test_dict2, 'level2': [test_dict2, test_dict2, test_list]}
>>> test_dict4 = {'higher1': {'lower1': test_dict2, 'lower2': test_list, 'lower3': test_dict3}}
>>> cfg = DictConfig(test_dict4)
>>> cfg.higher1.lower1.val1
'a'
>>> cfg.higher1.lower2[0]
1
>>> cfg.higher1.lower2[4]
'b'
>>> cfg.higher1.lower2[6].val1
'a'
>>> cfg.higher1.lower3.level1.val2
'b'
>>> cfg.higher1.lower3.level2[1].val3
'4c'
>>> cfg.higher1.lower3.level2[2][6].val1
'a'
# a potential real-world example:
>>> email1 = ['[email protected]', '[email protected]', '[email protected]']
>>> email2 = ['[email protected]', '[email protected]']
>>> groups1 = [{'level': 1, 'emails': email1}, {'level': 2, 'emails': email2}]
>>> groups2 = [{'level': 2, 'emails': email1}, {'level': 1, 'emails': email2}]
>>> escalation1 = {'enabled': True, 'groups': groups1}
>>> escalation2 = {'enabled': True, 'groups': groups2}
>>> escalation3 = {'enabled': False}
>>> host1 = {'name': 'host001.myco.com', 'escalation': escalation1}
>>> host2 = {'name': 'host002.myco.com', 'escalation': escalation2}
>>> host3 = {'name': 'host003.myco.com', 'escalation': escalation3}
>>> host4 = {'name': 'host004.myco.com', 'escalation': escalation2}
>>> host5 = {'name': 'host005.myco.com', 'escalation': escalation2}
>>> host6 = {'name': 'host006.myco.com', 'escalation': escalation3}
>>> hosts1 = [host1, host2, host3, host4, host5, host6]
>>> hosts2 = [host2, host5]
>>> hosts3 = [host3, host4, host6]
>>> defaults1 = {'count': 40, 'ok': 10, 'error': 20, 'warn': 15}
>>> defaults2 = {'count': '150'}
>>> service1 = {'type': 'ping', 'name': 'Host Reachable', 'hosts': hosts1, 'defaults': defaults1}
>>> service2 = {'type': 'smtp', 'name': 'Mail Service', 'hosts': hosts2, 'defaults': defaults2}
>>> service3 = {'type': 'http', 'name': 'Web Service', 'hosts': hosts3, 'defaults': defaults2}
>>> services = {'services': [service1, service2, service3]}
>>> cfg = DictConfig(services)
>>> cfg.services[0].type
'ping'
>>> cfg.services[1].name
'Mail Service'
>>> cfg.services[2].defaults.count
'150'
>>> cfg.services[0].hosts[5].name
'host006.myco.com'
>>> cfg.services[0].hosts[5].escalation.enabled
False
>>> cfg.services[0].hosts[4].name
'host005.myco.com'
>>> cfg.services[0].hosts[4].escalation.enabled
True
>>> cfg.services[0].hosts[4].escalation.groups[0].level
2
>>> cfg.services[0].hosts[4].escalation.groups[0].emails[2]
'[email protected]'
>>> cfg.doesntexist
None
>>> cfg.services[0].hosts[4].doesntexists
None
>>> cfg.this.doesnt.exist
None
>>> if cfg.this.doesnt.exist:
... print "no!"
'''
def __init__(self, aDict):
self.processDict(aDict)
def __getattr__(self, attr):
try:
return self.__dict__[attr]
except:
#return ''
return DictConfig({})
def __repr__(self):
rep = self.__dict__
if not rep:
return str(None)
return str(self.__dict__)
def processDict(self, aDict):
for key, val in aDict.items():
if isinstance(val, dict):
subobj = DictConfig(val)
setattr(self, key, subobj)
elif isinstance(val, list):
l = ListConfig(val)
setattr(self, key, l)
else:
setattr(self, key, val)
self.update(aDict)
def _test():
import doctest, base
return doctest.testmod(base)
if __name__ == '__main__':
    _test()


# lib/os/nodecheck.py
import os
import stat
import md5
import time
import copy
# constants
NOTIFY_CHANGE_CONTENTS = 1
NOTIFY_CHANGE_SIZE = 2
NOTIFY_CHANGE_ATIME = 3
NOTIFY_CHANGE_CTIME = 4
NOTIFY_CHANGE_MTIME = 5
NOTIFY_CHANGE_OWNERSHIP = 6
NOTIFY_CHANGE_PERMISSIONS = 7
NOTIFY_CHANGE_ALL = 8
DIR = 'directory'
CHR = 'character'
BLK = 'block'
REG = 'regular file'
FIFO = 'named pipe'
LNK = 'symbolic link'
SOCK = 'socket'
MODE = 0
INONUM = 1
DEV = 2
NLINK = 3
UID = 4
GID = 5
SIZE = 6
ATIME = 7
MTIME = 8
CTIME = 9
class NodeNotFound(Exception):
pass
class DirectoryNotFound(NodeNotFound):
pass
class FileNotFound(NodeNotFound):
pass
class CheckNode(object):
def __init__(self, path=''):
self.path = path
self.setNodeData()
def setNodeData(self):
self.stats = os.stat(self.path)
self.nodetype = self.getType(self.stats)
self.bytes = self.stats[SIZE]
self.atime = self.stats[ATIME]
self.ctime = self.stats[CTIME]
self.mtime = self.stats[MTIME]
self.owner = '%s:%s' % (self.stats[UID], self.stats[GID])
self.mode = stat.S_IMODE(self.stats[MODE])
self.md5sum = self.getMD5Sum()
def getType(self, stats):
mode = stats[MODE]
if stat.S_ISDIR(mode): return DIR
elif stat.S_ISCHR(mode): return CHR
elif stat.S_ISBLK(mode): return BLK
elif stat.S_ISREG(mode): return REG
elif stat.S_ISFIFO(mode): return FIFO
elif stat.S_ISLNK(mode): return LNK
elif stat.S_ISSOCK(mode): return SOCK
def getMD5Sum(self):
m = md5.new(self.sumString())
return m.hexdigest()
def sumString(self):
return '%s%s%s%s%s%s%s' % (self.nodetype, str(self.bytes),
str(self.mtime), self.owner, str(self.mode),
str(self.atime), str(self.ctime))
class CheckDir(CheckNode):
def setNodeData(self, path=''):
'''
        setNodeData() must be overridden because the base class' method
        does not have the data we want, and we need to be able to call
        a method that re-checks the status of the node and recalculates
        the md5 sum.
super setNodeData() has to be called after contents is set due to the
fact that the sumString() method that this class overrides is called
(indirectly) from super setNodeData(), and this sumString() refers to
self.contents.
'''
if not path:
path = self.path
contents = os.listdir(path)
contents.sort()
self.contents = contents
super(CheckDir, self).setNodeData()
def sumString(self):
base = super(CheckDir, self).sumString()
contents = ''.join(self.contents)
return '%s%s' % (base, contents)
class CheckFile(CheckNode):
def setNodeData(self, path=''):
'''
        setNodeData() must be overridden because the base class' method
        does not have the data we want, and we need to be able to call
        a method that re-checks the status of the node and recalculates
        the md5 sum.
super setNodeData() has to be called after contents is set due to the
fact that the sumString() method that this class overrides is called
(indirectly) from super setNodeData(), and this sumString() refers to
self.contents.
In this case, we have no reason to keep track of file contents after
md5 sum calculation, so we dump it immediately.
'''
if not path:
path = self.path
contents = open(path).read()
self.contents = contents
super(CheckFile, self).setNodeData()
self.contents = ''
def sumString(self):
base = super(CheckFile, self).sumString()
return '%s%s' % (base, self.contents)
class NodeData(object):
pass
class NotificationServer(object):
def __init__(self, checks=[], flags=[], callback=None, timeout=2):
'''
checks - a list of CheckNode (or child class) instances
flags - a list of flags used to filter changes
callback - function to execute on changes for which filters have been set
timeout - length of time between checks, in seconds
'''
self.checks = checks
self.flags = flags
self.callback = callback
self.timeout = timeout
def runOnce(self):
'''
* loop through every check that has been added
* check for changes that match the passed flag
* for each change, execute the callback
'''
for check in self.checks:
old_data = copy.copy(check.__dict__)
check.setNodeData()
new_data = check.__dict__
if new_data['md5sum'] != old_data['md5sum']:
self.callback(old_data, new_data)
def runDelay(self):
'''
* loop through every check that has been added
* check for changes that match the passed flag
* for each change, execute the callback
'''
self.runOnce()
time.sleep(self.timeout)
    run = runDelay
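
# A minimal usage sketch (the watched path is illustrative): report content
# changes in a directory through a callback, using the classes defined above.
if __name__ == '__main__':
    def report(olddata, newdata):
        print 'changed: %s -> %s' % (olddata['md5sum'], newdata['md5sum'])
    server = NotificationServer(checks=[CheckDir('/tmp/incoming')],
                                flags=[NOTIFY_CHANGE_CONTENTS],
                                callback=report, timeout=5)
    while True:
        server.run()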
# lib/datetime/timezone.py

from datetime import tzinfo, timedelta, datetime
ZERO = timedelta(0)
HOUR = timedelta(hours=1)
# A UTC class.
class UTC(tzinfo):
"""UTC"""
def utcoffset(self, dt):
return ZERO
def tzname(self, dt):
return "UTC"
def dst(self, dt):
return ZERO
utc = UTC()
# A class building tzinfo objects for fixed-offset time zones.
# Note that FixedOffset(0, "UTC") is a different way to build a
# UTC tzinfo object.
class FixedOffset(tzinfo):
"""Fixed offset in minutes east from UTC."""
def __init__(self, offset, name):
self.__offset = timedelta(minutes = offset)
self.__name = name
def utcoffset(self, dt):
return self.__offset
def tzname(self, dt):
return self.__name
def dst(self, dt):
return ZERO
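
# For example, a fixed UTC-5 zone built with the class above (easy to check
# by hand: -300 minutes is -18000 seconds, i.e. -1 day plus 68400 seconds):
#
#     >>> from datetime import datetime
#     >>> est = FixedOffset(-300, "EST")
#     >>> datetime(2005, 11, 8, 1, 3, tzinfo=est).utcoffset()
#     datetime.timedelta(-1, 68400)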
# A class capturing the platform's idea of local time.
import time as _time
STDOFFSET = timedelta(seconds = -_time.timezone)
if _time.daylight:
DSTOFFSET = timedelta(seconds = -_time.altzone)
else:
DSTOFFSET = STDOFFSET
DSTDIFF = DSTOFFSET - STDOFFSET
class LocalTimezone(tzinfo):
def utcoffset(self, dt):
if self._isdst(dt):
return DSTOFFSET
else:
return STDOFFSET
def dst(self, dt):
if self._isdst(dt):
return DSTDIFF
else:
return ZERO
def tzname(self, dt):
return _time.tzname[self._isdst(dt)]
def _isdst(self, dt):
tt = (dt.year, dt.month, dt.day,
dt.hour, dt.minute, dt.second,
dt.weekday(), 0, -1)
stamp = _time.mktime(tt)
tt = _time.localtime(stamp)
return tt.tm_isdst > 0
Local = LocalTimezone()
# A complete implementation of current DST rules for major US time zones.
def first_sunday_on_or_after(dt):
days_to_go = 6 - dt.weekday()
if days_to_go:
dt += timedelta(days_to_go)
return dt
# In the US, DST starts at 2am (standard time) on the first Sunday in April.
DSTSTART = datetime(1, 4, 1, 2)
# and ends at 2am (DST time; 1am standard time) on the last Sunday of Oct.
# which is the first Sunday on or after Oct 25.
DSTEND = datetime(1, 10, 25, 1)
class USTimeZone(tzinfo):
def __init__(self, hours, reprname, stdname, dstname):
self.stdoffset = timedelta(hours=hours)
self.reprname = reprname
self.stdname = stdname
self.dstname = dstname
def __repr__(self):
return self.reprname
def tzname(self, dt):
if self.dst(dt):
return self.dstname
else:
return self.stdname
def utcoffset(self, dt):
return self.stdoffset + self.dst(dt)
def dst(self, dt):
if dt is None or dt.tzinfo is None:
# An exception may be sensible here, in one or both cases.
# It depends on how you want to treat them. The default
# fromutc() implementation (called by the default astimezone()
# implementation) passes a datetime with dt.tzinfo is self.
return ZERO
assert dt.tzinfo is self
# Find first Sunday in April & the last in October.
start = first_sunday_on_or_after(DSTSTART.replace(year=dt.year))
end = first_sunday_on_or_after(DSTEND.replace(year=dt.year))
# Can't compare naive to aware objects, so strip the timezone from
# dt first.
if start <= dt.replace(tzinfo=None) < end:
return HOUR
else:
return ZERO
Eastern = USTimeZone(-5, "Eastern", "EST", "EDT")
Central = USTimeZone(-6, "Central", "CST", "CDT")
Mountain = USTimeZone(-7, "Mountain", "MST", "MDT")
Pacific  = USTimeZone(-8, "Pacific",  "PST", "PDT")
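
# Quick sanity checks of the DST rules above: July falls inside US daylight
# saving time, January outside it.
#
#     >>> from datetime import datetime
#     >>> datetime(2005, 7, 1, 12, 0, tzinfo=Eastern).tzname()
#     'EDT'
#     >>> datetime(2005, 1, 1, 12, 0, tzinfo=Eastern).tzname()
#     'EST'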
# lib/datetime/timetime.py

from datetime import datetime
from datetime import time
import math
import pytz
DAYS_IN_YEAR = 365.25
X1 = 100
X2 = 30.6001
GREGORIAN_YEAR = 1582
GREGORIAN_MONTH = 10
GREGORIAN_DAY = 15
JD_NUM1_OFFSET = 1720994.5
JULIAN_EPOCH = 2451545.0
CENTURY_DAYS = 36525.0
COEF1 = 6.697374558
COEF2 = 2400.051336
COEF3 = 0.000025862
GMST_A = 280.46061837
GMST_B = 360.98564736629
GMST_C = 0.000387933
GMST_D = 38710000
SOLAR_SIDEREAL_RATIO = 1.002737909350795
GMT = pytz.timezone('GMT')
def cos(deg):
return math.cos(math.radians(deg))
def sin(deg):
return math.sin(math.radians(deg))
def todecimalhours(time_obj):
seconds = time_obj.second/(60.**2)
minutes = time_obj.minute/60.
return time_obj.hour + minutes + seconds
def fromdecimalhours(float_num):
pass
def hoursdayfraction(time_obj):
return todecimalhours(time_obj)/24.
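
# For example, 6:30:00 is 6.5 decimal hours, or roughly 0.27 of a day:
#
#     >>> todecimalhours(time(6, 30))
#     6.5
#     >>> round(hoursdayfraction(time(6, 30)), 4)
#     0.2708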
def check_tz(datetime_obj):
if not datetime_obj.tzinfo:
raise "You must pass a datetime object with a timezone ('see pytz.timzone()')"
def ut(datetime_obj):
check_tz(datetime_obj)
return datetime_obj.astimezone(GMT)
def degreesToTime(degrees):
d = degrees % 360
hour, minutes = divmod((d/360) * 24, 1)
minute, seconds = divmod(minutes*60, 1)
second, micro = divmod(seconds*60, 1)
return time(int(hour), int(minute), int(second))
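
# For example, 180 degrees is half a day:
#
#     >>> degreesToTime(180.0)
#     datetime.time(12, 0)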
def isLeapYear(datetime_obj):
'''
>>> from datetime import datetime
>>> dt = datetime(1972,8,17); isLeapYear(dt)
True
>>> dt = datetime(2000,8,17); isLeapYear(dt)
True
>>> dt = datetime(2004,8,17); isLeapYear(dt)
True
>>> dt = datetime(2005,8,17); isLeapYear(dt)
False
'''
y = datetime_obj.year
if (y % 400 == 0) or (y % 4 == 0 and y % 100 != 0):
return True
else:
return False
def daysInMonth(datetime_obj):
'''
>>> from datetime import datetime
>>> dt = datetime(2000,1,1); daysInMonth(dt)
31
>>> dt = datetime(2000,2,1); daysInMonth(dt)
29
>>> dt = datetime(2001,2,1); daysInMonth(dt)
28
>>> dt = datetime(2005,3,1); daysInMonth(dt)
31
>>> dt = datetime(2005,4,1); daysInMonth(dt)
30
>>> dt = datetime(2005,5,1); daysInMonth(dt)
31
>>> dt = datetime(2005,6,1); daysInMonth(dt)
30
>>> dt = datetime(2005,7,1); daysInMonth(dt)
31
>>> dt = datetime(2005,8,1); daysInMonth(dt)
31
>>> dt = datetime(2005,9,1); daysInMonth(dt)
30
>>> dt = datetime(2005,10,1); daysInMonth(dt)
31
>>> dt = datetime(2005,11,1); daysInMonth(dt)
30
>>> dt = datetime(2005,12,1); daysInMonth(dt)
31
'''
m = datetime_obj.month
y = datetime_obj.year
if m == 2:
if isLeapYear(datetime_obj):
return 29
else:
return 28
elif m in [9,4,6,11]:
return 30
elif m in [1,3,5,7,8,10,12]:
return 31
def dayOfYear(datetime_obj):
'''
>>> from datetime import datetime
>>> dt = datetime(2000,1,1); dayOfYear(dt)
1
>>> dt = datetime(2000,12,31); dayOfYear(dt)
366
>>> dt = datetime(2005,12,31); dayOfYear(dt)
365
>>> dt = datetime(1972,8,17); dayOfYear(dt)
230
>>> dt = datetime(2005,8,17); dayOfYear(dt)
229
'''
y, m, d = datetime_obj.year, datetime_obj.month, datetime_obj.day
if isLeapYear(datetime_obj):
n = int((275*m)/9 - ((m + 9)/12) + int(d) - 30)
else:
n = int((275*m)/9 - 2*((m + 9)/12) + int(d) - 30)
return n
def julianToDateTime(jd):
'''
>>> julianToDateTime(2451544.49999).timetuple()
(1999, 12, 31, 23, 59, 59, 4, 365, 0)
>>> julianToDateTime(2451544.5).timetuple()
(2000, 1, 1, 0, 0, 0, 5, 1, 0)
>>> julianToDateTime(2453682.54411).timetuple()
(2005, 11, 8, 1, 3, 31, 1, 312, 0)
>>> julianToDateTime(2453736.49999).timetuple()
(2005, 12, 31, 23, 59, 59, 5, 365, 0)
>>> julianToDateTime(2453736.5).timetuple()
(2006, 1, 1, 0, 0, 0, 6, 1, 0)
'''
    if jd < 0:
        raise ValueError("Can't handle negative Julian days.")
jd += 0.5
z = int(jd)
f = jd - z
a = z
if z >= 2299161:
        alpha = int((z - 1867216.26)/36524.25)  # 36524.25 days per Gregorian century
a = z + 1 + alpha - int(alpha/4)
b = a + 1524
c = int((b - 122.1)/365.25)
d = int(365.25 * c)
e = int((b - d)/30.6001)
day = b - d - int(30.6001 * e) + f
if e < 13.5:
month = int(e - 1)
else:
month = int(e - 13)
if month > 2.5:
year = int(c - 4716)
else:
year = int(c - 4715)
day, hours = divmod(day, 1)
hour, minutes = divmod(hours * 24, 1)
minute, seconds = divmod(minutes * 60, 1)
second, micros = divmod(seconds * 60, 1)
    micro = int(round(micros * 1000000))  # micros is a fraction of a second
return datetime(int(year), int(month), int(day),
int(hour), int(minute), int(second), int(micro), GMT)
def dayOfWeek(datetime_obj):
'''
>>> from datetime import datetime
>>> dt = datetime(2005,11,6); dayOfWeek(dt)
0
>>> dt = datetime(2005,11,7); dayOfWeek(dt)
1
>>> dt = datetime(2005,11,11); dayOfWeek(dt)
5
>>> dt = datetime(2005,11,12); dayOfWeek(dt)
6
'''
return (datetime_obj.weekday() + 1) % 7
def julian(datetime_obj):
'''
Currently, this produces incorrect julian dates for dates
less than the Gregorian switch-over.
>>> dt = datetime(2299, 12, 31, 23, 59, 59); julian(dt) - 2561117.49999
0.0
>>> dt = datetime(2199, 12, 31, 23, 59, 59); julian(dt) - 2524593.49999
0.0
>>> dt = datetime(2099, 12, 31, 23, 59, 59); julian(dt) - 2488069.49999
0.0
>>> dt = datetime(1999, 12, 31, 23, 59, 59); julian(dt) - 2451544.49999
0.0
>>> dt = datetime(1899, 12, 31, 23, 59, 59); julian(dt) - 2415020.49999
0.0
>>> dt = datetime(1799, 12, 31, 23, 59, 59); julian(dt) - 2378496.49999
0.0
>>> dt = datetime(1699, 12, 31, 23, 59, 59); julian(dt) - 2341972.49999
0.0
>>> dt = datetime(1599, 12, 31, 23, 59, 59); julian(dt) - 2305447.49999
0.0
>>> dt = datetime(1499, 12, 31, 23, 59, 59)
>>> dt = datetime(1399, 12, 31, 23, 59, 59)
>>> dt = datetime(1299, 12, 31, 23, 59, 59)
'''
tz = datetime_obj.tzinfo
if tz and tz != GMT:
datetime_obj = ut(datetime_obj)
y, m, d, h, mn, s, nil, nil, tz = (datetime_obj.timetuple())
d = float(d + h/24. + mn/60. + s/60.**2)
if m < 3:
m += 12
y -= 1
# my day correction to bring it into accordance with the USNO Julian Calculator
d -= 0.9580655555
#julian = d + (153*m - 457)/5 + int(365.25 *y) - int(y * .01 ) + int(y * .0025 ) + 1721118.5
if datetime_obj < datetime(GREGORIAN_YEAR, GREGORIAN_MONTH,
GREGORIAN_DAY):
b = 0
else:
b = int(y / 4) - int(y / 100) + int(y / 400)
if y < 0:
        c = int((365.25 * y) - .75)
else:
c = int(365.25) * y
julian = d + int((153 * m - 457)/5) + b + c + 1721118.5
return float(julian)
def julianToDegrees(jd):
'''
>>>
'''
def meanSiderealTime(datetime_obj):
'''
Returns the Mean Sidereal Time in degrees:
>>> dt = datetime(1994, 06, 16, 18, 0, 0)
>>> degreesToTime(meanSiderealTime(dt))
datetime.time(11, 39, 5)
# vernal equinox in 2006
>>> dt = datetime(2006, 03, 20, 13, 26, 00)
>>> degreesToTime(meanSiderealTime(dt))
datetime.time(11, 17, 23)
# hmmm...
'''
jd = julian(datetime_obj)
d = jd - (julian(datetime(2000,1,1)) + 0.5)
#d = -2024.75000
    t = d/CENTURY_DAYS
    mst = GMST_A + (GMST_B * d) + (GMST_C * t**2) - t**3/GMST_D
return mst % 360
greenwichMeanSiderealTime = meanSiderealTime
GMST = meanSiderealTime
def localSiderealTime(datetime_obj, longitude):
'''
>>> dt = datetime(1994, 06, 16, 18, 0, 0)
>>> longitude = -105.09 # loveland, co
>>> degreesToTime(localSiderealTime(dt, longitude))
datetime.time(4, 38, 43)
'''
gmst = meanSiderealTime(datetime_obj)
return (gmst + longitude) % 360
#localMeanSiderealTime = localSiderealTime
#LMST = localSiderealTime
def equationOfTheEquinoxes(datetime_obj):
jd = julian(datetime_obj)
d = jd - (julian(datetime(2000,1,1)) + 0.5)
c = d/CENTURY_DAYS
Om = (125.04452 - 0.052954 * c) % 360
L = (280.4665 + 0.98565 * c) % 360
epsilon = (23.4393 - 0.0000004 * c) % 360
delta_psi = -0.000319 * sin(Om) - 0.000024 * sin(2*L)
return delta_psi * cos(epsilon)
def apparentSideralTime(datetime_obj):
'''
>>> dt = datetime(1994, 06, 16, 18, 0, 0)
>>> apparentSideralTime(dt)
174.77457329366436
'''
jd = julian(datetime_obj)
d = jd - (julian(datetime(2000,1,1)) + 0.5)
c = d/CENTURY_DAYS
Om = (125.04452 - 1934.136261 * c) % 360
L = (280.4665 + 36000.7698 * c) % 360
L1 = (218.3165 + 481267.8813 * c) % 360
e = (23.4393 - 0.0000004 * c) % 360
dp = -17.2 * sin(Om) - 1.32 * sin(2 * L) - 0.23 * sin(2 * L1) + 0.21 * sin(2 * Om)
de = 9.2 * cos(Om) + 0.57 * cos(2 * L) + 0.1 * cos(2 * L1) - 0.09 * cos(2 * Om)
gmst = meanSiderealTime(datetime_obj)
correction = dp * cos(e) / 3600
return gmst + correction
def localApparentSiderealTime(datetime_obj, longitude):
'''
'''
gast = apparentSideralTime(datetime_obj)
return (gast + longitude) % 360
def currentLAST(longitude):
'''
    Local apparent sidereal time for the present moment. The result
    depends on the system clock, so no fixed doctest output is given;
    typical use is degreesToTime(currentLAST(-105.09)).
'''
return localApparentSiderealTime(datetime.now(), longitude)
def _test():
from doctest import testmod
import timetime
testmod(timetime)
if __name__ == '__main__':
    _test()


# lib/util/moonphase.py
try:
    from ephem import Moon
    from mx.DateTime import DateTime, DateTimeFromJDN
except ImportError:
    raise ImportError('''
You must have pyephem and mxDateTime installed in order to use this module.
PyEphem (ephem): http://www.rhodesmill.org/brandon/projects/pyephem.html
mxDateTime (mx.DateTime): http://www.egenix.com/files/python/eGenix-mx-Extensions.html#Download-mxBASE
''')
PHASE_DAYS = 29.530587
'''
from MoonPhase import MoonPhase
m = MoonPhase('2004/08/01')
m.getStageName()
m.getPhaseDay()
from math import radians
for i in range(31):
m.setDate('2004/08/%s' % str(i+1));p = m.getPhasePercent();degs = 90*p/100;rads = radians(degs)
print i + 1, round(p), m.getStageName(), round(degs)
'''
class MoonPhase:
def __init__(self, date_str):
'''
date_str takes the form 'YYYY/MM/DD'
'''
self.moon = Moon()
self.setDate(date_str)
def getPhasePercent(self):
try:
if self.phase_percent:
return self.phase_percent
except:
self.phase_percent = self.moon.phase
return self.phase_percent
def getPhaseDegrees(self):
self.phase_degrees = self.moon.moon_phase
return self.phase_degrees
def getPhaseName(self):
        # thresholds are illumination percentages; telling waxing from
        # waning is really getStageName()'s job
        pcnt = self.getPhasePercent()
        if pcnt < 2:
            self.phase_name = 'New'
        elif pcnt <= 6:
            self.phase_name = 'Crescent Waxing'
        elif pcnt > 46 and pcnt < 54:
            self.phase_name = 'Half Moon'
        elif pcnt > 99:
            self.phase_name = 'Full'
        else:
            self.phase_name = None
        return self.phase_name
def getAll(self):
return (self.getPhasePercent(), self.getPhaseDegrees(), self.getPhaseName())
def getStage(self):
try:
if self.stage:
return self.stage
except:
pass
start_day = self.start
start = self.getPhasePercent()
day_before = self.getMoonPhaseDate(start_day - 1)
self.setDate(day_before)
day_before = self.getPhasePercent()
day_after = self.getMoonPhaseDate(start_day + 1)
self.setDate(day_after)
day_after = self.getPhasePercent()
#print day_before, start, day_after
if day_after > start and day_before > start:
stage = 0
elif day_after < start and day_before < start:
stage = 1
elif day_after > start and day_before < start:
stage = 0.5
elif day_after < start and day_before > start:
stage = -0.5
# put things back
self.resetMoon(start_day)
self.stage = stage
return self.stage
def getStageName(self):
try:
if self.stage_name:
return self.stage_name
except:
self.getStage()
stages = {
0: 'New',
1: 'Full',
0.5: 'Waxing',
-0.5: 'Waning',
}
self.stage_name = stages[self.stage]
return self.stage_name
def resetMoon(self, dt_object):
start = self.getMoonPhaseDate(dt_object)
self.setDate(start)
def getRecentNew(self):
'''
Returns a string date, but saves a DateTime object to instance
'''
start_day = self.start
        if self.getStageName() == 'New':
return start_day
else:
for day in range(1,31):
check_day = DateTimeFromJDN(start_day.jdn - day)
check_day_str = self.getMoonPhaseDate(check_day)
#print check_day.jdn, check_day_str
self.setDate(check_day_str)
if self.getStageName() == 'New':
self.last_new = check_day
# put things back
self.resetMoon(start_day)
return check_day_str
def getMoonPhaseDate(self, dt_object):
return dt_object.strftime('%Y/%m/%d')
def getDayOfPhase(self):
try:
if self.phase_day:
return self.phase_day
except:
pass
self.getRecentNew()
self.phase_day = int(self.start.jdn - self.last_new.jdn)
return self.phase_day
getPhaseDay = getDayOfPhase
def setDate(self, date_str):
(self.year, self.month, self.day) = [ int(x) for x in date_str.split('/') ]
self.start = DateTime(self.year, self.month, self.day)
self.daybefore = DateTimeFromJDN(self.start.jdn - 1)
self.dayafter = DateTimeFromJDN(self.start.jdn + 1)
self.moon.compute(date_str)
        # drop any cached values so they get recomputed for the new date
        for attr in ('phase_day', 'phase_percent', 'phase_degrees',
                     'stage', 'stage_name'):
            try:
                delattr(self, attr)
            except AttributeError:
                pass


# lib/util/python.py
class Switch(object):
'''
Title: Readable switch construction without lambdas or dictionaries
Submitter: Brian Beck
Last Updated: 2005/04/26
Version no: 1.7
Category: Extending
Description: Python's lack of a 'switch' statement has garnered
much discussion and even a PEP. The most popular substitute uses
dictionaries to map cases to functions, which requires lots of defs
or lambdas. While the approach shown here may be O(n) for cases,
it aims to duplicate C's original 'switch' functionality and structure
with reasonable accuracy.
Source: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/410692
This class provides the functionality we want. You only need to look at
this if you want to know how this works. It only needs to be defined
once, no need to muck around with its internals.
# The following example is pretty much the exact use-case of a dictionary,
# but is included for its simplicity. Note that you can include statements
# in each suite.
>>> v = 'ten'
>>> for case in switch(v):
... if case('one'):
... print 1
... break
... if case('two'):
... print 2
... break
... if case('ten'):
... print 10
... break
... if case('eleven'):
... print 11
... break
... if case(): # default, could also just omit condition or 'if True'
... print "something else!"
... # No need to break here, it'll stop anyway
10
# break is used here to look as much like the real thing as possible, but
# elif is generally just as good and more concise.
# Empty suites are considered syntax errors, so intentional fall-throughs
# should contain 'pass'
>>> c = 'z'
>>> for case in switch(c):
... if case('a'): pass # only necessary if the rest of the suite is empty
... if case('b'): pass
... # ...
... if case('y'): pass
... if case('z'):
... print "c is lowercase!"
... break
... if case('A'): pass
... # ...
... if case('Z'):
... print "c is uppercase!"
... break
... if case(): # default
... print "I dunno what c was!"
c is lowercase!
# As suggested by Pierre Quentel, you can even expand upon the
# functionality of the classic 'case' statement by matching multiple
# cases in a single shot. This greatly benefits operations such as the
# uppercase/lowercase example above:
>>> import string
>>> c = 'A'
>>> for case in switch(c):
... if case(*string.lowercase): # note the * for unpacking as arguments
... print "c is lowercase!"
... break
... if case(*string.uppercase):
... print "c is uppercase!"
... break
... if case('!', '?', '.'): # normal argument passing style also applies
... print "c is a sentence terminator!"
... break
... if case(): # default
... print "I dunno what c was!"
c is uppercase!
# Since Pierre's suggestion is backward-compatible with the original recipe,
# I have made the necessary modification to allow for the above usage.
'''
def __init__(self, value):
self.value = value
self.fall = False
def __iter__(self):
"""Return the match method once, then stop"""
yield self.match
raise StopIteration
def match(self, *args):
"""Indicate whether or not to enter a case suite"""
if self.fall or not args:
return True
elif self.value in args:
self.fall = True
return True
else:
return False
switch = Switch
import operator
from UserDict import UserDict
class Multicast(UserDict):
'''
Class multiplexes messages to registered objects
Title: Multicasting on objects
Submitter: Eduard Hiti (other recipes)
Last Updated: 2001/12/23
Version no: 1.1
Category: OOP
Description: Use the 'Multicast' class to multiplex messages/attribute
requests to objects which share the same interface.
Source: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52289
>>> import StringIO
>>> file1 = StringIO.StringIO()
>>> file2 = StringIO.StringIO()
>>> multicast = Multicast()
>>> multicast[id(file1)] = file1
>>> multicast[id(file2)] = file2
>>> assert not multicast.closed
>>> multicast.write("Testing")
>>> assert file1.getvalue() == file2.getvalue() == "Testing"
>>> multicast.close()
>>> assert multicast.closed
'''
def __init__(self, objs=[]):
UserDict.__init__(self)
for alias, obj in objs: self.data[alias] = obj
def __call__(self, *args, **kwargs):
"Invoke method attributes and return results through another Multicast"
return self.__class__( [ (alias, obj(*args, **kwargs) ) for alias, obj in self.data.items() ] )
def __nonzero__(self):
"A Multicast is logically true if all delegate attributes are logically true"
return operator.truth(reduce(lambda a, b: a and b, self.data.values(), 1))
def __getattr__(self, name):
"Wrap requested attributes for further processing"
return self.__class__( [ (alias, getattr(obj, name) ) for alias, obj in self.data.items() ] )
def __setattr__(self, name, value):
if name == "data":
self.__dict__[name]=value
return
for object in self.values():
setattr(object, name, value)
def _test():
import doctest, python
doctest.testmod(python)
if __name__ == '__main__':
    _test()


# lib/util/dsn.py
def createDsn(**kw):
'''
# check named parameters
>>> createDsn()
Traceback (most recent call last):
AssertionError: You must specify 'dbtype'.
>>> createDsn(dbname='ziff', user='me', password='secret')
Traceback (most recent call last):
AssertionError: You must specify 'dbtype'.
>>> createDsn(dbtype='mysql', user='me', passwd='secret', dbname='ziff')
Traceback (most recent call last):
AssertionError: You must use the parameter name 'password'.
>>> createDsn(dbtype='mysql', user='me', pwd='secret', dbname='ziff')
Traceback (most recent call last):
AssertionError: You must use the parameter name 'password'.
# try different forms
>>> createDsn(dbtype='mysql', dbname='ziff')
'mysql:///ziff'
>>> createDsn(dbtype='mysql', dbname='ziff', user='')
'mysql:///ziff'
>>> createDsn(dbtype='mysql', user='me', dbname='ziff')
'mysql://me/ziff'
>>> createDsn(dbtype='mysql', user='me', password='secret', dbname='ziff')
'mysql://me:secret/ziff'
>>> createDsn(dbtype='mysql', user='me', password='secret', host='', dbname='ziff')
'mysql://me:secret/ziff'
>>> createDsn(dbtype='mysql', user='me', password='secret', host='copwaf01', dbname='ziff')
'mysql://me:secret@copwaf01/ziff'
>>> createDsn(dbtype='mysql', user='', password='', host='copwaf01', dbname='ziff')
'mysql://copwaf01/ziff'
>>> createDsn(dbtype='mysql', user='me', password='secret', host='copwaf01', port='', dbname='ziff')
'mysql://me:secret@copwaf01/ziff'
>>> createDsn(dbtype='mysql', user='me', password='secret', host='copwaf01', port='6669', dbname='ziff')
'mysql://me:secret@copwaf01:6669/ziff'
>>> createDsn(dbtype='custom', dbname='mydb', path='/var/db', filename='mydb.dbz')
'custom:///mydb?path=/var/db;filename=mydb.dbz;'
'''
assert(kw is not None), \
"You must pass named parameters in the function call."
assert(kw.has_key('dbtype')), \
"You must specify 'dbtype'."
assert(True not in [ x in ['passwd', 'pwd']
for x in kw.keys() ]), \
"You must use the parameter name 'password'."
standard_params = ['dbtype', 'dbname', 'user', 'password', 'host', 'port']
used_params = {}
for param in standard_params:
if kw.has_key(param):
val = kw.pop(param)
if bool(val):
used_params[param] = val
non_standard_params = kw
dsn = '%s://' % used_params['dbtype']
if used_params.has_key('user'):
dsn += '%s' % used_params['user']
if used_params.has_key('password'):
dsn += ':%s' % used_params['password']
if used_params.has_key('host'):
host = used_params['host']
if used_params.has_key('user'):
host = '@' + host
dsn += '%s' % host
if used_params.has_key('port'):
dsn += ':%s' % used_params['port']
if used_params.has_key('dbname'):
dsn += '/%s' % used_params['dbname']
if non_standard_params:
dsn += '?'
for option, value in non_standard_params.items():
dsn += '%s=%s;' % (option, value)
return dsn
def parseDsn(dsn):
    # TODO: inverse of createDsn(); a possible approach is sketched below.
    pass
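
# A hypothetical sketch of what parseDsn could look like -- an assumption,
# not original code. It only inverts the common
# 'dbtype://[user[:password]@]host[:port]/dbname' shape; host-less DSNs such
# as 'mysql://me:secret/ziff' are ambiguous and not handled, nor are the
# trailing '?opt=val;' options.
def _parseDsnSketch(dsn):
    import re
    pattern = (r'^(?P<dbtype>[^:]+)://'
               r'(?:(?P<user>[^:@/]+)(?::(?P<password>[^@/]*))?@)?'
               r'(?:(?P<host>[^:/]+)(?::(?P<port>\d+))?)?'
               r'/(?P<dbname>[^?]*)')
    match = re.match(pattern, dsn)
    if match:
        return dict((k, v) for k, v in match.groupdict().items() if v)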
def _test():
import doctest, dsn
doctest.testmod(dsn)
if __name__ == '__main__':
    _test()


# lib/util/striphtml.py
import sgmllib, string
class StrippingParser(sgmllib.SGMLParser):
from htmlentitydefs import entitydefs # replace entitydefs from sgmllib
def __init__(self):
sgmllib.SGMLParser.__init__(self)
self.result = ""
self.endTagList = []
def handle_data(self, data):
if data:
self.result = self.result + data
def handle_charref(self, name):
self.result = "%s&#%s;" % (self.result, name)
def handle_entityref(self, name):
if self.entitydefs.has_key(name):
x = ';'
else:
            # this breaks non-standard entities that end with ';'
x = ''
self.result = "%s&%s%s" % (self.result, name, x)
valid_tags = ('b', 'a', 'i', 'br', 'p')
def unknown_starttag(self, tag, attrs):
""" Delete all tags except for legal ones """
if tag in self.valid_tags:
self.result = self.result + '<' + tag
for k, v in attrs:
if string.lower(k[0:2]) != 'on' and string.lower(v[0:10]) != 'javascript':
self.result = '%s %s="%s"' % (self.result, k, v)
endTag = '</%s>' % tag
self.endTagList.insert(0,endTag)
self.result = self.result + '>'
def unknown_endtag(self, tag):
if tag in self.valid_tags:
self.result = "%s</%s>" % (self.result, tag)
remTag = '</%s>' % tag
self.endTagList.remove(remTag)
def cleanup(self):
""" Append missing closing tags """
for j in range(len(self.endTagList)):
self.result = self.result + self.endTagList[j]
def strip(s):
parser = StrippingParser()
parser.feed(s)
parser.close()
parser.cleanup()
return parser.result
if __name__=='__main__':
text = """<HTML>
<HEAD><TITLE> New Document </TITLE>
<META NAME="Keywords" CONTENT=""></HEAD>
<BODY BGCOLOR="#FFFFFF">
<h1> Hello! </h1>
<a href="index.html" align="left" ONClick="window.open()">index.html</a>
<p>This is some <B><B><B name="my name">this is<I> a simple</I> text</B>
<a><a href="JAVascript:1/0">link</a>.
</BODY></HTML>"""
print text
print "==============================================="
    print strip(text)


# lib/util/uri.py
import re
URI_REGEX = r'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?'
URI_RE = re.compile(URI_REGEX)
# this regex doesn't work for just host or just host+port
AUTH_REGEX = r'^(([^:/?#@]+):)?(([^:/?#]+)@)?([^/?#:]*)(:([0-9]+))?'
AUTH_RE = re.compile(AUTH_REGEX)
class Uri(object):
'''
A convenience class for access parts of a URI as defined by
RFC 2396 (see http://www.faqs.org/rfcs/rfc2396.html).
>>> u = Uri("ping://hostname")
>>> u.getScheme()
'ping'
>>> u.getAuthority().getHost()
'hostname'
>>> u = Uri("http+status://hostname")
>>> u.getScheme()
'http+status'
>>> u.getAuthority().getHost()
'hostname'
>>> u = Uri("remote+process://bind@hostname")
>>> u.getScheme()
'remote+process'
>>> a = u.getAuthority()
>>> a.getUser()
'bind'
>>> a.getHost()
'hostname'
>>> u.getPath()
>>> u.getQuery()
{}
>>> u.getFragment()
>>> u = Uri("http+scrape://hostname/page_name.html?string=Welcome+to+plone")
>>> u.getScheme()
'http+scrape'
>>> a = u.getAuthority()
>>> a.getHost()
'hostname'
>>> u.getPath()
'/page_name.html'
>>> q = u.getQuery()
>>> q.string
'Welcome+to+plone'
>>> u = Uri("hostname/page_name.html?string=Welcome+to+plone")
>>> u.getScheme()
>>> a = u.getAuthority()
>>> a.getHost()
'hostname'
>>> u.getPath()
'/page_name.html'
>>> q = u.getQuery()
>>> q.string
'Welcome+to+plone'
'''
def __init__(self, uri):
self.uri = uri
self.matches = URI_RE.match(uri)
def getScheme(self):
return self.matches.group(2)
def getAuthority(self):
return Authority(self.matches.group(4))
def getPath(self):
path = self.matches.group(5)
if not path:
return None
return self.matches.group(5)
def getQuery(self):
return Query(self.matches.group(7))
def getFragment(self):
return self.matches.group(9)
class Authority(object):
'''
A custom URI Authority section parser.
This section is parsed according to the spec for URLs in
RFC 1738, section 3.1 (http://www.faqs.org/rfcs/rfc1738.html):
<user>:<password>@<host>:<port>/<url-path>
>>> auth = 'oubiwann:[email protected]:8080'
>>> a1 = Authority(auth)
>>> a1.getUser()
'oubiwann'
>>> a1.getPassword()
'secret'
>>> a1.getHost()
'adytum.us'
>>> a1.getPort()
8080
>>> auth = '[email protected]:8080'
>>> a2 = Authority(auth)
>>> a2.getUser()
'oubiwann'
>>> a2.getPassword()
>>> a2.getHost()
'adytum.us'
>>> a2.getPort()
8080
>>> auth = '[email protected]'
>>> a3 = Authority(auth)
>>> a3.getUser()
'oubiwann'
>>> a3.getPassword()
>>> a3.getHost()
'adytum.us'
>>> a3.getPort()
>>> auth = 'adytum.us:8080'
>>> a4 = Authority(auth)
>>> a4.getUser()
>>> a4.getPassword()
>>> a4.getHost()
'adytum.us'
>>> a4.getPort()
8080
'''
def __init__(self, auth):
self.auth = auth
auths = auth.split('@')
if len(auths) == 2:
userpass = auths.pop(0)
userpass = userpass.split(':')
self.user = userpass.pop(0)
try:
self.passwd = userpass.pop(0)
except:
self.passwd = None
else:
self.user = self.passwd = None
hostport = auths[0].split(':')
self.host = hostport.pop(0)
try:
self.port = hostport.pop(0)
except:
self.port = None
def getUser(self):
return self.user
def getPassword(self):
return self.passwd
def getHost(self):
return self.host
def getPort(self):
if self.port:
return int(self.port)
class Query(dict):
'''
A custom URI Query section parser.
>>> q = Query('state=ME&city=Bangor&st=Husson+Ave.&number=1320')
>>> q['state']
'ME'
>>> q['number']
'1320'
>>> q['doesntexist']
Traceback (most recent call last):
KeyError: 'doesntexist'
>>> q.state
'ME'
>>> q.number
'1320'
>>> q.doesntexist
Traceback (most recent call last):
AttributeError: 'Query' object has no attribute 'doesntexist'
>>> q = Query('state=ME')
>>> q.state
'ME'
'''
def __init__(self, query):
self.query = query
if query:
query_dict = dict([x.split('=') for x in query.split('&') ])
self.update(query_dict)
for key, val in query_dict.items():
self.__setattr__(key, val)
def _test():
import doctest, uri
doctest.testmod(uri)
if __name__ == '__main__':
    _test()


# workflow module
class WorkflowError(Exception):
pass
class Workflow(object):
"""
This class is used to describe a workflow (actually it's just a
graph). A workflow has states (one of them is the initial state),
and states have transitions that go to another state.
"""
def __init__(self):
"""
Initialize the workflow.
"""
self.states = {}
self.initstate = None
def addState(self, name, **kw):
"""
Adds a new state.
        The keyword arguments let you attach arbitrary metadata
        to describe the state.
"""
self.states[name] = State(**kw)
def setInitState(self, name):
"""
Sets the default initial state.
"""
if name not in self.states:
raise WorkflowError, "invalid initial state: '%s'" % name
self.initstate = name
def addTrans(self, name, state_from, state_to, **kw):
"""
Adds a new transition, 'state_from' and 'state_to' are
respectively the origin and destination states of the
transition.
        The keyword arguments let you attach arbitrary metadata
        to describe the transition.
"""
if not isinstance(state_from, list):
state_from = [state_from]
if not isinstance(state_to, list):
state_to = [state_to]
for sf in state_from:
for st in state_to:
transition = Transition(sf, st, **kw)
try:
sf = self.states[sf]
except KeyError:
raise WorkflowError, "unregistered state: '%s'" % sf
try:
st = self.states[st]
except KeyError:
raise WorkflowError, "unregistered state: '%s'" % st
sf.addTrans(name, transition)
class State(object):
"""
This class is used to describe a state. A state has transitions
to other states.
"""
def __init__(self, **kw):
"""
Initialize the state.
"""
self.transitions = {}
self.metadata = kw
def __getitem__(self, key):
"""
Access to the metadata as a mapping.
"""
return self.metadata.get(key)
def addTrans(self, name, transition):
"""
Adds a new transition.
"""
self.transitions[name] = transition
class Transition(object):
"""
This class is used to describe transitions. Transitions come from
one state and go to another.
"""
def __init__(self, state_from, state_to, **kw):
"""
Initialize the transition.
"""
self.state_from = state_from
self.state_to = state_to
self.metadata = kw
def __getitem__(self, key):
"""
Access to the metadata as a mapping.
"""
return self.metadata.get(key)
class WorkflowAware(object):
"""
    Mixin class to be used for workflow-aware objects. Instances of a
    class that inherits from WorkflowAware can be "within" the workflow;
    that is, they keep track of the current state of the object.
Specific application semantics for states and transitions can be
implemented as methods of the WorkflowAware-derived "developer
class". These methods get associated with the individual
states and transitions by a simple naming scheme. For example,
if a workflow has two states 'Private' and 'Public', and a
transition 'Publish' that goes from 'Private' to 'Public',
the following happens when the transition is executed:
1. if implemented, the method 'onLeavePrivate' is called
(it is called each time the object leaves the 'Private' state)
2. if implemented, the method 'onTransPublish' is called
(it is called whenever this transition is executed)
3. if implemented, the method 'onEnterPublic' is called
(it is called each time the object enters the 'Public' state)
These state/transition "handlers" can also be passed arguments
from the caller of the transition; for instance, in a web-based
system it might be useful to have the HTTP request that triggered
the current transition available in the handlers.
    A simple stack with three methods, 'pushdata', 'popdata' and 'getdata',
    is implemented. It can be used, for example, to keep a record of the
    states the object has been in.
# Lets test this puppy in action.
# Instantiate and setup workflow states:
>>> wf = Workflow()
>>> wf.addState('Normal',
... description='Normal operation with no alerts')
>>> wf.addState('Warn',
... description='App is in WARN state')
>>> wf.addState('Error',
... description='App is in ERROR state')
>>> wf.addState('Escalate',
... description='App is escalating')
# setup workflow transitions
>>> wf.addTrans('Normaling', ['Normal', 'Escalate', 'Warn', 'Error'], 'Normal',
... description='The app has gone to OK')
>>> wf.addTrans('Warning', ['Normal', 'Warn', 'Error'], 'Warn',
... description='The app has gone from OK to WARN')
>>> wf.addTrans('Erring', ['Normal', 'Warn', 'Error'], 'Error',
... description='The app has gone to state ERROR')
>>> wf.addTrans('Recovering', ['Warn', 'Error', 'Escalate'], 'Normal',
... description='The app has resumed normal operation, but the previous state was either WARN or ERROR')
>>> wf.addTrans('Escalating', ['Warn', 'Error', 'Escalate'], 'Escalate',
... description='The app has received too many counts of a certain kind')
# setup initial state
>>> wf.setInitState('Normal')
# define a workflow-aware class
>>> class AppState(WorkflowAware):
... def __init__(self, workflow=None):
... self.enterWorkflow(workflow, None, "Just Created")
    ...     def onEnterNormal(self, *args):
... print '+ Entering normal state...'
... def onLeaveNormal(self):
... print '- Leaving normal state...'
... def onEnterWarn(self):
... print '+ Entering warning state...'
... def onLeaveWarn(self):
... print '- Leaving warning state...'
... def onEnterError(self):
... print '+ Entering error state...'
... def onLeaveError(self):
... print '- Leaving error state...'
... def onEnterRecover(self):
... print '+ Entering recover state...'
... def onLeaveRecover(self):
... print '- Leaving recover state...'
... def onTransRecovering(self):
... print '* Transitioning in recovering state...'
... def onTransEscalating(self):
... print '* Issue unaddressed: escalating...'
# constants
>>> OK = 'Normal'
>>> WARN = 'Warn'
>>> ERROR = 'Error'
>>> RECOV = 'Recover'
# lookup
>>> def getTestStateName(index):
... if index == -1: return None
... if index == 0: return OK
... if index == 1: return WARN
... if index == 2: return ERROR
# put the workflow though its paces
>>> def processStates(test_pattern):
... app_state = AppState(workflow=wf)
... count = 0
... for i in range(0, len(test_pattern)):
... state = getTestStateName(test_pattern[i])
... if i == 0: last_state = -1
... else: last_state = test_pattern[i-1]
... #print last_state
... last_state = getTestStateName(last_state)
... if state == last_state:
... count += 1
... else:
... count = 0
... #print 'count: %s' % count
... #print 'state: %s' % state
... #print 'last state: %s' % last_state
...
... if state is OK and last_state not in [OK, None]:
... app_state.doTrans('Recovering')
... elif state is not OK and count > 2 and state != 'Escalate':
... app_state.doTrans('Escalating')
... elif state is WARN:
... app_state.doTrans('Warning')
... elif state is ERROR:
... app_state.doTrans('Erring')
>>> test_pattern = [0,1,2]
    >>> processStates(test_pattern)
    + Entering normal state...
- Leaving normal state...
+ Entering warning state...
- Leaving warning state...
+ Entering error state...
>>> test_pattern = [0,2,0,0]
    >>> processStates(test_pattern)
    + Entering normal state...
- Leaving normal state...
+ Entering error state...
- Leaving error state...
* Transitioning in recovering state...
+ Entering normal state...
>>> test_pattern = [2,2,2,2,2]
    >>> processStates(test_pattern)
    + Entering normal state...
- Leaving normal state...
+ Entering error state...
- Leaving error state...
+ Entering error state...
- Leaving error state...
+ Entering error state...
- Leaving error state...
* Issue unaddressed: escalating...
* Issue unaddressed: escalating...
"""
def enterWorkflow(self, workflow=None, initstate=None, *args, **kw):
"""
        [Re-]Bind this object to a specific workflow; if the 'workflow'
        parameter is omitted then the object's associated workflow is kept.
        This makes it possible, for example, to specify the associated
        workflow with a class variable instead of an instance attribute.
The 'initstate' parameter is the workflow state that should be
taken on initially (if omitted or None, the workflow must provide
a default initial state).
Extra arguments args are passed to the enter-state handler (if any)
of the initial state.
"""
# Set the associated workflow
if workflow is not None:
self.workflow = workflow
# Set the initial state
if initstate is None:
initstate = self.workflow.initstate
if not initstate:
raise WorkflowError, 'undefined initial state'
if not self.workflow.states.has_key(initstate):
raise WorkflowError, "invalid initial state: '%s'" % initstate
self.workflow_state = initstate
self.last_workflow_state = None
        # Call app-specific enter-state handler for initial state, if any
        # (using the same 'onEnter<State>' naming scheme as doTrans below)
        name = 'onEnter%s' % initstate
if hasattr(self, name):
getattr(self, name)(*args, **kw)
def doTrans(self, transname, *args, **kw):
"""
Performs a transition, changes the state of the object and
runs any defined state/transition handlers. Extra
arguments are passed down to all handlers called.
"""
# Get the workflow
workflow = self.workflow
# Get the current state
state = workflow.states[self.workflow_state]
try:
# Get the new state name
self.last_workflow_state = state.transitions[transname].state_from
state = state.transitions[transname].state_to
except KeyError:
raise WorkflowError, \
"transition '%s' is invalid from state '%s'" \
% (transname, self.workflow_state)
# call app-specific leave- state handler if any
name = 'onLeave%s' % self.workflow_state
if hasattr(self, name):
getattr(self, name)(*args, **kw)
# Set the new state
self.workflow_state = state
# call app-specific transition handler if any
name = 'onTrans%s' % transname
if hasattr(self, name):
getattr(self, name)(*args, **kw)
# call app-specific enter-state handler if any
name = 'onEnter%s' % state
if hasattr(self, name):
getattr(self, name)(*args, **kw)
def getStatename(self):
"""Return the name of the current state."""
return self.workflow_state
def getLastStatename(self):
return self.last_workflow_state
def getState(self):
"""Returns the current state instance."""
statename = self.getStatename()
return self.workflow.states.get(statename)
# Implements a stack that could be used to keep a record of the
# object way through the workflow.
# A tuple is used instead of a list so it will work nice with
# the ZODB.
workflow_history = ()
def pushdata(self, data):
"""
Adds a new element to the top of the stack.
"""
self.workflow_history = self.workflow_history + (data,)
def popdata(self):
"""
Removes and returns the top stack element.
"""
if len(self.workflow_history) == 0:
return None
data = self.workflow_history[-1]
self.workflow_history = self.workflow_history[:-1]
return data
def getdata(self):
"""
Returns the data from the top element without removing it.
"""
if len(self.workflow_history) == 0:
return None
return self.workflow_history[-1]
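# An illustrative sketch (not part of the original code): a real
# application would typically call pushdata() from its onLeave*
# handlers to record the states an object passes through.  The names
# below are made up for the example.
def _exampleHistory():
    wf = Workflow()
    wf.addState('Off')
    wf.addState('On')
    wf.addTrans('TurnOn', 'Off', 'On')
    wf.addTrans('TurnOff', 'On', 'Off')
    wf.setInitState('Off')
    class Switch(WorkflowAware):
        def __init__(self):
            self.enterWorkflow(wf, None)
        def onLeaveOff(self):
            self.pushdata(self.getStatename())
        def onLeaveOn(self):
            self.pushdata(self.getStatename())
    s = Switch()
    s.doTrans('TurnOn')
    s.doTrans('TurnOff')
    print s.workflow_history   # -> ('Off', 'On')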
def _test():
import doctest, base
doctest.testmod(base)
if __name__ == '__main__':
_test() | Adytum-PyMonitor | /Adytum-PyMonitor-1.0.5.tar.bz2/Adytum-PyMonitor-1.0.5/lib/workflow/base.py | base.py |
import sys
import MySQLdb
DEBUG = 0
DBUSER = 'dotproject'
DBPASS = 'dotproject'
DB = 'dotproject'
SQL_PART_ENDDATE_NOW = 'AND task_log_date <= CURDATE()'
SQL_PART_ENDDATE = "AND task_log_date <= '%s'"
SQL_USERS = '''SELECT user_id, user_username FROM users;'''
SQL_PROJECTS = '''SELECT project_id, project_name, TRIM(SUBSTRING(project_short_name, 1, 6)) FROM projects;'''
SQL_TASK_PROJECTS = '''SELECT task_id, task_project FROM tasks;'''
SQL_ALL_CLIENTS_ALL_PROJECTS = '''SELECT task_log_task, TRIM(SUBSTRING(task_log_name, 1, 12)),
task_log_date, task_log_hours, TRIM(SUBSTRING(task_log_description, 1, 47))
FROM task_log
WHERE (task_log_creator = %s AND task_log_date >= '%s' %s AND task_log_hours > 0)
ORDER BY task_log_date; '''
SQL_ALL_CLIENTS_ALL_PROJECTS_SUM = '''SELECT SUM(task_log_hours) FROM task_log
WHERE (task_log_creator = %s AND task_log_date >= '%s' %s);
'''
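# A quick sketch (not part of the original script) of how the fragments
# above compose: the open-ended date clause is spliced into the main
# query before the user id and begin date are substituted.  The values
# here are made up.
def _exampleQuery():
    enddate_clause = SQL_PART_ENDDATE % '2005-02-01'
    print SQL_ALL_CLIENTS_ALL_PROJECTS % (2, '2005-01-01', enddate_clause)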
class DotProjectReport:
def __init__(self, userid=None, begindate=None, enddate=None, payrate=None):
# initialize basic vars
self.userid = userid
self.begindate = begindate
self.rate = int(payrate)
self.enddate = enddate
self.usermap = {}
self.projectmap = {}
self.taskprojectmap = {}
self.userdroplist = []
self.projectdroplist = []
self.taskprojectdroplist = []
# initialize query vars
self.enddate_query = ''
# setup database connection
self.db = MySQLdb.connect(user=DBUSER, passwd=DBPASS, db=DB)
self.cursor = self.db.cursor()
def setEndDate(self):
if not self.enddate:
self.enddate_query = SQL_PART_ENDDATE_NOW
else:
self.enddate_query = SQL_PART_ENDDATE % self.enddate
def getUserMap(self):
self.cursor.execute(SQL_USERS)
for row in self.cursor.fetchall():
user_id, user_username = row
self.usermap[user_id] = user_username
return self.usermap
def getUserDropList(self):
'''
        This method is useful for dropdowns: it returns a list of
        {value: friendly-name} pairs. (Note that it iterates a dict,
        so the order returned by the SQL query is not preserved.)
'''
if not self.usermap:
self.getUserMap()
for user_id, user_username in self.usermap.items():
self.userdroplist.append({user_id: user_username})
return self.userdroplist
def getProjectMap(self):
self.cursor.execute(SQL_PROJECTS)
for row in self.cursor.fetchall():
id, long_name, short_name = row
self.projectmap[id] = (short_name, long_name)
return self.projectmap
def getProjectDropList(self):
'''
        This method is useful for dropdowns: it returns a list of
        {value: friendly-name} pairs. (Note that it iterates a dict,
        so the order returned by the SQL query is not preserved.)
'''
if not self.projectmap:
self.getProjectMap()
for project_id, name_tuple in self.projectmap.items():
long_name = name_tuple[1]
self.projectdroplist.append({project_id: long_name})
return self.projectdroplist
def getTaskProjectMap(self):
self.cursor.execute(SQL_TASK_PROJECTS)
for row in self.cursor.fetchall():
task_id, proj_id = row
self.taskprojectmap[task_id] = proj_id
return self.taskprojectmap
def getTaskProjectDropList(self):
'''
        This method is useful for dropdowns: it returns a list of
        {value: friendly-name} pairs. (Note that it iterates a dict,
        so the order returned by the SQL query is not preserved.)
'''
if not self.taskprojectmap:
self.getTaskProjectMap()
        for task_id, proj_id in self.taskprojectmap.items():
self.taskprojectdroplist.append({task_id: proj_id})
return self.taskprojectdroplist
def getAllClientsAllProjects(self):
if not self.projectmap:
self.getProjectMap()
if not self.taskprojectmap:
self.getTaskProjectMap()
projs_map = self.projectmap
tasks_map = self.taskprojectmap
distinct_projs = {}
output = "Date \t\tProj \tTask \t\tHours \tDescription\n"
output += "==== \t\t==== \t==== \t\t===== \t===========\n"
sql = SQL_ALL_CLIENTS_ALL_PROJECTS % (self.userid, self.begindate, self.enddate_query)
#print sql
self.cursor.execute(sql)
for row in self.cursor.fetchall():
id, name, date, time, desc = row
proj = projs_map[tasks_map[id]][0]
distinct_projs.setdefault(proj, 0)
distinct_projs[proj] = distinct_projs[proj] + time
date = date.strftime('%Y-%m-%d')
output += "%s \t%s \t%s \t%s \t%s\n" % (date, proj, name, time, desc)
output += "\nProject Summaries:\n"
output += "Proj \tHours \tCost\n"
output += "==== \t===== \t====\n"
for proj, hours in distinct_projs.items():
output += "%s\t%.2f \t%.2f\n" % (proj, hours, hours * self.rate)
self.cursor.execute(SQL_ALL_CLIENTS_ALL_PROJECTS_SUM % (self.userid, self.begindate, self.enddate_query))
total = self.cursor.fetchone()[0] or 0.0
output += '\nTotal hours: %.2f\n' % total
output += 'Total amount: %.2f\n\n' % (total * self.rate)
return output
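# Hedged usage sketch (not part of the original script): how the
# *DropList structures might be consumed, e.g. to render an HTML
# <select>.  Requires a reachable dotproject MySQL database matching
# the credentials above; the parameter values are made up.
def _exampleDropList():
    report = DotProjectReport(userid=1, payrate=0, begindate='2005-01-01')
    for entry in report.getUserDropList():
        for user_id, user_name in entry.items():
            print '<option value="%s">%s</option>' % (user_id, user_name)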
if __name__ == '__main__':
# setup the args
args = sys.argv
#print args
if DEBUG:
print args
userid, payrate, begindate = args[1:4]
    try:
        enddate = args[4]
    except IndexError:
        enddate = ''
    if DEBUG:
        print userid, begindate, enddate
    # with the args setup, now instantiate the report
    report = DotProjectReport(userid=userid, payrate=payrate,
                              begindate=begindate, enddate=enddate)
report.setEndDate()
print report.getAllClientsAllProjects() | Adytum-PyMonitor | /Adytum-PyMonitor-1.0.5.tar.bz2/Adytum-PyMonitor-1.0.5/lib/reports/dotproject.py | dotproject.py |
_version = "1.1"
import os, sys, string, time, getopt, socket, select, re, errno, copy, signal
timeout=5
class WhoisRecord:
defaultserver='whois.networksolutions.com'
whoismap={ 'com' : 'whois.internic.net' , \
'org' : 'whois.internic.net' , \
'net' : 'whois.internic.net' , \
'edu' : 'whois.networksolutions.com' , \
'de' : 'whois.denic.de' , \
'gov' : 'whois.nic.gov' , \
# See http://www.nic.gov/cgi-bin/whois
'mil' : 'whois.nic.mil' , \
# See http://www.nic.mil/cgi-bin/whois
'ca' : 'whois.cdnnet.ca' , \
'uk' : 'whois.nic.uk' , \
'au' : 'whois.aunic.net' , \
'hu' : 'whois.nic.hu' , \
# All the following are unverified/checked.
'be' : 'whois.ripe.net',
'it' : 'whois.ripe.net' , \
# also whois.nic.it
'at' : 'whois.ripe.net' , \
# also www.nic.at, whois.aco.net
'dk' : 'whois.ripe.net' , \
'fo' : 'whois.ripe.net' , \
'lt' : 'whois.ripe.net' , \
'no' : 'whois.ripe.net' , \
'sj' : 'whois.ripe.net' , \
'sk' : 'whois.ripe.net' , \
'tr' : 'whois.ripe.net' , \
# also whois.metu.edu.tr
'il' : 'whois.ripe.net' , \
'bv' : 'whois.ripe.net' , \
'se' : 'whois.nic-se.se' , \
'br' : 'whois.nic.br' , \
# a.k.a. whois.fapesp.br?
'fr' : 'whois.nic.fr' , \
'sg' : 'whois.nic.net.sg' , \
'hm' : 'whois.registry.hm' , \
# see also whois.nic.hm
'nz' : 'domainz.waikato.ac.nz' , \
'nl' : 'whois.domain-registry.nl' , \
# RIPE also handles other countries
# See http://www.ripe.net/info/ncc/rir-areas.html
'ru' : 'whois.ripn.net' , \
'ch' : 'whois.nic.ch' , \
# see http://www.nic.ch/whois_readme.html
'jp' : 'whois.nic.ad.jp' , \
# (use DOM foo.jp/e for english; need to lookup !handles separately)
'to' : 'whois.tonic.to' , \
'nu' : 'whois.nic.nu' , \
'fm' : 'www.dot.fm' , \
# http request http://www.dot.fm/search.html
'am' : 'whois.nic.am' , \
               #'nu' : 'www.nunames.nu' , # duplicate of the 'nu' entry above; needs an http request, not whois
# http request
# e.g. http://www.nunames.nu/cgi-bin/drill.cfm?domainname=nunames.nu
#'cx' : 'whois.nic.cx' , \ # no response from this server
'af' : 'whois.nic.af' , \
'as' : 'whois.nic.as' , \
'li' : 'whois.nic.li' , \
'lk' : 'whois.nic.lk' , \
'mx' : 'whois.nic.mx' , \
'pw' : 'whois.nic.pw' , \
'sh' : 'whois.nic.sh' , \
# consistently resets connection
'tj' : 'whois.nic.tj' , \
'tm' : 'whois.nic.tm' , \
'pt' : 'whois.dns.pt' , \
'kr' : 'whois.nic.or.kr' , \
# see also whois.krnic.net
'kz' : 'whois.nic.or.kr' , \
# see also whois.krnic.net
'al' : 'whois.ripe.net' , \
'az' : 'whois.ripe.net' , \
'ba' : 'whois.ripe.net' , \
'bg' : 'whois.ripe.net' , \
'by' : 'whois.ripe.net' , \
'cy' : 'whois.ripe.net' , \
'cz' : 'whois.ripe.net' , \
'dz' : 'whois.ripe.net' , \
'ee' : 'whois.ripe.net' , \
'eg' : 'whois.ripe.net' , \
'es' : 'whois.ripe.net' , \
'fi' : 'whois.ripe.net' , \
'gr' : 'whois.ripe.net' , \
'hr' : 'whois.ripe.net' , \
'lu' : 'whois.ripe.net' , \
'lv' : 'whois.ripe.net' , \
'ma' : 'whois.ripe.net' , \
'md' : 'whois.ripe.net' , \
'mk' : 'whois.ripe.net' , \
'mt' : 'whois.ripe.net' , \
'pl' : 'whois.ripe.net' , \
'ro' : 'whois.ripe.net' , \
'si' : 'whois.ripe.net' , \
'sm' : 'whois.ripe.net' , \
'su' : 'whois.ripe.net' , \
'tn' : 'whois.ripe.net' , \
'ua' : 'whois.ripe.net' , \
'va' : 'whois.ripe.net' , \
'yu' : 'whois.ripe.net' , \
# unchecked
'ac' : 'whois.nic.ac' , \
'cc' : 'whois.nic.cc' , \
#'cn' : 'whois.cnnic.cn' , \ # connection refused
'gs' : 'whois.adamsnames.tc' , \
'hk' : 'whois.apnic.net' , \
#'ie' : 'whois.ucd.ie' , \ # connection refused
#'is' : 'whois.isnet.is' , \# connection refused
#'mm' : 'whois.nic.mm' , \ # connection refused
'ms' : 'whois.adamsnames.tc' , \
'my' : 'whois.mynic.net' , \
#'pe' : 'whois.rcp.net.pe' , \ # connection refused
'st' : 'whois.nic.st' , \
'tc' : 'whois.adamsnames.tc' , \
'tf' : 'whois.adamsnames.tc' , \
'th' : 'whois.thnic.net' , \
'tw' : 'whois.twnic.net' , \
'us' : 'whois.isi.edu' , \
'vg' : 'whois.adamsnames.tc' , \
#'za' : 'whois.co.za' # connection refused
}
def __init__(self,domain=None):
self.domain=domain
self.whoisserver=None
self.page=None
return
def whois(self,domain=None, server=None, cache=0):
if domain is not None:
self.domain=domain
pass
if server is not None:
self.whoisserver=server
pass
if self.domain is None:
print "No Domain"
raise "No Domain"
if self.whoisserver is None:
self.chooseserver()
if self.whoisserver is None:
print "No Server"
raise "No Server"
        if cache:
            fn = "%s.dom" % self.domain
            if os.path.exists(fn):
                self.page = open(fn).read()
                return
        self.page = self._whois()
        if cache:
            open(fn, "w").write(self.page)
        return
return
def chooseserver(self):
try:
            # take the final label as the TLD so hosts like
            # 'www.example.com' resolve to a server too
            toplevel = string.split(self.domain, '.')[-1]
            self.whoisserver = WhoisRecord.whoismap.get(toplevel)
if self.whoisserver==None:
self.whoisserver=WhoisRecord.defaultserver
return
pass
except:
self.whoisserver=WhoisRecord.defaultserver
return
if(toplevel=='com' or toplevel=='org' or toplevel=='net'):
tmp=self._whois()
m=re.search("Whois Server:(.+)",tmp)
if m:
self.whoisserver=string.strip(m.group(1))
return
self.whoisserver='whois.networksolutions.com'
tmp=self._whois()
m=re.search("Whois Server:(.+)",tmp)
if m:
self.whoisserver=string.strip(m.group(1))
return
pass
return
def _whois(self):
def alrmhandler(signum,frame):
raise "TimedOut", "on connect"
s = None
## try until we timeout
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(0)
signal.signal(signal.SIGALRM,alrmhandler)
signal.alarm(timeout)
while 1:
try:
                s.connect((self.whoisserver, 43))
except socket.error, (ecode, reason):
if ecode==errno.EINPROGRESS:
continue
elif ecode==errno.EALREADY:
continue
else:
raise socket.error, (ecode, reason)
pass
break
signal.alarm(0)
ret = select.select ([s], [s], [], 30)
if len(ret[1])== 0 and len(ret[0]) == 0:
s.close()
raise TimedOut, "on data"
s.setblocking(1)
s.send("%s\n" % self.domain)
page = ""
while 1:
data = s.recv(8196)
if not data: break
page = page + data
pass
s.close()
if string.find(page, "No match for") != -1:
raise 'NoSuchDomain', self.domain
if string.find(page, "No entries found") != -1:
raise 'NoSuchDomain', self.domain
if string.find(page, "no domain specified") != -1:
raise 'NoSuchDomain', self.domain
if string.find(page, "NO MATCH:") != -1:
raise 'NoSuchDomain', self.domain
return page
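# Illustrative usage (not part of the original module).  chooseserver()
# keys off the domain's final label, so picking a server for a mapped
# TLD needs no network traffic; fetching the record itself does need
# access to the chosen whois server.
def _exampleLookup():
    rec = WhoisRecord('adytum.us')
    rec.chooseserver()
    print rec.whoisserver     # -> 'whois.isi.edu', straight from whoismap
    rec.whois()               # fetches the record into rec.page
    print rec.page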
##
## ----------------------------------------------------------------------
##
class ContactRecord:
def __init__(self):
self.type=None
self.organization=None
self.person=None
self.handle=None
self.address=None
self.email=None
self.phone=None
self.fax=None
self.lastupdated=None
return
def __str__(self):
return "Type: %s\nOrganization: %s\nPerson: %s\nHandle: %s\nAddress: %s\nEmail: %s\nPhone: %s\nFax: %s\nLastupdate: %s\n" % (self.type,self.organization,self.person,self.handle,self.address,self.email,self.phone,self.fax,self.lastupdated)
class DomainRecord(WhoisRecord):
parsemap={ 'whois.networksolutions.com' : 'ParseWhois_NetworkSolutions' , \
'whois.register.com' : 'ParseWhois_RegisterCOM' }
def __init__(self,domain=None):
WhoisRecord.__init__(self,domain)
self.domainid = None
self.created = None
self.lastupdated = None
self.expires = None
self.databaseupdated = None
self.servers = None
self.registrant = ContactRecord()
self.registrant.type='registrant'
self.contacts = {}
return
def __str__(self):
con=''
for (k,v) in self.contacts.items():
con=con + str(v) +'\n'
return "%s (%s):\nWhoisServer: %s\nCreated : %s\nLastupdated : %s\nDatabaseupdated : %s\nExpires : %s\nServers : %s\nRegistrant >>\n\n%s\nContacts >>\n\n%s\n" % (self.domain, self.domainid,self.whoisserver,self.created, self.lastupdated, self.databaseupdated, self.expires,self.servers, self.registrant, con)
def Parse(self):
self._ParseWhois()
return
def _ParseWhois(self):
parser=DomainRecord.parsemap.get(self.whoisserver)
if parser==None:
raise 'NoParser'
        # dispatch to the named parse method
        getattr(self, parser)()
return
##
## ----------------------------------------------------------------------
##
def _ParseContacts_RegisterCOM(self,page):
parts = re.split("((?:(?:Administrative|Billing|Technical|Zone) Contact,?[ ]*)+:)\n", page)
contacttypes = None
for part in parts:
if string.find(part, "Contact:") != -1:
if part[-1] == ":": part = part[:-1]
contacttypes = string.split(part, ",")
continue
part = string.strip(part)
if not part: continue
contact=ContactRecord()
m = re.search("Email: (.+@.+)", part)
if m:
contact.email=string.lower(string.strip(m.group(1)))
m = re.search("\s+Phone: (.+)", part)
if m:
contact.phone=m.group(1)
end=m.start(0)
start=0
lines = string.split(part[start:end], "\n")
lines = map(string.strip,lines)
contact.organization = lines.pop(0)
contact.person = lines.pop(0)
contact.address=string.join(lines,'\n')
for contacttype in contacttypes:
contacttype = string.lower(string.strip(contacttype))
contacttype = string.replace(contacttype, " contact", "")
contact.type=contacttype
self.contacts[contacttype] = copy.copy(contact)
pass
pass
return
def ParseWhois_RegisterCOM(self):
m = re.search("Record last updated on.*: (.+)", self.page)
if m: self.lastupdated = m.group(1)
m = re.search("Created on.*: (.+)", self.page)
if m: self.created = m.group(1)
m = re.search("Expires on.*: (.+)", self.page)
if m: self.expires = m.group(1)
m = re.search("Phone: (.+)", self.page)
if m: self.registrant.phone=m.group(1)
m = re.search("Email: (.+@.+)",self.page)
if m: self.registrant.email=m.group(1)
m = re.search("Organization:(.+?)Phone:",self.page,re.S)
if m:
start=m.start(1)
end=m.end(1)
registrant = string.strip(self.page[start:end])
registrant = string.split(registrant, "\n")
registrant = map(string.strip,registrant)
self.registrant.organization = registrant[0]
self.registrant.person =registrant[1]
self.registrant.address = string.join(registrant[2:], "\n")
pass
m = re.search("Domain servers in listed order:\n\n(.+?)\n\n", self.page, re.S)
if m:
start = m.start(1)
end = m.end(1)
servers = string.strip(self.page[start:end])
lines = string.split(servers, "\n")
self.servers = []
for line in lines:
m=re.search("(\w|\.)+?\s*(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})",string.strip(line))
if m:
                    self.servers.append((m.group(1), m.group(2)))
pass
pass
pass
m = re.search("((?:(?:Administrative|Billing|Technical|Zone) Contact,?[ ]*)+:)\n", self.page)
if m:
i = m.start()
m = re.search("Domain servers in listed order", self.page)
j = m.start()
contacts = string.strip(self.page[i:j])
pass
self._ParseContacts_RegisterCOM(contacts)
return
def _ParseContacts_NetworkSolutions(self,page):
parts = re.split("((?:(?:Administrative|Billing|Technical|Zone) Contact,?[ ]*)+:)\n", page)
contacttypes = None
for part in parts:
if string.find(part, "Contact:") != -1:
if part[-1] == ":": part = part[:-1]
contacttypes = string.split(part, ",")
continue
part = string.strip(part)
if not part: continue
record=ContactRecord()
lines = string.split(part, "\n")
m = re.search("(.+) \((.+)\) (.+@.+)", lines.pop(0))
if m:
record.person = string.strip(m.group(1))
record.handle = string.strip(m.group(2))
record.email = string.lower(string.strip(m.group(3)))
pass
record.organization=string.strip(lines.pop(0))
flag = 0
addresslines = []
phonelines = []
phonelines.append(string.strip(lines.pop()))
for line in lines:
line = string.strip(line)
#m=re.search("^(\d|-|\+|\s)+$",line)
#if m: flag = 1
if flag == 0:
addresslines.append(line)
else:
phonelines.append(line)
pass
pass
record.phone = string.join(phonelines, "\n")
record.address = string.join(addresslines, "\n")
for contacttype in contacttypes:
contacttype = string.lower(string.strip(contacttype))
contacttype = string.replace(contacttype, " contact", "")
record.type=contacttype
self.contacts.update({contacttype:copy.copy(record)})
pass
pass
return
def ParseWhois_NetworkSolutions(self):
m = re.search("Record last updated on (.+)\.", self.page)
if m: self.lastupdated = m.group(1)
m = re.search("Record created on (.+)\.", self.page)
if m: self.created = m.group(1)
m = re.search("Database last updated on (.+)\.", self.page)
if m: self.databaseupdated = m.group(1)
m = re.search("Record expires on (.+)\.",self.page)
if m: self.expires=m.group(1)
m = re.search("Registrant:(.+?)\n\n", self.page, re.S)
if m:
start= m.start(1)
end = m.end(1)
reg = string.strip(self.page[start:end])
reg = string.split(reg, "\n")
reg = map(string.strip,reg)
self.registrant.organization = reg[0]
self.registrant.address = string.join(reg[1:],'\n')
m = re.search("(.+) \((.+)\)", self.registrant.organization)
if m:
self.domainid = m.group(2)
pass
pass
m = re.search("Domain servers in listed order:\n\n", self.page)
if m:
i = m.end()
m = re.search("\n\n", self.page[i:])
j = m.start()
servers = string.strip(self.page[i:i+j])
lines = string.split(servers, "\n")
self.servers = []
for line in lines:
m=re.search("(\w|\.)+?\s*(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})",string.strip(line))
if m:
                    self.servers.append((m.group(1), m.group(2)))
pass
pass
pass
m = re.search("((?:(?:Administrative|Billing|Technical|Zone) Contact,?[ ]*)+:)\n", self.page)
if m:
i = m.start()
m = re.search("Record last updated on", self.page)
j = m.start()
contacts = string.strip(self.page[i:j])
pass
self._ParseContacts_NetworkSolutions(contacts)
return
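# A hedged end-to-end sketch (not part of the original): fetch and
# parse a record when the responding server has an entry in
# DomainRecord.parsemap.  Needs network access, and live responses may
# no longer match these parsers; the domain is just an example.
def _exampleParse():
    rec = DomainRecord('example.com')
    rec.whois(server='whois.networksolutions.com')
    rec.Parse()
    print rec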
##
## ----------------------------------------------------------------------
##
##
## ----------------------------------------------------------------------
##
def usage(progname):
    version = _version
    # guard against a missing module docstring; the fallback line is
    # inferred from the options handled in main() below
    if __doc__:
        print __doc__ % vars()
    else:
        print "usage: %s [--help] [--version] domain[@whoisserver] ..." % progname
def main(argv, stdout, environ):
progname = argv[0]
list, args = getopt.getopt(argv[1:], "", ["help", "version"])
for (field, val) in list:
if field == "--help":
usage(progname)
return
elif field == "--version":
print progname, _version
return
rec=WhoisRecord();
for domain in args:
whoisserver=None
if string.find(domain,'@')!=-1:
(domain,whoisserver)=string.split(domain,'@')
try:
rec.whois(domain,whoisserver)
print rec.page
except 'NoSuchDomain', reason:
print "ERROR: no such domain %s" % domain
except socket.error, (ecode,reason):
print reason
except "TimedOut", reason:
print "Timed out", reason
if __name__ == "__main__":
main(sys.argv, sys.stdout, os.environ) | Adytum-PyMonitor | /Adytum-PyMonitor-1.0.5.tar.bz2/Adytum-PyMonitor-1.0.5/lib/net/rwhois.py | rwhois.py |
import os

if os.name == 'posix':
    TRACE_BIN = '/usr/sbin/traceroute'
else:
    TRACE_BIN = None
class Trace(object):
'''
>>> from traceroute import Trace
>>> t = Trace(host='www.adytum.us')
>>> t.call()
    >>> len(t.results.getHosts()) > 0
    True
    '''
def __init__(self, host='127.0.0.1', numeric=True, queries='1'):
self.host = host
self.numeric = numeric
self.queries = queries
self._setParams()
self._setCommand()
def _setParams(self):
self.params = ''
if self.numeric:
self.params +=' -n'
self.mode = 'numeric'
else:
self.mode = 'alpha'
if self.queries:
self.params += ' -q %s' % self.queries
def _setCommand(self):
self.cmd = '%s %s %s' % (TRACE_BIN, self.params, self.host)
def _executeCommand(self):
p = os.popen(self.cmd)
data = p.readlines()
err = p.close()
if err:
            raise RuntimeError, '%s failed w/ exit code %d' % (self.cmd, err)
self.results = ParseOutput(data, self.mode)
call = _executeCommand
def _parseLine(line, mode):
if mode == 'numeric':
try:
(hop, sp1, host, sp2, time, units) = line.strip().split(' ')
        except ValueError:
(hop, host, time, units) = (line[0], None, None, None)
elif mode == 'alpha':
try:
(hop, sp1, host, ip, sp2, time, units) = line.strip().split(' ')
        except ValueError:
(hop, host, time, units) = (line[0], None, None, None)
parsed = {'hop': hop, 'host': host, 'time': time, 'timeunits': units}
return parsed
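# Tiny illustration (not part of the original module) of what
# _parseLine() extracts from a numeric-mode traceroute line; the sample
# line is fabricated and uses the double-space field separators the
# parser expects.
def _exampleParseLine():
    line = ' 1  192.168.1.1  0.472 ms'
    parsed = _parseLine(line, 'numeric')
    print parsed['hop'], parsed['host'], parsed['time'], parsed['timeunits']
    # -> 1 192.168.1.1 0.472 ms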
class ParseOutput(object):
def __init__(self, dataList, mode):
# get rid of the first line, which is the command
self.data = dataList[1:]
self.mode = mode
def getHopCount(self):
return len(self.data)
def getHosts(self):
full = [ _parseLine(x, self.mode)['host'] for x in self.data ]
return [ x for x in full if x ]
def getLinkedHosts(self):
'''
        This method provides a list of pairs of the pattern
        [(1,2), (2,3), (3,4) ... (n-1, n)]
        for use in graphs, e.g. with pydot.
'''
hosts = self.getHosts()
return [ (x, hosts[hosts.index(x)+1]) for x in hosts if len(hosts) > hosts.index(x) + 1 ]
def getLinkedDomains(self):
'''
        Similar to getLinkedHosts(); if the .mode attribute is
        'numeric', it simply delegates to getLinkedHosts(). Otherwise
        it strips the host and subdomain labels from each FQDN and
        retains only the domain-level names.
'''
if self.mode == 'numeric':
return self.getLinkedHosts()
else:
uniquer = {}
hosts = self.getHosts()
for host in hosts:
domain = '.'.join(host.split('.')[-2:])
uniquer.update({domain:hosts.index(host)})
hosts = zip(uniquer.values(), uniquer.keys())
hosts.sort()
return [ (y, hosts[hosts.index((x,y))+1][1]) for x,y in hosts if len(hosts) > hosts.index((x,y)) + 1 ] | Adytum-PyMonitor | /Adytum-PyMonitor-1.0.5.tar.bz2/Adytum-PyMonitor-1.0.5/lib/net/traceroute.py | traceroute.py |
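# An illustrative sketch (not part of the original module): feeding the
# linked pairs into a graph, as the getLinkedHosts() docstring suggests
# for pydot.  Requires the traceroute binary and network access; the
# target host is just an example.
def _exampleGraph():
    t = Trace(host='www.adytum.us', numeric=False)
    t.call()
    for src, dst in t.results.getLinkedDomains():
        print '%s -> %s' % (src, dst)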